H: Roadmap to learn CNN in TensorFlow from scratch I'm working in the medical field and I'd like to learn applications of CNNs for image recognition and classification. All the (few) things I know come from self-learning on the web or scattered books. I'm now studying TensorFlow for CNN implementation, but I'm having trouble finding documentation clear enough for my current level, so I think I'm missing the basic knowledge needed to understand it. I'm at a basic level of Python programming and I have a better understanding of classical machine learning algorithms. Which resources should I study in order to get a good grasp of the subject? Is there an ideal pathway for this? AI: I'm assuming you're talking about medical image classification, rather than localization. Personally I recommend Kaggle: it has an awesome forum and people share their code and opinions there. You can start with Digit Recognizer, which is actually the well-known MNIST dataset (hand-written digits). There's no relationship between MNIST and the medical field, but there are common techniques and tricks since both are image recognition/classification problems. If you run into problems, read other people's code in the kernel section, especially the upvoted kernels. The comment section is also a good place to learn; I learned a lot there when I started learning CNNs.
H: What's a good method for combining data of different deep learning models? Suppose I want to predict the probability that a person will buy something. I want to analyze the person's image with a convolutional neural network, but I also want to feed my predictive model information such as how many times they bought something, where they go most frequently, and so on. What is a correct way to feed a deep learning model information from different deep learning models? AI: Actually you can merge two different DL models into one bigger model. I'm using Keras+TensorFlow for deep learning, and the Keras documentation has an example of models with multiple inputs (and outputs). In that example, tweets are fed into an RNN, while the extra data are fed into a fully-connected layer along with the output of this RNN.
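As a rough illustration (not the exact example from the Keras docs; the image size, feature count and layer sizes below are made-up assumptions), a minimal two-branch model with the functional API could look like this:

import numpy as np
from tensorflow.keras.layers import Input, Dense, Conv2D, Flatten, concatenate
from tensorflow.keras.models import Model

# Branch 1: a small CNN on the person's image (assumed 64x64 RGB)
img_in = Input(shape=(64, 64, 3))
x = Conv2D(16, 3, activation='relu')(img_in)
x = Flatten()(x)

# Branch 2: hand-crafted features (assumed 5 numeric columns, e.g. purchase counts)
feat_in = Input(shape=(5,))

# Merge both branches and predict the purchase probability
merged = concatenate([x, feat_in])
merged = Dense(16, activation='relu')(merged)
out = Dense(1, activation='sigmoid')(merged)

model = Model(inputs=[img_in, feat_in], outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy')
# model.fit([images, features], labels) once you have both inputs prepared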
H: Could not convert string to float error on KDDCup99 dataset I am trying to perform a comparison between 5 algorithms against the KDD Cup 99 dataset and the NSL-KDD datasets using Python and I am having an issue when trying to build and evaluate the models against the KDDCup99 dataset and the NSL-KDD dataset. Whenever I try to run the algorithms on the datasets I get the following error 'could not convert string to float: S0' This error is produced during the during the evaluation of the 5 models; Logistic Regression, Linear Discriminant Analysis, K-Nearest Neighbors, Classification and Regression Trees, Gaussian Naive Bayes and Support Vector Machines. Here is the code that I am using to evaluate the datasets: #Load KDD dataset dataset = pandas.read_csv('Datasets/KDDCUP 99/kddcup.csv', names = ['duration','protocol_type','service','src_bytes','dst_bytes','flag','land','wrong_fragment','urgent', 'hot','num_failed_logins','logged_in','num_compromised','root_shell','su_attempted','num_root','num_file_creations', 'num_shells','num_access_files','num_outbound_cmds','is_host_login','is_guest_login','count','serror_rate', 'rerror_rate','same_srv_rate','diff_srv_rate','srv_count','srv_serror_rate','srv_rerror_rate','srv_diff_host_rate', 'dst_host_count','dst_host_srv_count','dst_host_same_srv_rate','dst_host_diff_srv_rate','dst_host_same_src_port_rate', 'dst_host_srv_diff_host_rate','dst_host_serror_rate','dst_host_srv_serror_rate','dst_host_rerror_rate','dst_host_srv_rerror_rate','class']) # split data into X and y array = dataset.values X = array[:,0:41] Y = array[:,41] # Split-out validation dataset validation_size = 0.20 seed = 7 X_train, X_validation, Y_train, Y_validation = cross_validation.train_test_split(X, Y, test_size=validation_size, random_state=seed) # Test options and evaluation metric num_folds = 7 num_instances = len(X_train) seed = 7 scoring = 'accuracy' # Algorithms models = [] models.append(('LR', LogisticRegression())) models.append(('LDA', LinearDiscriminantAnalysis())) models.append(('KNN', KNeighborsClassifier())) models.append(('CART', DecisionTreeClassifier())) models.append(('NB', GaussianNB())) models.append(('SVM', SVC())) # evaluate each model in turn results = [] names = [] for name, model in models: kfold = cross_validation.KFold(n=num_instances, n_folds=num_folds, random_state=seed) #Here is where the error is spit out { cv_results = cross_validation.cross_val_score(model, X_train, Y_train, cv=kfold, scoring=scoring) # Could not convert string to float happens here. Scoring uses string. results.append(cv_results) names.append(name) msg = "%s: %f (%f)" % (name, cv_results.mean()*100, cv_results.std()*100)#multiplying by 100 to show percentage print(msg) } # Compare Algorithms fig = plt.figure() fig.suptitle('Algorithm Comparison') ax = fig.add_subplot(111) plt.boxplot(results) ax.set_xticklabels(Y) plt.show() Here is a 3 line sample from the KDDcup99 datatset: 0 tcp http SF 215 45076 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 normal. 0 tcp http SF 162 4528 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 2 2 0 0 0 0 1 0 0 1 1 1 0 1 0 0 0 0 0 normal. 0 tcp http SF 236 1228 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 2 2 1 0 0.5 0 0 0 0 0 normal. I have tried using label encoding and it still spits out the same error and when I was looking through the sklearn websites, I noticed that the scoring value was for the string type, is this the cause of the issue? and if not, is there a problem with the way I have loaded the dataset? 
EDIT: I tried removing the scoring value from the code and still got the same error. AI: I notice you mentioned that you used label encoding, but I did it myself and the code runs just fine. I used the 10 percent version of the dataset. Just put this piece of code after you load the dataset (it assumes you have imported LabelEncoder from sklearn.preprocessing):
for column in dataset.columns:
    if dataset[column].dtype == type(object):
        le = LabelEncoder()
        dataset[column] = le.fit_transform(dataset[column])
After label encoding you should use a one-hot encoder to improve the performance of some algorithms. You should also avoid using the cross_validation module, as it is deprecated and will be removed in version 0.20.
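To follow up on the one-hot encoding suggestion, a minimal sketch (assuming a recent scikit-learn with ColumnTransformer, and treating protocol_type, service and flag as the categorical columns of this dataset) could be:

from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

categorical = ['protocol_type', 'service', 'flag']
ct = ColumnTransformer(
    [('onehot', OneHotEncoder(handle_unknown='ignore'), categorical)],
    remainder='passthrough')  # keep the numeric columns unchanged

X = ct.fit_transform(dataset.drop('class', axis=1))
Y = dataset['class']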
H: Looking for a data set that shows hospital patients' illnesses and their vital signs during their stay I'm looking for a data set that shows hospital patients' vital signs (body temperature and/or heart rate, etc.) and their illnesses, over time. Is there such a data set? For example: Patient #1, Male, age:35, ..., Diagnosed w/: Heart failure 22:32:10: Temperature:32.1 22:32:20: Temperature:34.2 22:32:30: Temperature:33 22:32:40: Temperature:35 Or: Patient #2, Female, age:35, ..., Diagnosed w/: Migraine Attack 22:32: Heart Rate:68 22:33: Heart Rate:75 22:34: Heart Rate:69 22:35: Heart Rate:70 I don't insist that the data come from a hospital. I have already searched some sources, but with no luck. AI: This is not exactly what you are looking for, but here is a collection of clinical data sets that might provide a good alternative: https://dgroppe.com/2016/11/24/public-clinical-data-science-data-sets/ In particular, you might want to look into the Physionet archive or the Sepsis Patient Event Log.
H: Learning with groups of sequential data Say I have a data set such as the following: person, Time, Value, Event person1, 2010-07-02 00:00:00, 5.4, 0 person2, 2010-07-02 10:00:00, 12.7, 0 We have a current model in place at work that doesn't take into account the temporal aspect of our data. In that implementation, the model was trained with only unique values for 'person', and it throws away the time variable. However, it has come to our attention that we can look at our data as a sequence instead. This starting time is unique for each person, and clearly associated with only that person, so merely pretending each person is independent and just treating each row as an individual data point wouldn't make any sense. The following is what I've restructured the data as: person, Time, Value, Event person1, 2010-07-02 00:00:00, 5.4, 0 person1, 2010-07-02 00:00:15, 3.6, 0 person1, 2010-07-02 00:00:30, 2.4, 0 person2, 2010-07-02 10:00:00, 12.7, 0 person2, 2010-07-02 10:01:15, 12.8, 0 person2, 2010-07-02 10:01:30, 13.1, 1 The sequence for each person would continue until an 'event' or 'non-event'. I'm totally unfamiliar with machine learning on time series data. All of the examples I've read with different models treat the data as one big sequence corresponding to one entity, while our data clearly doesn't work like that. Is the way I've structured the data the right way to approach a time series model? And if so, what would be an appropriate model to consider? AI: The data that you are showing is typical survival data. If you want to model the event depending on time and value, you should look into survival models with time-dependent covariates. If you want to do this without any assumptions on the distribution, you can start with a Kaplan-Meier estimate as explained here. If you want to use parametric models, you can look at Weibull or Gamma regression. If you are new to this topic, I highly recommend browsing the examples of the packages in the CRAN Survival Task View.
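If you prefer to stay in Python rather than R, a minimal Kaplan-Meier sketch with the lifelines package (assuming you have already aggregated one row per person with a follow-up duration and an event indicator; the DataFrame and column names below are made up) might look like:

from lifelines import KaplanMeierFitter

# person_df: one row per person, with total observation time and whether the event occurred
kmf = KaplanMeierFitter()
kmf.fit(durations=person_df['duration'], event_observed=person_df['event'])
kmf.survival_function_.plot()  # estimated probability of "no event yet" over time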
H: Price prediction based on historic data I'm new to ML. I'm trying to predict whether a new music album will exceed X amount of dollars in sales. I'm looking to build a model that goes only after potential best sellers. I do have historic data for music sales from 2010 till 2016. I have many signals: music genre, band/artist name, label, year released, country of origin, part of a series/volume, etc., and sales per month. What type of ML problem is this one? AI: There are two broad classes of problems in machine learning, classification and regression. As in this answer, regression involves estimating or predicting a response (the dependent variable is continuous), while classification is identifying group membership (the dependent variable is discrete). Your problem is a regression problem: you should try to estimate the actual sales figure (and you can then compare the predicted sales against X to flag potential best sellers). You can look here for a similar problem and techniques to solve it.
H: Why `max_features=n_features` does not make the Random Forest independent of number of trees? Consider the following simple classification problem (Python, scikit-learn) import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import accuracy_score def get_product_data(size): ''' Given a size(int), sets `log10(size)` features to be uniform random variables `Xi` in [-1,1] and an target `y` given by 1 if their product `P` is larger than 0.0 and zero otherwise. Returns a pandas DataFrame. ''' n_features = int(max(2, np.log10(size))) features = dict(('x%d' % i, 2*np.random.rand(size) - 1) for i in range(n_features)) y = np.prod(list(features.values()), axis=0) y = y > 0.0 features.update({'y': y.astype(int)}) return pd.DataFrame(features) # create random data df = get_product_data(1000) X = np.array(df.drop(df.columns[-1], axis=1)) y = df['y'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1) def predict(clf): ''' Splits train/test with a fixed seed, fits, and returns the accuracy ''' clf.fit(X_train, y_train) return accuracy_score(y_test, clf.predict(X_test)) and the following classifiers: foo10 = RandomForestClassifier(10, max_features=None, bootstrap=False) foo100 = RandomForestClassifier(100, max_features=None, bootstrap=False) foo200 = RandomForestClassifier(200, max_features=None, bootstrap=False) Why does predict(foo10) # 0.906060606061 predict(foo100) # 0.933333333333 predict(foo200) # 0.915151515152 give different scores? Specifically, with max_features=None, all features are selected for each tree bootstrap=False, there is no bootstrap of samples max_depth=None (default), all trees reach the maximum depth I would expect each tree to be exactly the same. Thus, regardless of how many trees the forest has, the predictions should be equal. Where is the tree's variability coming from in this example? What further parameters would I have to introduce in the RandomForestClassifier.__init__ in such a way that foo* have all the same score? AI: Interesting puzzle indeed. First things first. The DecisionTreeClassifier has some stochastic behavior. For instance, the splitter code iterates through the features at random: f_j = rand_int(n_drawn_constants, f_i - n_found_constants, random_state) Your data is small and comes from the same distribution. What this means is that you'll have a lot of identical purity scores depending on how iteration is done. If you (a) increase your data, or (b) make it more separable, you'll see the problem should ameliorate. To clarify: if the algorithm computes the score for feature A and then computes the score for feature B and it gets score N. Or if it computes first the score for feature B and then for feature A and it gets the same score N, you can see how each decision tree will be different, and have different scores during test, even if the train test is the same (100% if max_depth=None of course). (You can confirm this.) During my exploration of your question, I have produced the following code with my own implementation of a random forest. Since it took me some time, I figured I might as well paste it here. :) Seriously, it can be useful. You can try to disable random_state from my implementation to see what I mean. 
from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.metrics import accuracy_score import numpy as np class MyRandomForestClassifier: def __init__(self, n_estimators): self.n_estimators = n_estimators def fit(self, X, y): self.trees = [DecisionTreeClassifier(random_state=1).fit(X, y) for _ in range(self.n_estimators)] return self def predict(self, X): yp = [tree.predict(X) for tree in self.trees] return ((np.sum(yp, 0) / len(self.trees)) > 0.5).astype(int) def score(self, X, y): return accuracy_score(y, self.predict(X)) for alpha in (1, 0.1, 0.01): np.random.seed(1) print('# alpha: %s' % str(alpha)) N = 1000 X = np.random.random((N, 10)) y = np.r_[np.zeros(N//2, int), np.ones(N//2, int)] X[y == 1] = X[y == 1]*alpha Xtr, Xts, ytr, yts = train_test_split(X, y) print('## sklearn forest') for n_estimators in (1, 10, 100, 200, 500): m = RandomForestClassifier( n_estimators, max_features=None, bootstrap=False) m.fit(Xtr, ytr) print('%3d: %.4f' % (n_estimators, m.score(Xts, yts))) print('## my forest') for n_estimators in (1, 10, 100, 200, 500): m = MyRandomForestClassifier(n_estimators) m.fit(Xtr, ytr) print('%3d: %.4f' % (n_estimators, m.score(Xts, yts))) print() Summary: Each DecisionTreeClassifier is stochastic, data such as yours, which is small and comes from the same distribution, are bound to produce slightly different trees, even if the random forest itself is deterministic. You can fix this by passing the same seed to each DecisionTreeClassifier which you can do using random_state=something. RandomForestClassifier also has a random_state parameter which it passes along each DecisionTreeClassifier. (This is slightly incorrect, see the edit.) EDIT2: While this removes the stochasticity component of the training, the decision trees would still be different. The thing is that sklearn ensembles generate a new random seed for each child based on the random state they are given. They do not pass along the same random_state. You can see this is the case by checking the _set_random_states method from the ensemble base module, in particular this line, which propagates the random_state across the ensembles' children.
H: What does the notation mAP@[.5:.95] mean? For detection, a common way to determine if one object proposal was right is Intersection over Union (IoU, IU). This takes the set $A$ of proposed object pixels and the set of true object pixels $B$ and calculates: $$IoU(A, B) = \frac{A \cap B}{A \cup B}$$ Commonly, IoU > 0.5 means that it was a hit, otherwise it was a fail. For each class, one can calculate the True Positive ($TP(c)$): a proposal was made for class $c$ and there actually was an object of class $c$ False Positive ($FP(c)$): a proposal was made for class $c$, but there is no object of class $c$ Average Precision for class $c$: $\frac{\#TP(c)}{\#TP(c) + \#FP(c)}$ The mAP (mean average precision) = $\frac{1}{|classes|}\sum_{c \in classes} \frac{\#TP(c)}{\#TP(c) + \#FP(c)}$ If one wants better proposals, one does increase the IoU from 0.5 to a higher value (up to 1.0 which would be perfect). One can denote this with mAP@p, where $p \in (0, 1)$ is the IoU. But what does mAP@[.5:.95] (as found in this paper) mean? AI: mAP@[.5:.95](someone denoted mAP@[.5,.95]) means average mAP over different IoU thresholds, from 0.5 to 0.95, step 0.05 (0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95). There is an associated MS COCO challenge with a new evaluation metric, that averages mAP over different IoU thresholds, from 0.5 to 0.95 (written as “0.5:0.95”). [Ref] We evaluate the mAP averaged for IoU ∈ [0.5 : 0.05 : 0.95] (COCO’s standard metric, simply denoted as mAP@[.5, .95]) and [email protected] (PASCAL VOC’s metric). [Ref] To evaluate our final detections, we use the official COCO API [20], which measures mAP averaged over IOU thresholds in [0.5 : 0.05 : 0.95], amongst other metrics. [Ref] BTW, the source code of coco shows exactly what mAP@[.5:.95] is doing: self.iouThrs = np.linspace(.5, 0.95, np.round((0.95 - .5) / .05) + 1, endpoint=True) References cocoapi Inside-Outside Net: Detecting Objects in Context with Skip Pooling and Recurrent Neural Networks Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks Speed/accuracy trade-offs for modern convolutional object detectors
H: Why is mini-batch size better than one single "batch" with all training data? I often read that in the case of Deep Learning models the usual practice is to apply mini-batches (generally small ones, 32/64) over several training epochs. I cannot really fathom the reason behind this. Unless I'm mistaken, the batch size is the number of training instances seen by the model during a training iteration, and an epoch is a full pass in which each of the training instances has been seen by the model. If so, I cannot see the advantage of iterating over an almost insignificant subset of the training instances several times, in contrast with applying a "max batch" by exposing all the available training instances to the model in each turn (assuming, of course, enough memory). What is the advantage of this approach? AI: The key advantage of using minibatches as opposed to the full dataset goes back to the fundamental idea of stochastic gradient descent [1]. In batch gradient descent, you compute the gradient over the entire dataset, averaging over potentially a vast amount of information. It takes lots of memory to do that. But the real handicap is that the batch gradient trajectory lands you in a bad spot (saddle point). In pure SGD, on the other hand, you update your parameters by adding (minus sign) the gradient computed on a single instance of the dataset. Since it's based on one random data point, it's very noisy and may go off in a direction far from the batch gradient. However, the noisiness is exactly what you want in non-convex optimization, because it helps you escape from saddle points or local minima (Theorem 6 in [2]). The disadvantage is that it's terribly inefficient and you need to loop over the entire dataset many times to find a good solution. The minibatch methodology is a compromise that injects enough noise into each gradient update, while achieving a relatively speedy convergence. [1] Bottou, L. (2010). Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010 (pp. 177-186). Physica-Verlag HD. [2] Ge, R., Huang, F., Jin, C., & Yuan, Y. (2015, June). Escaping From Saddle Points - Online Stochastic Gradient for Tensor Decomposition. In COLT (pp. 797-842). EDIT: I just saw this comment on Yann LeCun's Facebook, which gives a fresh perspective on this question (sorry, don't know how to link to fb): "Training with large minibatches is bad for your health. More importantly, it's bad for your test error. Friends dont let friends use minibatches larger than 32. Let's face it: the only people have switched to minibatch sizes larger than one since 2012 is because GPUs are inefficient for batch sizes smaller than 32. That's a terrible reason. It just means our hardware sucks." He cited this paper which had just been posted on arXiv a few days earlier (Apr 2018) and is worth reading: Dominic Masters, Carlo Luschi, Revisiting Small Batch Training for Deep Neural Networks, arXiv:1804.07612v1. From the abstract: "While the use of large mini-batches increases the available computational parallelism, small batch training has been shown to provide improved generalization performance ... The best performance has been consistently obtained for mini-batch sizes between m=2 and m=32, which contrasts with recent work advocating the use of mini-batch sizes in the thousands."
H: Back propagation and Structure of a Neural Network in scikit-neuralnetwork I am trying to learn Neural Networks using scikit-neuralnetwork framework and I know basics about Neural Networks and now trying to implement it with scikit-learn. but I am confused on 2 points. 1- what is the structure of this NN given below? Somehow, in some examples felt to me, some people don't put input layer as a layer. Otherwise, I am thinking this as a 2 layer NN has input layer with 100 nodes and 1 node at the ouput layer. from sknn.mlp import Classifier, Layer nn = Classifier( layers=[ Layer("Maxout", units=100, pieces=2), Layer("Softmax")], learning_rate=0.001, n_iter=25) nn.fit(X_train, y_train) 2- Does scikit-neuralnetwork do back propagation within the code that I put above? Thank you! AI: 1) From what I understand, scikit-neuralnetwork tries to automatically determine the correct input and output sizes by the X and y data you give it when calling nn.fit. Therefore structure should be: Input layer with shape determined by X_train Dense layer with 100 units and maxout activation with 2 linear pieces Softmax classification layer with as many units as needed for y_train Seems to use input shape from data here: https://github.com/aigamedev/scikit-neuralnetwork/blob/b7fd0c089bd7c721c4d9cf9ca71eed74c6bafc5e/sknn/backend/lasagne/mlp.py#L183 And output shape from data here: https://github.com/aigamedev/scikit-neuralnetwork/blob/b7fd0c089bd7c721c4d9cf9ca71eed74c6bafc5e/sknn/mlp.py#L62 However, note that maxout seems no longer supported: https://github.com/aigamedev/scikit-neuralnetwork/issues/142 2) Yes it uses backpropagation by calling appropriate lasagne/theano functions to create/compile the backpropagation training function: https://github.com/aigamedev/scikit-neuralnetwork/blob/b7fd0c089bd7c721c4d9cf9ca71eed74c6bafc5e/sknn/backend/lasagne/mlp.py#L50-L103 (Actual training seems to happen here: https://github.com/aigamedev/scikit-neuralnetwork/blob/b7fd0c089bd7c721c4d9cf9ca71eed74c6bafc5e/sknn/backend/lasagne/mlp.py#L316-L335)
H: How to reload all attributes in WEKA Is there a way to reload all attributes after having removed some of them, without reopening the data file? Any help please? AI: Judging from the screenshot, you are currently looking at the data in the Preprocess tab of the Explorer module. In the menu above the top menu in your screenshot there should be an Undo option (5th option from the left).
H: Is there a C library for machine learning algorithms? Are there any machine learning libraries for C? I am specifically interested in unsupervised learning. AI: Here's a detailed list of machine learning libraries for different languages: https://github.com/josephmisiti/awesome-machine-learning Check out the C version here and the C++ version here. Personally speaking, try OpenCV! OpenCV provides multiple machine learning implementations including k-means, kNN, SVM, etc.
H: ideas for variable with branching data? I had no idea how to explain this in the title but anyway... Let's say I have data points like this: John, Happy | Greedy | Smart | Funny, 0.8 Ann, Smart | Sad | Funny, 0.6 Joel, Greedy | Prideful | Stupid, 0.2 Where the first part is the name, the second is their characteristics and the third is their overall character score (how nice they are to be around, or something). Is there a good way to work with this data so that I can work out what the best possible combinations are? Assume I have a large enough data set. There may be, say, 30 of those characteristics, any person can have any number of them, and every characteristic is equally valid for every person. Hopefully that explains it. Essentially I want a way to organise the traits so that I can say "smart and happy" make a better combination than "sad and greedy". I also need to be able to assess the ultimate combination and be able to compare any two possible combinations. AI: It's a regression machine learning problem. Assuming you have 30 characteristics, one-hot encoded into 30 columns, and your target is the character score, min-max scaled into $[0, 1]$, we have X.shape=(None, 30), Y.shape=(None,) (just like what ncasas has stated), so we can train a regression model using your favorite algorithm (linear regression, random forest, even a neural network). Once the model is working, if each person has exactly 3 characteristics, we can predict the character score for each characteristic combination one by one. The time complexity is roughly $O(n^3)$. However, $n$ is small in your case, so maybe we can just brute-force the score on every combination. That's what you want.
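A minimal sketch of that setup (the toy DataFrame and the choice of plain linear regression are just for illustration) could be:

import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    'name': ['John', 'Ann', 'Joel'],
    'traits': ['Happy | Greedy | Smart | Funny', 'Smart | Sad | Funny', 'Greedy | Prideful | Stupid'],
    'score': [0.8, 0.6, 0.2]})

# one-hot encode the pipe-separated traits into one column per trait
X = df['traits'].str.get_dummies(sep=' | ')
y = df['score']

model = LinearRegression().fit(X, y)
# the learned coefficients give a rough "value" for each trait
print(dict(zip(X.columns, model.coef_)))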
H: Understanding Locally Weighted Linear Regression I'm having a problem understanding how we choose the weight function. In Andrew Ng's notes, a standard choice for the local weights is given by $w^{(i)} = \exp\left(-\frac{(x^{(i)} - x)^2}{2\tau^2}\right)$. What I don't understand is, what exactly is the x here? Apparently "the weights depend on the particular point x at which we're trying to evaluate x." But I don't get it. Take the example of housing prices predicted by number of rooms and size in square feet. So each x^(i) is a [roomnum, size] array. So what's in x? I guess that should also be a [roomnum, size] array, but what's in it? Is it even a vector? Or is it the target variable? If so, why isn't it marked with y? I don't get it, please help! EDIT: OK, so what I want is to create a regression line like this: How would I choose the x-es? What would they be in an algorithm? Do I have to make guesses for each x? How can I produce a line like this? AI: Locally weighted linear regression is a non-parametric method for fitting data points. What does that mean? Instead of fitting a single regression line, you fit many linear regression models. The final resulting smooth curve is the product of all those regression models. Obviously, we can't fit the same linear model again and again. Instead, for each linear model we want to fit, we find a point x and use that for fitting a local regression model. We find the points that are closest to x to fit each of our local regression models. That's why you'll also see the algorithm described as a nearest-neighbours algorithm in the literature. Now, suppose your data points have x-values from 1 to 100: [1,2,3 ... 98, 99, 100]. The algorithm would fit a linear model for 1,2,3,...,98,99,100. That means you'll have 100 regression models. Again, when we fit each of the models, we can't just use all the data points in the sample. For each model, we find the closest points and use those for fitting. For example, if the algorithm wants to fit for x=50, it will put higher weight on [48,49,50,51,52] and less weight on [45,46,47,53,54,55]. When it tries to fit for x=95, the points [92,93,95,96,97] will have higher weight than any other data points. Do you see the pattern? The points closer to where you want to fit have higher weight and the points further away have lower weight (zero if too far). That's what the weight function is for. To answer your question directly: x are the data points for each local regression model. They are usually (but not always) the data points in your sample.
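To make the procedure concrete, here is a minimal NumPy sketch of locally weighted linear regression with the Gaussian-style weight above (the bandwidth tau and the 1-D toy data are arbitrary illustration choices):

import numpy as np

def lwlr_predict(x_query, X, y, tau=1.0):
    # design matrix with an intercept column
    A = np.column_stack([np.ones_like(X), X])
    a_query = np.array([1.0, x_query])
    # weight each training point by its distance to the query point
    w = np.exp(-(X - x_query) ** 2 / (2 * tau ** 2))
    W = np.diag(w)
    # solve the weighted least squares problem (A^T W A) theta = A^T W y
    theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return a_query @ theta

X = np.linspace(1, 100, 100)
y = np.sin(X / 10) + 0.1 * np.random.randn(100)
y_hat = [lwlr_predict(x0, X, y, tau=5.0) for x0 in X]  # one local fit per query point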
H: Generating data that look like my original data I have a set of N data points, each with 4 features. I do not know a priori the relations that could exist (or not exist) between the features. Is it possible to generate new data from my initial set that will respect the implicit relations that could exist between the features, without having to find these relations explicitly? AI: What you want is a generative model. A simple generative model from the deep learning family are autoencoders: neural networks that receive your data as input and are trained to output that very same data. There are different types of autoencoders. One of the simplest are contractive autoencoders, which have a bottleneck layer, that is, a layer with very few units. For your case, with only 4 input features, you may have a 2-unit (or even 1-unit, you may try to tune this hyperparameter) hidden layer as the bottleneck. Once fully trained, you take only the part of the autoencoder from the bottleneck to the output, feed it with random numbers as input, and expect to get as output data that follows the same distribution as the original inputs. The idea is that the training has allowed the net to learn representations of the input data distribution in the form of latent variables. Depending on the distribution of your input data, a simple contractive autoencoder may not be able to learn good representations properly. More advanced variants include denoising autoencoders, sparse autoencoders and variational autoencoders. Other generative models that are currently very trendy are Generative Adversarial Networks.
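A minimal sketch of such a bottleneck autoencoder in Keras (the layer sizes, training settings and the placeholder X holding your N x 4 data are illustration assumptions, and the random latent codes assume the 2-unit bottleneck described above):

import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inp = Input(shape=(4,))
code = Dense(2, activation='relu')(inp)       # bottleneck with 2 units
out = Dense(4, activation='linear')(code)     # reconstruct the 4 features

autoencoder = Model(inp, out)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(X, X, epochs=100, verbose=0)  # X is your N x 4 data (placeholder)

# separate decoder: reuse the trained output layer to map latent codes to samples
latent_in = Input(shape=(2,))
decoder = Model(latent_in, autoencoder.layers[-1](latent_in))

# feed random codes through the decoder to generate new data
new_samples = decoder.predict(np.random.rand(1000, 2))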
H: Scikit-Learn - Learned model description? Is there a way I can "look inside" a model once it's trained? For example, if I train a spam filter with a MultinomialNB, is there a way I can extract which words are most likely to make an email classify as spam? I'd like to see how the models determine the outcome once fitted. AI: For the particular case of the MultinomialNB you can look here. However, if what you want is to determine which features are the most important, you can use SelectFromModel to select the most important features for the model.
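For the spam-filter example specifically, a minimal sketch (assuming a recent scikit-learn, that the emails were vectorized with a CountVectorizer called vec, that the fitted MultinomialNB is called clf, and that class index 1 is "spam") could rank words by their log-probability under the spam class:

import numpy as np

feature_names = np.array(vec.get_feature_names_out())
spam_log_prob = clf.feature_log_prob_[1]                  # per-word log P(word | spam)
top_spam_words = feature_names[np.argsort(spam_log_prob)[::-1][:20]]
print(top_spam_words)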
H: Is it possible to train a neural network to solve polynomial equations? I randomly generate millions of triplets $\lbrace x_0, x_1, x_2 \rbrace$ within the range $(0,1)$, then calculate the corresponding coefficients of the polynomial $(x-x_0)(x-x_1)(x-x_2)$, which results in coefficient triplets normalized in the form $\lbrace { {x_0+x_1+x_2 \over 3} , {\sqrt{x_0x_1+x_1x_2+x_0x_2 \over 3}} , {\sqrt[3]{x_0x_1x_2}}} \rbrace$. After that, I feed the coefficient triplets into a 5-layered neural network $\lbrace 3,4,5,4,3 \rbrace$, in which all activation functions are set to sigmoid and the learning rate is set to 0.1. However, I only get a very poor cross-validation accuracy, around 20%. How can I fix this? Background: My original problem is a dynamic inverse problem. In that problem, I have hundreds of thousands of observations $O$; from these observations, I need to recover several hundred parameters $P$. The simulation process from $P$ to $O$ is very easy and cheap to calculate, but the inversion from $O$ to $P$ is highly nonlinear and nearly impossible. My idea is to train a neural network taking $O$ as inputs and $P$ as outputs. To check the feasibility of this idea, I employ a third-order polynomial equation to do the validation. Update, half a year later: With more nodes per layer, I have successfully trained a neural network. The topology is set to $\lbrace 3, 64, 64, 64 \rbrace$. And the most important trick is sorting the generated triplet $\lbrace x_0, x_1, x_2 \rbrace$, ensuring $x_0 \le x_1 \le x_2$ always holds. AI: You're trying to fit a very complicated function. There is no reason to expect that neural networks will be very good at this. Neural networks aren't magic pixie dust. They can do some things well, but don't expect a silver bullet. You're trying to get the neural network to learn to compute some complicated function. We do know that given sufficiently many nodes and sufficiently many layers, you can get an arbitrarily good approximation to this function, but there's no a priori reason to expect it should be possible with the particular (small) number of nodes and layers you chose. In fact, two layers suffice, if you have sufficiently many nodes -- but we have no way to compute how many nodes are needed. In particular, the function you are trying to compute amounts to computing $x_0,x_1,x_2$ from $a,b,c,d$, where $$\begin{align*} x_k &= - {1 \over 3a} \left(b + \eta^k C + {\Delta_0 \over \eta^k C}\right)\\ \Delta_0 &= b^2 - 3ac\\ C &= \sqrt[3]{\Delta_1 \pm \sqrt{\Delta_1^2 - 4\Delta_0^3} \over 2}\\ \Delta_1 &= 2b^3 - 9abc + 27a^2d\\ \eta &= -{1 \over 2} + {1 \over 2} \sqrt{3} i \end{align*}$$ (This comes from the general formula for solving a cubic equation.) That's an extremely messy function, and thus it might be challenging to approximate using a neural network of the architecture you present. You can always try increasing the number of nodes and/or number of layers, but there's no a priori theory to tell you what the right number is.
H: Integer programming formulation: which algorithms I have a complex problem that I have simplified, arriving at a simple integer linear programming formulation. Given the scalar $K > 0$, the vectors $v(t) \in R^n$ and $b_i \in R^n, \forall i=1,\ldots,K$ are known. In particular $v(t)$ is the measured value (ex: every 10 minutes) while the $b_i$ are fixed (computed previously). Consider that I have: $$K \gg n,$$ i.e., I have to identify the presence of $K$ (ex: $K=20$) elements, each one of dimension $n$ (ex: $n=2$). So, for every fixed $t$, the problem that I have to solve has the following formulation: $$ \newcommand\norm[1]{\left\lVert#1\right\rVert} \begin{equation*}\begin{aligned} & \underset{x_i}{\text{minimize}} & & \norm{v - \sum_{i=1}^{K} b_i x_i}_{2} \\ & \text{subject to} & & x_i \in \{0,1\},\ \forall i = 1, \ldots, K. \end{aligned} \end{equation*}$$ Do you know which algorithms solve that formulation? In particular I am looking for an algorithm implemented in Python. AI: Contrary to what you wrote in the first sentence of the question, your problem is not an instance of integer linear programming (ILP) and cannot be formulated as an ILP problem. If you used the $L_1$ norm (instead of the $L_2$ norm), it could be formulated as an ILP problem. You'd introduce additional variables $t_1,\dots,t_n$, add the linear inequalities $$-t_j \le \left(v - \sum_{i=1}^K b_ix_i \right)_j \le t_j$$ where $(\cdots)_j$ denotes the $j$th coefficient of the vector, and then minimize the objective function $t_1+\dots+t_n$. With the $L_2$ norm, this is no longer an ILP problem. It is an instance of integer quadratic programming. You could try applying an off-the-shelf solver for mixed-integer quadratic programming (MIQP). However, MIQP is pretty tough in general, so I don't know whether this will be effective. As another alternative, you could relax your MIQP instance to an instance of quadratic programming or semidefinite programming, solve for a solution in the reals, and then round each real number to the nearest integer (or use randomized rounding), and hope that the resulting solution is "pretty good". This might be more computationally feasible (as quadratic programming / semidefinite programming over the reals is easier than MIQP) but there are no guarantees on the quality of the resulting solution; it might be arbitrarily bad. Your problem seems related to the Closest Vector Problem (CVP) in lattices, which is believed to be hard. Here you have the additional constraint that the coefficients be 0 or 1 (instead of them being arbitrary integers). If $n$ is not too large, you might be able to use existing algorithms, like LLL basis reduction. I don't know whether this will work or not.
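If you still want to try the MIQP route directly in Python, one way to sketch it is with cvxpy and boolean variables (this is only a sketch: B and v below are random placeholders for your stacked data, and solving it in reasonable time depends on having a solver with mixed-integer support installed):

import cvxpy as cp
import numpy as np

K, n = 20, 2
B = np.random.rand(n, K)   # columns are the known b_i vectors
v = np.random.rand(n)      # the measurement at a fixed t

x = cp.Variable(K, boolean=True)
prob = cp.Problem(cp.Minimize(cp.sum_squares(v - B @ x)))
prob.solve()               # requires a solver that supports mixed-integer problems
print(np.round(x.value))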
H: Creating a neural network for predicting the next vote in a series of votes We are working on a project for creating music based on crowdsourcing. People vote on every note until the vote is closed, and then move on to the next vote until the canvas for the music is filled. A similar project is crowdsound, if you want to get an idea of what it looks like. Now the fun part is, based on all the votes we get from various people, we would like to be able to build a neural network that can build an entire song on its own. The idea is for it to take into account every preceding vote and predict the one that will follow. That way, once trained, we could give it one note and let it predict the rest of the votes, and thus create a song on its own. I've read a few things here and there about neural networks, but there are two things I don't understand: How to build one that takes into account a dynamic number of inputs (all preceding votes). How exactly I should decide the number of hidden layers (I still only vaguely understand what those hidden layers represent) I need for it to work well. We are using Java for the project and we were planning on using Neuroph for the neural network. AI: Recurrent neural networks may fit your needs. Read about LSTM & GRU, which have been implemented in various NN libraries. Here is the link to the Keras documentation of recurrent layers: https://keras.io/layers/recurrent/ Some interesting music projects that take advantage of RNNs: https://github.com/tensorflow/magenta https://github.com/jisungk/deepjazz
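Even though your project is in Java, a tiny Keras sketch may help illustrate next-vote prediction with an LSTM (the vocabulary size, sequence length and layer sizes below are made-up assumptions):

from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.models import Sequential

n_notes = 50        # number of distinct notes/votes
seq_len = 32        # how many preceding votes the model sees

model = Sequential([
    Embedding(n_notes, 16, input_length=seq_len),
    LSTM(64),
    Dense(n_notes, activation='softmax')  # probability distribution over the next vote
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# model.fit(X, y) where X has shape (samples, seq_len) and y is the index of the next note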
H: Why is stochastic gradient descent so much worse than batch GD for MNIST task? Here the code from Tensorflow tutorial: A Multilayer Perceptron implementation example With batch size = 100 we quickly got Accuracy: 94.59%. If I set the batch size to be one, the training takes ten times more time but the accuracy is only nine percent. I have tested different learning rates with no luck. SGD performance is terrible for small batch sizes. We can expect that SGD performance will be lower, but not ten times less! What is the reason for this performance loss? AI: Why is stochastic gradient descent so much worse then batch GD for MNIST task? It isn't inherently worse. Instead, by changing just one parameter on its own you have adjusted the example outside of where it has been "tuned" to work, because it is a simplified example for learning purposes, and it is missing some features that most users of NNs would consider standard. The batch size of 1 is performing just fine. In fact, although it takes longer to process the same number of epochs, each epoch actually has more weight updates. You get 100 times as many weight updates, although each one has far more noise in it than a batch size of 100. It is these extra weight updates, plus extra time spent running interpreted Python code for 100 times as many batches, which adds the large amount of time. The problem with accuracy is that the example network has no protection from overfitting. By running so many more weight updates, the training starts to learn the precise image data to match each digit, in order to get the best score. And by doing that exclusively it learns rules that work really well on the training data but generalise really badly to new data in the test set. Try batch size of 1 and number of epochs = 3 (I tried this and got accuracy of 94.32%). Basically that is using early stopping as a form of regularisation. It's not the best form of regularisation, but it is quick to try and often effective - the problem is how to tell when to stop, so you need to measure a test set (often separate to final test set, called cross-validation set) at any potential stopping point, and save the best model so far. That will obviously involve adjusting the example code. Probably the 15 epochs in the original example has been chosen carefully so as to not make overfitting a problem with batch size of 100, but as soon as you change batch size, without any other form of regularisation, the network is very likely to over-fit. In general neural networks strongly tend to over-fit, and you have to spend time and effort to understand and defend against this. Have a look at regularisation in TensorFlow for other options. For this kind of problem, dropout (explained lower down page in the link) is highly recommended, which is not purely regularisation, but works to improve generalisation for many neural network problems.
H: Info obtained from a confusion matrix I am new to data science and I am trying to understand the use/importance of accuracy, precision, recall, sensitivity and f1-score when I have a confusion matrix. I know how to compute all of them but I cannot really understand which of them to use each time. Could you give examples where for instance precision is a better metric that recall or where the f1-score gives essential information that I cannot get from the other terms ? In other words, in which cases should I use each of the aforementioned terms ? AI: First, let's be clear about the fact that all these measures are only for evaluating binary classification tasks. The way to understand the differences is to look at examples where the number of instances is (very) different in the two classes, either the true classes (gold) or predicted classes. For instance imagine a task to detect cities names among the words in a text. It's not very common, so in your test set you may have 1000 words, only 5 of them are cities names (positive). Now imagine two systems: Dummy system A which always says "negative" for any word Real system B (e.g. which works with a dictionary of cities names). Let's say that B misses 2 real cities and mistakenly identifies 8 other words as cities. System A gets an accuracy of 995/1000 = 99.5%, even though it does nothing. System B has 990/1000=99.0%. It looks like A is better, that's why accuracy rarely gives the full picture. Precision represents how correct a system is in its positive predictions: system A always says negative so it has 0% precision. System B has 3/11 = 27%. Recall represents the proportion of true positive instances which are retrieved by a system: system A doesn't retrieve anything so it has 0% recall. System B has 3/5 = 60%. F1-score is a way to have a single value which represents the harmonic mean of the precision and recall. It's used as a "summary" of these two values, which is convenient when one needs to order different systems by their performance. The choice of an evaluation measure depends on the task: for instance, if predicting a FN has life-threatening consequences (e.g. cancer detection), then recall is crucial. If on the contrary it's very important to avoid FP cases, then precision makes more sense (say for instance if an automatic missile system would mistaken identify a commercial flight as a threat). The most common case though is certainly F1-score (or more generally F$\alpha$-score), which is suited to most binary classification tasks.
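To reproduce system B's numbers from the example (3 TP, 8 FP, 2 FN out of 1000 words), a quick sketch with scikit-learn:

import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# 1000 words, 5 of them are really city names
y_true = np.zeros(1000, dtype=int)
y_true[:5] = 1
# system B finds 3 of the 5 cities and wrongly flags 8 other words
y_pred = np.zeros(1000, dtype=int)
y_pred[:3] = 1          # 3 true positives
y_pred[5:13] = 1        # 8 false positives

print(accuracy_score(y_true, y_pred))   # 0.99
print(precision_score(y_true, y_pred))  # 3/11 = 0.27...
print(recall_score(y_true, y_pred))     # 3/5 = 0.60
print(f1_score(y_true, y_pred))         # 0.375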
H: PCA shows overlapping boundaries, then why does SVM perform best? I am trying to understand which model might work for a given problem before trying the models, and I find this case against my knowledge. Please guide me on what I am missing; I am new to Data Science. Here is the graph which I got through PCA: You can see the boundaries are very much overlapping. The theory for SVM says that this model might work best with overlapping non-linear data, which does not seem to be the case here. But it is still able to identify all the data in the test set. So can you provide some clarity on why SVM performs well here? My final results are, in order: Logistic Regression and SVM are the same (accuracy score: 1.0), Random Forest (accuracy score: 0.9680851063829787), KNN (accuracy score: 0.925531914893617). Other details: feature set: 40, sample data: around 500. AI: I assume you applied SVM to your initial data and used PCA only for visualization. In this case: I guess your projection via the PCA is not showing the real picture. You should first check how much of your data's variance is explained by the first two principal components of the PCA. Your projection might change the structure of your data too much, so that it is not separable anymore. In case the projection onto the selected principal components does not change your data too much, it could be that you maintain the separability. Finally, note: if the projection is linearly separable, then so is your data. If the projection is not linearly separable, you cannot conclude anything about the separability of your data.
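A quick way to run that check in scikit-learn (assuming X is the 40-feature matrix you fed to the PCA):

from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

pca = PCA(n_components=2)
pca.fit(StandardScaler().fit_transform(X))
print(pca.explained_variance_ratio_)        # share of variance captured by each component
print(pca.explained_variance_ratio_.sum())  # if this is low, the 2-D plot hides a lot of structure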
H: Keras Model Predict is not predicting all images flowing from directory? I have the following code where I have done all the training and passed the testing set as a flow from directory. After that when I pass that object into the model.predict option, the array received is not of the same length as the test set length. Code: PATH = '/content/testing' testGen.reset() testGen = valAug.flow_from_directory( PATH, class_mode="categorical", target_size=(75, 75), color_mode="rgb", shuffle=False, batch_size=BS) predIdxs = model1.predict_generator(testGen, steps=(totalTest // 32)) print(len(predIdxs)) # for each image in the testing set we need to find the index of the # label with corresponding largest predicted probability predIdxs = np.argmax(predIdxs, axis=1) print(len(predIdxs)) import pandas as pd from glob import glob test_df = pd.DataFrame() id = [] for x in glob('/content/testing/0/*'): id.append(x) for x in glob('/content/testing/1/*'): id.append(x) test_df['id'] = id test_df['category'] = predIdxs print(test_df) test_df.to_csv('submission.csv', index=False) After that the output I got is as follows: Found 55505 images belonging to 2 classes. 55488 55488 --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-81-09c1488e5ac6> in <module>() 25 id.append(x) 26 test_df['id'] = id ---> 27 test_df['category'] = predIdxs 28 print(test_df) 29 test_df.to_csv('submission.csv', index=False) 3 frames /usr/local/lib/python3.6/dist-packages/pandas/core/internals/construction.py in sanitize_index(data, index, copy) 609 610 if len(data) != len(index): --> 611 raise ValueError("Length of values does not match length of index") 612 613 if isinstance(data, ABCIndexClass) and not copy: ValueError: Length of values does not match length of index Lenght of test set(totalTest = 55505) but only 55488 data is predicted. Why is data lost here? P.S: The model I have used is a pretrained Inception V3 model where I have downloaded the weights beforehand and run the model. I got about 85% accuracy. And I have tried the same method using Resnet block also and I have received the results without error. Why am I getting an error here? Any help would be appreciated. AI: I presume that your generator is at fault here : predIdxs = model1.predict_generator(testGen, steps=(totalTest // 32)) You do an integer division on the size of your test set, but the result is not an integer and thus truncated (floored). Later on you (presumably) use original size for one column, and want to assign the data from your predict_generator (which is shorter) to another column of that same dataframe, creating a mismatch.
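A minimal way to fix it is to round the number of steps up instead of down, so the last partial batch is included (assuming BS is the batch_size used by the generator):

import numpy as np

steps = int(np.ceil(totalTest / float(BS)))           # covers the final partial batch
predIdxs = model1.predict_generator(testGen, steps=steps)
# with all batches covered, len(predIdxs) matches the 55505 files found by the generator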
H: How do we implement a custom loss that backpropagates with PyTorch? In a neural network written in PyTorch, we have defined and used this custom loss, which should replicate the behavior of the Cross Entropy loss:
def my_loss(output, target):
    global classes
    v = torch.empty(batchSize)
    xi = torch.empty(batchSize)
    for j in range(0, batchSize):
        v[j] = 0
        for k in range(0, len(classes)):
            v[j] += math.exp(output[j][k])
    for j in range(0, batchSize):
        xi[j] = -math.log( math.exp( output[j][target[j]] ) / v[j] )
    loss = torch.mean(xi)
    print(loss)
    loss.requires_grad = True
    return loss
but it doesn't converge to acceptable accuracies. AI: You should only use PyTorch's implementation of math functions, otherwise torch does not know how to differentiate through them. Replace math.exp with torch.exp and math.log with torch.log. Also, try to use vectorised operations instead of loops as often as you can, because this will be much faster. Finally, as far as I can see, you are merely reimplementing a log loss in PyTorch; any reason why you don't use one that has been implemented by default? (see here or here) [EDIT]: If, after having removed the math operations and implemented a vectorised version of the loss, it still does not converge, here are a few pointers on how to debug it: Check that the loss is correct by calculating the value manually and comparing it with what the function outputs. Compute the gradient manually and check that it is the same as the values in loss.grad, after running loss.backward() (more info here). Monitor the loss and the gradient after a few iterations to check that everything goes right during the training.
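For illustration, a vectorised version of that loss written purely with torch operations (equivalent to the loop version above, up to numerical details) might look like this:

import torch

def my_loss(output, target):
    # output: (batch, n_classes) raw scores, target: (batch,) class indices
    log_probs = output - torch.logsumexp(output, dim=1, keepdim=True)
    xi = -log_probs[torch.arange(output.shape[0]), target]
    # no need to set requires_grad manually; gradients flow through the torch ops
    return xi.mean()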
H: Handling a Pandas Data Frame containing multiple non-ordinal categorical features I'm currently trying to analyse a dataset containing multiple non-ordinal categorical features and a binary target variable. The table looks something like this: +------------+---------+------------+--------+ | Col1 | .... | Col14 | Target | +------------+---------+------------+--------+ | cat 1 | cat 1 | cat 1 | 0 | | ... | ... | ... | ... | | cat 9 | cat 50 | cat 450 | 1 | +------------+---------+------------+--------+ The entire table is 400.000 rows x 15 columns, from which the last column is the target variable. Each feature has multiple non-ordinal categories ranging from 9 categories to multiple hundreds of categories. My first instinct would be to one hot encode all the categorical variables. However, I'm scared that doing so will make any model prone to overfitting. How could I handle/encode the features variables to analyse their effect on the target variable, using Python? AI: It looks like a case where target encoding will shine. Target encoding replaces a category for the mean target in that category. You have to be careful not to overfit, but it is a very effective method to use categorical features with many levels. There's a python package that implements target encoding using the syntax of Scikit Learn.
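A short sketch with the category_encoders package (assuming your table is in a DataFrame df with feature columns Col1..Col14 and the label in Target; the smoothing value is just a placeholder to tune):

import category_encoders as ce

X = df.drop('Target', axis=1)
y = df['Target']

encoder = ce.TargetEncoder(cols=X.columns.tolist(), smoothing=0.3)
X_encoded = encoder.fit_transform(X, y)   # each category becomes a smoothed target mean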
H: How much data should I allocate for my training and test sets? (in R) I have a matrix of 358,367 rows of data. Each row is a DNA sequence from the human genome. I want to build a classification model in R, using the XGBoost algorithm and 83 features (dinucleotides, trinucleotides, etc.). How should I split the data for the train and test set? For example 70% for the train set and 30% for the test set? 30% for the train set and 70% for the test set? AI: There is no "golden rule" here. Your data set is very handy - neither too large nor too small. Sounds like a very exciting project! Here is how I often proceed in comparable settings. Do all splits stratified by response or, if the rows are not independent but rather clustered by some grouping variable (e.g. the family etc.), use grouped sampling. Important rule: avoid any leakage across splits. Set aside 10%-15% of rows for testing. Don't touch them until the analysis is complete. Act as if you would never utilize this test set. Select a loss function and a relevant performance measure. Fit a random forest without tuning and use its OOB error as a benchmark. Choose the parameters of XGB by 5-fold cross-validation, iteratively by grid search, first starting with very wide parameter ranges and then making those ranges smaller and smaller. The number of boosting rounds is automatically optimized by early stopping. Choose the model and present its cross-validation performance. At the very end, reveal the test performance.
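As a rough sketch of the stratified hold-out and the XGBoost grid search (the question is about R, where the same workflow can be done with e.g. caret or xgb.cv; this Python sketch only illustrates the steps, and the parameter values are placeholders, not recommendations):

from sklearn.model_selection import train_test_split, GridSearchCV
from xgboost import XGBClassifier

# hold out a stratified test set and do not touch it until the end
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.15, stratify=y, random_state=42)

param_grid = {'max_depth': [3, 5, 7],
              'learning_rate': [0.05, 0.1],
              'subsample': [0.8, 1.0]}

search = GridSearchCV(XGBClassifier(n_estimators=300), param_grid,
                      cv=5, scoring='neg_log_loss')
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)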
H: How to determine what algorithm to apply in the following dataset (included) I have a dataset that I will be using to build a classifier on. Below I have plotted the first and the second principal component of the data using sklearn.decomposition.PCA. Since the two different classes are not well separated, a linear classifier will not work here. My question is which classifier would be best for this scenario. My research brought me to KNN. But my intuition says that since the class ratio is highly imbalanced, a large value of k in KNN would always tend towards the larger class. It would be a nightmare to train an SVM since there are too many observations in the dataset and it would take too long. AI: Note that doing a dimensionality reduction with the target can lead you to the manifold problem; you can see this in the image. What ends up happening is that the target information is lost. The information that you provide is not enough to guess which algorithm will be better. Normally, reducing the dimensionality of the problem to a lower-dimensional space in order to plot where you want the decision boundaries is not a good idea. It's really hard to have a human understanding of how these algorithms draw decision boundaries in a high-dimensional space, which is why it is better to take an empirical approach: try a few of them, select a metric, and choose the one that has the highest score.
H: What are some good loss functions used to minimize extreme errors in regression and time series forecasting? E.g., at the expense of a smaller mean error, I want to have fewer big mistakes. I'm working on a time series forecasting task and in some specific cases I don't need perfect accuracy, but the network cannot by any means miss by a lot. Any suggestions for loss functions or other methods to solve this issue? AI: As you increase the harshness of big misses, you make the model less willing to miss big. For instance, absolute loss considers missing by $2$ to be twice as bad as missing by $1$, but square loss considers missing by $2$ to be four times as bad as missing by $1$! Imagine if you use cubic loss: $$L_3(y,\hat{y}) = \sqrt[3]{\sum\big\vert y_i - \hat{y}_i \big\vert^3}$$ Imagine if you use quintic loss: $$L_5(y,\hat{y}) = \sqrt[5]{\sum\big\vert y_i - \hat{y}_i \big\vert^5}$$ This gets at the general form of $L_p$ loss, which I'm guessing you can guess before reading what I type. $$L_p(y,\hat{y}) = \sqrt[p]{\sum\big\vert y_i - \hat{y}_i \big\vert^p}$$ This relates to the idea of $L^p$ norms. At the extreme of $L^p$ norms, you can set $p=\infty$, which is equivalent to just looking at the maximum. If you use $L_{\infty}$ loss, you minimize the greatest error. I've wanted to play around with $L_{\infty}$ loss for a while and would be very curious what happens if you choose this loss function. I do, however, think this would be a very extreme loss function. (Remember that $RMSE$, $MSE$, and $SSE$ all have the same $argmin$; by similar logic we are free to take the $p^{th}$ roots to get the cubic and quintic loss functions I gave.)
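If you want to try these in practice, a tiny NumPy helper implementing the general $L_p$ loss above (with p=inf falling back to the maximum error) could be:

import numpy as np

def lp_loss(y, y_hat, p=2):
    err = np.abs(np.asarray(y, dtype=float) - np.asarray(y_hat, dtype=float))
    if np.isinf(p):
        return err.max()                 # L-infinity: only the worst miss matters
    return np.sum(err ** p) ** (1.0 / p)

print(lp_loss([0, 0], [1, 2], p=1))       # 3.0
print(lp_loss([0, 0], [1, 2], p=2))       # about 2.236
print(lp_loss([0, 0], [1, 2], p=np.inf))  # 2.0

To train a network with such a loss you would of course rewrite it with your framework's tensor operations so it stays differentiable.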
H: Training models from sklearn using tf.distribute.MirroredStrategy I want to distribute the training of a simple model, such as a support vector classifier like sklearn.svm.SVC(), across some or all CPUs and GPUs on a single device. I have never utilized a GPU before and I'm confused as to how this works, or whether using TensorFlow is even the right choice for this simple task. What I think I need to do is something like this:
import tensorflow as tf
from sklearn.svm import SVC
from sklearn import datasets
from sklearn.model_selection import train_test_split

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    iris = datasets.load_iris()
    X = iris.data
    y = iris.target
    class_names = iris.target_names
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    classifier = SVC(kernel='linear', C=0.01).fit(X_train, y_train)
The actual dataset I'm using is $\approx 3 \times 10^8$ training examples with 11 features. Does this code do what I think it does? If not, what would be the best way to go about this task? If so, is there anything that can be improved? EDIT: After doing some more googling I discovered that sklearn does not support GPU utilization. See this post: https://stackoverflow.com/questions/41567895/will-scikit-learn-utilize-gpu I'm still not sure how I can go about utilizing a GPU for simple ML models. AI: There are a few approaches that allow you to do basic ML modelling using a GPU. First of all, in the code as you presented it, the TensorFlow MirroredStrategy unfortunately has no effect. It will only work with TensorFlow models themselves, not those from sklearn. In fact, sklearn does not offer any GPU support at all. 1. CUML An Nvidia library that provides some basic ML model types and other things, often offering the exact same API (classes and functions) as SciKit-Learn. Coming from Nvidia, this of course means everything is built with the GPU in mind. This is part of the larger RAPIDS toolset (incubated by Nvidia). Maybe there are other tools there that can be helpful, like their XGBoost library. 2. Tensorflow / PyTorch + NumPy These frameworks are not just for complicated Deep Learning; you can really use them to perform any basic modelling and leverage their GPU support. Their documentation contains examples; otherwise something like Hands On Machine Learning (a book with an accompanying set of Jupyter notebooks) is a nice way to dig in. These frameworks work well with the normal scientific stack in Python (such as NumPy, Scipy, Pandas) because numpy arrays and the frameworks' Tensor objects are plug-and-play for most cases. 3. Another option: Stick to sklearn while you are learning about the models, how they work and so on. If you want to just do anything with the goal of learning about GPU usage, the two options above are the most modern ways to get started.
H: Generate dataframe series from current series which is a list of objects I currently have a JSON object that looks like this {"submissionTime":"2019-02-25T09:26:00","b_data":{"bName":"Masato","b_Acc":[{"id":0,"transactions":[{"date":"2019-12-19","text":"PERIODICAL PAYMENT","amount":3397,"type":"","tags":[{"institution":"University of MC"},{"lenderType":"private"},{"category":"birdy"},{"creditDebit":"credit"}]},{"date":"2019-12-03","text":"LINE FEE","amount":-460.21,"type":"Overdrawn Fees","tags":[{"category":"Overdrawn"},{"creditDebit":"debit"}]},{"date":"2019-12-31","text":"INTEREST","amount":-871.62,"type":"Interest Charge","tags":[{"category":"Fees"},{"creditDebit":"debit"}]},{"date":"2019-12-31","text":"LOAN SERVICE FEE","amount":-120,"type":"Loan Related Fees","tags":[{"category":"Fees"},{"creditDebit":"debit"}]},{"date":"2019-12-18","text":"PERIODICAL PAYMENT","amount":3397,"type":"","tags":[{"institution":"University of MC"},{"lenderType":"private"},{"category":"birdy"},{"creditDebit":"credit"}]},{"date":"2019-12-02","text":"LINE FEE","amount":-498.34,"type":"Overdrawn Fees","tags":[{"category":"Overdrawn"},{"creditDebit":"debit"}]},{"date":"2019-11-29","text":"INTEREST","amount":-794.4,"type":"Interest Charge","tags":[{"category":"Fees"},{"creditDebit":"debit"}]},{"date":"2019-11-19","text":"PERIODICAL PAYMENT","amount":3397,"type":"","tags":[{"institution":"University of MC"},{"lenderType":"private"},{"category":"birdy"},{"creditDebit":"credit"}]},{"date":"2019-11-01","text":"LINE FEE","amount":-484.87,"type":"Overdrawn Fees","tags":[{"category":"Overdrawn"},{"creditDebit":"debit"}]},{"date":"2019-10-31","text":"INTEREST","amount":-882.04,"type":"Interest Charge","tags":[{"category":"Fees"},{"creditDebit":"debit"}]},{"date":"2019-10-21","text":"PERIODICAL PAYMENT","amount":3397,"type":"","tags":[{"institution":"University of MC"},{"lenderType":"private"},{"category":"birdy"},{"creditDebit":"credit"}]},{"date":"2019-10-01","text":"LINE FEE","amount":-503.59,"type":"Overdrawn Fees","tags":[{"category":"Overdrawn"},{"creditDebit":"debit"}]},{"date":"2019-09-30","text":"INTEREST","amount":-916.98,"type":"Interest Charge","tags":[{"category":"Fees"},{"creditDebit":"debit"}]},{"date":"2019-09-30","text":"LOAN SERVICE FEE","amount":-120,"type":"Loan Related Fees","tags":[{"category":"Fees"},{"creditDebit":"debit"}]},{"date":"2019-09-19","text":"PERIODICAL PAYMENT","amount":3397,"type":"","tags":[{"institution":"University of MC"},{"lenderType":"private"},{"category":"birdy"},{"creditDebit":"credit"}]},{"date":"2019-09-02","text":"LINE FEE","amount":-489.65,"type":"Overdrawn Fees","tags":[{"category":"Overdrawn"},{"creditDebit":"debit"}]},{"date":"2019-08-30","text":"INTEREST","amount":-892.13,"type":"Interest Charge","tags":[{"category":"Fees"},{"creditDebit":"debit"}]}]}]}} I am trying to create a dataframe and add a new series in it called category, the value from this series comes from the tags series. 
The tag series is a list of key value objects I need to retrieve the category of each row, and if the list of each row doesnt have a category, then the value should be unknown, making the end result of the dataframe to look like this I havent been able to do much progress, as I dont know how to and what will be the best practice to go through each cell in the tags column import json import numpy as np import pandas as pd with open('question.json') as json_data: d = json.load(json_data) df = pd.json_normalize(d['b_data']['b_Acc']) frames = [] for index, row in df.iterrows(): frames = frames + row['transactions'] df = pd.DataFrame(frames) df['category'] = ? AI: One way of doing it is by creating an auxiliary function to extract the category from your tag and return it when found or 'unknown' otherwise. Then using .apply() with that function will do the trick: json_string = '''{"submissionTime":"2019-02-25T09:26:00","b_data":{"bName":"Masato","b_Acc":[{"id":0,"transactions":[{"date":"2019-12-19","text":"PERIODICAL PAYMENT","amount":3397,"type":"","tags":[{"institution":"University of MC"},{"lenderType":"private"},{"category":"birdy"},{"creditDebit":"credit"}]},{"date":"2019-12-03","text":"LINE FEE","amount":-460.21,"type":"Overdrawn Fees","tags":[{"category":"Overdrawn"},{"creditDebit":"debit"}]},{"date":"2019-12-31","text":"INTEREST","amount":-871.62,"type":"Interest Charge","tags":[{"category":"Fees"},{"creditDebit":"debit"}]},{"date":"2019-12-31","text":"LOAN SERVICE FEE","amount":-120,"type":"Loan Related Fees","tags":[{"category":"Fees"},{"creditDebit":"debit"}]},{"date":"2019-12-18","text":"PERIODICAL PAYMENT","amount":3397,"type":"","tags":[{"institution":"University of MC"},{"lenderType":"private"},{"category":"birdy"},{"creditDebit":"credit"}]},{"date":"2019-12-02","text":"LINE FEE","amount":-498.34,"type":"Overdrawn Fees","tags":[{"category":"Overdrawn"},{"creditDebit":"debit"}]},{"date":"2019-11-29","text":"INTEREST","amount":-794.4,"type":"Interest Charge","tags":[{"category":"Fees"},{"creditDebit":"debit"}]},{"date":"2019-11-19","text":"PERIODICAL PAYMENT","amount":3397,"type":"","tags":[{"institution":"University of MC"},{"lenderType":"private"},{"category":"birdy"},{"creditDebit":"credit"}]},{"date":"2019-11-01","text":"LINE FEE","amount":-484.87,"type":"Overdrawn Fees","tags":[{"category":"Overdrawn"},{"creditDebit":"debit"}]},{"date":"2019-10-31","text":"INTEREST","amount":-882.04,"type":"Interest Charge","tags":[{"category":"Fees"},{"creditDebit":"debit"}]},{"date":"2019-10-21","text":"PERIODICAL PAYMENT","amount":3397,"type":"","tags":[{"institution":"University of MC"},{"lenderType":"private"},{"category":"birdy"},{"creditDebit":"credit"}]},{"date":"2019-10-01","text":"LINE FEE","amount":-503.59,"type":"Overdrawn Fees","tags":[{"category":"Overdrawn"},{"creditDebit":"debit"}]},{"date":"2019-09-30","text":"INTEREST","amount":-916.98,"type":"Interest Charge","tags":[{"category":"Fees"},{"creditDebit":"debit"}]},{"date":"2019-09-30","text":"LOAN SERVICE FEE","amount":-120,"type":"Loan Related Fees","tags":[{"category":"Fees"},{"creditDebit":"debit"}]},{"date":"2019-09-19","text":"PERIODICAL PAYMENT","amount":3397,"type":"","tags":[{"institution":"University of MC"},{"lenderType":"private"},{"category":"birdy"},{"creditDebit":"credit"}]},{"date":"2019-09-02","text":"LINE FEE","amount":-489.65,"type":"Overdrawn Fees","tags":[{"category":"Overdrawn"},{"creditDebit":"debit"}]},{"date":"2019-08-30","text":"INTEREST","amount":-892.13,"type":"Interest 
Charge","tags":[{"category":"Fees"},{"creditDebit":"debit"}]}]}]}}''' js = json.loads(json_string) df = pd.DataFrame(js['b_data']['b_Acc'][0]['transactions']) def extract_category(tag): dall = {} # we create a new unique dict with all the items in the tag for d in tag: dall.update(d) # if category is in our new dict, return it else return unknown if 'category' in dall.keys(): return dall['category'] else: return 'unknown' df['category'] = df.tags.apply(lambda x: extract_category(x))
H: Using Random Forest Regression correctly I have run Random Forest regression with Python and I fear that I haven't done it correctly. I have an original image that has 3 different bands (each pixel has 3 values) and I want to see if I can predict the first value using the two others. For that I used Random Forest regression: I created train and test sets and fit the model:

rf.fit(X_train, y_train)
rf_pred = rf.predict(X_test)

Then, after I checked the prediction on the test set and saw it was good, I wanted to use the same model to predict the values of all the pixels in the image, so I did this:

pred_all = rf.predict(data)

*data includes all the pixels of the image. My question: is this the right way to do this? Can I just predict all the pixel values using rf.predict after fitting it with the train and test sets? Or am I missing some step that should be taken? AI: It seems fine (assuming you are not getting any error). You are fitting your model on some data, then evaluating it on other data (basic validation). Now you have to find data that has the same format and a similar distribution and predict there. You can have a look at the random forest documentation to see what else you can do: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html
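As a small sketch of that validation step (reusing your rf, X_test, y_test and data objects), you can quantify the test performance with a metric before predicting on the whole image:

from sklearn.metrics import mean_absolute_error, r2_score

test_pred = rf.predict(X_test)
print("MAE:", mean_absolute_error(y_test, test_pred))
print("R^2:", r2_score(y_test, test_pred))

# Only once the held-out scores look acceptable, predict every pixel of the image.
# `data` must contain the same two feature columns, in the same order, as X_train.
pred_all = rf.predict(data)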
H: How to add a new column with labels in a dataframe? I have thousands of sentences that I would need to label based on their sentiment. An example is

Testo
"Può dirmi quale fotocamera della Sony fa delle belle foto?"
"Che macchina fantastica! Ha smesso di funzionare dopo due giorni"
"Quel pallavolista ha un fisico pauroso"
"Questo iPad è fantastico"

I have tried using Fasttext (Python package) but I do not know how to add the sentiment label in a new column. Also, I got the wrong sentiment (below is the expected output).

Testo                                                                 Sentiment
"Può dirmi quale fotocamera della Sony fa delle belle foto?"          0
"Che macchina fantastica! Ha smesso di funzionare dopo due giorni"    -1
"Quel pallavolista ha un fisico pauroso"                              1
"Questo iPad è fantastico"                                            1

How can I do it? AI: Do you have a working method to get the sentiment label for an individual sentence? If so, you can wrap that logic in a function and use its return value as the new column's entry. The apply function from pandas will apply a function to every row if you specify axis=1:

def getSentiment(row):
    # Compute the sentiment for row['Testo'] here (e.g. with your Fasttext model)
    # and return -1, 0 or 1; sentiment_label is a placeholder for that result.
    return sentiment_label

df['Sentiment'] = df.apply(getSentiment, axis=1)
H: XGboost and regularization Does the XGBClassifier method utilizes the two regularization terms reg_alpha and reg_lambda, or are they redundant and only utilized in the regression method (i.e., XGBRegressor)? I think that some hyperparameters in XGBoost could have no effect on specific methods (e.g., scale_pos_weight in XGBRegressor). AI: XGB uses the two kinds of regularization in both classification and regression; each leaf is a continuous score, these scores added together for the final prediction (of log-odds in the classification case), so penalizing the weights makes sense in either setting. See also L1 & L2 Regularization in Light GBM But yes, some hyperparameters (scale_pos_weight) seem to be vestigial: What does xgb's scale_pos_weight parameter do for regression? https://discuss.xgboost.ai/t/scale-pos-weight-for-regression/218/10
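For reference, both wrappers expose the same regularization arguments; a minimal sketch:

from xgboost import XGBClassifier, XGBRegressor

clf = XGBClassifier(reg_alpha=0.1, reg_lambda=1.0)   # L1 and L2 penalties on leaf weights
reg = XGBRegressor(reg_alpha=0.1, reg_lambda=1.0)    # same parameters, same meaning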
H: Text comparison: spot the differences I would like to know what would be the best approach to compare two texts and see the differences between them. For example: Sent_1=“This toolset is a set of macros for performing a number of modelling tasks.” Sent_2=“This tool is a set of macros which help performing a certain number of tasks.” I do not mind the context/meaning at the moment, but I would like to know what would be the best approach to spot differences (looking at each word and the words before and after it) and see how accurate it is. AI: You could look at string similarity measures and TFIDF (usually with cosine). If you want a measure which works at both the word and sentence level, there are more advanced options such as SoftTFIDF.
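If you literally want to list the word-level differences rather than compute an overall similarity score, Python's standard difflib module is a simple complement to the measures above; a minimal sketch using your two sentences:

import difflib

sent_1 = "This toolset is a set of macros for performing a number of modelling tasks."
sent_2 = "This tool is a set of macros which help performing a certain number of tasks."

# Word-level diff: tokens starting with '-' are only in sent_1, '+' only in sent_2
for token in difflib.ndiff(sent_1.split(), sent_2.split()):
    if token.startswith(("-", "+")):
        print(token)

# A single similarity score between 0 and 1
print(difflib.SequenceMatcher(None, sent_1, sent_2).ratio())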
H: Is this a valid stability concern/improvement for DQN/DDQN reinforcement training? As you all know, DQN or DDQN are known for "unstable training". Let's use the well known "CartPole". The agent has to balance the stick and gets a reward of +1 per frame. You can reach the 195 threshold with Cartpole-v0, but results will vary a lot. You will have a hard time to get this working until it is "nearly stable". Possible reasons are learning rate, batch size and so on... If you master v0, switch to "Cartpole-v1" and I'm sure your "stable" system will fail again. You normally have to adapt parameters to make it working again. (just my experience) But, there is something in the workflow of the algo, i don't understand: for ep in range(num_episodes): state = env.reset() total = 0.0 done = False while not done: action = agent.get_action(state) next_state, reward, done, info = env.step(action) agent.remember(state, action, reward, next_state, done) agent.train() total += reward state = next_state ep_rewards.append(total) You all have seen this workflow before, what's the problem here...? 1. We measure performance while we are training and moving the weights Every agent.train() call does BATCH TRAINING and changes the weights. The "total" reward is calculated with a lot of different models, which one are we measuring? 2. In case of the cartpole example the episode ends (done) if it fails to balance Some (the first) runs are very short - that leads to lesser training (inconsistent loop count). That means, if the agent performs bad, it does lesser training. If it works well - it trains a lot, does loop a lot until done and moves its weights away from the good policy and can get unstable. 3. If we save a model - which model are we really saving? We effectively test a bunch of models and get some type of average performance, but what happens if we save the model after a good run? We can have a good run (high total reward) - but save a bad (the last) model? What weights are we really saving, i can't explain that in a way that makes sense, can you? 
Now a simple improvement that solves all problems just by moving some code parts: for ep in range(num_episodes): state = env.reset() total = 0.0 done = False while not done: action = agent.get_action(state) next_state, reward, done, info = env.step(action) agent.remember(state, action, reward, next_state, done) total += reward state = next_state # the total result comes from a fixed model # correct performance measures and saving ONE model are now possible ep_rewards.append(total) # train outside of while(done) # every episode has now a constant number of train runs, 50 for example for i in range(50): agent.train() After changing it i get a really stable performance at max values as you can see here: Run 7 | Episode: 770 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 780 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 790 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 800 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 810 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 820 | eps 0.0 | total: 432.00 | ddqn True Run 7 | Episode: 830 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 840 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 850 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 860 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 870 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 880 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 890 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 900 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 910 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 920 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 930 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 940 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 950 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 960 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 970 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 980 | eps 0.0 | total: 500.00 | ddqn True Run 7 | Episode: 990 | eps 0.0 | total: 500.00 | ddqn True Are my concerns valid? Is this a valid improvement or do i miss something here? AI: The "total" reward is calculated with a lot of different models, which one are we measuring? The total reward in principle should be calculated by freezing a policy (equivalent to freezing the parameters for a parametric policy) and then computing the average of multiple rollouts on the environment, the usual Monte Carlo. If it works well - it trains a lot, does loop a lot until done and moves its weights away from the good policy and can get unstable. One of the reasons of having a replay buffer in off-policy learning is in principle to prevent precisely this catastrophic forgetting in the neural network parametrized for the policy under dataset shift (the shift in distribution of states, actions and rewards observed). If we save a model - which model are we really saving? You would probably want to compute the rewards like mentioned earlier (becomes sort of a validation set) and pick the model which gives you the best mean reward (and probably variance). This is often expensive and a cheap proxy is your first version of the algorithm which is biased in reward computation but the bias reduces to zero as the policy converges in theory. The changes you've made to the data collection and learning loop is a valid one. However, this is usually not done because in principle this is sample inefficient. 
You have to wait for a full trajectory to complete before incorporating that information into the model. A more attractive approach, usually followed in the literature, is to run a training step every few environment steps rather than once per episode.
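As a sketch of that middle ground, reusing the agent/env interface from the question: keep learning during the episode, but decouple the number of gradient steps from the episode length by training every few environment steps (train_every is an arbitrary choice here):

train_every = 4   # one gradient step every 4 environment steps
step_count = 0

for ep in range(num_episodes):
    state = env.reset()
    total, done = 0.0, False
    while not done:
        action = agent.get_action(state)
        next_state, reward, done, info = env.step(action)
        agent.remember(state, action, reward, next_state, done)
        total += reward
        state = next_state
        step_count += 1
        if step_count % train_every == 0:
            agent.train()   # learn during the episode, but not on every single step
    ep_rewards.append(total)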
H: Issue while predicting multiple values which possess different order of magnitude (regression) I have a system that take electric signal and output parameters (regression). However I run into an issue. The parameters I want to predict are not on the same magnitude. I would like also to use only one neural networks if possible. Parameter 1: Mean = 12.53673 Minimum = 10.00461 Maximum = 14.98899 Parameter 2: Mean = 148656955394038 Minimum = 75029133522564 Maximum = 224934092847235 Parameter 3: Mean = 1.475720134278e+17 Minimum = 7.506184799345e+16 Maximum = 2.249190781380e+17 ... If I use any loss function such as RMSE, MSE, MAE, MSLE the losses of the big parameters takeover all the rest. The network therefore randomly guess all the smaller parameters I am searching a good way to find those parameters using one network. I tried using "mean absolute percentage error". But the learning is exceptionally slow, and it often get stuck into extremely low value and extremely high value. I am maybe thinking about creating one loss per target. Freeze all the network except the output layer and each epoch let one of the loss train the whole network. But I am not sure it is a good idea. Currently I am normalizing the target before predicting it, but I don't like this solution very much. I would prefer having a slightly more complicated architecture than normalizing and denormalizing (afraid of data leakage) and it seems more robust to directly output the right value. Moreover In the future I will have an issue similar except I will need to predict series that possesses different order of magnitude (and not only some distinct value). So I hope being able to reuse the technique I am learning Any help would be very appreciated! Thanks :) EDIT: For those who happens to have the same issue, this is the code I went with (keras) class Model: def __init__(self, shape_features, shape_targets, mean_targets, std_targets): self.shape_features = shape_features self.shape_targets = shape_targets self.mean_targets = mean_targets self.std_targets = std_targets def build(self): inputs = Input(shape=self.shape_features, name='Input') x = Conv1D(filters=16, kernel_size=3, activation='relu', name='Convolution1D_1')(inputs) x = MaxPooling1D(pool_size=2, strides=2, padding='valid', name='MaxPooling1D_1')(x) x = Conv1D(filters=32, kernel_size=3, activation='relu', name='Convolution1D_2')(x) x = MaxPooling1D(pool_size=2, strides=2, padding='valid', name='MaxPooling1D_2')(x) x = Conv1D(filters=64, kernel_size=3, activation='relu', name='Convolution1D_3')(x) x = MaxPooling1D(pool_size=2, strides=2, padding='valid', name='MaxPooling1D_3')(x) x = Flatten(name='Flatten')(x) x = Dense(128, activation='relu', name='Dense_1')(x) x = Dense(64, activation='relu', name='Dense_2')(x) x = Dropout(0.2, name='Dropout')(x) outputs = Dense(self.shape_targets, activation='linear')(x) model = tf.keras.Model(inputs=inputs, outputs=outputs) def de_normalizing(tensor): return tensor * self.std_targets + self.mean_targets predictions = Lambda(de_normalizing)(outputs) model_prediction = tf.keras.Model(inputs=inputs, outputs=predictions) def custom_loss(y_true, y_pred): y_true = (y_true-self.mean_targets)/self.std_targets return K.mean(K.square(y_pred - y_true), axis=-1) return model, model_prediction, custom_loss One model is used for training, the other is used for predictions AI: I am maybe thinking about creating one loss per target. Freeze all the network except the output layer and each epoch let one of the loss train the whole network. 
But I am not sure it is a good idea. This is one option. It should work, but it may be slower to converge since you only make use of a third of your losses each epoch. Also, it is not completely unlikely that your losses may work "against each other", meaning that they could somewhat cancel each other out, which can make convergence take longer. Currently I am normalizing the target before predicting it, but I don't like this solution very much. I would prefer having a slightly more complicated architecture than normalizing and denormalizing (afraid of data leakage) and it seems more robust to directly output the right value. That's what I would do. I don't think you should be afraid of data leakage. Computers have enough precision in them that normalizing/denormalizing shouldn't create any issues. Also, keep in mind that computers have greater precision for floats close to 0, so normalizing your data may actually be good. And when you say you'd rather directly output the right value: you can do both! Your network can directly output the denormalized value, while the loss used to train the network is computed on any layer before that.
H: PCA vs. KernelPCA: which one to use for high dimensional data? I have a dataset which contains a lot of features (>>3). For computational reasons, I would like to apply a dimensionality reduction. At this point I could use different techniques:
standard PCA
Kernel PCA
LLE
...
My problem is to choose the right approach since the number of features is so high that I cannot know beforehand what the distribution of points is like. I could do it only if I had 3D data, but in my case I have much more than that. I know, for example, that if the set of points were linearly separable I could use standard PCA; if it had something like a concentric-circles shape, then KernelPCA would be a better option. Therefore, how can I know beforehand which dimensionality reduction technique I need to use for high dimensional data? AI: The fact is that with unsupervised algorithms, you never know. That is their main bottleneck. Unsupervised algorithms (clustering, dimensionality reduction, etc.) are based on assumptions. When an assumption is made, it is translated into a mathematical algorithm and applied. Choosing the right one, as you said, is possible only if you know the distribution and/or topology of your data beforehand, but unfortunately that is rarely the case. The higher dimensional the data is, the more difficult it gets to guess its structure. If you are using the reduction as a feature extraction step for a supervised task, then the right way is to evaluate the impact of each technique on your supervised learner through statistical model selection (e.g. cross validation). If you are using them for an unsupervised task like clustering, then you may choose some practical criteria (there is NO theoretical one, i.e. there is no theoretical justification for the clustering task). For example, you can visualize the reduced data in 2 or 3 dimensions and inspect whether the clusters look right (for instance using some known samples from your data: if you know two extreme, very different samples, a better clustering puts them in far-apart clusters, etc.). Again, I would emphasize that there is no universally true evaluation for unsupervised tasks like clustering. Hope it helped!
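If the reduction feeds a supervised model, that comparison can be automated with scikit-learn by treating the reduction step itself as a hyperparameter; a minimal sketch, where X and y stand for your features and downstream labels and the grids are arbitrary:

from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA, KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

pipe = Pipeline([("reduce", PCA()), ("clf", LogisticRegression(max_iter=1000))])

# The "reduce" step is swapped between PCA and KernelPCA during the search
param_grid = [
    {"reduce": [PCA()], "reduce__n_components": [5, 10, 20]},
    {"reduce": [KernelPCA(kernel="rbf")], "reduce__n_components": [5, 10, 20],
     "reduce__gamma": [0.01, 0.1, 1.0]},
]

search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)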
H: What's the best graphic in R to study air pollution? I have a Python dataframe (in addition to an excel file) with data on air pollution for countries (one line per day between 2014 and 2020). I would like to see which pollutant is most related to industrial production; for that, I want to compare the data during quarantine and after. These are the first lines of my excel file (also a Python dataframe). I am new to Python/R as well as statistics and I do not really know how to handle this data or what type of graphic I should use. Any information is welcome. Thank you very much. AI: You can just start with a simple line graph. Each line in the graph represents one Specie (pollutant), the X axis is the date from your data, and the Y axis is the median. I am not sure why you have the median rather than the raw measurements, but this is a reasonable way to start with visualization.
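Since you already have a Python dataframe, a minimal pandas/matplotlib sketch could look like the following; the file name and the Date/Specie/median column names are assumptions based on your description and may need adjusting:

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("pollution.csv", parse_dates=["Date"])   # hypothetical file and column names

# One line per pollutant ("Specie"), median value over time
pivot = df.pivot_table(index="Date", columns="Specie", values="median")
pivot.plot(figsize=(12, 5))
plt.ylabel("median concentration")
plt.show()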
H: Can I train a CNN to detect the number of objects without localizing them first? So I was trying to search but couldn't find any answers. I was wondering if it is possible to train a model to detect the number of items of interest in a photo without having bounding boxes or dots to locate the objects in the training set. For example, say I wanted to count something simple like street poles in a photo: would it be possible with just the photos in the training set and the number of poles as the target only, i.e. no bounding boxes or points labeled in the training data targets? AI: There are different approaches to achieve your goal: 1.) Provide only the number of objects in an image in order to train an object detector. This is called weak supervision or weak labeling. Some works that utilize this approach: https://arxiv.org/pdf/1711.05282.pdf : Their approach is to train an object detector on image proposals, together with object counts. https://hal.inria.fr/hal-02393688/file/1912.00384.pdf : Object detectors can be trained by providing only one category label per image. 2.) There are several works focusing on counting objects. For example, the following works focus on counting arbitrary objects: https://arxiv.org/pdf/1903.02494.pdf https://arxiv.org/pdf/1604.03505.pdf For density estimation there are several works counting people: https://arxiv.org/abs/1903.00853 Depending on the setup, these approaches could also be used for your application.
H: What is an object detection problem with only one class called? Object detection is defined as the problem in which a model needs to figure out the bounding boxes and the class for each object. A lot of ML solutions for object detection are based around having "two passes" - one for creating the bounding box of the region and another for classifying it. I was wondering if there is a name for a subset of this problem where $n_{classes} = 1$. I feel like there is an interesting opportunity here as the whole classification part of the model can (basically) be ignored. Obviously, I can just train a typical object detection model with one class, but I was interested to see if there are any more specialized methods. AI: If you are talking about "two-stage" object detectors like Faster R-CNN, note that the second phase is not only for classification, but also to obtain more accurate results (https://stackoverflow.com/a/61965140/4762996). In addition, I guess training a detector with many classes acts like a regularizer, which results in much better accuracies. The only benefit of explicitly training with one class could be the reduced model size (and a corresponding speedup). Note also that there are one-stage object detectors (CornerNet: https://arxiv.org/abs/1808.01244, YOLOv3: https://pjreddie.com/media/files/papers/YOLOv3.pdf, DETR: https://arxiv.org/abs/2005.12872, and many more). I just wanted to stress that just because you only have one class, it does not make sense to take a two-stage detection architecture and skip the second stage. Finally, have a look at: https://stats.stackexchange.com/q/351079 An interesting direction would be to incorporate one-class classification approaches, e.g. as mentioned here: https://stackoverflow.com/a/61965358/4762996.
H: Clusters: how to improve results for text classification I am trying to classify texts using kmeans, TfidfVectorizer, PCA. However, it seems that many texts are not correctly classified as you can see: I have texts in cluster2 that should be in Cluster 0 or 1. My question is on how to improve the results, if increasing the number of clusters or adding more constraints (like specific words to look at for clustering texts). Help and suggestions will be greatly appreciated. Code: def preprocessing(line): line = re.sub(r"[^a-zA-Z]", " ", line.lower()) words = word_tokenize(line) words_lemmed = [WordNetLemmatizer().lemmatize(w) for w in words if w not in stop_words] return words_lemmed vect =TfidfVectorizer(tokenizer=preprocessing) vectorized_text=vect.fit_transform(df['Text']) kmeans =KMeans(n_clusters=n).fit(vectorized_text) cl=kmeans.predict(vectorized_text) df['Predicted']=pd.Series(cl, index=df.index) df.groupby("Predicted").count() kmeans_labels =KMeans(n_clusters=n).fit(vectorized_text).labels_ pipeline = Pipeline([('tfidf', TfidfVectorizer())]) X = pipeline.fit_transform(df['Text']).todense() pca = PCA(n_components=n).fit(X) data2D = pca.transform(X) kmeans.fit(X) centers2D = pca.transform(kmeans.cluster_centers_) labels=kmeans.labels_ cluster_name = ["Cluster"+str(i) for i in set(labels)] AI: You have tagged this as text-classification, but describe your efforts to cluster the documents. My answer is based on the assumption that you don't know how the documents are grouped for you to classify new documents. The first step would be to determine the clusters/classes/groups and then classify new documents. There are at least 2 things one can think of for this problem. Clustering Algorithm Data for the clustering algorithm 1. Clustering Algorithm K-Means clustering is a simple and fast algorithm that produces adequate results. It is almost always the first tool that people reach for, but it has some limitations a) User must choose the number of clusters ahead of running it b) Data must not be very sparse c) Data must be reasonably distributed around the centroid - spherical in nature See also, k-Means Advantages and Disadvantages. Do you know apriori, how many clusters you have? You could experiment by running the algorithm multiple times with different cluster sizes and observing the silhouette score. Then choose the one with the best score or one that fits your expectations the best. There are other algorithms that might suit your needs better, like DBSCAN or Agglomerative clustering. They don't require the user to choose number of parameters, but do require you to pass in metrics to determine cluster separation. I've used DBSCAN effectively, but don't have as much experience with Agglomerative clustering. Once again, you may have to experiment with the _min_samples_ and eps parameters for DBSCAN. SKlearn has good description in the Clustering User Guide. 2. Data for clustering algorithm TF-IDF is a good metric for the text and it is a simple and quick way to produce. It is a representation of the different terms that occur with the documents and the weight associated with it. You have a few different options to try out sklearn.TfidfVectorizer can take a custom vocabulary. This will help by reducing the vocabulary to important terms. Also, as suggested by @Graph4Me, you should eliminate stopwords. Both of these techniques will reduce the sparsity of the matrix, in turn making your clustering algorithms more efficient. Change from a TF-IDF model to use an word embedding. 
This is more complex, but you could use GloVe or BERT to produce them. SpaCy can also produce an embedding vector. This will give you a contextual representation rather than just a count/density based on the presence of words.
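As a concrete sketch of the two suggestions above — trimming the vocabulary and letting the silhouette score guide the number of clusters — where the vocabulary limits and the range of k are arbitrary choices:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Smaller, denser matrix: drop stopwords and rare terms, cap the vocabulary size
vect = TfidfVectorizer(stop_words="english", min_df=5, max_features=5000)
X = vect.fit_transform(df["Text"])

# Higher silhouette = better-separated clusters; pick k near the peak
for k in range(2, 10):
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(X)
    print(k, silhouette_score(X, labels))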
H: Interpreting Loss in neural network: Neural network train loss gradually tappers and validation loss never reaches a minima Unable to improve the network validation loss. Is it overfitting/underfitting. How can I get a better validation loss?.The code is below def create_model(lr=0.05): #tf.random.set_seed(1) tf.keras.backend.clear_session() gc.collect() # Dense input dense_input = Input(shape=(len(dense_cols), ), name='dense1') # Embedding input #Turns positive integers (indexes) into dense vectors of fixed size. wday_input = Input(shape=(1,), name='wday') month_input = Input(shape=(1,), name='month') event_type_1_input = Input(shape=(1,), name='event_type_1') item_id_input = Input(shape=(1,), name='item_id') dept_id_input = Input(shape=(1,), name='dept_id') store_id_input = Input(shape=(1,), name='store_id') cat_id_input = Input(shape=(1,), name='cat_id') state_id_input = Input(shape=(1,), name='state_id') wday_emb = Flatten()(Embedding(7, 3)(wday_input)) month_emb = Flatten()(Embedding(12, 2)(month_input)) event_type_1_emb = Flatten()(Embedding(5, 1)(event_type_1_input)) item_id_emb = Flatten()(Embedding(3049, 3)(item_id_input)) dept_id_emb = Flatten()(Embedding(7, 1)(dept_id_input)) store_id_emb = Flatten()(Embedding(10, 1)(store_id_input)) cat_id_emb = Flatten()(Embedding(3, 1)(cat_id_input)) state_id_emb = Flatten()(Embedding(3, 1)(state_id_input)) # Combine dense and embedding parts and add dense layers. Exit on linear scale. x1 = concatenate([dense_input, event_type_1_emb, wday_emb , month_emb, item_id_emb, dept_id_emb, store_id_emb, cat_id_emb, state_id_emb]) x = BatchNormalization()(x1) x = Dense(7142, activation=None,kernel_initializer='lecun_normal',kernel_regularizer= regularizers.l1_l2(0.001))(x) x = BatchNormalization()(x) x = Activation("selu")(x) x = AlphaDropout(0.30)(x) x = Dense(714, activation=None,kernel_initializer='lecun_normal',kernel_regularizer = regularizers.l2(0.001))(x) x = BatchNormalization()(x) x = Activation("selu")(x) x = AlphaDropout(0.3)(x) x = Dense(34, activation = None,kernel_initializer='lecun_normal',kernel_regularizer = regularizers.l2(0.001))(x) x = BatchNormalization()(x) x = Activation("selu")(x) x = Add()([x,x1]) outputs = Dense(1, activation="softplus", name='output',kernel_regularizer = regularizers.l2(0.001))(x) inputs = {"dense1": dense_input, "wday": wday_input, "month": month_input,# "year": year_input, "event_type_1": event_type_1_input, "item_id": item_id_input, "dept_id": dept_id_input, "store_id": store_id_input, "cat_id": cat_id_input, "state_id": state_id_input} # Connect input and output model = Model(inputs, outputs) model.compile(loss=keras.losses.mean_squared_error, metrics=["mse","mape","mae"], #optimizer=keras.optimizers.SGD(learning_rate=lr_schedule)) optimizer=keras.optimizers.RMSprop(learning_rate=lr)) return model ``` AI: this is a case of overfitting: your model performs very good just after few epochs but only in the training set, while on the validation set it improves too slowly. There are several tricks that you can use: 1) a simpler network: you can lower the dimensions of the embeddig layers, as example. I also see some of the embeddings you are using can be avoided: the case of the month_emb, which I suppose stands for "month", and with have a dimensionality of 12. Is much better to use months as a simple number, or a categorical feature. 2) got more data: this is the hardest and often impossible way to proceed, but often is the best one. 3) reduce the number of inputs, by transforming/joining some of them.
H: CSVs: how to output missing data to make processing easier? If supplying data (that can be either string or numeric) via CSVs, what's a good strategy for marking that a value is missing?
Some non-empty sentinel, like NA
Some non-empty sentinel, but quoted like "NA"
The empty string, e.g. in the CSV the value will be 0 characters long.
The quoted empty string ""
Something else / depending on case?
If it makes a difference, I would like the CSVs to behave reasonably when read using R readr, Python Pandas, and Excel. AI: Both pandas and readr offer flexibility on the marker used to indicate missing values, allowing you to specify which values should be treated as missing when reading CSVs: readr https://readr.tidyverse.org/reference/read_delim.html pandas https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html Given that, maybe you can handle missing data in your code rather than replacing values in the original CSVs. If you get to choose, a plain, unquoted NA is a sensible default as it is treated as a missing value by both libraries by default.
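For example, in pandas any of those variants can be read back as proper missing values (readr's read_csv has an equivalent na argument, which defaults to c("", "NA")):

import pandas as pd

# Both a bare NA and a zero-length field come back as NaN
df = pd.read_csv("data.csv", na_values=["NA", ""], keep_default_na=True)
print(df.isna().sum())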
H: How to work with Log-transformation? I'm beginning my data science journey and I've faced a challenge that confuses me a bit. I have a set with few features and a target variable whose raw distribution is highly skewed. I've read that it's possible to use a log transformation to normalize the target variable (loss in $) and thus increase the accuracy. When I train my model with "y_raw", using MAE I get an error of 306k. When I log-transform y = y.transform(np.log) I get MAE accuracy of around 2 (log-transformed units I suppose?), which is e^2 = 7.39 (y_raw). This is a significant drop from 306k to only 7.39 ($) (or am I getting it wrong?), so I am a bit suspicious about it. So here are my questions: 1) Did I get it correct that the error rate drop from 306k to only 7.39 is real and is valid? 2) How do I make a predictions from there? If I feed a sample to my model, receive a log-transformed output, lets say it returned a prediction of y_log = 10. Do I then simply use an inverse of it by placing e^10 = 22,026.5 and will it be my final prediction? AI: Taking the log doesn't result in a normally-distributed target; it would tend to if the target was log-normally distributed, and you have something normalish there, not quite. But, this distribution isn't actually what matters. What taking the log does is change your model of how errors arise when fitting a regressor. You're now saying that the target values are $e^{P + \epsilon}$ where $P$ is your model's prediction and $\epsilon$ is Gaussian noise. Or: $e^P e^\epsilon$. That part directly interacts with the assumptions in your regressor. So what you're finding is that on average the predictions are wrong by a factor of 7.39, not +/- $7.39. What you really want to do is evaluate MAE on actual target values vs $e^P$ . You probably have a better model but not that much better.
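In code, with model, X_train, etc. standing in for your own objects, that looks roughly like this (if the target can be zero, the np.log1p/np.expm1 pair is a safer choice):

import numpy as np
from sklearn.metrics import mean_absolute_error

model.fit(X_train, np.log(y_train))               # train on log-transformed targets
pred_dollars = np.exp(model.predict(X_test))      # invert the transform: e^P
print(mean_absolute_error(y_test, pred_dollars))  # MAE back in $, comparable to the 306k figure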
H: in NLP academic paper, would it be okay to refer the "token embeddings" as "tokens"? I am writing a paper in Natural Language Processing (NLP), and I just have a quick question about terminology. In language models like Transformers, "token" refers to individual word in a text sequence, whereas there is a special term "token embedding" to refer to the embedding that results after token gets passed through the initial embedding layer. Would it be problematic if I just refer a "token embedding" as a "token"? (e.g. "interaction between hidden embeddings and token embeddings" ---> "interaction between hidden embeddings and tokens") I am trying to accommodate the different terminologies, but my sentences are getting really wordy... Thank you, AI: At least to me, it would sound strange, as I would understand tokens as the discrete textual units they are, not their assigned vectors. I would suggest that you don't try to force nonstandard simplifications. Just say what you want, in a technically accurate way, preferably using short sentences to avoid making it difficult for the reader to follow your discourse.
H: What is syntax V and S standing for nominal subject? I was reading the recent paper https://www.aclweb.org/anthology/P19-1580.pdf and noticed that in section 5.2, the syntactic relation is studied in terms of the "direction between two tokens". In table 1, the result is further shown with direction like $$v \to s, s \to v$$ for nsubj(nominal subject). However, what does V and S refer to here and where I can find more materials about this ? AI: This is explained in the Methodology section of the paper: We calculate for each head how often it assigns its maximum attention weight (excluding EOS) to a token with which it is in one of the aforementioned dependency relations. We count each relation separately and allow the relation to hold in either direction between the two tokens. The nsubj dependency label is used in arcs between the verb ($v$) and a subject ($s$). In the paper, they define this "dependency score" taking the maximum attention weight and checking if it points to the expected word according to the dependency parse arc, in either of the directions. This is why they count it from verb to subject and from subject to verb. If you want to know more about the conventions usually followed in dependency parsing, including these nsubj, amod, etc, you can take a look at the Stanford typed dependencies manual.
H: Random Forest in R with only character variables I am new to using random forest in R and my goal is to identify the independent variables which have the highest impact on the dependent variable. I am looking at sales data, and sales is my dependent variable (1 vs. 0). I have other variables which have different levels, such as professional status (retired, employed, unemployed), searching for (myself, parent, other) and region (north, west, south), etc. summary(data) tells me that the class of my variables is character (the dependent variable shows Min, 1st Qu., Median - so I assume R reads it as continuous?) and I believe that a character variable needs to be converted to a factor before I can run the randomForest command. Is there a single command that transforms all characters into factors? My second question is whether I should remove the id of the customer from my imported table, or whether it will affect the results if I keep it in the RF model? AI: You can check the class of the dependent variable with class(df$dependent); you are expecting it to be numeric. To convert multiple columns to factors, you can do something like this:

factor_cols <- c("col_1", "col_7")
df[factor_cols] <- lapply(df[factor_cols], as.factor)

If you keep the customer id, then you will have a problem when applying your model to a new customer.
H: What supervised machine learning model can be used to generate a scorecard-like result? A scorecard is typically used in Credit Application. One very common model for developing a credit scorecard is logistic regression since it has well-defined probabilities. Apart from logistic regression, is there any model that can be used in the scorecard? For example, I don't know whether Support Vector Machine can be used since it only outputs a decision boundary. More on the scorecard: Features are assigned with weightings All features are categorical The sum of weightings of all features with value True is the total score (like a checklist) There will be a cutoff point to classify good/bad (label, +1,-1) How far from the cutoff point represents probabilities. AI: It depends what you mean by "can be used": any regression algorithm can be used, the question is how reliably it would perform. You can compare different algorithms experimentally (if you have a dataset). [Updated after question edited] In general the way to use ML with this kind of setting is to train a classification model based only on the categorical features. Depending on the type of algorithm, the combination of features might not always be a weighted sum, and the result label may or may not be based on a cutoff point. In order to have a cutoff point (thus a numerical prediction), the method must be a soft classification method. Alternatively a regression model could be trained for predicting the numerical value. So that leaves you with many options: soft classification: linear/logistic regression, Naive Bayes, ... regression: linear/logistic regression, SVM, decision trees, ... Note: technically the probability doesn't represent "how far from the cutoff point", it represents the probability of the instance being positive (p=1).
H: Can bidirectional RNN use variable sequence length? A bidirectional RNN consists of two RNNs, one for the forward and another for the backward sequential directions, which outcome is concatenated at each time step. Would this configuration restrict the model to always use a fixed sequence length? Or would it still work as the unidirectional RNN, which can be applied to any sequence length? This question was raised because the bidirectional architecture merges the output of both forward and backward RNNs at each time step. Thus, if the sequence length was 4, the outputs of both forward and backward RNNs would merge this way: 1th forward with 4th backward, 2nd forward with 3rd backward,... 4th forward with 1st backward. However, if a different sequence length was used this merge order would be modified: Let's say the network was trained with sequence length 4, but at test time a sequence length of 5 was used. Merge would be: 1th forward with 5th backward, 2nd forward with 4th backward... 5th forward with 1th backward. Would this shift in merge order negatively affect the bidirectional RNN performance? AI: The short answer is no, a bidirectional architecture will still take in a variable sequence length. To understand why, you should understand how padding works. For example, let's say you are implementing a bidirectional LSTM-RNN in tensorflow on variable length time series data for multiple subjects. The input is a 3D array with shape: [n_subjects, [n_features, [n_timesteps...] ...] ...] so to ensure that the array has consistent dimensions, you pad the other subject's features up to the length of the subject with features measured for the longest period of time. Let's say subject 1 has one feature with values = [22,20,19,21,33,22,44,21,19,26,27] measured at times = [0,1,2,3,4,5,6,7,8,9,10]. subject 2 has one feature with values = [21,12,22,30,13,42,20] measured at times = [0,1,2,3,4,5,6]. You would pad features for Subject 2 by extending the array so that the padded_values = [21,12,22,30,13,42,20,0,0,0,0] at times = [0,1,2,3,4,5,6,7,8,9,10], then do the same thing for every subsequent subject. This means the number of timesteps for each subject can be variable, and the merge you refer to occurs with the dimension for that particular subject. Below is an example of a bidirectional LSTM-RNN architecture for a model that predicts sleep stages for different subjects using biometric features measured over variable lengths of time.
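As a minimal Keras sketch of this idea: pad the sequences with a sentinel value and add a Masking layer, so both directions skip the padded timesteps and the time axis can stay variable (None):

import tensorflow as tf
from tensorflow.keras import layers

n_features = 1
model = tf.keras.Sequential([
    # 0.0 marks padded timesteps; pick a sentinel that cannot occur in real data
    layers.Masking(mask_value=0.0, input_shape=(None, n_features)),
    layers.Bidirectional(layers.LSTM(32, return_sequences=True)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")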
H: Use serialized model without installing dependencies I prototyped an ML model consisting of preprocessing + multiple stacked regressors. I would like a colleague of mine to develop an API that will query the model. Is there any way to query the model (sklearn pipeline) without having to download all the dependencies (XGBoost, LGBM, CatBoost, ...). I tried to serialize it with Joblib but when we deserialize it on another machine it requires to have dependencies installed. The goal is really to transform the sklearn's pipeline to a complete inert black box that requires minimal setup. Is it possible? AI: You may be able to convert the entire model pipeline to a standardized format. PMML is such a format, and there are tools (e.g. jpmml) to convert all your named modeling package objects to PMML, though perhaps you've used something else that isn't already easily-converted. Otherwise, just force installation of dependencies (and make it easy), through a virtual environment or docker image or ...
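As a hedged sketch of the PMML route using the sklearn2pmml package (which wraps jpmml and requires a Java runtime); whether stacked XGBoost/LGBM/CatBoost components convert cleanly depends on the available converters, so treat this as a starting point rather than a guarantee:

from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Re-wrap the preprocessing + model steps in a PMMLPipeline, fit, then export
pipeline = PMMLPipeline([
    ("scaler", StandardScaler()),
    ("model", LogisticRegression()),
])
pipeline.fit(X_train, y_train)        # X_train, y_train: your training data
sklearn2pmml(pipeline, "model.pmml")  # the .pmml file can be served by any PMML engine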
H: PCA for complex-valued data I'm quite shocked to encounter this error with PCA from sklearn ValueError: Complex data not supported after trying to fit complex-valued data. Is this just an unimplemented feature? Should I just go ahead and do it 'manually' with SVD, or is there a catch for complex values? AI: Apparently this functionality is left out intentionally, see here. I'm afraid you have to use SVD, but that should be fairly straightforward:

import numpy as np

def pca(X):
    # Centre the (possibly complex-valued) data
    mean = X.mean(axis=0)
    center = X - mean
    # SVD of the scaled, centred data: squared singular values are the component
    # variances, and the rows of pcs are the principal components
    _, stds, pcs = np.linalg.svd(center / np.sqrt(X.shape[0]))
    return stds**2, pcs
H: ValueError: X has 3 features per sample; expecting 1500 import numpy as np from flask import Flask, request, jsonify, render_template import pickle from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.feature_extraction.text import TfidfTransformer from sklearn.linear_model import LogisticRegression from flask_bootstrap import Bootstrap app = Flask(__name__) Bootstrap(app) model = pickle.load(open('model.pkl', 'rb')) @app.route('/') def home(): return render_template('index.html') @app.route('/predict',methods=['POST']) def predict(): ''' For rendering results on HTML GUI ''' vectorizer = TfidfVectorizer() namequery = request.form['namequery'] data = [namequery] vect = vectorizer.fit_transform(data) my_prediction = model.predict(vect) return render_template('result.html',prediction = my_prediction) if __name__ == "__main__": app.run(debug=True) here is preprocessing of dataset code import numpy as np import pandas as pd import re import string import nltk from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.feature_extraction.text import TfidfTransformer from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn import metrics import matplotlib.pyplot as plt import seaborn as sns import pickle data = pd.read_csv('train.csv') #removing id column data=data.drop('id',axis=1) string.punctuation nltk.download('stopwords') stopword=nltk.corpus.stopwords.words('english') def remove_punct(text): text_nopunct=''.join([char for char in text if char not in string.punctuation]) return [word for word in text_nopunct.split() if word.lower() not in stopword] x=data['text'].apply(remove_punct) ps=nltk.PorterStemmer() def stemming(text): text=[ps.stem(word)for word in text] return ' '.join(text) x=x.apply(stemming) vectorizer = TfidfVectorizer(max_features=1500) X = vectorizer.fit_transform(x) print(vectorizer.get_feature_names()) y=data['target'] X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.25,random_state=0) # instantiate the model (using the default parameters) logreg = LogisticRegression() # fit the model with data logreg.fit(X_train,y_train) # y_pred=logreg.predict(X_test) from sklearn import metrics cnf_matrix = metrics.confusion_matrix(y_test, y_pred) cnf_matrix class_names=[0,1] # name of classes fig, ax = plt.subplots() tick_marks = np.arange(len(class_names)) plt.xticks(tick_marks, class_names) plt.yticks(tick_marks, class_names) # create heatmap sns.heatmap(pd.DataFrame(cnf_matrix), annot=True, cmap="YlGnBu" ,fmt='g') ax.xaxis.set_label_position("top") plt.tight_layout() plt.title('Confusion matrix', y=1.1) plt.ylabel('Actual label') plt.xlabel('Predicted label') # Saving model to disk pickle.dump(logreg, open('model.pkl','wb')) # Loading model to compare the results model = pickle.load(open('model.pkl','rb')) please help me someone to reslove this error. Iam new to machine learning am not understanding where i made mistake. Thanks in advance!! AI: You should not so a fit transform while prediction. You should save the fit model in your preprocessing code for the tfidfvectorizer and use the same vectorizer while predicting. Otherwise your vectorizer is learning new word representations and whose dimensions might not be 1500 what you have specified in your preprocessing code.
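A sketch of that fix, reusing the vectorizer and logreg objects from the preprocessing script: persist the fitted vectorizer alongside the model, then in the Flask app load both and only ever call transform() on incoming text:

import pickle

# In the training/preprocessing script: persist the *fitted* vectorizer too
pickle.dump(vectorizer, open("vectorizer.pkl", "wb"))
pickle.dump(logreg, open("model.pkl", "wb"))

# In the Flask app: load both once at startup, never call fit_transform again
vectorizer = pickle.load(open("vectorizer.pkl", "rb"))
model = pickle.load(open("model.pkl", "rb"))

def predict_text(text):
    vect = vectorizer.transform([text])   # same 1500-dimensional space as training
    return model.predict(vect)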
H: Implement the following loss function without interrupting the gradient chain registered by the gradient tape I have spent five days trying to implement the following algorithm as a loss function to use in my neural network, but it has been impossible for me. Impossible because, when I finally implemented it, I got the error: No gradients provided for any variable: ['conv1_1/kernel:0', 'conv1_1/bias:0', 'conv1_2/kernel:0' I'm implementing a semantic segmentation network to identify brain tumours. The network is always returning very good accuracy, about 94.5%, but when I plot the real mask against the network outputs, I see that the accuracy is really 0% because it doesn't plot any white dot. By the way, the mask image only has values between 0.0 (black) and 1.0 (white). So, I have decided to implement my own loss function, which has to do the following: In a nutshell, sum the mask image with the output and count how many 2.0 values are in this sum. Compare this count of 2.0 values with the count of 1.0 values in the mask image, and get the error. In more detail: Convert the output values of the model into 0.0 or 1.0. Sum that output with the mask (whose values are also 0.0 or 1.0). Count how many 2.0 are in this sum. Count how many 1.0 are in the mask. Return the difference between them. My question is: is there already a tensorflow function that does that? Now, I get that network output because I'm using Euclidean distance. I have implemented the loss function using tf.norm:

def loss(model, x, y):
    global output
    output = model(x)
    return tf.norm(y - output)

And then, I use this loss function to tape gradients:

def grad(model, inputs, targets):
    with tf.GradientTape() as tape:
        loss_value = loss(model, inputs, targets)
    return loss_value, tape.gradient(loss_value, model.trainable_variables)

Maybe what I am trying to do is another kind of distance. AI: Your intuition about counting the 2s is pretty good. For this type of problem you can use the dice loss function, which uses your idea but slightly differently. The values of the predicted and ground truth components (each pixel) are either 0 or 1, representing whether the pixel belongs to the label (value of 1) or not (value of 0). Therefore, the denominator is the sum of total labeled pixels of both prediction and ground truth, and the numerator is the sum of correctly predicted boundary pixels, because the sum increments only when predicted and ground truth match (both of value 1). As cited here: For example, if two sets A and B overlap perfectly, DSC gets its maximum value of 1. Otherwise, DSC starts to decrease, getting to its minimum value of 0 if the two sets don't overlap at all. Therefore, the range of DSC is between 0 and 1, the larger the better. Thus we can use 1-DSC as Dice loss to maximize the overlap between two sets. The dice loss is commonly used in segmentation tasks because it is resilient to class imbalance (e.g. too much background). You can implement it in tensorflow keras like this:

from tensorflow.keras import backend as K

def dice_coef(y_true, y_pred):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    # overlap between prediction and ground truth
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + 1.0) / (K.sum(y_true_f) + K.sum(y_pred_f) + 1.0)

def dice_coef_loss(y_true, y_pred):
    return 1 - dice_coef(y_true, y_pred)

(In the implementation we use a smoothing term, here 1, added to the numerator and the denominator; the smaller it is, the less it distorts the score. The smoothing term is there to avoid dividing by 0 when the input has no labels in it.)
H: neural network probability output and loss function (example: dice loss) A commonly loss function used for semantic segmentation is the dice loss function. (see the image below. It resume how I understand it) Using it with a neural network, the output layer can yield label with a softmax or probability with a sigmoid. But how the dice loss works with a probility output ? The numerator multiply each label (1 or 0) of the predicted and ground truth. Both pixels need to be set to 1 to be in the green area. What is the result with a probability like 0.7 ? Does the numerator result in a floating number (ie with ground truth = [1, 1] and predicted = [0.7, 0], the "green" area of the numerator would be 0.7) ? AI: I think there is a bit of confusion here. The dice coefficient is defined for binary classification. Softmax is used for multiclass classification. Softmax and sigmoid are both interpreted as probabilities, the difference is in what these probabilities are. For binary classification they are basically equivalent, but for multiclass classification there is a difference. When you apply sigmoid, you make sure that all output neurons are between 0 and 1. When you apply softmax, you make sure that all output neurons are between 0 and 1 AND that they sum up to 1. This means, when the output is sigmoid, the input data can be in several classes at the same time. For softmax, you force the network to pick one of the classes. The code you posted here is correct for binary classification, but for multiclass, there is some intricacy when it comes to combining the classes. Since in your example the target consists of two pixels, each labeled either 0 or 1, you are dealing with binary classification and should use pixel-wise sigmoid in the first place, i.e. the probabilities from your model should be e.g. [0.7, 0.8] or something like that. Pixel-wise softmax should only be used if each pixel could be in only one of many classes and softmax over all pixels does not make much sense, as this would force the model to pick one out of many pixels to label as 1.
H: Finding a range of values for each feature that contribute to positive classification Consider a classification problem(lets say 2 classes, 'good' and 'bad), where all features are continuous. what I need is a range of values for each feature that contributes to 'good' classification. What I thought was simply partitioning the feature values based on good or bad label, problem is all values don't equally contribute for good/bad classification. So what methods can be applied to find such range for each feature ? AI: It's impossible in general, simply because a particular value or range for feature A might correspond to class 'good' if feature B has a certain value/range but correspond to class 'bad' otherwise. In other words, the features are inter-dependent so there's no way to be sure that a certain range for a particular feature is always associated with a particular class. That being said, it's possible to simplify the problem and assume that the features are independent: that's exactly what Naive Bayes classification does. So if you train a NB classifier and look at the estimated probabilities for every feature, you should obtain more or less the information you're looking for. Another option which takes into account the dependency between variables is to train a simple decision tree model: by looking at the conditions in the tree you should see which combinations of features/ranges lead to which class.
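As a small sketch of the decision tree option, assuming X is a DataFrame with your continuous features and y holds the 'good'/'bad' labels:

from sklearn.tree import DecisionTreeClassifier, export_text

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
# Prints human-readable rules, i.e. feature thresholds that lead to each class
print(export_text(tree, feature_names=list(X.columns)))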
H: Naive Bayes vs Full Bayes model classifiers I have a hard time understanding when Naive Bayes works better than Full Bayes. In general, I know that Naive Bayes makes the assumption that features are independent given the class. However, if the features are indeed independent, does it mean that assuming they are dependent yields worse results? For example, I got these data points for two features; each class is colored in a different color. Now my intuition is that Naive Bayes will work well here: given a specific class we have two different distributions and both are "unstructured". However, I ran Naive Bayes (with a normal pdf) and full Bayes (with a multivariate pdf) classifiers on that data and got the same accuracy. AI: There's no clear definition of "Full Bayes" as a classifier. Most "real world" non-Naive Bayesian classifiers take into account some but not all dependencies between features. That is, they make independence assumptions based on the meaning of the features. If by "full Bayesian" you mean a joint model (as your example suggests), then one of the problems is that such a model doesn't generalize: it just describes the probabilities in the training set, and that implies that it's likely to overfit badly. This is actually why NB works quite well in most cases: yes it makes unrealistic independence assumptions, but this simplification allows the model to capture basic patterns from the data. In other words, the ability of the model to generalize comes from its excessively simplified assumptions. Note: as far as I can tell, your example is well chosen and you should see a big difference between NB and a joint model: NB should perform no better than a random baseline while the joint model should obtain near perfect accuracy. There's probably a mistake somewhere if you don't obtain these results. But while this is a good toy example, it cannot help you understand the advantage of the NB assumptions.
H: Using random forest when not all the observations have data I have this given database: I would like to predict column "y" using the columns "index_1","index_2","index_3" using the random forest classifier. as you can see, column "size: does not have values for each observation. My question is : Can I still use random forest classifier when I don't have data for all the observations, and if yes, is it ok?should I give value (e.g "noData") to the empty cells? will it harm the prediction? or maybe no need? AI: In theory decision trees (and random forests) are able to deal with missing values in the data. But whether a particular implementation of the algorithm allows this (and how to use it with this implementation) depends on the specific package.
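For example, scikit-learn's RandomForestClassifier has historically not accepted NaN values directly, in which case a common workaround is to impute before fitting. A minimal sketch under that assumption (the variable names are placeholders, not from the question):
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Fill missing values (e.g. with the column median) before fitting the forest
model = make_pipeline(SimpleImputer(strategy="median"), RandomForestClassifier())
model.fit(X_train, y_train)
Adding an explicit "missing" indicator column is another option; whether either helps or harms the prediction depends on why the values are missing.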
H: Different ways to detect the appropriate number of topics I implement LDA topic modeling in R. One essential parameter is the number of topics. Which of the following ways would be the most suitable:
1. mallet
2. stm
3. or this way: https://cran.r-project.org/web/packages/ldatuning/vignettes/topics.html
AI: There's no "most suitable" way, but there might be one which works better with your data. The only way to know that would be to try all of them. In case choosing the number of topics is an issue, you might be interested in using the non-parametric extension of LDA for topic modeling, which doesn't require you to specify the number of topics: this is called Hierarchical Dirichlet Processes, see for instance this introduction.
H: What types of matrix multiplication are used in Machine Learning? When are they used? I'm looking at equations for neural networks and backpropagation and I see this symbol in the equations, ⊙. I thought matrix multiplication of neural networks always involved matrices that matched dimensions on both sides, such as... [3, 3]@[3, 2]. (This is what is happening in the animated gif). What part of a neural net uses a Hadamard product and which uses the Kronecker product? Because I see this notation for the Hadamard product (⊙) in papers and deep learning learning materials. AI: There are two distinct computations in neural networks, feed-forward and backpropagation. Their computations are similar in that they both use regular matrix multiplication, neither a Hadamard product nor a Kronecker product is necessary. However, some implementations can use the Hadamard product to optimize the implementation. However, in a convolutional neural networks (CNN), the filters do use a variation of the Hadamard product. Multiplication in Neural Networks Let's look at a simple neural network with 3 input features $[x_1, x_2, x_3]$ and 2 possible output classes $[y_1, y_2]$. Feedforward pass In the feed-forward pass the input features will be multiplied by the weights at each layer to produce the outputs $\begin{bmatrix} x_1&x_2&x_3 \end{bmatrix} * \begin{bmatrix} w_{1,1} & w_{1,2} & w_{1,3} & w_{1,4}\\ w_{2,1} & w_{2,2} & w_{2,3} & w_{2,4}\\ w_{3,1} & w_{3,2} & w_{3,3} & w_{3,4} \end{bmatrix} = \begin{bmatrix} h'_1 & h'_2 & h'_3 & h'_4 \end{bmatrix}$ At the hidden layer these will then go through the activation function, if we assume sigmoid then $\begin{bmatrix} h_1 & h_2 & h_3 & h_4 \end{bmatrix} = \frac{1}{1+e^{\begin{bmatrix} -h'_1 & -h'_2 & -h'_3 & -h'_4 \end{bmatrix}}}$ Finally we go through the next set of weights to the output neurons $\begin{bmatrix} h_1 & h_2 & h_3 & h_4 \end{bmatrix} * \begin{bmatrix} v_{1,1} & v_{1,2}\\ v_{2,1} & v_{2,2}\\ v_{3,1} & v_{3,2}\\ v_{4,1} & v_{4,2} \end{bmatrix} = \begin{bmatrix} y'_1 & y'_2 \end{bmatrix}$ $\begin{bmatrix} y_1 & y_2 \end{bmatrix} = \frac{1}{1+e^{\begin{bmatrix} -y'_1 & -y'_2 \end{bmatrix}}}$ Backpropagation pass In the backpropagation we will update the weights through gradient descent. Usually derivations will ignore the need for the Hadamard product by just representing the derivatives with indexes, or implying them implicitly. However the Hadamard product can be used to be more explicit in the following places. $v^{new} = v^{old} - \eta \frac{\partial C}{\partial v}$ $\frac{\partial C}{\partial v} = \frac{\partial C}{\partial \hat{y}} \circ \frac{\partial \hat{y}}{\partial v}$. $\frac{\partial C}{\partial \hat{y}} = \hat{y} - y$ $\frac{\partial \hat{y}}{\partial v} = \frac{1}{1+exp(v^Th + b)} \circ (1 - \frac{1}{1+exp(v^Th + b)})$. Let's look at why we can define this last equation as Hadamard product. The $(v^Th + b)$ is computed as (I ignored the bias term) $\begin{bmatrix} v_{1,1} & v_{2,1} & v_{3,1} & v_{4,1}\\ v_{1,2} & v_{2,2} & v_{3,2} & v_{4,2} \end{bmatrix} * \begin{bmatrix} h_{1} \\ h_{2} \\ h_{3} \\ h_{4} \end{bmatrix} = \begin{bmatrix} \sum v_{i,1} * h_{i} \\ \sum v_{i,2} * h_{i} \end{bmatrix}$ $\frac{\partial \hat{y}}{\partial v} = \frac{1}{1+exp(v^Th + b)} \circ (1 - \frac{1}{1+exp(v^Th + b)}) = \begin{bmatrix} \frac{1}{1 + e^{\sum v_{i,1} * h_{i}}} \\ \frac{1}{1 + e^{\sum v_{i,2} * h_{i}}} \end{bmatrix} \circ \begin{bmatrix} 1 - \frac{1}{1 + e^{\sum v_{i,1} * h_{i}}} \\ 1 - \frac{1}{1 + e^{\sum v_{i,2} * h_{i}}} \end{bmatrix} $. 
$\frac{\partial C}{\partial \hat{y}} = \hat{y} - y = \begin{bmatrix} \hat{y}_1\\ \hat{y}_2 \end{bmatrix} - \begin{bmatrix} y_1\\ y_2 \end{bmatrix} = \begin{bmatrix} \hat{y}_1 - y_1\\ \hat{y}_2 - y_2 \end{bmatrix}$
$\frac{\partial C}{\partial v} = \frac{\partial C}{\partial \hat{y}} \circ \frac{\partial \hat{y}}{\partial v} = \begin{bmatrix} \hat{y}_1 - y_1\\ \hat{y}_2 - y_2 \end{bmatrix} \circ \begin{bmatrix} \frac{1}{1 + e^{\sum v_{i,1} * h_{i}}} \\ \frac{1}{1 + e^{\sum v_{i,2} * h_{i}}} \end{bmatrix} \circ \begin{bmatrix} 1 - \frac{1}{1 + e^{\sum v_{i,1} * h_{i}}} \\ 1 - \frac{1}{1 + e^{\sum v_{i,2} * h_{i}}} \end{bmatrix}$
As you can see, all the matrix multiplications in both these steps are simple matrix multiplications, but the Hadamard product can simplify the representation if used. Convolutional Neural Networks CNNs add an additional filtering step before passing through the weights. It passes a filter through the matrices to get a value that represents a neighborhood around a value. The filter takes the Hadamard product and then sums all the elements of the resulting matrix. For example, if we have the matrix in green with the convolution filter, then the resulting operation is an element-wise multiplication and addition of the terms as shown below. This kernel (orange matrix) $g$ is shifted across the entire function (green matrix) $f$, performing a Hadamard product at each step and then summing the elements. Note: The Hadamard product is often defined for matrices of exactly the same dimensions; this is relaxed in CNNs due to the filter moving through the image successively. It is also possible to perform the Hadamard product on the edges of the image, where different padding techniques can be used.
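As a small illustration of the element-wise step described above, here is a hedged sketch in numpy (the 2x2 matrices are made up for the example):
import numpy as np

# Hadamard (element-wise) product of two equally-sized matrices
a = np.array([[1, 2], [3, 4]])
b = np.array([[10, 20], [30, 40]])
hadamard = a * b                      # [[10, 40], [90, 160]]

# One convolution step: element-wise multiply a kernel with an image patch, then sum
patch = np.array([[0, 1], [2, 3]])
kernel = np.array([[1, 0], [0, 1]])
conv_value = np.sum(patch * kernel)   # 0*1 + 1*0 + 2*0 + 3*1 = 3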
H: How to identify similar words as input from a dictionary Let's say I have a CSV file (single column) with a list of words as input, meaning the file looks like below.
Input Terms
Subaaaash
Paruuru
Mettromin
Paracetamol
Crocedinin
Vehiclehcle
Buildding
Dict terms (this dictionary has around a million records and I have to find the closest match for each input term from it)
Metformin 250 MG
Metformin
.....
Crocin
Vehcile
Paru
Subash
Now, I expect my output to be as shown below. As you can see, the red-colored Paru is not a correct match for Paracetamol, but that's the closest match we can get for the input term Paracetamol from the dictionary. So, I would like to do matching based on 1) word sounds (when pronounced), i.e. phonetics, 2) spelling mistake corrections, 3) finding the best matching word from the dictionary for input terms. Can you let me know how I can do the above? AI: So, your question concerns how to effectively translate the input words into their proposed correct words (e.g. Paruuuu --> Paru) via phonetics and spelling mistake corrections. My first idea on this would be to use a deep sequence to sequence model. In a sequence to sequence model, we encode the input word (e.g. Paruuuu) as a sequence of characters into an encoder (effectively an RNN / LSTM, etc.) into a "hidden representation". Then you decode your hidden representation as a sequence of phonemes (which denote how we pronounce the word) with a decoder (again, another RNN / LSTM, etc.). Then, we can take this sequence of phonemes as input into another encoder and then decode with a neural network, where the output layer is a softmax layer, which computes the probability distribution over all words in your vocabulary (in your Dict terms), and select the word with the highest probability. So overall, we have: Encode input word --> Decode output phoneme sequence --> Encode output phoneme sequence --> Decode with neural network to classify word. The proposed method is supervised, so of course, you will need examples of input words and their correct "translations" (e.g. ("Paruuu", "Paru")). Here is an article which gives a good intuition behind sequence to sequence models which can classify your input words: https://towardsdatascience.com/understanding-encoder-decoder-sequence-to-sequence-model-679e04af4346
H: Classification model to classify large number of classifiers? Hello I am very new into the field of machine learning/deep learning , and I am finding it hard to select the right model for my research. What I am trying to build is a model to classify which subway route a user have used based on travel time and transfer time given the origin station and destination station. Here is a description of my data set. BSEC BSTN ASTN1 BSTN2 ASTN2 BSTN3 ASTN3 BSTN4 ASTN4 BSTN5 ASTN TFtime Ttime 69551 1001 1703 1703 0 0 0 0 0 0 1003 399 2933 69664 1001 1703 1703 0 0 0 0 0 0 1006 399 2284 66606 1001 1703 1703 0 0 0 0 0 0 1701 118 1750 66600 1001 1703 1703 0 0 0 0 0 0 1701 118 1750 66601 1001 1703 1703 0 0 0 0 0 0 1701 118 1750 69434 1001 0 0 0 0 0 0 0 0 1703 0 1005 ASTN1,BSTN2,ASTN2...BSTN5 refers to via stations BSTN ASTN refers to boarding and arrival stations. I have a another data set of route information labeled. The problem starts here. I am trying to build a model that can classify which route a user have used given BSTN, ASTN, and time info BSEC, TFtime, Ttime. There are too many labels of routes because the routes all differs for each pair of Origin Station and Destination Station. Below are number of routes per origin station and destination station BSTN ASTN trips <dbl> <dbl> <int> 1 150 152 3 2 150 153 7 3 150 154 2 4 150 156 2 5 150 157 2 6 150 158 4 as described there are already 20 different routes for only 5 Origin Destination pair. There are total of 109,425 pairs of origin and destination and 236,213 number of routes. I could not give label to every 236,213 routes for the model to classify. i tried making random forest model for every pair of Origin Destination pair. But I was not able to tune or interpret them because there are too many type of models. What would be a proper model for my situation? Would there be a way the model could interpret given OD pair, and then perform classification within the Origin Destinatnion pair assembly? I would really appreciate some advice or help. AI: It looks like a very difficult problem, since there are many possible classes and very little information in the features to distinguish them. For the record, the reverse problem of estimating the travel time based on the route would probably be more feasible. So you can't expect great performance on a problem like this, the goal will be to design the problem in a way which makes things as simple as possible for the classifier to do a decent enough job. Here are some suggestions: Start with training a model specific to a pair BSTN,ASTN. Discard the least likely routes, i.e. routes which are rarely used for the pair BSTN,ASTN (for instance routes with frequency lower than 10). Inspect the data to see if the features allow a distinction between the (main) classes. For instance you can plot the distribution of BSEC, TFtime, Ttime for different routes: if the distributions are close there's little chance the classifier will succeed. You can also train a decision tree and inspect it manually so see what happens.
H: How to divide a dataset for training and testing when the features and targets are in two different files? I am trying to divide a dataset into a training dataset and a testing dataset for multi-label classification. The dataset I am working on is this one. It is divided into a file which contains the features and another file which contains the targets. They look like this below. This is the image about the features. This is the image about the targets. I intend to use this dataset for multi-label classification. I am following this tutorial. Here the dataset looks like this. The dataset that I am working on has 17203824 samples and 58255 different and unique labels in the target file. So to follow the tutorial, what I intend to create is a new numpy 2D array with 17203824 rows and 58255 columns where appropriate indices will be marked with 1. I am able to create it. But when I try to populate the appropriate indices with 1s, I am getting an error. It says that I don't have enough memory. My code is given below.
questions = pd.read_csv("/kaggle/input/stacklite/questions.csv")
question_tags = pd.read_csv("/kaggle/input/stacklite/question_tags.csv")
d = {v: i[0] for i, v in np.ndenumerate(question_tags["Tag"].unique())}
y = np.zeros([questions.shape[0], len(question_tags["Tag"].unique())], dtype = int)
for k in question_tags["Tag"]:
    j = d[k]
    for i, l in enumerate(y):
        y[i][j] = 1
Can anyone please help in telling me how I should proceed? AI: I suggest you look at some of the common Python libraries that will convert a column value into labels. Many of these functions have been around a while, and are optimized to use less memory and/or run faster. For example, you can use what is called "One Hot Encoding" from get_dummies() in pandas or "LabelEncoder" from sklearn. Here is a good reference with many methods to try, depending on your needs. https://pbpython.com/categorical-encoding.html Here is a sample of how word embedding works. Each word in this example is reduced to 2 values (x and y).
H: Tensorflow / Keras Python CNN I'm doing a project where a Python script used a convolutional neural network to determine if a plant is healthy, and then water it based on that. While training the CNN, it seems to get up to 100% accuracy quite early, although it isn't accurate. I only have a little less than 2000 images, and was wondering if I didn't have enough, or it was my model, which is here self.model = Sequential() self.model.add(Conv2D(numFilters, filterSize, activation='relu', input_shape=(IMG_SIZE, IMG_SIZE, 3))) self.model.add(Conv2D(numFilters * 2, (3, 3), activation='relu')) self.model.add(MaxPooling2D(pool_size=poolSize)) self.model.add(Dropout(0.25)) self.model.add(Flatten()) self.model.add(Dense(numFilters * 4, activation='relu')) self.model.add(Dropout(0.5)) self.model.add(Dense(2, activation='softmax')) self.model.compile(loss='categorical_crossentropy', optimizer = 'adam', metrics=['accuracy']) I would just like to know the reason why it doesn't train well. AI: Not sure if your model is too simple, I have used the same number of layers before and had very good results. Guess it depends on many factors, such as the training image dataset, etc. The high accuracy is coming from learning the training data quickly, but the low accuracy means the model you built doesn't apply to the test data, so the model is over-fitting. Some possible causes: 1) not enough training data 2) training images are too similar to each other, but different from the test data 3) model is too complex and does not generalize well 4) etc Possible fixes: 1) If your training data is too small, you can use public pre-built models (VGGnn, etc) as a starting point. This will work best on models that were built with similar data. 2) If your images are too similar to each other, you can alleviate this by using image augmentation. The keras.preprocessing.image.ImageDataGenerator library is good for this. Here is an example on how to use: https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html 3) If the model is too complex, you can try to simplify by changing filter sizes, or add regularization
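As a hedged sketch of the augmentation mentioned in fix 2 (the parameter values here are arbitrary examples, not tuned for plant images):
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=20,        # random rotations
    width_shift_range=0.1,    # random horizontal shifts
    height_shift_range=0.1,   # random vertical shifts
    zoom_range=0.1,
    horizontal_flip=True)

# model.fit(datagen.flow(X_train, y_train, batch_size=32), epochs=...)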
H: looking for public dataset for stock market I want to do some modelling and data visualization on historical stock data, including price, volume, financials, etc. Is there a public dataset available for stock price history? I looked at a few, but either they have a high cost or I'm not sure they would be reliable. Free would be preferred, also established and reliable. If not, what are some good options for collecting the data myself? Maybe web scraping, or public APIs, etc. AI: You can have a look at the Kaggle stock dataset: https://www.kaggle.com/borismarjanovic/price-volume-data-for-all-us-stocks-etfs These questions are normally asked on the Open Data Stack Exchange: https://opendata.stackexchange.com/
H: Are there machine learning methods that can give captions to the New Yorker magazine cartoons? Are there machine learning methods that can give captions to the New Yorker magazine cartoons? AI: Yes. Image captioning is a very active area of Deep Learning, at the intersection between Computer Vision and Natural Language Processing. These models are based on an Encoder-Decoder structure. In its most classical form, the Encoder consists of Convolutional layers that process pixel data. Their representation is then fed to a Decoder based on Recurrent layers (usually LSTM or GRU) that generate the output sentence. I suggest you to take a look at this tutorial on Image Captioning in TensorFlow 2. You can download the Notebook and run. That specific model is based on attention mechanism, which is a fancier version of seq2seq models. You can find tons of other models and tutorial simply googling them.
H: Keeping DataFrame in Memory while working - R vs Python in VS Code I am considering migrating from R-Studio to Python. So I am just starting Python with VS Code as my editor. My purpose is mainly to analyze data, build predictive models and make such models available via a web API. When working in R-studio with a large data set, I can run a script to read in the data from say a CSV file, then I can explore the data using the normal statistical approaches. I only need to load the data set ONCE, which is great since loading a large data set often takes some time. And data exploration requires lots of repeated "probes" to view histograms, compare counts etc. So not having to reload the data each time I change a script or run a script to view a plot, is a great advantage. However, using Python in VS Code, it seems to me that if I load the data frame say in a method at the top of a script, then have a method to draw a graph, then each time I run that script, the data is reloaded. So if I want to draw 20 histograms, each time changing only the title, I need to do the data reload each time I modify and run the script. Am I missing a feature of Python and VS Code or is my summary correct? AI: This is more about the IDE than the programming language. When you use RStudio, you can import the data and the IDE saves it in its memory so you don't have to reload it when you write subsequent code in that same session. To do this in Python, consider using a Jupyter Notebook or something similar. That way, you only need to import your dataframe once, and you can then perform exploratory analysis, visualizations, and build models as needed.
H: Why don't different output weights break symmetry? My deep learning lecturer told us that if a hidden node has identical input weights to another, then the weights will remain the same over the training/there will be no separation. This is confusing to me, because my expectation was that you should only need the input or output weights to be different. Even if the input nodes are identical, the output weights would affect the backpropagated gradient at the hidden nodes and create divergence. Why isn't this the case? AI: It is fine if a hidden node has identical initial weights with nodes in a different layer, which is what I assume you mean by output weights. The problem with weight-symmetry arises when nodes within the same layer that are connected to the same inputs with the same activation function are initialized identically. To see this, the output of a node $i$ within a hidden layer is given by $$\alpha_i = \sigma(W_i^{T}x + b) $$ where $\sigma$ is the activation function, $W$ is the weight matrix, $x$ is input, $b$ is bias. If the weights $W_{i}=W_{j}$ are identical for nodes $i,j$ (note that bias is typically initialized to 0), then $\alpha_i = \alpha_j$ and the backpropagation pass will update both nodes identically.
H: Precision and recall clarification I'm reading the book Fundamentals of Machine Learning for Predictive Data Analytics by Kelleher, et al. I've come across something that I think as an error but I want to check to be sure. When explaining precision and recall the authors write: Email classification is a good application scenario in which the different information provided by precision and recall is useful. The precision value tells us how likely it is that a genuine ham email could be marked as spam and, presumably, deleted: 25% (1 − precision). Recall, on the other hand, tells us how likely it is that a spam email will be missed by the system and end up in our inbox: 33.333% (1 − recall). Precision is defined as: $TP \over {TP + FP}$. Thus: $$1 - precision = 1 - {TP \over TP+FP} = {FP \over TP + FP} = P(\textrm{prediction incorrect}|\textrm{prediction positive})$$ So this should give us the probability that an email marked as ham (positive prediction) is actually spam. So precision and recall in the quote above should be switched? AI: It's very likely that the authors assume that the spam class is positive, whereas you intuitively associated the ham class with positive. Both options make sense in my opinion: the former interpretation is based on the idea that the goal of the task is to detect the spam emails, seen as the class of interest. the later interpretation considers that the ham emails are the "good ones", the ones that we want, hence the "positive" class. There's no error when one reads the paragraph with the authors' interpretation in mind. This confusion illustrates why one should always clearly define which class is defined as positive in a binary classification problem :)
H: Multivariate, Multi-step LSTM time series forecast I'm trying to predict the Pollution using a multivariate and multi-step LSTM code; I've been following this tutorial. I've been following the code until the end, but couldn't understand where the code writer determined the Pollution column to be the output (predicting the Pollution column instead of another column). I'm new to Python, and it got me confused. At first, I thought that this is the part of the code where we define it, but I was wrong:
# invert scaling for forecast
pred_scaler = MinMaxScaler(feature_range=(0, 1)).fit(dataset.values[:,0].reshape(-1, 1))
inv_yhat = pred_scaler.inverse_transform(yhat)
print(inv_yhat.shape)
# invert scaling for actual
inv_y = pred_scaler.inverse_transform(test_y)
print(inv_y.shape)
Any explanations on how he determined the pollution column out of the 7 other columns to be the output?
AI: I know this tutorial, it's a good start for RNNs but it contains a lot of passages and transformations that could have been kept shorter. First, he defines a function called series_to_supervised() to process data to be fed into an RNN. Paragraph 3, line 37:
reframed = series_to_supervised(scaled, 1, 1)
This reframed dataframe contains all the data, both the y column and all the X variables used to make a prediction. In the following code block it is turned into a numpy array at line 2:
values = reframed.values
Ok, so now all our information is stored in values. Now it's time to separate it into train and test:
train = values[:n_train_hours, :]
test = values[n_train_hours:, :]
And again, each of train and test is separated into x and y pieces:
# split into input and outputs
train_X, train_y = train[:, :-1], train[:, -1]
test_X, test_y = test[:, :-1], test[:, -1]
Lines 7-8. This is where the dependent variable is separated from the rest. Knowing it was the last column, it was extracted with index -1 (i.e. the last element). As I said, this tutorial is a good start to learn time series prediction with RNNs. However, I find that sometimes he tried to simplify the steps so much... that he ended up with some messy parts. All those objects: reframed, values, train, test, ... there was no need to make so many of them. That apart, I'm a fan of the blog. It provided a lot of useful tips on RNNs.
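To make the [:, :-1] / [:, -1] split concrete, here is a small hedged sketch with made-up numbers (not the tutorial's actual data):
import numpy as np

# Each row: [feature_1, feature_2, target]; the target (pollution) is the last column
values = np.array([[1.0, 2.0, 0.5],
                   [3.0, 4.0, 0.7]])
X = values[:, :-1]   # all columns except the last -> [[1. 2.], [3. 4.]]
y = values[:, -1]    # only the last column       -> [0.5 0.7]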
H: Unusually High BLEU score on an NMT model This is a project on Neural Machine Translation for the English/Irish language pair. I have been spending the past month or so trying to train a good baseline to do 'experimentation' on. I have a corpus of ~850k sentences (unfortunately Irish is very limiting). When I trained it and evaluated it with BLEU, I got a score of 65.02, which is obviously absurdly incorrect. These were my fairseq-train settings:
!CUDA_VISIBLE_DEVICES=0 fairseq-train data-bin-full_corp/MayNMT \
    --lr 5e-4 --lr-scheduler inverse_sqrt --optimizer adam \
    --clip-norm 0.1 --dropout 0.2 --max-tokens 4096 \
    --arch transformer --save-dir checkpoints/full-tran
I know not everyone uses Fairseq in NLP, but I hope the arguments are self-explanatory. I deduplicated the dataset (converted to a Python set(), which only keeps unique entries), so I don't think the issue is that the dev/valid and test sets contain duplicate entries, but I'm not sure what else causes this. Some suggest overfitting may be a cause, but I feel that would only affect the BLEU if the dev set shared training entries. I've tried to find the problem myself, but there aren't many places that cover NMT, let alone BLEU. AI: According to recent publications, it is not impossible to get BLEU scores as high as yours for English→Irish. Nevertheless, without any other knowledge, they certainly seem too high. From the command line arguments, there does not seem to be any evident problem. The most probable explanation is, as you already pointed out, a data leakage between validation/test and training. Note that, while you removed exact duplicates, you may be getting partial matches that go unnoticed. You may want to look into different similarity metrics. The most straightforward is the Jaccard Similarity.
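As a hedged sketch of the near-duplicate check the answer suggests (test_sentences and train_sentences are assumed lists, not objects from the original post; this naive loop is slow on 850k sentences but shows the idea):
def jaccard(a, b):
    # Token-level Jaccard similarity between two sentences
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

# Flag test sentences that nearly match some training sentence
suspicious = [t for t in test_sentences
              if any(jaccard(t, s) > 0.9 for s in train_sentences)]
print(len(suspicious), "test sentences look like partial copies of training data")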
H: What is the use of the np.ndarray() function and how is it used? I just started Data Science and I want to know what the np.ndarray() function's use is, and in which scenario(s) it is used. AI: numpy.ndarray is not a function but a class, as you can see in the documentation. An ndarray is meant to store data, it is just a multidimensional matrix. You store your matrix data in an ndarray and then you operate on it, adding to other ndarrays, or any other matrix operation you want to compute. From the documentation of ndarray: An ndarray is a (usually fixed-size) multidimensional container of items of the same type and size. The number of dimensions and items in an array is defined by its shape, which is a tuple of N non-negative integers that specify the sizes of each dimension. The type of items in the array is specified by a separate data-type object (dtype), one of which is associated with each ndarray. While you can create yourself instances of numpy.ndarray by invoking its constructor, most of the times, arrays are created with the convenience function numpy.array, which ends up creating and returning an instance of numpy.ndarray. For instance, let's create a 3x3 identity matrix manually: import numpy as np a = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) And now we can compute operations on it, e.g.: b = a + a print(b) The result is: [[2 0 0] [0 2 0] [0 0 2]]
H: For imbalanced classification, should the validation dataset be balanced? I am building a binary classification model for imbalanced data (e.g., 90% Pos class vs 10% Neg class). I already balanced my training dataset to reflect a 50/50 class split, while my holdout (test) dataset was kept similar to the original data distribution (i.e., 90% vs 10%). My question is regarding the validation data used during the CV hyperparameter process. During each CV fold, should: 1) both the training and test folds be balanced, or 2) the training fold be kept balanced while the validation fold is made imbalanced to reflect the original data distribution and holdout dataset? I am currently using the 1st option to tune my model; however, is this approach valid given that the holdout and validation datasets have different distributions? AI: Both test and validation datasets should have the same distribution. In such a case, the performance metrics on the validation dataset are a good approximation of the performance metrics on the test dataset. However, the training dataset can be different. Also, it is fine and sometimes helpful to balance the training dataset. On the other hand, balancing the test dataset could lead to a biased estimate of the model's performance, because the test dataset should reflect the original data imbalance. As I mentioned at the beginning, the test and validation datasets should have the same distribution. Since balancing the test dataset is not allowed, the validation dataset should not be balanced either. Additionally, I should mention that when you balance the test dataset, you will get a better performance in comparison to using an unbalanced dataset for testing. And of course, using a balanced test set does not make sense as explained above. So, the resulting performance is not reliable unless you use an unbalanced dataset with the same distribution of classes as the actual data.
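A minimal sketch of keeping the holdout and the validation folds at the original class ratio with scikit-learn (the variable names are assumptions):
from sklearn.model_selection import train_test_split, StratifiedKFold

# Hold out a test set that preserves the original 90/10 imbalance
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Stratified CV keeps each validation fold at the original class distribution;
# any balancing (e.g. oversampling) should then be applied to the training fold only.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)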
H: Is epsilon error a standard known error or custom created by this paper? I'm reading this computer vision paper, research paper link, about creating a model to estimate the real age and perceived age of the person in an image (or at least that's what I think it's about). The perceived age is decided by this method: each image is looked at by 10 independent individuals and they estimate the age of the person. The mean and standard deviation are taken from the age guesses of the 10 individuals. The paper then goes on to use an epsilon error given by an equation as part of the model evaluation for perceived age, and states the following: the evaluation employs fitting a normal distribution with the mean µ and standard deviation σ of the votes for each image. The paper also says that this epsilon error covers the aspect of the uncertainty of the ground truth age. The results are then posted in a table showing different epsilon error values based on different models. My questions are the following:
What is that equation and is it standardly used?
What is x in this equation? I thought it was some type of feature scaling of the image, but that makes no sense because the results are all different in the table and completely out of context anyway. Is x supposed to be the predicted age?
AI: What is that equation and is it standardly used? I haven't seen exactly that before, so I guess it's not "standard", but it's not so unusual. The equation is "just" 1 - (normal distribution). If you think of the normal distribution as a measure of "how similar a sample is to the mean", then that equation turns the similarity (big is good) into a distance (small is good). It looks related to "robust loss functions"; see below. What is x in this equation? Is x supposed to be the predicted age? Your second thought is right: $x$ is the predicted age. They use that setup since they don't have a single correct answer for their data, but rather many guesses/votes as to what the correct age is. From the paper: The LAP challenge evaluation employs fitting a normal distribution with the mean μ and standard deviation σ of the votes for each image. Say you have two images. For one, every guess you have for the age is the same. For another, you get a wide range of guesses for the age. The second image must be "harder" somehow. This equation gives you an error measure that tells you how the algorithm performed relative to the many people that guessed the true age. Robust losses: check out this work. The "epsilon-error" equation from the paper you linked to will look sort of like the bottom (orange) trace in Figure 1:
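For reference, a hedged sketch of the ε-error as the answer describes it, i.e. 1 minus a Gaussian evaluated at the prediction (the exact constants in the paper may differ from this):
import numpy as np

def epsilon_error(x, mu, sigma):
    # x: predicted age; mu, sigma: mean and std of the human votes for that image
    return 1.0 - np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# Example: votes with mean 30 and std 5
print(epsilon_error(30, 30, 5))   # 0.0  (prediction matches the vote mean)
print(epsilon_error(45, 30, 5))   # close to 1 (far outside the vote distribution)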
H: Multivariate Linear Regression with exponential trend-line The dataset that I'm working with has 2 independent variables (qty, volume) and 1 dependent variable (cost). When I plot each individual X against Y, it turns out qty vs cost gives an exponential decay trend line while volume vs cost gives a linear relationship. I'm trying to come up with a linear model (since it's beginner-friendly) to predict the cost when a new qty and volume input is given. I tried Excel, and while the statistics output looks reasonable, the predicted values are off quite a lot. I read up on some articles and there is something regarding log transformation so that both qty and volume have a linear relationship with cost. I'm not sure if a log transformation is applicable in this situation. What would be the best way to approach this problem? I plan to use R instead of Excel next. Here are some outputs of what I've done. Note: Thickness, width, and height essentially become volume, so it doesn't affect the data much. AI: I would suggest transforming "qty" into log space. You can even do this using Excel. You can make a new column (for example qty_log), which is equal to the log of "qty". Then, you can fit a linear regression as follows: $cost = a_1 \cdot \log(qty) + a_2 \cdot volume + a_0$ You will get much better results in this case. This equation simply means: try to fit a model using volume and the log of qty for predicting cost. You did so before, but you used qty instead of log(qty).
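A minimal sketch of the suggested fit in Python (the asker plans to use R, but this mirrors the other examples in this document; the DataFrame columns are assumptions):
import numpy as np
from sklearn.linear_model import LinearRegression

# Assume df has columns 'qty', 'volume', 'cost'
X = np.column_stack([np.log(df['qty']), df['volume']])
model = LinearRegression().fit(X, df['cost'])
print(model.coef_, model.intercept_)   # a_1, a_2 and a_0 from the equation above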
H: How to understand the performance of different machine learning models? I have a dataset, which contains the processing conditions (i.e., 42 features) and the property (i.e., 1 target) of a class of material. To know the performance of different machine learning models, I tested five different machine learning models by considering different numbers of features in training. These models are linear regression (LR), Bayesian ridge (BR) , k-nearest neighbor (NN), random forest (RF), and support vector machines (SVM) regression. The coefficient of determination (R2) for the test dataset is used to represent the performance of trained machine learning models. As we can see that the max. accuracy of these models are in order: RF>BR~LR~SVM>NN. The top 8 features are required to obtain good accuracy for RF and afterward the accuracy is almost independent of the number of top-ranking features. The performance of BR, LR and SVM continuously improve with an increasing number of top-ranking features until reaching the maximum with top 26 features. NN exhibits different trends from others. It already reaches the best performance with top 8-12 features but gets worse and worse with more features. I am wondering what are the possible reasons for this results? Some directions or some hints? for example: why the accuracy is in the order of RF>BR~LR~SVM>NN. why RF model reaches a good accuracy with a few features then keeps almost constant with more features? why linear-based models BR and LR have very similar performance as the SVM models. why NN model reaches a good accuracy with a few features then the accuracy reduces with increasing numbers of features? I understand the reason is from case to case, but what is the general explanation or directions for finding the answers? AI: First, it would be beneficial if you could mention whether that $R2$ presented in the figure is for the test or training dataset. Let's assume that it is for the test dataset. The answer to all of your questions is subjective and depends on the type of data you used for this purpose. But let me briefly answer them. Because the Random Forest model (RF) is an ensembling method (a combination of weak learners), it is expected to have a very good performance. This method is robust to overfitting. We can see that when the number of features increases, there is no drop in the performance of the RF model; while using a large number of features could lead to a drop in the performance of the model on the test dataset. RF model is not prone to overfitting because of using many weak learners (decision trees here). Linear Regression and SVM probably perform the same because you might use a linear kernel for SVM. If that is not the case, it means that transforming the data to a new space (SVM transforms the data into new space to easily separate the classes) is not useful in your dataset because of the nature of your data (it means that your data might be linearly separable). It seems that you did not use regularization for linear regression. If that is true, your model is overfitted especially when you have a large number of features. If you add regularization to your model, the performance of the linear regression model does not decrease as the number of features increases. Naive Bayes is good when features are independent and when dependencies of features from each other are similar between features. So, if they are not true for your dataset that might be the reason that Naive Bayes does not work well. 
Also, Naive Bayes classifiers can quickly learn to use high-dimensional features with limited training data because of the independence assumption. So, it is expected that Naive Bayes works better when the number of features >> sample size compared to more sophisticated ML algorithms. But we can see that this is not the case in your study. Probably, it is because of the violation of the features' independence assumption.
H: Need for Dense layer in Text Classification While creating a model for text classification, what is the need for a Dense layer? I noticed in multiple examples the following structure. A softmax is what is required, right, instead of the Dense layer?
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(encoder.vocab_size, 64),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1)
])
Consider the following sentence in 5-class classification: "movie is good". The model structure could be: a = activation_unit, emb = embedding_vector(word), a0 -> emb("movie") -> a1 -> emb("is") -> a2 -> emb("good") -> a3, and sample_y = softmax(np.dot(Wya, a3)), and sample_y = [0.1, 0.2, 0.2, 0.4, 0.1], which says the sentence belongs to "class 4". So where is the need for a "Dense layer"? Can anyone please explain this? AI: In neural networks meant for classification, you need a linear layer before the softmax to project the internal representation, which has some dimensionality $d_i$, to the output space, which has dimensionality $d_o$ equal to the number of choices (5 in your case). So you either place a Dense(5) layer after the BiLSTM or you take the output of the BiLSTM "manually" and implement the projection. The code above has some strange things:
It uses numpy.dot to multiply the output of the BiLSTM. Is this a typo, and did you actually mean tf.matmul?
The model ends with a tf.keras.layers.Dense(1), maybe because it was originally meant for binary classification.
It has both a Dense layer and then a dot product (i.e. matrix multiplication). These two operations are equivalent to a single Dense layer, so it is pointless to have both.
So to answer your question: assuming that the np.dot actually means a tf matrix multiplication, then the Dense layer in the model is pointless.
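To make the answer concrete, here is a hedged sketch of the 5-class version (layer sizes copied from the question; the Dense(5) softmax head and the compile settings are the suggested changes, not code from the original post):
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(encoder.vocab_size, 64),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(5, activation='softmax')   # projects to the 5 classes
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])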
H: ExtraTreesRegressor criterion As I understand, ExtraTreesRegressor from sklearn works by doing random splits instead of minimizing a metric like gini for classification or mae for regression. I don't understand why there's a criterion parameter, as the criterion for the splits should be random. Is it just for code compatibility, or am I missing something? AI: No, extremely-random trees does still optimize splits. It does only pick one random splitting point for each feature (out of those randomly chosen max_features) but then which feature is actually used for the split depends on the criterion chosen. https://scikit-learn.org/stable/modules/ensemble.html#extremely-randomized-trees
H: Should the baseline comparison be done on the training or the test set results? I have a classification problem where I want to find out whether feature engineering has improved my final model. Cross-validation is used evaluate the impact of the feature engineering steps, so there is no validation set (only train/test). In short, my situation entails the following: Collect data Train baseline model Feature engineering Train final model Compare final model against baseline (question) Comparing the baseline and final models, I assume, can be done by running both models on the test set, subsequently evaluating the differences in their results (if any). However, I wonder if it is useful to compare the models using the training set as well/instead. It would be great if someone could elaborate on this issue. AI: You definitely want the comparison to be based on the test set: Evaluating on the training set doesn't make sense for all the usual reasons. Especially in the case of different features, comparing the performance on the training set could be badly misleading: if one of the model overfits, its performance on the training set will appear better but its real performance (on the test set) is likely worse. Note that it might make sense to study what happens on the training set (e.g. to measure overfitting), but that cannot be the real evaluation used for comparing the models.
H: How to get back the original non-stationary data after performing diff operations in VAR (time series)? I have applied a VAR model on multiple features, but the VAR model accepts stationary data, so I converted the non-stationary data to stationary data by doing diff() and gave that data to the VAR model. My question is: is there an option to get the original (non-stationary) data back from the differenced series? AI: You simply have to cumulatively sum the resulting time series: the cumulative sums, plus the initial value, recover your original input.
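A minimal sketch of inverting a first difference with pandas (the series here is made up):
import pandas as pd

s = pd.Series([10, 12, 15, 14])
d = s.diff().dropna()               # differenced (stationary) series: 2, 3, -1
restored = d.cumsum() + s.iloc[0]   # cumulative sum plus the first value
# restored equals s[1:] -> 12, 15, 14
The same idea applies to each differenced feature (and to the forecasts), applied once per diff() taken.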
H: Is there any problem with dropping only part of the OneHot generated features? The one hot encoder adds more columns to the data, one for each category in the encoded feature. In the example below, the column City was transformed into 4 other columns. Suppose a Decision Tree is ran on a dataset the below is part of and City_Chicago and City_New_York appear to be in top most important features while City_Detroit and City_SanFrancisco in the least important. Would there be any problem if I drop City_Detroit and City_SanFrancisco from my dataset, but keep City_Chicago and City_New_York or do I need to keep all city features as they are part of one initial feature? |---------------------|------------------|-------------|---------------|---------------| | City | City_SanFrancisco| City_Detroit| City_New_York | City_Chicago | |---------------------|------------------|-------------|---------------|---------------| | San Francisco | 1 | 0 | 0 | 0 | |---------------------|------------------|-------------|---------------|---------------| | Detroit | 0 | 1 | 0 | 0 | |---------------------|------------------|-------------|---------------|---------------| | New York | 0 | 0 | 1 | 0 | |---------------------|------------------|-------------|---------------|---------------| | Chicago | 0 | 0 | 0 | 1 | |---------------------|------------------|-------------|---------------|---------------| AI: I think you can keep as many as you want, and it'll be alright. Sometimes it's even worth to delete the very rare classes to have more stable features. In addition, for linear regression, you shouldn't include all of them, as you might have a collinearity issue. To sum up, no problem with not keeping them all.
H: Machine learning goal: given a population of 100,000 students, predict a group of 3,000, and minimize the median grade of that group In other words, I am looking to predict students that will fail out of school before it happens. The data includes socioeconomic status and other related variables. I have tried an XGB binary classification (both tree and forest), but the problem is that it doesn't penalize severely wrong answers (predicting that a student will be in the bottom 3% in terms of grades, but they're actually A+ students). The result is that the average grades of the predicted students are quite low, but the median grades aren't actually that bad: there are a few extremely bad students that pull down the average but not the median. I have tried an XGB regression (both tree and forest), but the problem is that I can't get the model to focus on the bottom 3%. It seeks to reduce error for all predictions. I couldn't care less about telling the difference between an A student and a B student; I only need to consistently identify the bottom 3%. I was thinking that perhaps this could lend itself to reinforcement learning instead of supervised learning, but I know nothing about reinforcement... Would it be possible to make a reinforcement model where the goal is to minimize the median grades of the 3% of students predicted? Or are there any other machine learning techniques that would work? AI: Try writing a custom loss function for a regression model! Keras' neural networks support this, for example. See https://stackoverflow.com/q/43818584/745868 (But many other libraries give support for this as well.) The only special thing about your custom loss function is that it doesn't add up the error of a datapoint if min(pred_y, actual_y) >= THRESHOLD
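A hedged sketch of that idea in Keras (the threshold value and the assumption that grades are rescaled to [0, 1] are mine, not from the answer; this is an untested sketch, not a tuned recipe):
import tensorflow as tf

THRESHOLD = 0.03  # assumed: grades rescaled so the bottom 3% falls below this value

def bottom_tail_loss(y_true, y_pred):
    # Only count the error when either the true or predicted grade is in the low tail,
    # so the model is not penalized for confusing an A student with a B student.
    mask = tf.cast(tf.minimum(y_true, y_pred) < THRESHOLD, tf.float32)
    return tf.reduce_mean(mask * tf.square(y_true - y_pred))

# model.compile(optimizer='adam', loss=bottom_tail_loss)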
H: How to calculate the effect (percentage) of the input variables on the output variable with DecisionTreeClassifier A description of the problem is below. I have 10 words (features) like X1, X2, X3, ..., X10 and three labels: short, long, hold. My problem is how to calculate the effect (percentage) of the input variables on the label with the DecisionTreeClassifier algorithm.
DT = DecisionTreeClassifier()
DT.fit(X_train, y_train)
How can I calculate the effect (percentage) of the input variables on the label? AI: I don't think there is any way of doing that with decision trees, because that's not how decision trees work: the predicted label is not the result of some linear combination of the features. Instead, you can look at the actual decision tree that the model represents and see which features have been used to classify a particular instance.
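A hedged sketch of that kind of inspection with scikit-learn (the feature names reuse the X1..X10 placeholders from the question):
from sklearn.tree import export_text

# Print the learned tree rules to see which features drive each split
print(export_text(DT, feature_names=[f"X{i}" for i in range(1, 11)]))

# Relative importance of each feature over the whole tree (sums to 1; not per-label percentages)
print(DT.feature_importances_)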
H: What is the distribution of the goal feature in this dataset https://www.kaggle.com/kemical/kickstarter-projects I was checking the distribution of the "goal" feature in this 2016 Kickstarter data set. I figured goal would be more normally distributed, but it wasn't. I checked the Kickstarter website for published statistics. It seems the sampling was random, as certain categories appear at the frequencies Kickstarter published, but the data didn't meet the required assumptions for a t-test. I checked the Kaggle site for a summary of the goal data and the distribution looks the same, but I'm very skeptical. I made a Q-Q plot to check normality, but this really can't be right. What type of distribution is the column 'goal' in this data set? If the distribution should look like this, why? Maybe I broke something or put in the wrong code and I wanted to be sure. My gut is telling me to take a serious look at the goal column. AI: It vaguely looks like it's Pareto distributed, meaning that most goals are small, some are medium, and a few are REALLY big. It's fat-tailed and not normally distributed at all. This isn't too surprising: projects vary a lot in nature, and there's no reason to expect them to be clustered around a particular value. Caveat: there are bunching effects at certain round integer values. A nice way to see it is to do a histogram plot with log=True and bump up the number of bins.
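A minimal sketch of the suggested plot (the column name is from the question; the DataFrame name and bin count are assumptions):
import matplotlib.pyplot as plt

plt.hist(df['goal'], bins=200, log=True)   # log-scaled counts make the fat tail visible
plt.xlabel('goal')
plt.ylabel('count (log scale)')
plt.show()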
H: Understanding Loss function and Learning Algorithm In Keras, when specifying a loss such as mean absolute error, does it replace the cost function in the learning algorithm (Adam or SGD) with the mean absolute error? I'm new to ML and a bit confused on this aspect. AI: Yes, they are the same thing: the loss you specify is the cost function that the optimizer (Adam or SGD) minimizes during training. In machine learning models, the loss defines the quantity the model should minimize, and the optimizer is the algorithm used to minimize it. Depending on the type of data, we can use different loss functions. Here is the list of loss functions used in Keras. You can check how different loss functions work.
H: XGB custom objective function - small change to default regression squared error objective function Where can I find the code for the default squared error objective function? I just want to make a small change to re-weight certain datapoints? AI: You can find a good explanation and example on creating a custom objective function here: https://xgboost.readthedocs.io/en/latest/tutorials/custom_metric_obj.html https://github.com/dmlc/xgboost/blob/master/demo/guide-python/custom_objective.py https://github.com/dmlc/xgboost/blob/master/demo/guide-python/custom_rmsle.py Original XGB code repo is here, you may have to do some digging to find the code for your objective function: https://github.com/dmlc/xgboost
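For reference, the default squared-error objective has gradient (pred − label) and hessian 1 per row; a hedged sketch of a re-weighted version for the low-level xgb.train API (the per-row weight vector is an assumption you would supply yourself):
import numpy as np
import xgboost as xgb

def weighted_squared_error(weights):
    # weights: one multiplier per training row, aligned with dtrain
    def objective(preds, dtrain):
        labels = dtrain.get_label()
        grad = weights * (preds - labels)
        hess = weights * np.ones_like(preds)
        return grad, hess
    return objective

# booster = xgb.train(params, dtrain, num_boost_round=100,
#                     obj=weighted_squared_error(weights))
Note that if per-row re-weighting is all you need, passing sample weights to the DMatrix achieves a similar effect without a custom objective.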
H: What happens if at leaf node both classes have same number of samples? I analyzed a small dataset which had three features, so I kept max_depth of decision tree to be 3, in doing so I found it something intresting, there was a leaf node which had number of samples of both classes to be equal and decision tree choose one class, now I am intrested to know how class is decided in such scenario, is it random or some other criteria, I have attached image to explain my scenario AI: This is an implementation detail, and I wouldn't necessarily rely on this behavior, but presently in sklearn, it will choose the "first" class. The predict method calls for the probability prediction, then takes the argmax, which in case of ties takes the first one: https://github.com/scikit-learn/scikit-learn/blob/fd237278e/sklearn/tree/_classes.py#L403 https://numpy.org/doc/stable/reference/generated/numpy.argmax.html
H: XG Boost result interpretation for unbalanced datasets (Accuracy & AUCROC) My dataset is of shape – 5621*8 (binary classification) Label/target : Success (4324, 77 %) & Not success (1297, 23 %) (success and Not success were been equally important for my prediction i.e, TP & TN) I split my data into 3 (Train, Validate, test) For train & Validate i perform 10 fold CV. Test is the seperate data, which I evaluate for each folds I tune my scale_pos_weight ranging between 5 to 80, and Finally I fixed my values as 75 since I got average higher accuracy rate for my Test set (79 %) for those 10 folds But, If i check my average auc_roc metrics it is very poor, i.e only 50 % for all 10 folds. If i did not tune scale_pos_weight my avg.accuracy drops to 50% & my avg auc_roc increases to 70 %. How can I interpret from the above results between AUCROC & Accuracy in this situation? What might be the problem in my case? AI: With Success already being the larger class, you probably shouldn't be using a scale_pos_weight larger than one: you want to scale the positive class's contribution to the loss function down rather than up. I suspect that's what's happening in the first case. With scale_pos_weight=75, the model ends up basically only caring about the positive class, predicts everyone is in the positive class, and so your accuracy is just a little better than the 77% baseline you'd expect with that strategy. With that motivation, it's not too surprising the AUC is poor, although I wouldn't have expected a drop all the way to the 50% baseline... If you don't use scale_pos_weight (you said "if I did not tune", but does that mean you left it at the default 1?), then the model performs better in rank-ordering (AUC=70%), but not so well in the hard classification. You might want to tweak the prediction threshold here; there's probably a different threshold that will perform better for accuracy score. You could also try scale_pos_weight=0.25 or so; that should make the default cutoff better, hopefully with little effect on AUC?
H: Challenge in visualizing large coronavirus clusters in US data I'm trying to take the data on large coronavirus clusters in the US and visualize them to show the sizes and the different settings (prisons, healthcare facilities, etc). I want to show the difference between the different settings. If the sizes were more similar, I'd try to show a stacked bar chart (with size as horizontal axis and count as vertical axis). Unfortunately, that's not working well because some clusters are much bigger than others. The first few lines of my data look like (there are lots of aged care facilities with 50 cases): size category 50 agedcare 50 agedcare 50 agedcare 50 agedcare 50 agedcare 50 agedcare 50 agedcare and the bottom looks like (the prisons and meat packing facilities have huge outbreaks) 931 prisons 981 prisons 1028 prisons 1031 meat 1051 prisons 1065 prisons 1098 meat 1107 prisons 1283 prisons 1362 prisons 1374 prisons 1791 prisons 2439 prisons Here is a visualization of the smaller sizes I can do some binning and I get this: But it's still hard to immediately see that some of these setting types have small outbreaks while others have much larger ones. Any suggestions on how to visualize would help (I use python primarily if that matters) AI: How about a small multiples style visualization? A nice 2 x 3 grid would cover the six categories here. Size on X, frequency on Y. This approach is one of the clearest ways to show this kind of data. The stacked histogram is difficult to interpret because the bars of the same color do not have common starting points. If you make six histograms arranged in a rectangle with common scale on the Y, you can quickly visualize the distribution for each category. Here is a plot made with some simulated data like yours. The top image has all six categories on one plot, like yours. The bottom image facets each category into its own histogram with common scale. You can much more easily compare the six categories.
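A minimal sketch of the small-multiples idea with seaborn (assuming a DataFrame with the 'size' and 'category' columns shown in the question):
import seaborn as sns
import matplotlib.pyplot as plt

g = sns.displot(data=df, x="size", col="category", col_wrap=3,
                bins=30, facet_kws={"sharey": False})
g.set(xscale="log")   # optional: a log x-axis tames the huge prison/meat outbreaks
plt.show()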
H: Transform large pd.Series into a DataFrame of n columns I have a pd.Series with, for instance, n lines. I would like to transform this series in a pd.DataFrame as follows: Ex: Input: pd.Series([10,11,12,13,14,15]) and a variable chunk_size = 2 that will be the number of columns. Target: 0 | 1 _ _ 10 11 12 13 14 15 The target DataFrame will have a shape of (n / chunk_size) rows by chunk_size columns. Thanks in advance. AI: Here is a quick solution that does not do it in-place but takes up extra space: def transform_series(x, chunk_size): df = pd.DataFrame() for i in range(chunk_size): df[f'column_{i+1}'] = x[i::chunk_size].reset_index(drop=True) return df input_series = pd.Series([10,11,12,13,14,15]) transformed_df = transform_series(input_series, chunk_size=2) Output: print(transformed_df) column_1 column_2 0 10 11 1 12 13 2 14 15
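When the series length is an exact multiple of chunk_size, a reshape-based alternative avoids the Python loop entirely (a hedged sketch using the example values from the question):
import pandas as pd

input_series = pd.Series([10, 11, 12, 13, 14, 15])
chunk_size = 2
df = pd.DataFrame(input_series.to_numpy().reshape(-1, chunk_size))
#     0   1
# 0  10  11
# 1  12  13
# 2  14  15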
H: How to calculate the effect (percentage) of the input variables on the output label with BernoulliNB
A description of the problem is below. I have 10 words, X1, X2, X3, ..., X10, and three labels: short, long, hold. My problem is how to calculate the effect (percentage) of the input variables on the label predicted by the BernoulliNB algorithm.
NB = BernoulliNB()
NB.fit(X_train, y_train)
How can I calculate the effect (percentage) of the input variables on the label?
AI: A Naive Bayes model consists of the probabilities $P(X_i|Class)$ for every feature $X_i$ and every label $Class$. So by looking at the parameters of the model one can see how important a particular feature is for a particular class. The opposite could be calculated as well: $P(Class|X_i)$ represents the distribution of the classes given a feature.
Now at the level of individual instances it's not so clear what would be the "effect" of a particular feature. For every class the posterior probability is:
$$P(Class| X_1,..,X_n) = \frac{P(Class)\prod_i P(X_i|Class)}{P(X_1,..,X_n)}$$
You can easily order the features by how much they contribute to the prediction, i.e. the class which obtains the maximum posterior probability (for instance obtain the top 3 features). However you cannot quantify precisely the effect of each feature, because the prediction is not a linear combination of the features.
[Details added following comments]
Due to the NB assumption that the features are independent given the class, we have:
$P(X_1,..,X_n|Class) = \prod_i P(X_i|Class)$
$P(X_1,..,X_n|Class) = P(X_1|Class) * P(X_2|Class) * .. * P(X_n|Class)$
From the chain rule:
$P(Class,X_1,..,X_n) = P(Class) * P(X_1,..,X_n|Class)$
which, substituting the NB assumption, gives:
$P(Class,X_1,..,X_n) = P(Class) * P(X_1|Class) * P(X_2|Class) * .. * P(X_n|Class)$
Now we use the marginal to calculate $P(X_1,..,X_n)$:
$P(X_1,..,X_n) = \sum_j P(Class_j,X_1,..,X_n)$
$P(X_1,..,X_n) = P(Class_1,X_1,..,X_n) + .. + P(Class_k,X_1,..,X_n)$
So at the end we have $P(Class,X_1,..,X_n)$ and $P(X_1,..,X_n)$, so we can calculate:
$P(Class|X_1,..,X_n) = P(Class,X_1,..,X_n) / P(X_1,..,X_n)$
Note that if you do all these steps you should obtain the same probability for $P(Class|X_1,..,X_n)$ as the one returned by the function predict_proba.
Caution: the attributes feature_log_prob_ and class_log_prior_ don't give you the probability directly, they give you the logarithm of the probability. So you need to apply the exponential in order to get back the probability.
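As an illustration, here is a sketch of recovering these probabilities from a fitted scikit-learn BernoulliNB; it assumes a fitted model NB and a single binary feature vector x (both placeholders):

import numpy as np

log_prior = NB.class_log_prior_      # log P(Class), shape (n_classes,)
log_p = NB.feature_log_prob_         # log P(X_i=1 | Class), shape (n_classes, n_features)
p = np.exp(log_p)                    # back to probabilities

# The Bernoulli likelihood uses both presence and absence of each word:
# P(x | Class) = prod_i p_i^x_i * (1 - p_i)^(1 - x_i)
log_likelihood = (x * np.log(p) + (1 - x) * np.log(1 - p)).sum(axis=1)

log_joint = log_prior + log_likelihood                    # log P(Class, x)
posterior = np.exp(log_joint) / np.exp(log_joint).sum()   # P(Class | x)

print(posterior)   # should match NB.predict_proba([x])[0]

For a rough per-feature "effect", you can compare np.exp(NB.feature_log_prob_) across the three classes for the words present in x: the words whose conditional probability differs most between classes are the ones pushing the prediction.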
H: Why would GradientBoostClassifier do better than XGBoostClassifier?
I am working on the Kaggle home loan model and, interestingly enough, the GradientBoostClassifier has a considerably better score than XGBClassifier. At the same time it seems not to overfit as much. (Note, I am running both algos with default settings.) From what I've been reading, XGBClassifier is the same as GradientBoostClassifier, just much faster and more robust. Therefore I am now confused about why XGB would overfit so much more than GradientBoostClassifier, when it should be the contrary. What would be a good reason why this is happening?
AI: GradientBoostClassifier is slower but more precise. In your case, it could be finding a better model without suffering from overfitting. Here are some of the main differences you are looking for.
XGBClassifier was designed to be faster. However, XGBClassifier takes a few shortcuts in order to run faster. For example, to save time, XGBClassifier will use an approximation on the splits, and not spend so much time on calculating and evaluating the best splits. Usually XGB results will be close and you won't see much difference, but your case could be an exception.
If you are still curious, you can try changing some parameters in the XGBClassifier model, such as tree_method='exact', or you can modify the sketch_eps parameter in XGBClassifier to more closely match the GradientBoostClassifier results. Of course, this will slow down the XGBClassifier model as a result.
sketch_eps [default=0.03]
Only used for tree_method=approx. This roughly translates into O(1 / sketch_eps) number of bins. Compared to directly select number of bins, this comes with theoretical guarantee with sketch accuracy. Usually user does not have to tune this. But consider setting to a lower number for more accurate enumeration of split candidates.
range: (0, 1)
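For reference, a minimal sketch of the parameter change suggested above; X_train and y_train are placeholders:

from xgboost import XGBClassifier

# Ask XGBoost for exact split enumeration instead of the approximate,
# sketch-based method; slower, but closer to sklearn's behaviour.
model = XGBClassifier(tree_method="exact")
model.fit(X_train, y_train)

# Alternatively, keep the approximate method but make it finer-grained
# (sketch_eps is only honoured when tree_method="approx"):
# model = XGBClassifier(tree_method="approx", sketch_eps=0.01)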
H: Should I use keras or sklearn for PCA?
Recently I saw that there is some basic overlap in functionality between keras and sklearn regarding data preprocessing. So I am a bit confused about whether I should introduce a dependency on another library like sklearn for basic data preprocessing, or stick with only keras, since I am using keras to build my models. I would like to know the difference for scenarios like:
which is good for production
which will give me a better and faster response
is there any issue with introducing a dependency on another library for just 1 or 2 functions
which has better compatibility with other tools like TensorBoard or libraries like matplotlib, seaborn, etc.
Thanks in advance.
AI: which is good for production
They are both good. sklearn can be used in production as much as tensorflow.keras.
which will give me a better and faster response
I think that doesn't really depend on the library, but rather on the size of your models and of your datasets. That's what really matters. Both modules can be used to create very optimized and fast models.
is there any issue with introducing a dependency on another library for just 1 or 2 functions
There are no issues with using sklearn and tensorflow.keras together. In the ML/Data Science world they are probably the two most common tools. No worries about that!
which has better compatibility with other tools like TensorBoard or libraries like matplotlib, seaborn, etc.
Well, keras is now part of tensorflow (it's tensorflow.keras). TensorBoard is designed specifically for it. Other than that, all other visualization libraries such as matplotlib and seaborn are perfectly compatible.
Final thoughts: use sklearn and keras in sequence without problems; data preprocessing steps can use many more libraries. Don't worry about using one more, especially if it's a very solid and popular one such as sklearn. However, you might want to substitute PCA with autoencoders. That's arguably the best dimensionality reduction technique: it's non-linear, meaning it can carry more information with fewer variables, and it can be implemented in tensorflow.keras. In that way you'd have a neural network that generates a compressed representation of your data, and another that makes the prediction. That's just a suggestion of course; you know your task better than anyone else.
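As an illustration of that last suggestion, here is a minimal sketch of a dense autoencoder used as a PCA-like reducer; input_dim, encoding_dim and X_train are placeholders, not values from the question:

from tensorflow import keras
from tensorflow.keras import layers

input_dim = 100       # number of original features (assumption)
encoding_dim = 10     # size of the compressed representation (assumption)

inputs = keras.Input(shape=(input_dim,))
encoded = layers.Dense(encoding_dim, activation="relu")(inputs)
decoded = layers.Dense(input_dim, activation="linear")(encoded)

autoencoder = keras.Model(inputs, decoded)
encoder = keras.Model(inputs, encoded)   # the part that replaces PCA

autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_train, X_train, epochs=20, batch_size=32, verbose=0)

X_train_compressed = encoder.predict(X_train)  # reduced features for the next model

The encoder output then plays the same role as sklearn's PCA(n_components=encoding_dim).fit_transform(X_train), just with a non-linear mapping.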
H: What is the semantic gap? Why does it exist in AI?
The semantic gap characterizes the difference between two descriptions of an object by different linguistic representations, for instance languages or symbols. The semantic gap can be defined as "the difference in meaning between constructs formed within different representation systems".
I need a clear answer to the second part.
AI: Any ML model assumes a particular representation of the data, and connects bits of information only within this representation. For example, a linear regression model assumes a linear relation between the features and the target variable. A Naive Bayes model assumes that the features are independent of each other. And these are only the most obvious kinds of simplifications made by ML models.
Naturally this results in different models representing the data differently. Any semantic interpretation based on a model's outcome is potentially biased by the assumptions made by the model.
H: Is there an adequate number of levels for categorical variables?
I have a project that I'm working on. The dataset contains many categorical variables and some of them have too many levels (100+). My question is: is there any advice on what the "adequate" number of levels of a variable is? Is it based on the number of levels of the other variables? (For example, most variables have between 10 and 30 levels and one or two variables have 80-100 levels.)
For the variables that contain too many levels, I want to keep the 80% most frequent levels and put the remaining 20% into a new level "others", but I don't know at which number of levels I should stop (for example: var 1: 70 levels, var 2: 100 levels, var 3: 13, var 4: 30, var 5: 60 — should I apply the 80-20 method starting from 60? 70? 100?). I don't know if I'm being clear, but I hope you understand.
AI: No, there's no "adequate" number of levels for a categorical variable. The choice to simplify the data by discarding some levels (for example by using a default category, as you propose) depends on what the goal is (and also the number of instances, etc.). Very often this choice is made experimentally, that is, by trying different methods (e.g. different thresholds) and observing which one gives the best performance: here you could write a program which tries the different proportions as the threshold, then train and test the resulting model for every value, and finally plot the performance for every value.
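A sketch of the grouping step for a single column; df, the column name and the coverage value are placeholders:

import pandas as pd

def group_rare_levels(series: pd.Series, coverage: float = 0.8) -> pd.Series:
    # Keep the most frequent levels covering `coverage` of the rows,
    # map everything else to "others".
    freq = series.value_counts(normalize=True)
    kept = freq[freq.cumsum() <= coverage].index
    return series.where(series.isin(kept), other="others")

df["var_1_grouped"] = group_rare_levels(df["var_1"], coverage=0.8)

Looping this over several coverage values (e.g. 0.7, 0.8, 0.9) and cross-validating the downstream model is exactly the experiment described above.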
H: How can I explain this chart showing a 5-day moving average?
I have plotted the frequency of items sold over time, trying to determine the trend with a moving average. I used a 5-day window. I would like to know if this approach makes sense and how I could interpret the results. It's my first time with time series and moving averages (I don't have a scientific background at all). I hope you can help me.
AI: Yes, it makes sense: a moving average makes the curve "smoother" in the sense that it's less sensitive to short-term variations. This usually makes it easier to observe the general tendency. You could also try different time periods for the average, e.g. 10 days or 15 days.
It looks to me like there's a moderate increasing trend in your data, but the variations are large and the time window is short, so it's too early to be sure. You could apply linear regression to confirm the increase.
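A sketch of how the different window sizes can be compared in pandas; sales is a placeholder Series of daily counts indexed by date:

import matplotlib.pyplot as plt

sales.plot(alpha=0.3, label="raw")   # keep the raw series for reference
for window in (5, 10, 15):
    sales.rolling(window=window).mean().plot(label=f"{window}-day MA")

plt.legend()
plt.show()

The wider the window, the smoother the curve, at the cost of reacting more slowly to recent changes.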
H: Classifier using pytorch
I'm writing demo code to predict a 2-class classification for a dataset of 10-D inputs. Below, the function _data generates the data:
import math
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

def _data(dimension, num_examples):
    # Create a simulated 10-dimensional training dataset consisting of 1000 labeled
    # examples, of which 800 are labeled correctly and 200 are mislabeled.
    num_mislabeled_examples = 20
    # We will constrain the recall to be at least 90%.
    recall_lower_bound = 0.9
    # Create random "ground truth" parameters for a linear model.
    ground_truth_weights = np.random.normal(size=dimension) / math.sqrt(dimension)
    ground_truth_threshold = 0
    # Generate a random set of features for each example.
    features = np.random.normal(size=(num_examples, dimension)).astype(np.float32) / math.sqrt(dimension)
    # Compute the labels from these features given the ground truth linear model.
    labels = (np.matmul(features, ground_truth_weights) > ground_truth_threshold).astype(np.float32)
    # Add noise by randomly flipping num_mislabeled_examples labels.
    mislabeled_indices = np.random.choice(num_examples, num_mislabeled_examples, replace=False)
    labels[mislabeled_indices] = 1 - labels[mislabeled_indices]
    return torch.tensor(labels), torch.tensor(features)
The code below shows my attempt, where predictor is the model and the loss function is chosen to be the hinge loss.
import math
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

dim = 10
N = 100
target, features = _data(dim, N)

class predictor(nn.Module):
    def __init__(self):
        super(predictor, self).__init__()
        self.f_1 = nn.Linear(10, 1)

    def forward(self, features):
        return self.f_1(features)

model = predictor()
optimizer = optim.Adam(model.parameters(), lr=1e-2)
loss = torch.nn.HingeEmbeddingLoss(margin=1.0, size_average=None, reduce=None, reduction='mean')

running_loss = 0
for _ in range(1000):
    optimizer.zero_grad()
    output = model(features)
    objective = loss(output, target)
    objective.backward()
    running_loss += objective.item()
    optimizer.step()
    print(running_loss)
My questions:
I see my loss increase from zero to 20 and then dive deep into the negative realm. I was wondering if my implementation is correct.
I was trying to implement my predictor without using nn.Linear by defining the computations myself as:
class predictor(nn.Module):
    def __init__(self):
        super(predictor, self).__init__()
        self.weights = torch.zeros(dim, dim, requires_grad=True)
        self.threshold = torch.zeros(1, 1, requires_grad=True)

    def forward(self, features):
        return torch.matmul(self.weights, features) - self.threshold
but in the optimization process,
model = predictor()
optimizer = optim.Adam(model.parameters(), lr=1e-3)
I get the following error:
ValueError: optimizer got an empty parameter list
I would appreciate direction or comments on how to fix these issues. Thanks.
AI: Choose a much smaller learning rate for the optimizer; the loss growing and then diverging may be caused by an exploding gradient.
For the empty parameter list, wrap the tensors in nn.Parameter(): in self.weights (and self.threshold), pass your torch.zeros() to nn.Parameter() so that they are registered as model parameters and show up in model.parameters().
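To make the second suggestion concrete, here is a sketch of the custom predictor with the tensors registered via nn.Parameter. Note the weight shape is written as (dim, 1) so a batch of features can be multiplied on the left; that is an assumption about the intended model, not part of the original question:

import torch
import torch.nn as nn

class Predictor(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # nn.Parameter registers the tensors so model.parameters() is not empty
        self.weights = nn.Parameter(torch.zeros(dim, 1))
        self.threshold = nn.Parameter(torch.zeros(1))

    def forward(self, features):
        # features: (batch, dim) -> output: (batch, 1)
        return torch.matmul(features, self.weights) - self.threshold

model = Predictor(dim=10)
print(len(list(model.parameters())))  # 2, so the optimizer no longer complains
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # smaller learning rate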
H: Why adjust class weights instead of simply finding the best threshold?
In a binary supervised classification where classes 1 and 0 have different numbers of samples in training, it's very common to find tutorials about adjusting class weights and over- and under-sampling for imbalanced data sets. In a situation where there are enough samples for both classes (e.g. not anomaly detection), why would one adjust class weights or balance the training data if in the end you'll have to adjust a threshold anyway?
AI: If there are enough samples of both classes, I don't think it makes a lot of sense. I've been in Kaggle competitions with very imbalanced datasets, like:
Fraud detection
Home default risk
Quora insincere questions
And none of the top solutions used any kind of treatment of the imbalance, as there were enough samples for both classes. I've also built several fraud-related models with high imbalance myself, and using no imbalance treatment proved better than all the other options.
H: Overfitting results with Random Forest Regression
I have one image that contains 4 different values for each pixel. I have used RF to see if I can predict the 4th value based on the other 3 values of each pixel. For that I have used Python and scikit-learn. First I fit the model, and after validating it I used it to predict this image. I was very happy and scared to see that I got very high accuracy for my model: 99.95%! But then when I saw the resulting image it absolutely wasn't 99.95% accurate:
original image:
result image: (I have marked the biggest and most visible difference).
My question is: why would I get this high accuracy when the visualization shows very well that there is much less accuracy? I understand it might come from overfitting, but then how is this difference not detected?
edit:
Mean Absolute Error: 0.048246606512422616
Mean Squared Error: 0.00670919112477127
Root Mean Squared Error: 0.0819096522076078
Accuracy: 99.95175339348758
AI: Where are you evaluating the performance of your algorithm? Are you making a train/test split and evaluating on the test split? It might be that you overfitted your training set and you are just measuring the accuracy there.
If you have made the train/test split and the evaluation correctly, it could be that the image you are predicting does not have the same properties/configuration/topology as the ones you trained on.
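To check the first possibility, here is a sketch of evaluating on a held-out split rather than on the training pixels; X (the 3 per-pixel features) and y (the 4th value) are placeholders:

from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hold out part of the pixels and report the error on data never seen in fitting
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)

print("train MAE:", mean_absolute_error(y_train, rf.predict(X_train)))
print("test  MAE:", mean_absolute_error(y_test, rf.predict(X_test)))
# A large gap between the two errors points to overfitting on the training pixels.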
H: How many layers should I replace in transfer learning CNN
I am designing a convolutional neural network that I believe requires transfer learning to function in practice. The network will be a character level CNN for text classification, more specifically, authorship identification of an author given unknown texts. The initial model will be trained on millions of texts from thousands of authors. In practice, if I want to be able to determine the authorship of a new given author/class not trained upon originally, I need to use transfer learning.
The structure of the network involves 6 convolutional layers and 3 fully connected layers. Given that the amount of data for the new author/class will be minimal in most cases, which layers should I replace and retrain for the new class for it to be the most effective? Or are there other methods I could consider to solve this problem?
AI: To build on the previous answer: In transfer learning, the goal is to use a pre-trained model and tweak it to then specialise it to suit a certain task. So, what we do is, as SrJ has alluded to, keep the main model's architecture intact. So this would be the 6 CNN layers (and possibly the three linear layers, if they were also involved in pre-training).
After a model has been pre-trained, what we do is add additional layers to the model so that it is suited for our task. So in this case, the least you would do is have a final output softmax layer, which produces a probability distribution over the authors. In between the final output layer and the original model's architecture, you can add more layers if it is appropriate.
When training this model with your task-specific data (this stage is called fine-tuning), we freeze the original model's architecture. This essentially means that the parameters within the model's original layers will not change, to prevent possible loss in generalisation performance. We only allow the additional layers' parameters to change during fine-tuning.
The overall message is to not replace layers; always add onto the existing model to tailor it more to your classification task.
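A sketch of the freeze-and-extend step in tensorflow.keras; base_model, num_authors and the new-author data are placeholders, not part of the original question:

from tensorflow import keras
from tensorflow.keras import layers

base_model.trainable = False          # freeze all pre-trained layers

inputs = keras.Input(shape=base_model.input_shape[1:])
x = base_model(inputs, training=False)
x = layers.Dense(128, activation="relu")(x)           # optional extra layer
outputs = layers.Dense(num_authors, activation="softmax")(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(new_author_texts, new_author_labels, ...)  # fine-tune on the small dataset

Only the new Dense layers are updated during fine-tuning, which is what keeps the small per-author dataset from destroying the representation learned on the large corpus.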
H: How can I reshape this array?
I have a few difficulties in picking values from an array. Here is a simplified version of the relevant part of the code. (If you would like me to post the whole code, please mention it.)
n=2
m=8
test= test[:,-n*m:]
test=test.reshape(test.shape[0],n,m)
So I end up with test.shape = (350, 2, 8), and when I run the code I get this:
array([[[0.01911469, 0.32352942, 0.18032786, ..., 0.006101  , 0.        , 0.        ],
        [0.01810865, 0.32352942, 0.18032786, ..., 0.006101  , 0.        , 0.        ]],
       [[0.01810865, 0.32352942, 0.18032786, ..., 0.006101  , 0.        , 0.        ],
        [0.01710262, 0.32352942, 0.1967213 , ..., 0.01297103, 0.        , 0.        ]],
       [[0.01710262, 0.32352942, 0.1967213 , ..., 0.01297103, 0.        , 0.        ],
        [0.01408451, 0.32352942, 0.18032786, ..., 0.00763907, 0.        , 0.        ]],
       ...,
       [[0.01006036, 0.2647059 , 0.26229507, ..., 0.40558836, 0.        , 0.        ],
        [0.01006036, 0.2647059 , 0.26229507, ..., 0.41399646, 0.        , 0.        ]],
       [[0.01006036, 0.2647059 , 0.26229507, ..., 0.41399646, 0.        , 0.        ],
        [0.00804829, 0.2647059 , 0.24590163, ..., 0.4208665 , 0.        , 0.        ]],
       [[0.00804829, 0.2647059 , 0.24590163, ..., 0.4208665 , 0.        , 0.        ],
        [0.01207243, 0.2794118 , 0.26229507, ..., 0.42621556, 0.        , 0.        ]]],
      dtype=float32)
My question is: how can I get only the values of the first column, like this:
array([[0.01911469],
       [0.01810865],
       [0.01810865],
       [0.01710262],
       [0.01710262],
       [0.01408451],
       ...,
       [0.01006036],
       [0.01006036],
       [0.01006036],
       [0.00804829],
       [0.00804829],
       [0.01207243]], dtype=float32)
I have tried this:
test=test[:,:,:1]
but I got this instead of what I'm looking for:
array([[[0.01911469],
        [0.01810865]],
       [[0.01810865],
        [0.01710262]],
       [[0.01710262],
        [0.01408451]],
       ...,
       [[0.01006036],
        [0.01006036]],
       [[0.01006036],
        [0.00804829]],
       [[0.00804829],
        [0.01207243]]], dtype=float32)
AI: You want the first value along the last dimension for every row of every block, flattened into a single column. Select it and then collapse the first two dimensions:
test = test[:, :, 0].reshape(-1, 1)
This returns an array of shape (700, 1) containing the first-column values in the order shown in your target output.