H: Image Classification low accuracy I have a dataset that has two folders for training and testing. I am trying to determine whether a patient has an eye disease or not. However, the images I have are hard to work with. I've ran this code below, tweaked it by changing the epochs, batch size, adding more conv2D, and adjusting the image size, and still have a really low accuracy. My guess is that the accuracy is low because the images have different heights (500px-1300px) (same width though of 496px) or the images also have slants which is causing the accuracy to decrease. https://i.stack.imgur.com/2XUjJ.jpg There are 3 disease and 1 non-disease related folders that each contain 100 images in the validation folder (400 images total) Training folder contains about: 37,000 images for disease 1 11,000 images for disease 2 9,000 images for disease 3 27,000 images for non-disease Any feedback on what I should do to improve accuracy? from keras.preprocessing.image import ImageDataGenerator from keras.models import Sequential from keras.layers import Conv2D,MaxPooling2D from keras.layers import Activation,Dropout,Flatten,Dense from keras import backend as K import numpy as np from keras.preprocessing import image img_width, img_height= 496,900 train_data_dir='/content/drive/My Drive/Research/train' validation_data_dir='/content/drive/My Drive/Research/validation' nb_train_samples=1000 nb_validation_samples=100 epochs=10 batch_size=20 if K.image_data_format() == 'channels_first': input_shape=(3,img_width,img_height) else: input_shape=(img_width,img_height,3) train_datagen=ImageDataGenerator( rescale=1/255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True) test_datagen=ImageDataGenerator(rescale=1. /255) train_generator=train_datagen.flow_from_directory( train_data_dir, target_size=(img_width,img_height), batch_size=batch_size, class_mode='binary') validation_generator = test_datagen.flow_from_directory( validation_data_dir, target_size=(img_width,img_height), batch_size=batch_size, class_mode='binary') ############ model=Sequential() model.add(Conv2D(64,(2,2),input_shape=input_shape)) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.summary() model.add(Conv2D(32,(3,3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Conv2D(32,(3,3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Conv2D(64,(3,3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Flatten()) model.add(Dense(64)) model.add(Activation('relu')) model.add(Dropout(0.5)) model.add(Dense(1)) model.add(Activation('softmax')) model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) model.fit_generator( train_generator, steps_per_epoch=nb_train_samples // batch_size, epochs=epochs, validation_data=validation_generator, validation_steps=nb_validation_samples // batch_size) model.save_weights('first_try.h5') Epoch 1/10 50/50 [==============================] - 919s 18s/step - loss: -4.7993 - accuracy: 0.1400 - val_loss: -7.6246 - val_accuracy: 0.2500 Epoch 2/10 50/50 [==============================] - 902s 18s/step - loss: -5.1060 - accuracy: 0.1440 - val_loss: -9.9120 - val_accuracy: 0.2300 Epoch 3/10 50/50 [==============================] - 914s 18s/step - loss: -4.4773 - accuracy: 0.1200 - val_loss: -5.3372 - val_accuracy: 0.2700 Epoch 4/10 50/50 [==============================] - 879s 18s/step - loss: -3.8793 - accuracy: 0.1390 - val_loss: -4.5748 - val_accuracy: 0.2500 Epoch 5/10 
50/50 [==============================] - 922s 18s/step - loss: -4.4160 - accuracy: 0.1470 - val_loss: -7.6246 - val_accuracy: 0.2200 Epoch 6/10 50/50 [==============================] - 917s 18s/step - loss: -3.9253 - accuracy: 0.1310 - val_loss: -11.4369 - val_accuracy: 0.3100 Epoch 7/10 50/50 [==============================] - 907s 18s/step - loss: -4.2166 - accuracy: 0.1230 - val_loss: -7.6246 - val_accuracy: 0.2200 Epoch 8/10 50/50 [==============================] - 882s 18s/step - loss: -3.6493 - accuracy: 0.1480 - val_loss: -7.6246 - val_accuracy: 0.2500 Epoch 9/10 50/50 [==============================] - 926s 19s/step - loss: -3.5266 - accuracy: 0.1330 - val_loss: -7.6246 - val_accuracy: 0.3300 Epoch 10/10 50/50 [==============================] - 932s 19s/step - loss: -5.2440 - accuracy: 0.1430 - val_loss: -13.7243 - val_accuracy: 0.2100 AI: What you want to do is multi-class classification, but loss and your network is made for binary classification. Change: train_generator=train_datagen.flow_from_directory( train_data_dir, target_size=(img_width,img_height), batch_size=batch_size, class_mode='binary') validation_generator = test_datagen.flow_from_directory( validation_data_dir, target_size=(img_width,img_height), batch_size=batch_size, class_mode='binary') To: train_generator=train_datagen.flow_from_directory( train_data_dir, target_size=(img_width,img_height), batch_size=batch_size, class_mode='categorical') validation_generator = test_datagen.flow_from_directory( validation_data_dir, target_size=(img_width,img_height), batch_size=batch_size, class_mode='categorical') This should make your generators produce the correct label from your folder structure. And change: model.add(Dense(1)) model.add(Activation('softmax')) To: model.add(Dense(4)) model.add(Activation('softmax')) The 4 is for the output nodes in the layer that should correspond to your different classes, disease 1-3 and non-disease. Then also change: model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) To: model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) This will change your loss function from binary to multi-class.
H: Is my bayes classification right or meaningful? I have this dataset and I am learning about Bayes Classifier. After data cleaning, I have tried to use bayes classifier on it. I used R with this code: library(klaR) trainingChoco <- chocolateApriori[1:707,] testChoco <- chocolateApriori[708:884,] naivebayesChocolate <- NaiveBayes(RatFactor ~ ., data=trainingChoco) predictionChoco <- predict(naivebayesChocolate, testChoco) predictionChoco$posterior library(caret) tableChocolate <- table(predictionChoco$class, testChoco$RatFactor) confusionMatrix(tableChocolate) Where RatFactor only has 1, 2, 3, 4, 5 as values (I rounded the ratings). This is my result: predictionChoco$posterior 1 2 3 4 5 1434 5.987980e-14 2.619121e-05 0.080559640 0.919414168 5.987980e-11 1435 4.205489e-10 8.759363e-03 0.022926453 0.968314183 4.205489e-10 1436 4.205489e-10 8.759363e-03 0.022926453 0.968314183 4.205489e-10 1439 1.004950e-10 4.709587e-03 0.006445339 0.988845074 1.004950e-10 1442 9.257687e-10 3.133371e-04 0.004947920 0.994738741 9.257687e-10 1443 3.260598e-13 8.227926e-03 0.171718690 0.820053058 3.260598e-07 [...] confusionMatrix(tableChocolate) Confusion Matrix and Statistics 1 2 3 4 5 1 0 0 0 0 0 2 0 2 5 7 0 3 0 7 42 28 0 4 0 5 45 36 0 5 0 0 0 0 0 Overall Statistics Accuracy : 0.452 95% CI : (0.3772, 0.5284) No Information Rate : 0.5198 P-Value [Acc > NIR] : 0.97 Kappa : 0.0431 Mcnemar's Test P-Value : NA Statistics by Class: Class: 1 Class: 2 Class: 3 Class: 4 Class: 5 Sensitivity NA 0.1429 0.4565 0.5070 NA Specificity 1 0.9264 0.5882 0.5283 1 Pos Pred Value NA 0.1429 0.5455 0.4186 NA Neg Pred Value NA 0.9264 0.5000 0.6154 NA Prevalence 0 0.0791 0.5198 0.4011 0 Detection Rate 0 0.0113 0.2373 0.2034 0 Detection Prevalence 0 0.0791 0.4350 0.4859 0 Balanced Accuracy NA 0.5346 0.5224 0.5177 NA Do you think this result is right? Or am I missing something? Can you explain what would you understand by looking at this result please? Or can you post a similar example done step-by-step? Thanks. AI: Do you think this result is right? Depends what you mean by "right"... the results seem reasonable, I don't see any obvious sign of mistake in the process. Can you explain what would you understand by looking at this result please? I observe that you don't have any data for classes 1 and 5, so technically it's a 3-classes problem. First with 3 classes the random baseline accuracy would be 0.33, 0.45 is better so your model does better than this (that's the bare minimum). However according to the confusion matrix class 3 has 92 instances out of a total of 172, which means that a basic majority class learner always predicting class 3 would get 52% accuracy (if my calculation is correct). So 45% is not very good.
H: Why when my local cv of loss decreases, my leaderboard's loss increases? I got a cv log_loss of 0.3025410331400577 when using 4-fold cross-validation and my leaderboard (with 30% of test dataset) got 0.26514. I further did feature engineering and added some features to the model, which decreased my cv log_loss to 0.2946628055452142 but my leaderboard score increases to 0.30021. With all other techniques used, my cv log_loss decreased but my leaderboard loss increased. I used XGBoostClassifier model. I have removed all correlated features (corr > 0.8) also. Usually we will be judging whether our model generalizes or not, based on cv score. But here, cv score is not reliable. What may be the reason of this? And is it valid to judge my model performs better when my cv score decreases ? If not, what are all the other techniques to judge my model? AI: I think there are a few things going on in this question so I will take them one at a time. Just to note there are multiple reasons you could be faced with the issues you are noticing so I'm just going to give some possible reasons that come to mind. Just to note by leaderboard is something like a Kaggle competition where data is held back for use as a blind test. Overall, I think more information is needed to have a stab at troubleshooting which I will explain as I go. I got a cv log_loss of 0.3025410331400577 when using 4-fold cross-validation and my leaderboard (with 30% of test dataset) got 0.26514. I further did feature engineering and added some features to the model, which decreased my cv log_loss to 0.2946628055452142 but my leaderboard score increases to 0.30021. Your approach to use cross-validation (cv) is good however 4-fold seems odd to me and also a little low. The standard would generally be 5 or 10-fold, see Cross Validation for a nice discussion about cv and some advantages of 10-fold. My thought is that if you have a proportion of outliers/misclassified data in on your training set your low choice of 4-fold could mean the outliers are present in all training sets so your model is trained on these misclassified cases. Perhaps test increasing the number of folds on your model's performance. Conversely, cv does depend on sample size so if you are restricted on sample size this would necessitate reducing the number of folds or avoiding cv completely. The issue with not doing this is explained very nicely here but in short, each of your folds for training should have the same distribution as the test set so if you think this may not be the case avoid cross validation or drop the value of k. Here is a really nice discussion about k-fold cross validation and over-fitting but the bottom line is it can happen. This is also without being sure of what supervised machine learning technique you are using however which would also play a role as some machine learning techniques work better with larger training sets e.g. DNNs. With all other techniques used, my cv log_loss decreased but my leaderboard loss increased. I used XGBoostClassifier model. I have removed all correlated features (corr > 0.8) also. Your choice to use a gradient boosting technique makes me think you are doing logistic regression and as such your choice to remove one of the highly correlated variables is, in general, a good idea; removing the extra influence of the variables, here is a good discussion. 
The fact that you are using cv to compare your approaches is good (this is essentially what it is for), but note that as you do this you risk overfitting to the training data, similar to what happens when you tune against a single training set. An important step I think you have missed is to split your data and create your own held-out test set that you do not touch until hyperparameter tuning is complete, to use as a blind test. This should give you results equivalent to the leaderboard test set if you have a big enough sample size, your train/test split has equivalent distributions and, lastly, the leaderboard test set is in fact drawn from the same distribution as the data you are training on. And is it valid to judge my model performs better when my cv score decreases? No, the cv score on its own is not the way to judge generalisation; I hope I have convinced you of possible reasons why here. EDIT Another possible reason I thought of after discovering the dataset is quite small is that the issue could be caused by data leakage if an upsampling technique was used. Here is a very good discussion, but basically if you upsample from the pool of training data before performing cv, then your model could learn some traits of real training data that was split into the other folds. This would cause your cv performance to be far higher than it really should be. The way around this is to upsample within each cv fold. To summarise, I think your question is generally about why a machine learning approach does not perform as well on test data as it does on training data, to which the answer is simple: it never will. I like to think of the analogy of studying for an exam: you would do brilliantly if the exact questions you studied came up, but this rarely if ever happens! If something similar to what you studied came up, you may do well, but probably not quite as well.
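For illustration, here is a minimal sketch of the setup described above, assuming the imbalanced-learn and xgboost packages are available and that X and y are placeholders for your features and labels; it keeps a blind test set aside and performs any oversampling inside each cross-validation fold only:

from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split
from imblearn.pipeline import Pipeline          # pipeline that only resamples during fit
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier

# blind test set, never touched during tuning
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

pipe = Pipeline([
    ("oversample", SMOTE(random_state=0)),      # applied to the training folds only
    ("model", XGBClassifier()),
])

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X_train, y_train, cv=cv, scoring="neg_log_loss")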
H: I need some guidance how to develop myself in deep learning Few months ago I've taken an Andrew Ng course at coursera and started my data science/machine learning journey. During those months, I've take another courses from the internet - about Deep Learning, statistics - read articles from towardsdatascience website, now I'm reading a book "Hands on Machine Learning". But I feel I'm stuck, I have over a dozen pages of courses at coursera, few books, and I don't know what to do next. I feel like I'm doing the same things all the time. I'll be very thankful for some guidance :) AI: I am not an expert in the field, but I'd like to share with you my personal opinion. Taking online courses and reading books is great, but the problem is you don't get to practice much on your own. I know there might be assignments and projects included, but they're mostly guided and pretty straightforward, or related to a certain topic of the chapter/lecture. However, real world problems are a bit different. They usually encompass several smaller sub-problems that might not be clear at the beginning, and there are a plenty of choices that you have to make on your own. Usually you'd have to try several solutions before you're confident what suits your problem best. I believe it's something you learn mostly by practice. So I think given that you have enough knowledge now, it's about time to use your knowledge and build interesting stuff, stuff that would speak for you. Kaggle is also a great way to move forward, since there are tons of competitions and data sets that could inspire you, and help you know where you stand among others. With every project you try to do, you discover something new or learn to appreciate something you already know more. Plus you can always learn from others who publish their kernels. Whatever you choose to do, I wish you all the best :)
H: Reinforcement (Q) learning: does it learn while in production? I have a question for which I could not find the answer to it: While training reinforcement learning (using DQN), I get a model for the best reward for the next action. Now, if I deploy this model (i.e., use this model to make predictions), does it keep learning (i.e., updating the Q values)? AI: Now, if I deploy this model (i.e., use this model to make predictions), does it keep learning (i.e., updating the Q values)? If you want it to (and understand how to code it) then yes a reinforcement agent - including a DQN-based one - can do this. This is online learning, and is possible also with many supervised learning techniques. Because there is risk of the agent learning incorrectly, you may choose to limit online learning, or disable it in production. It could also happen by mistake if you are not sure how to stop it, and simply deploy your existing agent code from the training scripts into production. So make sure that you, or whoever you get to implement production version of the agent, understands how to control whether learning is still occurring. You may choose to enable online learning if your initial training was in simulation, and you would like the agent to learn more from real-world interactions. Or you might choose to do so if the problem was non-stationary because the environment description changes over time. Many problems involving interacting with a population of people change over time with demographic changes. If the environment is stochastic, then in theory you could also switch off exploration - typically by setting $\epsilon = 0$ in an $\epsilon$-greedy behaviour policy. That would allow the agent a (limited) ability to refine its Q value estimates with a relatively low risk. The agent would continue to attempt to act optimally, but may learn enough to decide that different actions were optimal. Note this is still not without risk, because the learning process could fail in some way, leading the agent to predict a wildly incorrect optimal action. Allowing Q learning to explore non-optimal actions - typically by setting $\epsilon > 0$ in an $\epsilon$-greedy behaviour policy - is more risky in production, because the agent will occasionally pick a non-optimal action in order to refine its estimates of that action. It may result in improved learning in the longer term though, so you might decide to do that if the consequences of non-optimal actions were mild.
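As a rough illustration only (env, agent and their methods below are hypothetical placeholders, not a specific library's API), the difference between training and production often comes down to two switches: the exploration rate and whether updates are applied.

import random

def run_episode(env, agent, epsilon=0.0, learn_online=False):
    """Greedy behaviour by default; set epsilon > 0 and learn_online=True to keep learning."""
    state = env.reset()
    done = False
    while not done:
        if random.random() < epsilon:
            action = env.sample_action()          # explore (riskier in production)
        else:
            action = agent.best_action(state)     # act greedily w.r.t. current Q estimates
        next_state, reward, done = env.step(action)
        if learn_online:
            agent.update(state, action, reward, next_state, done)  # Q-value / DQN update
        state = next_state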
H: What is the concept of Normalized Mutual Information in the evaluation of Clustering? I know what mutual information basically is but not quite sure about why and how it is used in the context of evaluation of clustering mechanisms ? Can someone please explain the intuition behind it ? i.e, how it is defined in the case of clustering evaluation ? AI: This is a method for evaluation of two clusterings in the presence of class labels so it is not proper for real clustering problems in which class labels are not available. Imagine you have class labels and you want to evaluate a clustering or (compare two clusterings). The most natural idea is to use Purity score. It simply checks labels with clusters and the best case is, of course, when each cluster contains only and only one class label. This score, however seemingly natural, has a drawback. If you consider each cluster having only one data point, then Purity is maximized! So there should be an awareness about the number of clusters when calculating the purity score. The next idea is calculating the Mutual Information. Mutual Information considers two splits: (1) split according to clusters and (2) split according to class labels. Then it tells you how these two splittings agree each other (how much information they share about each other or how can you know about one of them if you know the other one). Like purity, MI also gets bigger when the number of clusters is large. Then comes NMI which is bias-corrected for the phenomenon explained above and also normalizes the score between $0$ and $1$ (MI does not have an upper bound). NOTE: I think your question was answered in first line. If you want to evaluate clustering you are not looking for external measures where labels are needed. I just explained a bit for sake of completeness of answer.
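As a concrete sketch with made-up labels, both purity and NMI can be computed from the class labels and cluster assignments, e.g. with scikit-learn:

import numpy as np
from sklearn.metrics import normalized_mutual_info_score
from sklearn.metrics.cluster import contingency_matrix

labels_true = [0, 0, 0, 1, 1, 1, 2, 2]   # class labels (external ground truth)
labels_pred = [0, 0, 1, 1, 1, 1, 2, 2]   # cluster assignments

cm = contingency_matrix(labels_true, labels_pred)
purity = np.sum(np.amax(cm, axis=0)) / np.sum(cm)              # maximised by tiny clusters
nmi = normalized_mutual_info_score(labels_true, labels_pred)   # normalised between 0 and 1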
H: Web Scraping: Multiple small files or one large file? I plan to scrape some forums (Reddit, 4chan) for a research project. We will scrape the newest posts, every 10 minutes for around 3 months. I am wondering how best to store the JSON data from each scrape, so that pre-processing (via Python) later would be as simple as possible. My options are the following: Dump data from each scrape into a fresh file (timestamp as filename), resulting in 12,960 files of approx. 150kb each, OR maintain 1 single large file, whereby the scraper simply appends the new output to the end of the file, resulting in 1 file of approx. 1.9Gb after 3 months. Does anyone have any recommendations or warnings from their experience about either approach and how this affected pre-processing? I am cautious that having a pre-processing script work on a larger file might take longer, but then again, opening and closing thousands of files will also be time-consuming. AI: The danger of one large file is that you may not be able to read the whole thing into memory at once. Since you already know that 3 months of data will be ~2GB, this should not be a problem, so I would recommend option 2. Perhaps an even better option is to save the data as several medium-sized files. For example, you could combine all the observations for a week into a single file (see the sketch below). This way, you eliminate nearly all of the file I/O overhead (~13,000 files down to 12-14) but also free yourself from potential memory problems. This is certainly what I would do if I were collecting data for a very long time (years instead of months).
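A minimal sketch of the weekly-file approach, assuming each scrape yields a list of dicts (the directory name is a placeholder); appending one JSON object per line (JSON Lines) keeps later pre-processing simple:

import json
from datetime import datetime, timezone
from pathlib import Path

def save_scrape(posts, out_dir="scrapes"):
    """Append each post as one JSON line to a file named after the ISO year and week."""
    now = datetime.now(timezone.utc)
    year, week, _ = now.isocalendar()
    path = Path(out_dir) / f"posts_{year}_week{week:02d}.jsonl"
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as f:
        for post in posts:
            f.write(json.dumps(post) + "\n")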
H: How to solve XNOR problem with neural network? How do we design a neural network with one hidden layer, two hidden neurons and an output neuron that implements an XNOR function? The truth table of XNOR is given below: And how to provide weight and bias coefficients of this network? AI: Combine a NN for x1 AND x2 [your a1] with a NN for (NOT x1) AND (NOT x2) [your a2] to get the XNOR. You end up with inputs +1, x1, and x2 going to your hidden neurons a1 and a2, then +1, a1, and a2 into your output neuron.
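As a concrete sketch of that construction with one possible choice of hand-picked weights (many other choices work), using a sigmoid so the outputs are approximately 0/1:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def xnor(x1, x2):
    a1 = sigmoid(20 * x1 + 20 * x2 - 30)       # x1 AND x2
    a2 = sigmoid(-20 * x1 - 20 * x2 + 10)      # (NOT x1) AND (NOT x2)
    return sigmoid(20 * a1 + 20 * a2 - 10)     # a1 OR a2  ->  XNOR(x1, x2)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, round(float(xnor(x1, x2))))   # prints 1, 0, 0, 1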
H: Is it conscientious to use a threshold for a model output in order to play on the recall and precision? I have just finished reading an article about the F1 score, recall and precision. Everything was clear except the fact that the author, in his example (see https://towardsdatascience.com/beyond-accuracy-precision-and-recall-3da06bea9f6c#a9f0), is using a threshold for his model output to play on the F1 score, recall and precision values. So my question is: is it conscientious to use a threshold as the author did in his article? In my opinion, this is like a kind of manual over-fitting, but maybe there is something I misunderstood... AI: Good question. Using a threshold is perfectly fine and is not "manual overfitting". It is not manual because this is a step that can (and should) be done automatically. It is not overfitting as it doesn't modify the model itself. It modifies how you interpret the model's output. What the user did is actual called cost-sensitive learning. It is a technique where you define which error is more costly and these costs are reflected in the performance metric you use. I can see why it may feel like overfitting but it is important to understand that all you are doing here is handcrafting a performance metric with which you will evaluate your model. Your model will simply try to optimize this metric.
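A minimal sketch of how such a threshold can be chosen automatically on a validation set rather than by hand (model, X_val and y_val are placeholders for a fitted probabilistic classifier and held-out data):

import numpy as np
from sklearn.metrics import precision_recall_curve

probs = model.predict_proba(X_val)[:, 1]               # scores for the positive class
precision, recall, thresholds = precision_recall_curve(y_val, probs)
f1 = 2 * precision[:-1] * recall[:-1] / (precision[:-1] + recall[:-1] + 1e-12)
best_threshold = thresholds[np.argmax(f1)]             # threshold maximising F1
y_pred = (probs >= best_threshold).astype(int)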
H: Classify event announcement I would like train a classifier to spot news articles which would spot articles that announce an event. The issue is that I do not have a large pre-labeled dataset for this task (I only have 200 examples). So here are my two questions: Have you heard of a training set labeled for such task I heard of few shot learning that can be helpful to train a classifier with few example would it be applicable to this case and is there any library/reading you would recommend. Thanks in advance AI: Semi-supervised learning and in particular active learning could be considered in cases like this: The general semi-supervised setting consists in training a model from an initially small training set by applying it iteratively to unlabelled instances. There are various methods to minimize the risk of training the model on wrongly classified instances. Active learning is a variant of semi-supervised learning where the model queries the human expert for annotations, but the instances are carefully selected in order to minimize the amount of human labor. There is also bootstrapping, where one would focus on the positive instances: apply the original model to the unlabelled data, the manually annotate only the instances which are predicted as positive (useful only in cases where the positive class is much smaller than the negative one).
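If scikit-learn is an option, a rough sketch of the semi-supervised idea could look like the following; it assumes texts is the list of all articles, X is e.g. a TF-IDF representation of them, and y carries 0/1 for the 200 labelled articles and -1 for the unlabelled ones:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X = TfidfVectorizer(max_features=20000).fit_transform(texts)
# y: 1 = event announcement, 0 = not, -1 = unlabelled
base = LogisticRegression(max_iter=1000)
clf = SelfTrainingClassifier(base, threshold=0.9)   # only add confidently predicted articles
clf.fit(X, y)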
H: Should dimensionality reduction be done before k-means clustering if there are many features? My data contains over 200 features and over 500 observations. I want to place the observations into a number of clusters based on the features that make them different. There are numerous ideas I have and I'm not sure which one is apt: 1) Conduct principal component analysis (PCA) to reduce the features to two dimensions. I've already done this so that I could visualize them on a 2D plot. It would now be quite easy to use k-means clustering with these two dimensions to create the clusters, but I wonder if this isn't a good idea because of all the components that are being lost. But then again, if they're being lost they're probably not that important? Not sure 2) Conduct principal component analysis (PCA) to determine which features are worth including and then conduct k-means clustering on those features. So I probably wouldn't be reducing the dimensions to two, but they would be reduced and then the k-means clustering would be done. This seems like the best idea intuitively to me, but I'm not sure. 3) Forget the PCA and just conduct k-means clustering on all the features I have at the beginning. This feels like it's probably the worst idea because some of the features could be useless but could still be factored into the distance calculations for the clustering, but I'm just including everything I've thought of. AI: For the first idea about PCA, you can not simply just use 2 components. You need to take a look at the explained variance by your principal components and based on that you should select the required number of components. If, for example, you found that the first two components explain a significant amount of variance (e.g., more than 95%), then, you can use them to perform k-means clustering. In that case, it is expected (but not for sure and for all cases) that you get the same results as when you perform k-means using all features. My suggestion is to use all of your features if you need to consider a large number of components. My reason is that your dataset is too small and it is not computationally demanding to perform k-means clustering using your dataset. As a side note, also you can try all of your options because your dataset is small and then you will find out what's going on in your data.
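For the first option, a short sketch of how to check how much variance the leading components actually keep before clustering (X stands for your 500 x 200 feature matrix; the number of clusters is a placeholder):

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

X_std = StandardScaler().fit_transform(X)
pca = PCA().fit(X_std)
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_components = int(np.searchsorted(cum_var, 0.95) + 1)   # components needed for ~95% variance

X_reduced = PCA(n_components=n_components).fit_transform(X_std)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_reduced)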
H: Where to upload large (0.5Gb) weights anonymously? I need to upload a number of checkpoints for ConvNets (weights + optimizers, all dicts of pytorch tensors), each about 0.5Gb anonymously. I don't want to use Google Drive. I trained models on the university cluster (if it's relevant). Where can I load these files anonymously? The files must be publicly available, but my identity must remain anonymous. AI: file.io or transfer.sh are good choices. transfer.sh has a 14 day lifespan for all files, while you can increase or decrease the lifespan on file.io. Additionally, you can use both from a command line or terminal via curl, and they're anonymous.
H: Data augmentation for multiple output heads in Keras I have a transfer learning based two output classification problem. So, accordingly, I have formatted my data to have X_train as a (number of samples, height, width, channels) numpy array, y_train1 as (number of samples,) numpy array and y_train2 as (number of samples,) numpy array. As I am not training using directory structure, I am using ImageDataGenerator.flow(). However, I am not able to figure out how I can pass two label arrays because, it is taking the labels as (2, number of samples) when I send it as [y_train1, y_train2] list. I am able to train the network without Keras data augmentation (for two outputs). But, I am not able to apply data augmentation. I am trying to do the following: datagen = ImageDataGenerator(horizontal_flip=True, vertical_flip=True, rescale=1./255, class_mode="multi-label") model.fit(datagen.flow(X_train, [y_train1, y_train2], batch_size=batch_size), batch_size=batch_size, epochs=nb_epochs, steps_per_epoch=spe, validation_data=(X_val, [y_val1, y_val2])) Also, ImageDataGenerator.flow does not have class_mode unlike ImageDataGenerator.flow_from_dataframe. Any suggestions/help would be appreciated! References: Data Augmentation Multi Outputs (No answer. I upvoted this just now) Get multiple output from Keras (Does not explain data augmentation) AI: Please refer to the source code provided at https://gist.github.com/swghosh/f728fbba5a26af93a5f58a6db979e33e which should assist you in writing custom generators (basis ImageDataGenerator) for training end to end multi-output models. In the provided example, GoogLeNet is being trained which consists of two auxiliary classifiers and thus, comprising 3 outputs in the complete model. The output from ImageDataGenerator.flow_from_directory() have been passed into a Python function with yield statements such that the targets can be repeated thrice as per requirement. def three_way(gen): for x, y in gen: yield x, [y, y, y] Also, please make sure to pass a value to steps_per_epoch argument in case of model.fit() or model.fit_generator() so as to ensure that the training loop doesn't run indefinitely. (This is because the Python generator object won't be inherited from Sequence class and number of batches/steps cannot be inferred implicitly by Keras) Also, you may consider using an image data pipeline constructed using tf.data APIs combined with Random* Layers and Preprocessing Layers which is a more modern approach for tf.keras compared to the old ImageDataGenerator. There should be a bunch of performance and usability benefits in case you transition. Let me know if you are looking for an example. Please look at this example in case you want to implement multi-output multi-label classification model. https://www.pyimagesearch.com/2018/06/04/keras-multiple-outputs-and-multiple-losses/ I'd still advise to use tf.data.Dataset instead of raw tensors which'd be more feasible for larger datasets that do not fit into memory plus, the tf.data APIs provides for an additional bunch of flexibilities.
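For the two-output case in the question, a similar wrapper can be used around ImageDataGenerator.flow() by stacking the two label arrays and splitting them again per batch. This is a sketch assuming y_train1 and y_train2 are 1-D integer label arrays (one-hot labels would need to be concatenated and sliced by column ranges instead):

import numpy as np

y_both = np.stack([y_train1, y_train2], axis=1)        # shape: (num_samples, 2)
flow = datagen.flow(X_train, y_both, batch_size=batch_size)

def two_way(gen):
    for x, y in gen:
        yield x, [y[:, 0], y[:, 1]]                    # one target array per output head

model.fit(two_way(flow),
          steps_per_epoch=len(X_train) // batch_size,  # required: a plain generator has no length
          epochs=nb_epochs,
          validation_data=(X_val, [y_val1, y_val2]))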
H: How to deal with missing continuous data So I have a dataset where I have a continuous variable for only about 10% of the entries. How would you incorporate this in a model? Imputing does not make much sense to me, because there are so few values; however, for those entries where I do have it, it is quite an important feature. AI: Wherever you don't have an entry, fill it with a null value or a fixed out-of-range placeholder, and then create an additional indicator column that is 1 if the value is present and 0 otherwise. The model can then learn both from the imputed values and, via the extra column, from the fact that the value was missing in the first place.
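A short pandas sketch of that idea, with a hypothetical 'income' column standing in for your continuous variable:

import numpy as np
import pandas as pd

df = pd.DataFrame({"income": [52000, np.nan, 61000, np.nan, 48000]})   # hypothetical feature
df["income_present"] = df["income"].notna().astype(int)    # 1 if the value exists, 0 otherwise
df["income"] = df["income"].fillna(df["income"].median())  # or a constant / out-of-range value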
H: Formal math notation of masked vector I'm struggling to write my algorithm in a concise and correct way. The following is an explanation for an optimizer's update step of part of a vector of weights (not a matrix in my case). I have a vector $\alpha \in \mathbb{R}^d$, and a set $S$ that includes some indices $1\leq i \leq d$ ($S \subseteq \{1,\dots, d\}$). Now, I want to denote that $\alpha$ is 0 for every index $i\in S$, and otherwise it's the value as in $\alpha_i$. At first I denoted it $\alpha_S$, but I'm not sure it is properly defined or understandable. I could use the following notation: $\alpha_S = \begin{cases} \alpha_j & j \in S\\ 0 & j \notin S \end{cases}$ But its line height is twice the size, and I want to avoid that. Is there any other formal, simplistic way to notate this correctly? Maybe some kind of a masking vector to be multiplied with $\alpha$? Thanks! AI: Check out the indicator function $1_S(\cdot)$. In your case it would be fined as $$1_S: \{1, \ldots, d\} \rightarrow \{0, 1\}, j \mapsto \begin{cases} 1 & j \in S \\ 0 & \, \text{else} \end{cases}.$$ Multiplying this function with the respective values should give you what you are looking for. Edit: If I understand your comment correctly, the vector you are looking for is $$ \alpha_s = \sum\limits_{i = 1}^{d} \alpha_i e_i 1_{S^C}(i). $$ Here $e_i$ denotes the i-th unit vector of $\mathbb{R}^d$, $S^C$ is the complement $\{1, \ldots, d\} \setminus S$ in $\{1, \ldots, d\}$ and $1_{S^C}$ is the indicator function that is $1$ for $i \notin S$ and $0$ for $i \in S$.
H: Tuning SVM C parameter I would like to ask for help regarding my model. I have a dataset of preprocessed images and I performed a binary classification with SVM in Python. I tuned the value of the C parameter from 0.001 to 1 with 10-fold CV and I obtained the following results. My questions are: Does this mean that the optimal value of C is 0.001? Does the rest mean that there is overfitting? The first value is not 0.000 but 0.001. Thank you. AI: Essentially, you only want to look at your test accuracy. The training accuracy is only relevant in comparison with the test accuracy. In your situation, it seems that your test accuracy is roughly the same across the board apart from a slight peak around 0.620, so I would pick this as the best value for C. My answer is related to the one I gave on this post; here is a summary: Overfitting is not when your train accuracy is really high (or even 100%). It is when your train accuracy keeps increasing while your test accuracy stagnates or decreases. It is not abnormal that your train accuracy is higher than your test accuracy; after all, your model has an advantage with the train set since it's been given the correct answer already.
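To make the selection reproducible, the search over C can also be automated with a 10-fold grid search; a minimal sketch, where X_train and y_train are placeholders for your image features and labels and the grid values are illustrative:

from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {"C": [0.001, 0.01, 0.1, 0.62, 1.0]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)   # best C and its mean CV accuracy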
H: What is the difference between ensemble methods and hybrid methods, or is there none? I have the feeling that these terms often are used as synonyms for one another, however they have the same goal, namely increasing prediction accuracy by combining different algorithms. My question thus is, is there a difference between them? And if so is there some book/paper that explains the difference? AI: Here is a para that I found by searching What are hybrid methods in Machine Learning, on google. "In general, it is based on combining two different machine learning techniques. For example, a hybrid classification model can be composed of one unsupervised learner (or cluster) to pre-process the training data and one supervised learner (or classifier) to learn the clustering result or vice versa." Along with that example, let's consider an example of Ensemble Learning that is Random Forest. * In classical ensemble learning, you have different or similar algorithms, working on different or the same data-sets (for example Random Forest Stratifies the data set and builds different Decision Trees for those data-sets, while at the same time you can build different models on the same unstratified data-set and create an ensemble method). So in essence, you have different machine learning models, working independently of each other to give a prediction and then there is a voting system (hard or soft voting) which determines the final prediction. According to the example of the hybrid machine learning model that we saw, the models in hybrid machine learning models essentially feed their output to one another (one-way) in an effort to create an efficient and accurate machine learning model. So the difference in both is that ensemble methods work independently to vote on an outcome while hybrid methods work together to predict one single outcome, which no voting element present in it. *https://www.sciencedirect.com/science/article/pii/S1568494609001215
H: How does scaling affect Logistic Regression? I have searched a lot on the web for this question, but I never seem to find a consistent yet straight forward answer. Simply put, the question is: How exactly does scaling affect logistic regression? What should you expect from a logistic regression classifier when the data is scaled and when it is not? Is there any difference between min-max scaling and standard scaling in terms of logistic regression? AI: It affects anything optimized by a form of gradient descent, because it affects the relative scale of the dimensions of the input. If A is generally 1000x larger than B, then changing B's coefficient by some amount is in a sense a 1000x bigger move. In theory this won't matter but in practice it can cause the gradient descent to have trouble landing in the right place in B's dimension. I think the more significant effect may be regularization. All terms are penalized equally in a simple formulation of regularization. Again if A is generally 1000x larger than B, its coefficient will tend be 1000x smaller (all else equal) than B's, so will be far less penalized, when there's no particular reason to treat them differently. Finally it affects interpretation, if you want to view the coefficients' magnitudes as correlating with importance, for the same reason.
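A small sketch that makes the effect visible on a built-in dataset whose features sit on very different scales; the high max_iter is needed because the unscaled features make the optimiser struggle (the first point above), and the coefficient magnitudes only become comparable across features, and hence comparably penalised by the default L2 regularisation, after standardising:

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

raw = LogisticRegression(max_iter=10000).fit(X, y)
scaled = make_pipeline(StandardScaler(), LogisticRegression(max_iter=10000)).fit(X, y)

print(raw.coef_.round(3))      # magnitudes largely reflect each feature's units/scale
coefs = scaled.named_steps["logisticregression"].coef_
print(coefs.round(3))          # magnitudes are now comparable across features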
H: Which insights a data scientist could derive from text-analysis? I have many texts and I am trying to analyse them. After tokenising them, studying words frequency, spotting any typos, studying punctuations, I have been working on POS tagging. Since it is my first time in text mining and manipulation, I would like to know which kind of insights I could get from this information and what the best approach to present this analysis would be. For example: if I had many texts like the following: Hemingway =["When spring came, even the false spring, there were no problems except where to be happiest. The only thing that could spoil a day was people and if you could keep from making engagements, each day had no limits. People were always the limiters of happiness except for the very few that were as good as spring itself.","Most people were heartless about turtles because a turtle’s heart will beat for hours after it has been cut up and butchered. But the old man thought, I have such a heart too.","Perhaps as you went along you did learn something. I did not care what it was all about. All I wanted to know was how to live in it. Maybe if you found out how to live in it you learned from that what it was all about.","The people that I liked and had not met went to the big cafes because they were lost in them and no one noticed them and they could be alone in them and be together."] Shakespeare=["These violent delights have violent ends And in their triump die, like fire and powder Which, as they kiss, consume","Let me not to the marriage of true minds Admit impediments. Love is not love Which alters when it alteration finds, Or bends with the remover to remove. O no, it is an ever-fixed mark That looks on tempests and is never shaken; It is the star to every wand'ring barque, Whose worth's unknown, although his height be taken. Love's not Time's fool, though rosy lips and cheeks Within his bending sickle's compass come; Love alters not with his brief hours and weeks, But bears it out even to the edge of doom. If this be error and upon me proved, I never writ, nor no man ever loved.","O serpent heart hid with a flowering face!Did ever a dragon keep so fair a cave? Beautiful tyrant, feind angelical, dove feather raven, wolvish-ravening lamb! Despised substance of devinest show, just opposite to what thou justly seemest - A dammed saint,honourable villain!","Lord Polonius: What do you read, my lord? Hamlet: Words, words, words. Lord Polonius: What is the matter, my lord? Hamlet: Between who? Lord Polonius: I mean, the matter that you read, my lord."] Would it be useful a word frequency distribution, POS tags distribution? I have never presented any results, so I would like to know more on how a data scientist/analyst may 'tell' and look at this data. AI: Well, obviously the use cases depends on the industry. Also, I am assuming you are thinking of use cases that are somehow useful. But let's think of some examples: I once worked with a book distributor that tagged each book they sold with keywords (Fantasy, Horror, etc). You can automate the tagging process if you have a sufficiently large dataset of already labeled books. You can do the same with your phrases (Inspiring, Funny, etc) but probably you don't have labeled data. It would be nice to be able to ask an app for a phrase of certain type :). Something easier is sentiment analysis: are most of the phrases positive? Negative? You don't need labels for this. Style transfer: maybe you have several phrases by, let's say, Shakespeare. 
You can try transferring his style to Einstein's phrases. This is hard but feasible: look up Generative Adversarial Networks.
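As a starting point, here is a small sketch of the descriptive statistics mentioned in the question (word frequencies, POS tag distribution) plus a simple sentiment score, run on the Hemingway list from the question; it assumes NLTK is installed and its 'punkt', 'averaged_perceptron_tagger' and 'vader_lexicon' resources have been downloaded:

from collections import Counter
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

text = " ".join(Hemingway)                       # the list of quotes from the question
tokens = [t.lower() for t in nltk.word_tokenize(text) if t.isalpha()]

word_freq = Counter(tokens).most_common(10)                   # most frequent words
pos_freq = Counter(tag for _, tag in nltk.pos_tag(tokens))    # POS tag distribution

sia = SentimentIntensityAnalyzer()
sentiment = [sia.polarity_scores(quote)["compound"] for quote in Hemingway]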
H: What is the 1 Unit in the contraint of SVM: $y_i(wx_i+b) \geq1$ I am following this note on SVM. The constraint, $y_i(wx_i+b) \geq 1$, basically said all inputs, $x_i$, lie at least 1 unit away from the hyperplane on the correct side. What does it mean by 1 unit? In 2D space, does it mean moving the hyperplane, $w$, up by 1? AI: Main formula for SVM is - $y_i(wx_i +b) \geq d$ In the derivation process, it is changed to 1 to make it standardized for all hyper-plane. If it has to be described, it will be - "Greater than" "per unit of minimum margin distance" Let's suppose, If a hyper-plane has the minimum margin point at 4 Eucledien distance Another one has it at 4.5 Eucledien distance So, this $y_i(wx_i +b) \geq$ 1 means, 1 unit of "every 4 units" for first hyper-plane and 1 unit of "every 4.5 units" for the other hyper-plane What it meant - This is more for Mathematical convenience. Another neatness it added, the maximizing equation changes to 1/$w$ from F/$w$. F is the distance of the point which is nearest to the plane. Why it will not affect point position A plane i.e. $(wx_i +b)$ will not change if we rescale $w$ and $b$. So we rescaled it in such a way such that F becomes 1. This "1" will be different for different Hyperplanes depending on it's $w$. Added this screen from Support Vector Machines Succinctly. Please read it if you want a very detailed start to end explanation of SVM with python code Good references for SVM Alexandre Kowalczyk Shuzhanfan Professor Yaser Abu-Mostafa
H: Is it acceptable not to transform() test data after train data is being fit_transform()-ed We know that the best practice in data preprocessing (such as standardization, Normalization, ... etc) is that while we perform fit_trasform() on the training data, we apply transform() testing data so that the learned parameters from scaling the train data are applied on testing data. Similar to this: from sklearn.preprocessing import StandardScaler sc = StandardScaler() X_train = sc.fit_transform(X_train) X_test = sc.transform (X_test) The question is: Does it also make sense to perform fit_transform() on the training data but NOT transform() testing data at all so that we get to test the model performance on actual real-world data that are not transformed at all? Thank you AI: No, it does not make sense to do this. You model has learned how to map one input space to another, that is to say it is itself function approximation, and will likely not know what to for the unseen data. By not performing the same scaling on the test data, you are introducing systematic errors in the model. This was pointed out in the comments by nanoman - see that comment for a simple transformation example. To exaggerate the point, imagine you have a language model, translating text from English to French. You apply a single transformation on your English data: you first translate it to Spanish. Now the model is trained to translate Spanish to French, with some accuracy. Now you move on to the test data - still in English - and you do not apply the same transformation as you did to your training data. You are asking the model to translate directly from English to French, instead of Spanish to French, and it is obvious that the results won't be good. In principal, this idea is the same as with any other model and transformations, just that the impact might not always be so visible i.e. you might get really lucky and not notice a large impact. The language model might have learnt some elementary linguistics common to all three languages (e.g. overlapping vocabulary or sentence structuring), but we cannot expect the model to perform well, translating English to French. Practical Note You should only compute the transformation statistics (e.g. mean and variance for normalisation) only on training data and use these values to then transform the training data itself, and then the same values to transform the test data. Including the test dataset in the transform computation will allow information to flow from the test data to the train data and therefore to the model that learns from it, thus allowing the model to cheat (introducing a bias). Also, it is important not to confuse transformations with augmentations. Some "transformations" might be used to synthetically create more training data, but don't have to be used at test time. For example, in computer vision, deleting regions of an image. Test time augmentation is something you could read about. Extra discussion More complicated models (ones with many many more parameters) might be able to perform some kind of interpolation, especially if your dataset if N-dimensions with a large N (e.g. > 10). This has recently been seen with extremely large models, such as Open AI's GPT-3, which has 175 BILLION parameters, and is therefore even able to perform quite well on completely different tasks, let alone the given problem in the training set range.
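In practice, the safest way to guarantee this behaviour is to wrap the scaler and the model in a pipeline, so the scaler's statistics are always learned from the training data only. A minimal sketch, with X_train/X_test/y_train/y_test as placeholders:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
pipe.fit(X_train, y_train)          # fit_transform happens on the training data only
score = pipe.score(X_test, y_test)  # test data is transformed with the training statistics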
H: static and dynamic data in clinical trials Hi everybody and thanks in advance for those who will help me for this problem. I have multiple data regarding patients involved in a clinical trial and my goal is to predict their death/non death. These data are composed by two datasets , the first is static (age, gender ecc) and the second is dynamic (analysis results collected during multiple sessions like %hematocrit, %sodium, blood pressure and so on). I know that if I had only static data I would have used any ML classification algorithm and of course in case of only dynamic data the "right" way was to build a LSTM but with both of them I'm a bit confused. Which kind of model can I build in this case? I was thinking at two models where the output of the first one (static) becomes the input of the second one (dynamic). Is it a good idea? Thank you. AI: What you are referring to is called a multi-input model and can be esaily built in most deel learning frameworks. The idea is to have both types of data as separate inputs, then use specific layers depending their types (recurrent layers to sequence data, CNN to images, and so on...) to later on concatenate them together. If you can use Keras, there is the functional Api which is specialy well suited for the task at hand. An example of code (based the example given in the documentation) for your problem could be: from keras.layers import Input, Embedding, LSTM, Dense, merge from keras.models import Model # headline input: meant to receive sequences of 100 integers, between 1 and 10000. # note that we can name any layer by passing it a "name" argument. main_input = Input(shape=(100,), dtype='int32', name='main_input') # this embedding layer will encode the input sequence # into a sequence of dense 512-dimensional vectors. x = Embedding(output_dim=512, input_dim=10000, input_length=100)(main_input) # a LSTM will transform the vector sequence into a single vector, # containing information about the entire sequence lstm_out = LSTM(32)(x) #At this point, we feed into the model our auxiliary input data by concatenating it with the LSTM output: auxiliary_input = Input(shape=(5,), name='aux_input') x = merge([lstm_out, auxiliary_input], mode='concat') # we stack a deep fully-connected network on top x = Dense(64, activation='relu')(x) x = Dense(64, activation='relu')(x) x = Dense(64, activation='relu')(x) # and finally we add the main logistic regression layer main_output = Dense(1, activation='sigmoid', name='main_output')(x) #This defines a model with two inputss: model = Model(input=[main_input, auxiliary_input], output=main_output) #Then compite and train model.compile(optimizer='rmsprop', loss='binary_crossentropy') model.fit([headline_data, additional_data], labels, nb_epoch=50, batch_size=32) In your case, the dynamic data would be the headline_input and your static data the auxiliary_input. The model will take both, apply the recurrent layer to the former and concatenate them to pass the union through the dense layers. Of course, many of these parameters will depend on your data, but at least this example will give you an idea on how to build such model. There is also this interesting project conditionall RNN which is also meant for this purposes. Worth taking a look.
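Note that the snippet above uses the old Keras 1.x API (merge, nb_epoch, input=/output=). A rough modern tf.keras equivalent for this specific case, with assumed shapes (e.g. 20 sessions of 12 lab values for the dynamic part and 5 static fields), would look like this:

from tensorflow.keras.layers import Input, LSTM, Dense, Concatenate
from tensorflow.keras.models import Model

dynamic_input = Input(shape=(20, 12), name="dynamic_input")   # (timesteps, features): assumed
static_input = Input(shape=(5,), name="static_input")         # age, gender, ...: assumed

x = LSTM(32)(dynamic_input)                 # summarise the session sequence into one vector
x = Concatenate()([x, static_input])        # join with the static patient data
x = Dense(64, activation="relu")(x)
output = Dense(1, activation="sigmoid", name="death")(x)      # death / non-death

model = Model(inputs=[dynamic_input, static_input], outputs=output)
model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit([X_dynamic, X_static], y, epochs=50, batch_size=32)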
H: How to implement a YOLO model using TensorFlow? I want to implement a face detection model using TensorFlow. I have a dataset downloaded from Google's OID. I have prepared the annotation file as required by YOLO, but I don't understand how to convert the annotations to the required output tensors. Is learning Darknet a simpler option? AI: I personally prefer to use Darknet rather than TensorFlow. However, you can use this code to build YOLOv3 in TensorFlow. Also, take a look at this link.
H: Data Imbalance in Regression Tasks Having read a lot about class imbalance in classification tasks, I'd like to know what is the methodology for data imbalance in regression tasks. Particularly, - What is the procedure to check for it (for example in classification, we'd check if all classes have the same training examples) - What are possible ways to solve it? AI: In a regression framework one would usually not speak about (class) imbalance. In this case it is about the distribution (mostly of the dependent variable). A distribution has several moments. The expected value (1st), variance (2nd), skewness (3rd) and kurtosis (4th). They all together define how a distribution looks like. Generally, a normal distribution works well with regression. You can also test normality. As mentionned by others, you can plot the (empirical) probability density function. Another useful thing is a boxplot. And you can look at the moments. R example (skewed distribution): library(sn) library(moments) set.seed(12) d1=rsn(n=10000, xi=1, omega=5, alpha=100, tau=0.5, dp=NULL) plot(density(d1)) boxplot(d1) skewness(d1) # 0 for normal kurtosis(d1) # 3 for normal This yields: > skewness(d1) # 0 for normal [1] 0.8130714 > kurtosis(d1) # 3 for normal [1] 3.412369 The probability density function shows a "tail" to the right (positive skewness / right-skewed distribution). Also the box plot indicates that there are some "unusually high" values in the distribution. R example (normal distribution): Same procedure for normal... d2=rnorm(10000, 3, 2) plot(density(d2)) boxplot(d2) skewness(d2) # 0 for normal kurtosis(d2) # 3 for normal Density function and boxplot look much more "harmonic" in this case. > skewness(d2) # 0 for normal [1] 0.002732919 > kurtosis(d2) # 3 for normal [1] 2.947005 What can you do with non-normal distributions? With skewed/non-normal distributions you often get bad predictions from a regression model, especially where "unusually low/high" values are found. See this post for example. The solution often is to do some kind of linear transformation to the data. This is one reason why it often is a good idea to standardize data in predictive modelling. One can also try other linear transformations, e.g. taking logs. However, there is no "silver bullet" to adress the problem. Another consequence of skewed data in linear regression can be heteroscedasticity, which leads to unreliable standard errors, p-values, and confidence bands. See this post for a discussion.
H: Google OR-Tools - Routing - Penalties - Adding Different Penalty to Different Location (Python) I am using Google's OR-Tools for route optimisation. References can be found here. I am performing an optimisation where certain pick-up locations are dropped based on a penalty at each location. The documentation gives an example and states: "In this context, a disjunction is simply a variable that the solver uses to decide whether to include a given location in the solution. In this example, the method adds the same penalty to each location, but in general you can add different penalties to different locations.", and provides the following chunk of code as an example of adding a constant penalty to each location: penalty = 1000 for node in range(1, len(data['distance_matrix'])): routing.AddDisjunction([manager.NodeToIndex(node)], penalty) where data is a dict containing a n x n distance matrix (distance_matrix). (This code runs and returns a sensible outcome). Now, I would like to vary the penalty at each location, and I've used the following code to do so: for node in range(1, len(data['distance_matrix'])): routing.AddDisjunction([manager.NodeToIndex(node)], data['penalty'][manager.NodeToIndex(node)]) where penalty is an array of length n containing penalties for each location. (In these examples, the depot is assumed to be at the first location and is penalty-free; range starts at 1). When I run my code, I get the following error message: TypeError: Wrong number or type of arguments for overloaded function 'RoutingModel_AddDisjunction'. Possible C/C++ prototypes are: operations_research::RoutingModel::AddDisjunction(std::vector< int64 > const &,int64,int64) operations_research::RoutingModel::AddDisjunction(std::vector< int64 > const &,int64) operations_research::RoutingModel::AddDisjunction(std::vector< int64 > const &) Question: What am I doing wrong? How do I go about adding different penalties to different locations? Could someone please translate the C/C++ into Python? AI: Given your code, it looks like data['penalty'][manager.NodeToIndex(node)] is not an integer, probably a float, and therefore it doesn't match the function signature, as the error message suggests. Try converting it to an integer.
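If that is the case, casting the penalty to a Python int should be enough; a sketch of the corrected loop, assuming data['penalty'] is ordered by location node with the depot excluded:

for node in range(1, len(data['distance_matrix'])):
    index = manager.NodeToIndex(node)
    routing.AddDisjunction([index], int(data['penalty'][node]))  # penalty must be an int64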
H: Variance as criteria for feature selection I'm working on an unsupervised clustering problem. I read multiple times that a variable with higher variance can be chosen over a variable with a lower variance. For example, scikit-learn implements a function that removes features with a variance lower than a threshold. (sklearn.feature_selection.VarianceThreshold) However, isn't the variance entirely dependent on scale/measurement unit? If I standardize my features, the variance is 1 for all of them. AI: You are correct that variance depends on the scale and typically it is not a good proxy for how informative a feature will be in terms of the response. The exception is zero variance features. A feature with zero variance has the exact same value for all observations and is therefore completely uninformative. Sklearns VarianceThreshold function defaults to removing only the features with exactly zero variance. Another group of non-informative features is the near-zero-variance feature. However, typically, classifying a feature as a near-zero-variance feature does not actually rely on computing the feature's variance at all. See, for example, the NearZeroVar function documentation from R's caret package: "nearZeroVar diagnoses predictors that have one unique value (i.e. are zero variance predictors) or predictors that have both of the following characteristics: they have very few unique values relative to the number of samples and the ratio of the frequency of the most common value to the frequency of the second most common value is large." You can see that the algorithm is not scale-dependent and does not rely on variance.
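A small sketch illustrating both points with made-up numbers: a zero-variance column is dropped regardless of scale, but any non-zero threshold becomes meaningless once the features are standardised:

import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.preprocessing import StandardScaler

X = np.array([[1000.0, 0.10, 7.0],
              [1002.0, 0.11, 7.0],
              [ 998.0, 0.09, 7.0],
              [1001.0, 0.12, 7.0]])   # col 1: large scale, col 2: tiny scale, col 3: constant

print(VarianceThreshold(threshold=0.5).fit(X).get_support())
# [ True False False]  -> keeps only the large-scale feature

X_std = StandardScaler().fit_transform(X)
print(VarianceThreshold(threshold=0.5).fit(X_std).get_support())
# [ True  True False]  -> after scaling, only the constant column is removed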
H: How to adjust/smooth a certain number using constants or rules Hi, I am handling a dataset with a customer purchase history. The field ord_cnt represents purchases without coupon usage, and cpn_ord_cnt represents purchases with coupon usage. There are two things I'm trying to figure out: How to penalize a small purchase count when it comes to the coupon purchase proportion. I added a field 'mod_cpn_ratio' because when I calculate the simple proportion (cpn_ord_cnt/total_ord_cnt), it does not do a good job for the customers with a smaller number of purchases, e.g. (1 coupon purchase / 2 entire purchases) = 0.50 vs (50 / 100) = 0.50. So for 'mod_cpn_ratio' I added a constant '10' to the denominator to penalize those who bought less. I wonder if this was a fair way to handle this problem, or if there are better ways to handle it (possibly not just a constant but a dynamic number). How to rank customers by total_ord_cnt, but give some extra credit to those who made more purchases without coupons. When two customers have the same 'total_ord_cnt', I would like to rank higher the one who purchased more without coupons. Any advice would be appreciated. Thanks! AI: This is a good question. The best method will be whatever gives you the best result, so the method you suggest of adding a constant to the denominator might be the right idea. However, I don't think it is ideal. The reason is that you will be systematically underestimating the percentage of coupon uses, which is not ideal. It also might be difficult to interpret or explain. What I suggest instead is to borrow a play from the Bayesian handbook and make use of a prior. A prior represents your beliefs about the coupon spending of the average person. For example, you can represent your belief about the distribution of percent coupon usage as a beta distribution. All you need to know about this distribution is that it has parameters $\alpha$ and $\beta$ and that $\alpha$ corresponds to the number of products that used coupons while $\beta$ corresponds to the number of products that didn't use coupons. Let's say you believe the average person uses coupons for 10% of items. You might choose $\alpha=1$ and $\beta=9$ for your prior, which has a mean of $\tfrac{\alpha}{\alpha+\beta}=0.1$. For each person in your dataset, you can estimate how often they use coupons by updating the prior with the new information. For example, if the person used one coupon on two items, you update the beta distribution by adding 1 to $\alpha$ (the coupon purchase) and 1 to $\beta$ (the non-coupon purchase), which gives a new estimated mean of $\tfrac{2}{2+10}\approx0.17$. You can see that when the number of purchases for an individual is small, you have little information, so your estimate is going to be close to your prior. Without much information, you assume a person is close to the average. However, when you have a lot of information, the effect of the prior will be minimal because you have a lot of information and no longer need to make assumptions. The simple way to think about this is that you are adding a constant to the numerator and the denominator: (# coupon purchases + $\alpha$) / (# entire purchases + $\alpha$ + $\beta$) Note: even for a constant proportion like 10%, you can choose different values for $\alpha$ and $\beta$ that represent how confident you are in the prior. For example, you can choose $(\alpha,\beta) = (0.1, 0.9)$ or $(\alpha,\beta)=(10,90)$. This controls how much "smoothing" you apply.
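A tiny sketch of this computation in Python, using the field names from the question:

def smoothed_cpn_ratio(cpn_ord_cnt, total_ord_cnt, alpha=1.0, beta=9.0):
    """Posterior mean of a Beta(alpha, beta) prior (mean 0.1) updated with the observed counts."""
    return (cpn_ord_cnt + alpha) / (total_ord_cnt + alpha + beta)

print(smoothed_cpn_ratio(1, 2))     # ~0.17: little data, pulled towards the 10% prior
print(smoothed_cpn_ratio(50, 100))  # ~0.46: plenty of data, close to the raw 50%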
H: Gensim: create a dictionary from a large corpus without loading it in RAM? The topic modelling library Gensim offers the ability to stream a large document instead of storing it in memory. Streaming is possible for the stage of converting the corpus to BOW, but the dictionary must have been created first, and apparently this requires loading the full corpus:

from gensim import corpora
dictionary = corpora.Dictionary(a_huge_corpus)

Is there a way to create the dictionary without loading the whole corpus at once? ps: this is my first question on the site, I'm a beginner ;)
AI: I'm not completely sure whether the issue is having the full corpus in memory (i.e. the a_huge_corpus variable) or having the full Dictionary in memory, but in the first case the add_documents method is useful for adding one part of the corpus at a time:

from gensim.corpora import Dictionary

corpus = [
    ["this", "is", "a", "random", "sentence"],
    ["this", "is", "another", "random", "sentence"],
    ["and", "another", "one"],
    ["this", "is", "sparta"],
    ["just", "joking"],
]

dct = Dictionary()
dct.add_documents(corpus[:3])
dct.add_documents(corpus[3:])

print(dct == Dictionary(corpus))  # True
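If the corpus itself is too large for memory, add_documents can also be fed from a generator that reads it lazily. A rough sketch, assuming one whitespace-tokenisable document per line in a hypothetical corpus.txt:

from gensim.corpora import Dictionary

def stream_tokens(path):
    # yield one tokenised document at a time instead of loading them all
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield line.lower().split()

dct = Dictionary()
dct.add_documents(stream_tokens("corpus.txt"))  # hypothetical file path
print(dct)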
H: NER - What advantage does IO Format have over BIO Format In this paper, the authors say that they used the IO schema instead of BIO in their dataset, which, if I am not wrong, means they just tag the corresponding entity type, or "O" in case the word is not a named entity. What advantage does this method have? I would imagine that it just takes away valuable information from the model and makes it harder to detect entities that span multiple words. AI: To my knowledge, there is no clear best among the different labelling scheme variants for NER: IO, BIO, BILO (L=last), BILOU (U=unit, for a single-word entity)... I might forget some. In theory at least, the advantage of a simple scheme like IO is that the reduced set of labels may prevent the model from making mistakes in cases where a word or type of word can appear in any position of an entity, for instance. A more complex scheme can be more accurate but is more likely to lead to overfitting, because the sample of cases for each class is smaller. If possible, it's preferable to evaluate the different options and pick the best one for the specific target data.
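To make the difference concrete, here is how one sentence could be tagged under each scheme (the labels are illustrative):

tokens   = ["Angela", "Merkel", "visited", "New",   "York"]
bio_tags = ["B-PER",  "I-PER",  "O",       "B-LOC", "I-LOC"]
io_tags  = ["I-PER",  "I-PER",  "O",       "I-LOC", "I-LOC"]
# IO drops the B- marker, so two adjacent entities of the same type
# (e.g. "Paris Berlin") can no longer be told apart.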
H: Understanding bootstrapping in bias variance decomposition I was going through a bias and variance tradeoff article and it makes use of the bias_variance_decomp function from the mlxtend library. This method takes a parameter called num_rounds which is described in the API docs as follows: num_rounds : int (default=200) Number of bootstrap rounds (sampling from the training set) for performing the bias-variance decomposition. Each bootstrap sample has the same size as the original training set. I am trying to understand what exactly this is. Given a training set, is it how many times to run the model on it by sampling the training set? My understanding was: given a training set of say 1 million data points, it will randomly sample "some of" these data points 200 times and train/test the model on them. Q1. Am I correct with it? Q2. Also, what is the value of "some of"? I mean, how is it determined how many data points to sample? The doc says "Each bootstrap sample has the same size as the original training set." Does it mean it simply selects all of the 1 million data points? If yes, then how is it even sampling (I feel sampling is randomly selecting some data points)? Also, this would give the same training set each time. I am sure I am missing some basic understanding. What is it? AI: In bootstrapping, the sampling is done with replacement. So although each sample is the same size as the original training set, it will contain some duplicated instances and omit other instances. This explains how each bootstrapped sample is the same size as the training set, yet they will all be different. On average, about 63% of instances will be included in each bootstrap sample, and 37% excluded: each instance is missed by a single draw with probability (1 - 1/n), so it is excluded from the whole sample with probability (1 - 1/n)^n, which approaches 1/e ≈ 0.37. These excluded instances (the out-of-bag samples) become the test set for that bootstrapped sample. The overview on this page of the mlxtend documentation tells you a bit more about this.
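A quick numerical check of the ~63% figure with NumPy:

import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
sample = rng.integers(0, n, size=n)      # one bootstrap sample: n indices drawn with replacement
included = np.unique(sample).size / n
print(included)                           # ~0.632, i.e. ~63% in-bag and ~37% out-of-bag
# matches 1 - (1 - 1/n)**n  ->  1 - 1/e ≈ 0.632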
H: How do I Profile a Jupyter Cell and Rank it by Cumulative Time? I'm using SciKit-Learn's OneHotEncoder to encode some categorical features. However, the time complexity of the function seems to have an exponential dependence on the number of rows passed to it. I would like to profile my code to figure out what's going on, particularly, I would like to use cumulative time to rank the profiler's output. I'm using Jupyter notebooks to write the code, how can this be achieved? AI: Jupyter has built in "cell magics", one of those cell magics is the prun function. To use it, you put a call to prun at the top of the cell. For example: %%prun YOUR-CODE-BELOW The %% at the beginning of the command tells Jupyter that this is a Cell Magic, if you don't include this, you'll get a syntax warning. To rank by cumulative time, you use the -s option and pass it "cumulative time": %%prun -s "cumulative" YOUR-CODE-BELOW This will profile the code in your cell and show you the output in a pop-up box on the bottom of your screen.
H: how to proportionally round to integer values I am trying to model student progress. Grades are integer values, but progress predictions are fractional. For example, I have 10 students who are predicted a grade 6, but the model says they should achieve 6.8 Obviously, I can round this to grade 7 for all students, but it would be more satisfactory to have 8 students predicted 7, and 2 students predicted as 6. so a couple of questions - is this sort of 'rounding' a thing (and if so does it have a name)? and more importantly - any thoughts on methods to achieve this proportional rounding (a name I just made up) for the few thousand grades I am modelling. AI: Instead of a pure regression approach, you could use ordinal regression: In statistics, ordinal regression, also called ordinal classification, is a type of regression analysis used for predicting an ordinal variable, i.e. a variable whose value exists on an arbitrary scale where only the relative ordering between different values is significant. It can be considered an intermediate problem between regression and classification. This would give you discrete predictions but keeping the ordinal relation among classes (unlike pure multiclass classification).
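A minimal sketch of the ordinal-regression route, here using the mord package (one possible implementation, assumed to be installed; the features and grades below are synthetic placeholders):

import numpy as np
import mord  # pip install mord; statsmodels' OrderedModel is an alternative

X = np.random.rand(200, 3)                 # placeholder features
grades = np.random.randint(4, 9, size=200)  # placeholder integer grades

model = mord.LogisticAT(alpha=1.0)  # "All Thresholds" ordinal logistic regression
model.fit(X, grades)
print(model.predict(X[:10]))        # discrete grade predictions that respect the ordering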
H: Can applying different models and seeing which one fits best be called brute forcing? I have seen a tutorial which said that you have to try different models and see which fits best on your data. Can this be considered brute force? I have searched this on google and the closest answer that came up was on Quora (https://www.quora.com/If-today-s-Machine-Learning-is-considered-brute-force-how-do-you-envision-the-Machine-Learning-of-the-future). Can someone please explain this question in relation to this link in as a beginner friendly manner as possible. AI: I think that the answers in quora miss the point/do not understand the question. Those answers respond to the question "whether or not machine learning algorithms use brute force in order to reach their conclusion". If this was indeed your question, then those answers seem pretty fine. It seems however that you are not asking this. If we consider a machine learning problem as "find the best model for a job" (a meta approach where we already know that we will use machine learning) and we approach it by trying different models and see which one works best, this is indeed a form of brute-forcing. This is made more apparent if we consider that for many problems, there are characteristics of the data that can help us have an idea about which algorithms would perform the best before we even try anything. As an example, if the data are linearly separable, SVM is a good candidate. Since there are such ways to chose a model, if we simply try everything, it is brute-force. In addition, another aspect of the machine learning process that is essentially brute force is hyperparameter tuning. I haven't seen anybody not trying a lot of different values for each parameter and keeping the best ones, because in the end we rarely have an idea about which value would work best, and even when we do, nobody wants to risk not trying something that could potentially lead to better results. So, to sum up, the process of creating a machine learning model (not the algorithms themselves) has aspects of brute forcing that either complement or completely replace other methods.
H: Why does SciKit-Learn's OneHotEncoder take so long on a Large Dataset? I'm using an older version of SciKit-Learn, version 1.0.2, to try and OneHotEncode some data. My dataset is fairly large, 184 columns going to 311 after the OneHotEncoding, with ~500,000 rows. Despite this, I'm confident I could write code that OneHotEncodes my columns in a minute maximum. Currently, SciKit-Learn's OneHotEncoder is on 10 minutes and counting. Why is this code so slow? Is there anything I can do to speed it all up? AI: If you use OneHotEncoder in a Jupyter Notebook, you can use %%prun -s "time" to profile your code. See How do I Profile a Jupyter Cell and Rank it by Cumulative Time? for more information. Doing this with a subset of the rows shows that the function calls taking up the most time are inside the _encode module function from SciKit-Learn's utils sub-package. Looking inside this module you can see that _encode's _check_unknown, and _extract_missing contain inefficient list comprehensions, and this is most likely what's slowing you down. If you want to create a OneHotEncoding of your data I suggest using Panda's get_dummies functionality, it's so much faster. There are a couple of differences, pd.get_dummies(...) disregards any NaN values by default, this can be changed by switching dummy_na=True. It is also sometimes considered best practice to drop one of the categorical features as a column, as one of the columns can be represented by all zeros in the other columns. For more information on that try reading these articles: dummy variable trap, does the dropped column matter?
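A rough sketch of the pandas route (the toy frame stands in for your data):

import pandas as pd

df = pd.DataFrame({
    "colour": ["red", "green", "blue", None],
    "size":   ["S", "M", "M", "L"],
})

encoded = pd.get_dummies(
    df,
    columns=["colour", "size"],  # only these columns are expanded
    dummy_na=True,               # also create an indicator column for NaN
    drop_first=True,             # drop one level per feature to avoid the dummy variable trap
)
print(encoded.head())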
H: The Sklearn train_test_split function creates training data and test data which are not similar I am working on loan default data and my model is not able to make accurate predictions on the test set because the default percentage on the test set is very different from that of the training set. Is there any way to ensure that the Sklearn train_test_split function splits the data in such a way that the overall behaviour is not changed? Thanks in advance Here is the data in a tabular format AI: Try the stratify parameter: it keeps the distribution of the variable you pass in (approximately) the same in both the train and the test split. Since your concern is the default percentage, the most direct option is to stratify on the target itself:

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)

You can also stratify on a categorical feature whose distribution you want to preserve, for example the "employee_type" variable with categories like Accountants, Core staff, Drivers, etc.:

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=X['employee_type'])
H: What should you do with NaN Values? I have a dataset with a number of NaN values in it. I believe ~13,000 rows are affected out of ~500,000, so about 2.6% of the dataset. I know that I can remove these rows or impute values for them. In general, when is one method preferable to the other, and which is best in my specific scenario? AI: Before you do anything, it's good to assess which of the following three categories your NaN rows fall into: Missing Completely at Random (MCAR), Missing at Random (MAR), Missing Not at Random (MNAR). MCAR If they're MCAR that means the "missing data" (i.e., the data that would have been there if it had been entered correctly or received in the first place) has the same distribution of values as the non-missing data. For example, if there's a categorical column with four categories A, B, C, D, which have a combined ratio a:b:c:d in the non-missing data, then any missing data will have the same ratio a:b:c:d. In this case it's possible to impute the missing values using the distribution of the non-missing values. MAR If the missing data is MAR then there is a possibility that the distribution of the missing data and the non-missing data is different. This can occur, for example, in medical tests, where a test that is often performed on older people, who get worse test results, is less likely to be performed on younger people, who get better test results. In this case, it's not possible to impute the missing values with the distribution of the non-missing values. However, if a variable in your model is largely responsible for the change in distribution, in this case age, you can stratify your data by that variable, so that within your chosen strata the distribution for the missing and non-missing data are basically the same. That means that within your strata you can still use imputation to predict the missing values. MNAR Finally, if your data are MNAR then there is a possibility that the distribution of the missing data and the non-missing data is different, but your model doesn't contain another variable that can explain that difference and stratify your data. This situation can occur when the chance of recording a variables value is dependent on the value of that variable and that variable alone. For example, alcoholics may decline to answer a question on alcohol consumption out of embarrassment, whereas T-Totallers would have no issue answering. In this case, you have to come up with some sort of model that explains why the data might be missing and use that to impute values for the missing data. If you'd like to know more about the above three definitions read this article, and this web page: MNAR, Once you know NaN Category Once you know what category your missing data falls into, you can make a decision about whether or not to delete the rows or impute the data. In your specific case, I would find out if your data is either MCAR or MAR and, if it is, delete it. This data is known as "ignorable data" and, as it's only 2.6% of all data, its exclusion shouldn't make an appreciable difference to the effectiveness of your model. If it's MNAR then you need to figure out what the pattern behind its exclusion is and use that to impute the data. For more information on the various methods of imputing data look at the following article from Towards Data Science.
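For the mechanics of either option, a short pandas/scikit-learn sketch (the toy frame stands in for your data):

import pandas as pd
from sklearn.impute import SimpleImputer

df = pd.DataFrame({"age": [25, None, 40, 31],
                   "income": [30000.0, 42000.0, None, 51000.0]})

# Option 1: drop rows with any NaN (reasonable for MCAR/MAR data making up only ~2.6% of rows)
dropped = df.dropna()

# Option 2: impute, for example with the column median
imputer = SimpleImputer(strategy="median")
imputed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)

print(dropped.shape, imputed.shape)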
H: Is it valid to use Spark's StandardScaler on sparse input? While I know it's possible to use StandardScaler on a SparseVector column, I wonder now if this is a valid transformation. My reason is that the output (most likely) will not be sparse. For example, if feature values are strictly positive, then all 0's in your input should transform to some negative value; thus you no longer have a sparse vector. So why is this allowed in Spark, and is it a bad idea to use StandardScaler when you want sparse features? AI: The default for the parameter withMean is False, so that data won't be centered, just scaled by the standard deviation(s).
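For reference, a minimal PySpark sketch that keeps withMean=False so sparsity is preserved (the toy DataFrame is made up):

from pyspark.sql import SparkSession
from pyspark.ml.feature import StandardScaler
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [(Vectors.sparse(4, [(0, 1.0), (3, 7.0)]),),
     (Vectors.sparse(4, [(1, 2.0)]),)],
    ["features"],
)

scaler = StandardScaler(inputCol="features", outputCol="scaled",
                        withStd=True, withMean=False)  # scale only, no centering
model = scaler.fit(df)
model.transform(df).show(truncate=False)  # output stays sparse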
H: What are MLM and NSP models actually used for after they've been trained? I am a Python programmer working with deep learning nets and I have recently built my own language models as well as I have fine-tuned popular models like BERT. MY question is - after these models have been successfully trained, what are they used for? I understand that masked-language models can predict what the masked word is, but what is the point? What is a real-world application of this model? The same question goes for next-sentence prediction models - what is a real-world application? Thank you. AI: You normally fine-tune pre-trained masked language models in a different "downstream" task, e.g. text classification. MLM and NSP pre-training is not an end, but just a means to profit from a large amount of unlabeled data in your supervised learning problem.
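As a sketch of what fine-tuning on a downstream task can look like with the Hugging Face transformers library (the model name and labels are just examples):

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

inputs = tokenizer(["great movie", "terrible movie"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

outputs = model(**inputs, labels=labels)  # classification head on top of the MLM-pre-trained encoder
outputs.loss.backward()                   # continue training with your optimizer of choice
print(outputs.logits.shape)               # (2, 2)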
H: Revisit - Encoding before vs after train test split? Background: Like a good data scientist, I planned to fit my encoder on my training set and use it to transform my test set. However, when I tried to transform my test set, the encoder threw an error. The reason was that there was a class in the test set that was not encountered in the training set during fitting (due to it being a low frequency class), so the encoder did not know how to deal with it. One may argue that if I had performed encoding on the total dataset, then the encoder would have knowledge of the class in my test set, which was not in my training set. This would be data leakage. However, I am unconvinced that data leakage--even if it exists--is problematic. In fact, the classes are known beforehand (even if they are not known to the model), so why not encode them accordingly. Prior search: The issue of whether to perform encoding before or after the train-test split has been brought up before (see post1 and post2). I was unable to leave a comment being a newcomer, so I am raising the issue again, this time to get a deeper explanation. The second post says "it's probably not as serious as other kinds of data leakage" but does not offer support. Question: Is the data leakage that occurs for variable encoding (one-hot encoding or ordinal encoding) problematic and how so? A reference would be appreciated. By the way, I am not arguing that data leakage is not a problem for every situation. I agree that for some preprocessing steps--like normalization--it makes total sense to perform the fit on the training set and the transform on the test set. With normalization, the data leakage is obvious and it is clear that it would be problematic. I am just not convinced that data leakage for encoding categorical variables is a problem. AI: In my opinion, the risk of data leakage is not the main reason why it's not a good idea to encode the class on the full dataset. The only reason why one would do the encoding on the full dataset would be to avoid exactly this issue of having an unknown class in the test set. In other words, to prevent a technical error because the code will crash when it encouters the unknown class during evaluation. But the technical error is just the symptom, the real problem is that it makes no sense to train a supervised model and expect it to predict classes that it did not see during training: Either the class should really be taken into account in the task, and then the training set is clearly not representative enough. Or it's an anomaly and it should be completely discarded. If there is no way to extend the training set, this is the only option. Anyway if the class appears very few times in the full dataset, it's very likely that the model doesn't have enough examples to accurately deal with this class. So even if by chance it has one or two instances in the training set, the high risk overfitting means it's not worth to take the class into account.
H: What do the precision-recall curve and ROC curve tell us about threshold invariance Consider a binary classification problem. Intuitively, a value for the area under the curve (for both curves) very close to 1 shows that the curve is almost L-shaped. Thus, this means that the value on the y axis stays rather consistent despite changes in threshold, and if we were to invert the axes, this would hold true for both values plotted. Does this essentially mean that an L-shaped curve means that the model performs equally well (especially for the PR curve, since precision and recall are used to calculate F1, which is a pretty robust and widely used metric) for all classification thresholds? Or did I make some jump in my logic? AI: Does this essentially mean that an L-shaped curve means that the model performs equally well [...] for all classification thresholds No it doesn't. Let's consider a perfect predictor: all the positives are scored 1.0, all the negatives 0.0. The curve is perfectly L-shaped. There are infinitely many thresholds between -Inf and 0.0 which will perform terribly and classify all the negatives as positives. Similarly, all the thresholds between 1.0 and +Inf will incorrectly classify all the positive observations as negatives. This is an extreme example, but you can easily extend it to cases where not all positives are 1.0 (for instance they are between 0.5 and 1.0) and not all negatives are 0.0 (for instance between 0.0 and 0.5), and see that there are a lot of possible thresholds that will misclassify observations. Here is a way to visualize it: although the curve is L-shaped, many classification thresholds that misclassify a significant number of cases fall on the vertical ("I") and horizontal ("_") parts of the curve. Only the threshold 0.5 is a good classification threshold.
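A quick way to see the same thing with scikit-learn, using a perfectly separating scorer:

import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

y_true = np.array([0, 0, 0, 1, 1, 1])
scores = np.array([0.05, 0.1, 0.2, 0.8, 0.9, 0.95])  # perfectly separates the classes

print(roc_auc_score(y_true, scores))              # 1.0, i.e. a perfectly L-shaped ROC curve
for t in [0.0, 0.5, 0.99]:
    print(t, accuracy_score(y_true, scores > t))  # 0.5, 1.0, 0.5: only sensible thresholds work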
H: Validation and training loss of a model are not stable Below I have a trained model; the loss of both the training dataset (blue) and validation dataset (orange) is shown. From my understanding, the ideal case is that both validation and training loss should converge and stabilize in order to tell that the model does not underfit or overfit. But I am not sure about the model below. What can you tell from its loss, please? In addition, this is the accuracy of the model: Update: after setting the learning rate to 0.0001 as the answer suggested, I got the following loss: And accuracy of the model: AI: It seems like you're overfitting. There are tons of articles and blogs on how to avoid overfitting, but I mention some of them here anyway:
Reduce your learning rate to a very small number like 0.001 or even 0.0001.
Provide more data.
Set Dropout rates to a number like 0.2 and keep them uniform across the network. Another modern approach is to use Batch Normalization instead of Dropout.
Try decreasing the batch size.
Use an appropriate optimizer: you may need to experiment a bit on this. Try different optimizers on the same network and select the one which gives you the least loss.
Also try reducing the size of your model.
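For illustration, a rough sketch of what a lower learning rate, Dropout, and Batch Normalization look like in Keras (the layer sizes are arbitrary):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # smaller learning rate
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
# model.fit(X_train, y_train, validation_data=(X_val, y_val), batch_size=32, epochs=...)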
H: What kind of model should I use? I am working with a dataset with about 10,000 customers. About 3,000 engaged with dozens of marketing campaigns over the years. I am trying to create a model to find which marketing campaign to use on a given customer. I was thinking of creating affinity scores based on conversions from these campaigns and do some sort of random forest/logistic regression. It gets tricky because only 3,000 have engaged and I am not sure what to recommend for the other 7,000. Any suggestions? AI: Use an unsupervised method such as clustering to group users, then assign marketing campaigns that have been used by others within the same cluster.
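A rough sketch of the clustering step with scikit-learn (the feature matrix is a placeholder for whatever attributes you have for all 10,000 customers):

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

features = np.random.rand(10_000, 8)   # placeholder customer features

X = StandardScaler().fit_transform(features)
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# For the 3,000 engaged customers, find the best-converting campaign per cluster,
# then recommend that campaign to the unengaged customers in the same cluster.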
H: Accuracy vs Categorical Accuracy I was running a DNN model that uses ResNet50 for Transfer Learning. While fitting the training data on my model to check the initial trend (would run for more epochs if initial trend seems right), I noticed that even though my training ACCURACY metric consistently remained 0.0000e+00 for 3 epochs, my training CATEGORICAL ACCURACY metric increased from 0.4689 to 0.5278. The loss showed a decreasing trend as well. Similar behaviour was observed for validation set. Code Snippet: from tensorflow.keras.applications.resnet50 import ResNet50 METRICS = [ tf.keras.metrics.Accuracy(), tf.keras.metrics.CategoricalAccuracy(), ] tf.keras.backend.clear_session() model = tf.keras.Sequential() pretrained_model = ResNet50( include_top = False, weights = "imagenet", input_shape = (512,512,3), pooling = "avg" ) for layer in pretrained_model.layers: layer.trainable = False model.add(pretrained_model) model.add(tf.keras.layers.Flatten()) model.add(tf.keras.layers.Dense(512,activation="relu")) model.add(tf.keras.layers.Dense(3,activation="softmax")) CLASS_WEIGHTS = get_CW_dict(distribution) model.compile( optimizer = tf.keras.optimizers.SGD(), loss = tf.keras.losses.CategoricalCrossentropy(), metrics = METRICS, steps_per_execution=639 ) history = model.fit( x = TRAIN_GEN, validation_data = VALID_GEN, epochs = 5, batch_size=8, callbacks = [tf.keras.callbacks.TensorBoard(log_dir="logs/fit")], class_weight = CLASS_WEIGHTS ) OUTPUT: Epoch 1/5 160/160 [==============================] - 107s 669ms/step - loss: 0.4983 - accuracy: 0.0000e+00 - categorical_accuracy: 0.4689 - val_loss: 1.1232 - val_accuracy: 0.0000e+00 - val_categorical_accuracy: 0.4891 Epoch 2/5 160/160 [==============================] - 99s 616ms/step - loss: 0.3922 - accuracy: 0.0000e+00 - categorical_accuracy: 0.5088 - val_loss: 1.1451 - val_accuracy: 0.0000e+00 - val_categorical_accuracy: 0.5031 Epoch 3/5 160/160 [==============================] - 98s 614ms/step - loss: 0.3853 - accuracy: 0.0000e+00 - categorical_accuracy: 0.5278 - val_loss: 1.1407 - val_accuracy: 0.0000e+00 - val_categorical_accuracy: 0.5094 Now, my questions are: 1. Should I consider Categorical Accuracy and ignore 'Accuracy' metric in this case? 2. Considering that TF/Keras automatically chooses the accuracy metric on the basis of the activation function of the output layer and the type of loss function, what may be the reason for such ambiguous behavior? AI: Investigating the Keras docs, there is a major difference between Accuracy and categorical_accuracy: Accuracy: This metric creates two local variables, total and count that are used to compute the frequency with which y_pred matches y_true. This frequency is ultimately returned as binary accuracy: an idempotent operation that simply divides total by count. categorical_accuracy: This metric creates two local variables, total and count that are used to compute the frequency with which y_pred matches y_true. This frequency is ultimately returned as categorical accuracy: an idempotent operation that simply divides total by count. y_pred and y_true should be passed in as vectors of probabilities, rather than as labels. If necessary, use tf.one_hot to expand y_true as a vector. And since you're using Categorical Cross Entropy for your loss (assuming you have converted your labels to one hot encoded labels) then you should use categorical accuracy for your accuracy metric.
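A tiny check of why the plain Accuracy metric reports 0 here: it compares y_true and y_pred element-wise for equality, which one-hot labels and softmax probabilities essentially never satisfy, while CategoricalAccuracy compares the argmax (toy values below):

import tensorflow as tf

y_true = [[0., 0., 1.], [0., 1., 0.]]
y_pred = [[0.1, 0.2, 0.7], [0.2, 0.6, 0.2]]

acc = tf.keras.metrics.Accuracy()
acc.update_state(y_true, y_pred)
print(acc.result().numpy())   # 0.0: raw values never match exactly

cat = tf.keras.metrics.CategoricalAccuracy()
cat.update_state(y_true, y_pred)
print(cat.result().numpy())   # 1.0: the predicted class matches the one-hot label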
H: unable to bar plot the data for all the columns I am trying to plot the data in a loop, using the code below to build the plots. The loop runs over all the columns and I can see each column name being printed, but the plot is only built for the last one. Is there anything extra I have to add to generate a plot for each iteration?

for (index, colname) in enumerate(df):
    print(colname)
    counts = df[colname].value_counts(dropna=False)
    counts.plot.bar(title=colname, grid=True)

AI: The problem is that each call to the plotting function draws onto the same figure. Try creating a new figure on every iteration:

import matplotlib.pyplot as plt

for (index, colname) in enumerate(df):
    print(colname)
    plt.figure()
    counts = df[colname].value_counts(dropna=False)
    counts.plot.bar(title=colname, grid=True)

You can also divide one figure into many parts and draw several graphs at a time using matplotlib subplots. matplotlib subplots doc
H: Success metric of database migration using row counts Description I have a problem where I'm tasked to successfully transform and repurpose data from one SQL server to another. Call the source $\text{src}$ and the target database $\text{tgt}$. In order to assess the quality of the migration for a given field, I am given volumes only, namely $V_\text{src} = \#\{\text{rows in src for which property }P\text{ is true}\}$ $V_\text{tgt} = \#\{\text{rows in tgt for which property }P\text{ is true}\}$ I was wondering if there is a set of metrics one often uses to report completion metrics from $0\%$ to $100\%$. The metric should be 0 if the target is very dissimilar from the source, and 100 if there is perfect similarity. An important side issue Often you overshoot and you have $V_\text{src} \ll V_\text{tgt}$, so just reporting ratios yields that $\text{tgt}$ has $270\%$ more content than $\text{src}$. In these situations I would like to assign property P a low score near 0, but not a negative score. Approach For now I have, for a given error coefficient $\varepsilon$ that's (mostly) $-1\leq \varepsilon \leq 1$ (but can overshoot to values close to 2 or 3), the following rescaling functions: $$\text{invLin}(\varepsilon) = \dfrac{1}{1+\varepsilon}\qquad {\color{blue}\checkmark}\quad\text{slow decrease from }100\%\text{ to }0\%$$ However it gives huge error percentages like $500\%$ a high grade and doesn't penalize low error percentages enough to my taste. I came up with this second one: $$\text{invLog}(\varepsilon) = 1-\log^{+}(1+\varepsilon)\qquad {\color{blue}\checkmark}\quad\text{quick decrease from }100\%\text{ to }0\%$$ You can see the two functions plotted for values of $100\lvert\varepsilon\rvert$ ranging from $0\%$ to $500\%$, with a zoom on the $1-100$ zone on the left part. Notation: $f^+$ is the positive part of a function, $f^+(x)=\max(f(x),0)$. Addendum (Sample data) I was told it was easier to guess with some sample data, here's an example:

| Condition $P$ | $V_{\text{src}}$ | $V_{\text{tgt}}$ |
| --- | --- | --- |
| Sum Expected Amount (\$) | 1543385231 | 1543385217,9 |
| Sum Commited Amount (\$) | 83123640,62 | 83123640,62 |
| Sum Real Amount (\$) | 1246623860,05 | 203779813,48 |
| Sum Amount for Region 1 (\$) | 4898 | 26712 |
| Sum Amount for Region 2 (\$) | 205509 | 93393 |
| Sum Amount for Region 3 (\$) | 3818 | 1667 |
| Number of users with unlimited rights | 412390 | 1286545 |
| Number of users with limited rights | 100286613 | 376796 |
| Number of shared costs | 402 | 222 |
| Number of items created between 2019-2030 | 4 | 4 |
| Number of items created between 2020-2023 | 260 | 260 |

AI: Would this work for you? $$100 \times \bigg(1 - \frac{|V_{src} - V_{tgt}|}{V_{src} + V_{tgt}}\bigg)$$ The fraction part is the ratio of the absolute difference in values to the combined total, which gives $0$ if the values are identical and $1$ if one value is $0$ while the other is positive. This is then flipped so that a perfect match scores $1$ and a big difference scores near $0$, and scaled to the range $0 - 100$. For example, for your "Number of users with unlimited rights" this gives: $$100 \times \bigg(1 - \frac{|412390 - 1286545|}{412390 + 1286545}\bigg) = 48.55$$ and for "Number of users with limited rights" this gives: $$100 \times \bigg(1 - \frac{|100286613 - 376796|}{100286613 + 376796}\bigg) = 0.75$$
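A small helper implementing this score on a few of the sample rows:

def similarity_score(v_src, v_tgt):
    """100 for identical volumes, near 0 when one side dwarfs the other (handles overshoot too)."""
    if v_src == 0 and v_tgt == 0:
        return 100.0
    return 100 * (1 - abs(v_src - v_tgt) / (v_src + v_tgt))

rows = {
    "Users with unlimited rights": (412390, 1286545),
    "Users with limited rights": (100286613, 376796),
    "Items created 2020-2023": (260, 260),
}
for name, (src, tgt) in rows.items():
    print(name, round(similarity_score(src, tgt), 2))   # 48.55, 0.75, 100.0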
H: How to detect anomalies in web log data I have the following challenge: We have the web logs of a platform where people can download publications and we need to detect anomalies. From time to time and only by chance we observe spikes in usage over a day or so, where there are many more downloads than usually. Often these spikes are caused by the same IP address or several from the same IP address range. Also, these requests are for the same publication (as identified by the same URL). I now wonder how we can identify anomalies like this automatically. Taking into account that sometimes, an item is not downloaded for several days or weeks, so there is not necessarily a lot of "normal" download data available. The business wants us to detect these anomalies, qualify them (are they fraudulent or legit) and exclude them from the overall results if fraudulent. What would be the best approach to tackle this problem? AI: Since you've used the word 'Automatically', I assume that you're looking for an unsupervised method. Unsupervised Anomaly Detection techniques do not need training data. They are based on two basic assumptions. First, they presume that most of the network connections are normal traffic and only a very small traffic percentage is abnormal. Second, they anticipate that malicious traffic is statistically various from normal traffic. According to these two assumptions, data groups of similar instances which appear frequently are assumed to be normal traffic, while infrequently instances which considerably various from the majority of the instances are regarded to be malicious. The most common unsupervised algorithms are: K-Means Self-organizing maps (SOM) C-means Expectation-Maximization Meta algorithm (EM) Adaptive resonance theory (ART) Unsupervised Niche Clustering (UNC) One-Class Support Vector Machine I suggest you to take a look into all of the above methods and choose the one that fits your problem. There are also more advanced and modern approaches which I don't suggest (at the beginning). If the above methods didn't solve your problem, then use the more advanced methods.
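As a concrete starting point, a hedged sketch: aggregate the logs per (IP, day, URL), build a simple count feature, and fit one of the unsupervised detectors listed above (here One-Class SVM; the file name and column names are placeholders):

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

logs = pd.read_csv("weblogs.csv", parse_dates=["timestamp"])  # hypothetical export

daily = (logs.assign(day=logs["timestamp"].dt.date)
             .groupby(["ip", "day", "url"])
             .size()
             .reset_index(name="downloads"))

X = StandardScaler().fit_transform(daily[["downloads"]])
daily["flag"] = OneClassSVM(nu=0.01).fit_predict(X)  # -1 = candidate anomaly, 1 = normal

print(daily[daily["flag"] == -1].sort_values("downloads", ascending=False).head())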
H: Which anaconda related websites are official and which ones are 3rd party? New to Anaconda and following a tutorial. I see that there is Anaconda dot com, anaconda dot org, and conda dot io. I have in the past seen instances where unofficial sources and pages for a result were higher ranked on Google for strange reasons. Which of these websites do I go to and are all three of these legit? Thank you. AI: All the above mentioned websites belong to Anaconda Inc.
H: How to fit a math formula to data? I have a math formula and some data, and I need to fit the data to this model. The formula is $y(x) = ax^k + b$ and I need to estimate $a$ and $b$. I have tried gradient descent to estimate these parameters, but it seems somewhat time consuming. Is there any efficient way to estimate the parameters in a formula? AI: If you know $k$, which it seems you do, then this is just a linear regression. In fact, with just one feature (the $x^k$), this is a simple linear regression, and easy equations apply without you having to resort to matrices. $$ \hat a=\dfrac{ \text{cov}(x^k, y) }{ \text{var}(x^k) }\\ \hat b =\bar y-\hat a\, \overline{x^k} $$ where $\overline{x^k}$ is the mean of the transformed feature. These are the ordinary least squares estimates of $a$ (the coefficient on $x^k$) and $b$ (the intercept).
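A quick NumPy check of these formulas on synthetic data (the true values a=1.7, b=0.5 and k=3 are made up for the example):

import numpy as np

rng = np.random.default_rng(0)
k = 3
x = rng.uniform(0.0, 2.0, 500)
y = 1.7 * x**k + 0.5 + rng.normal(0.0, 0.1, 500)

z = x**k                                            # the single transformed feature
a_hat = np.cov(z, y, bias=True)[0, 1] / np.var(z)   # cov(x^k, y) / var(x^k)
b_hat = y.mean() - a_hat * z.mean()                 # ybar - a_hat * mean(x^k)
print(a_hat, b_hat)                                 # close to 1.7 and 0.5
# np.polyfit(z, y, 1) returns the same slope and intercept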
H: Using TPU without GCS Is there a way to use TPU in Colab/Kaggle notebook without hosting your dataset on a GCS bucket? Does Google/Kaggle provide that functionality yet? P.S.: I am using TensorFlow v2.4.1 AI: According to the docs, it seems impossible. However, you can connect to an external hub to retrieve the necessary data for training. Dagshub is already doing that. https://dagshub.com/ You can create a private repository and create a workflow. https://dagshub.com/martin-fabbri/tpu-workflow/src/0ebfac90c65098150d1fe3b9b3dcf14e45f372cf
H: What is the architecture of ssd_mobilenet_v2_fpnlite_640x640? What is the architecture of ssd_mobilenet_v2_fpnlite_640x640, which is a model available on TensorFlow model zoo. If my understanding is correct, mobilenet is used for feature extraction , while SSD is used for detection. So what is the purpose of FPNlite and where is it used? AI: SSD_mobilenet_v2_fpnlite_640x640 The architecture of ssd_mobilenet_v2_fpnlite_640x640 is a combination of the MobileNet v2 architecture for feature extraction and the Single Shot Detector (SSD) architecture for object detection, with the addition of a Feature Pyramid Network Lite (FPN Lite) module. MobileNet v2 is a lightweight convolutional neural network (CNN) architecture that is designed for efficient computation on mobile and embedded devices. It uses depthwise separable convolutions and residual connections to reduce the number of parameters and improve the performance of the network. MobileNet is used for feature extraction, which means it is responsible for identifying key features in the input image that can be used for object detection. SSD is a popular object detection algorithm that uses a set of default bounding boxes, called anchor boxes, to predict the location and class of objects in an input image. It combines the predictions from these anchor boxes with a non-maximum suppression (NMS) algorithm to produce the final detection results. SSD is used for detection, which means it takes the extracted features and uses them to identify the location of objects in the image. FPN Lite is a lightweight version of the Feature Pyramid Network (FPN) architecture, which is commonly used in object detection tasks to combine high-resolution features from shallow layers with low-resolution features from deep layers to improve detection accuracy. In the ssd_mobilenet_v2_fpnlite_640x640 model, the FPN Lite module is used to combine the output of the MobileNet v2 and SSD layers, providing additional contextual information for the object detection task to improve the accuracy of the object detection by incorporating information from multiple scales.
H: What is the math behind the Keras Tokenizer() function? I am doing an essay on the mathematics behind a text classifier with NLP and neural networks and I would like to know how exactly the TOKENIZER function of Keras works. Whether cosine similarity is involved and how the dictionary creation is carried out taking frequency into account. If anyone knows the answer or a book/article where it is reflected, I will be eternally grateful. MAX_NB_WORDS = 50000 MAX_SEQUENCE_LENGTH = 250 tokenizer = Tokenizer(num_words=MAX_NB_WORDS, filters='!"#$%&()*+,-./:;<=>?@[\]^_`{|}~', lower=True) tokenizer.fit_on_texts(data['Consumer complaint narrative'].values) AI: Class Tokenizer is used to turn text into a sequence of integers (each integer being the index of a token in a dictionary). The size of the token dictionary is defined when invoking the constructor, with parameter num_words. To build the token dictionary, you usually invoke fit_on_texts. Tokenizer then takes the text, splits it on the occurrence of the space character (or the one you provide in the split parameter when constructing it), assuming the tokens are separated by blanks. Tokenizer builds its token dictionary by keeping the num_words - 1 most frequent tokens, and assigns an integer value to each possible token. Then, you can use Tokenizer to turn text into a sequence of integer numbers, which are indexes to tokens. This is done with texts_to_sequences. It just splits tokens on the space character and looks up the token indexes in its internal token dictionary, returning the sequence of those indexes. After that, you can use other elements to use those indices in your neural network (e.g. Embedding). The documentation contains all of these pieces of information, along with other functionality supported by this class. As you can see, there is little math involved here, just frequency counts. No cosine similarity or anything alike.
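A short example of that behaviour (the texts are made up):

from keras.preprocessing.text import Tokenizer

texts = ["the product broke quickly", "the product is great", "great service"]

tokenizer = Tokenizer(num_words=10, lower=True)
tokenizer.fit_on_texts(texts)

print(tokenizer.word_index)                 # token -> index, most frequent tokens get the lowest indexes
print(tokenizer.texts_to_sequences(texts))  # each text as a list of token indexes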
H: Need my xgboost model to be more liberal with classifications I have an xgboost model that predicts the likelihood of a sales lead to close (actually to turn into an "opportunity" which is one step before the close but that's beside the point). The dataset is very imbalanced with the minority class only making up around 15% of the train/test sets. I am getting acceptable f-1 scores (macro avg is 81% which will suffice) but because there are so few leads that close relative to the total leads in our system when I run predictions for our entire database of leads only ~2% of the leads end up getting a probability to close of 50% or greater. The sales leadership will not respond favorably if I go back to them with such a small number regardless of it it's the most accurate portrayal of reality or not. Does anyone have any ideas of what to try to make the model more "liberal" (giving higher probabilities) toward the the minority class? Thanks AI: If you have a highly imbalanced dataset, a different model architecture or loss function may be better suited to the task. Although, if you wanna go with xgboost I think you can make your model more "liberal" with its classifications by adjusting the decision threshold for classification. By default, xgboost models use a threshold of 0.5 for binary classification, but you can adjust this threshold to be more or less conservative with your predictions. Why don't you try using a threshold of 0.4 or 0.3, this might make the model more liberal with its classifications. You can also try oversampling the minority class in your training data to give the model more examples to learn from, in this way the model can learn to better distinguish between the minority and majority classes, and may make the model more confident in its predictions of the minority class. Use SMOTE, this might help you.
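A hedged sketch of the threshold change on synthetic data (your real model and features would replace the toy ones):

import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (rng.random(1000) < 0.15).astype(int)   # ~15% positives, like the lead data

model = XGBClassifier(n_estimators=50, eval_metric="logloss")
model.fit(X, y)

proba = model.predict_proba(X)[:, 1]        # probability of the "will close" class
for t in (0.5, 0.3, 0.2):
    print(t, (proba >= t).mean())           # lower threshold -> larger share of leads flagged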
H: Are there any algorithm to generate a set of data that match some statistic requirements? I was wondering if there are time-efficient algorithms that can reverse the process of basic statistics computation. What I mean is an algorithm that instead of computing the mean, SD, max-min range, median... based on input values, would do the inverse operation, i.e generating a set of data with the required/wanted statistical results. What seems the most optimal way to solve such problems ? Thought about genetical algorithms, but maybe are there better ways. If you want to know why I'm searching to do this : I'm a student in sport science field and my project is about generating an AI that would detect when people fake their data to proove their point. I want to know what would be the best algorithms to fake such results, so maybe I could detect people already doing that. Thanks ! AI: I think the general concept you're describing is sampling from a distribution. If the distribution in question is known and has nice properties, there are typically closed-form equations for doing the sampling based on generating uniform random numbers. For example, if I want to generate numbers that fit a normal distribution with mean $\mu$ and standard deviation $\sigma$, I can use the Box-Muller Transform. For more general cases where you don't necessarily have a handy way of writing down the distribution and dealing with potentially thousands of parameters, we turn to methods like Markov Chain Monte Carlo (MCMC). I'm not sure if that's exactly what you had in mind or not, because it feels like you might not know the distribution you want to sample from here, but there are enough parallels that it's probably worth some time to dive in a little deeper.
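As a concrete illustration of the simplest case, a minimal Box-Muller sketch in NumPy that generates normal samples with a requested mean and SD from uniform random numbers:

import numpy as np

def box_muller(n, mean=0.0, sd=1.0, seed=None):
    rng = np.random.default_rng(seed)
    u1 = rng.random(n)
    u2 = rng.random(n)
    z = np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)  # standard normal draws
    return mean + sd * z

samples = box_muller(100_000, mean=10.0, sd=2.0, seed=1)
print(samples.mean(), samples.std())   # close to 10 and 2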
H: how to extract common aspects from text using deep learning? Can you suggest me some papers to read about deep learning models that find patterns/similarities between different texts? What I have is a set of reviews with the following categories for each review: Rating, Review title, Review body, date, and helpful votes. What I would like to do is understand whether there exist similarities among the reviews. For instance, there exists a cluster of customers that complains about a specific aspect of a product. Or, as another example, to see if there exists a problem related to the season e.g. a product has many bad reviews in summer because it does not work well with high temperatures. Thank you for your help. AI: The problem is called by several different names: multi-aspect, semi-supervised, or hierarchical topic modeling. One way to approach the problem is to start with a seed or anchor words (e.g., "winter" and "summer") and form clusters around those words. It can be solved with a variety of methods, the most commonly used methods are variations of Latent Dirichlet Allocation (LDA).
H: Keras class_weight for fit_generator I am running a keras model using a fit_generator, with three classes, where each class is one-hot encoded as: [1,0,0], [0,1,0], [0,0,1] Let's say the examples of each class are: 1000, 500 and 500. Would my class_weight parameter simply be: class_weight=[1,2,2] AI: One common practice is to use the formula: class_weight_i = n_samples / (n_classes * n_samples_with_class) where class_weight_i = the class weight for the ith class, n_samples = the total number of samples, n_classes = the total number of classes (in this case 3), n_samples_with_class = the number of samples in that class. You can use this formula to compute the weight for each class. Note that Keras expects class_weight to be a dictionary mapping class indices to weights rather than a plain list. Alternatively, you can use the sklearn library to compute the class weights:

import numpy as np
from sklearn.utils import class_weight

weights = class_weight.compute_class_weight(
    class_weight='balanced', classes=np.unique(y_train), y=y_train
)
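Plugging the numbers from the question into the formula above (1000/500/500 samples, 3 classes):

n_samples, n_classes = 2000, 3
counts = {0: 1000, 1: 500, 2: 500}

class_weight = {c: n_samples / (n_classes * n) for c, n in counts.items()}
print(class_weight)   # {0: 0.667, 1: 1.333, 2: 1.333}, the same 1:2:2 ratio as [1, 2, 2]

# Keras takes this dict directly, e.g.:
# model.fit_generator(train_generator, class_weight=class_weight, ...)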
H: How to predict customer churn by a certain date? I have a dataset of inactive users for the last 365 days, with columns: When subscribed (e.g. 10.10.2022) When unsubscribed (e.g. 12.10.2022) and client info I also have a set of data about active users: When subscribed and client info. Question: How can I get a prediction of whether a certain user will unsubscribe within a month from now? What should be the logic of my actions in this case? I tried to calculate an unsubscribe date using linear regression and got a result, for example, 03.15.2023. But in that case, I can't understand with what probability the person will unsubscribe in a month, namely on 01.12.2023 I will be grateful for any hints. Thanks AI: This problem is better framed as survival analysis, the expected time duration until an event occurs. The goal would be to fit a model that predicts the probability of churn as a function of the number of days thus far and other features.
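A rough sketch with the lifelines package (one common choice for survival analysis; the toy columns stand in for your subscription data):

import pandas as pd
from lifelines import CoxPHFitter

# one row per user: duration = days subscribed so far (or until unsubscribing),
# event = 1 if the user already unsubscribed, 0 if still active (censored)
df = pd.DataFrame({
    "duration": [30, 200, 365, 90, 365],
    "event":    [1,   1,   0,  1,   0],
    "age":      [25, 40, 31, 52, 37],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")

# survival curves: P(still subscribed at day t) per user
surv = cph.predict_survival_function(df[["age"]], times=[30, 60])
print(surv)
# churn within the next 30 days for a user still active at day 30: 1 - S(60)/S(30)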
H: How does Back Propagation in a Neural Net Work? I understand that, in a Neural Net, Back Propagation is used to update the model's weights and biases to lower loss, but how does this process actually work? AI: This answer is based on the following series by StatQuest, which I thoroughly recommend.
Overview
Back Propagation works by calculating the partial derivatives of the loss function, with respect to each weight and bias in the network, and using those derivatives to alter the value of the corresponding weight or bias. Considering how the weights and biases are passed through the network, through linear combinations and activation functions, calculating the partial derivatives may seem daunting. However, each partial derivative can be calculated, with reasonable ease, using the Chain Rule from differentiation.
Simple Example
For example, let's use a simple neural network like the one below, which uses ReLU activation functions and the Sum of Squared Residuals for the loss function. At the output node, the Sum of Squared Residuals has the following form: $$\text{Loss} = \sum_{i=0}^{N}(\text{expected}_i - \text{predicted}_i)^2$$ where $i$ represents the $i$th data row to be predicted, $\text{expected}_i$ is the expected value of data row $i$, and $\text{predicted}_i$ is the predicted value of data row $i$. The impact of each parameter (weight or bias) on the loss can be calculated by working out its contribution to this function. The expected values don't change, so the parameters only come into the function through the predicted values. We'll use $W_5$ as a simple example; first, let's calculate the partial derivative of the loss function with respect to $W_5$ for a single input value: \begin{align} \frac{\partial (\text{Loss})}{\partial W_5} &= \frac{\partial ((\text{expected}_0 - \text{predicted}_0)^2)}{\partial W_5} \\ \end{align} by the chain rule, we have the following: \begin{align} \frac{\partial ((\text{expected}_0 - \text{predicted}_0)^2)}{\partial W_5} = \frac{\partial ((\text{expected}_0 - \text{predicted}_0)^2)}{\partial \text{predicted}_0} \cdot \frac{\partial (\text{predicted}_0)}{\partial W_5} \end{align} where $\text{expected}_0$ is the expected value of our single input row, $\text{predicted}_0$ is the predicted value of our single input row, and $W_5$ is our parameter. The first half of the righthand side simplifies to the following: \begin{align} \frac{\partial ((\text{expected}_0 - \text{predicted}_0)^2)}{\partial \text{predicted}_0} = 2 \cdot (\text{expected}_0 - \text{predicted}_0) \cdot (-1) \end{align} To get the second half, we first notice that $\text{predicted}_0$ has the following form: \begin{align} \text{predicted}_0 = W_5 \cdot y_{1, 0} + W_6 \cdot y_{2, 0} + b_3 \end{align} where $W_5$ and $W_6$ are the weights from the graph, $y_{1, 0}$ and $y_{2, 0}$ are the outputs of hidden layer nodes $1$ and $2$ respectively for data row $0$, and $b_3$ is the bias applied to the output node's activation function. Therefore the second half of the righthand side has the following simplified form: \begin{align} \frac{\partial (\text{predicted}_0)}{\partial W_5} = y_{1, 0} \end{align} so: \begin{align} \frac{\partial ((\text{expected}_0 - \text{predicted}_0)^2)}{\partial W_5} = -2 \cdot (\text{expected}_0 - \text{predicted}_0) \cdot y_{1, 0} \end{align} Multiplying this derivative by a learning rate, for example 0.001, gives you an update value which you subtract from $W_5$ (gradient descent steps against the gradient) to update the Neural Net and improve its predictive ability.
Deeper Parameters To extend this process to weights further back in the network you have to perform repeat applications of the chain rule. For example, to get the partial derivative of $\text{Loss}$ with respect to $W_1$ you get the following: \begin{align} \frac{\partial (\text{Loss})}{\partial W_1} &= \frac{\partial (Loss)}{\partial \text{predicted}_0} \cdot \frac{\partial \text{predicted}_0}{\partial W_1} \\[10pt] &= \frac{\partial (Loss)}{\partial \text{predicted}_0} \cdot \frac{\partial \text{predicted}_0}{\partial y_{1, 0}} \cdot \frac{\partial y_{1, 0}}{\partial W_1} \\[10pt] &= \frac{\partial (Loss)}{\partial \text{predicted}_0} \cdot \frac{\partial \text{predicted}_0}{\partial y_{1, 0}} \cdot \frac{\partial y_{1, 0}}{\partial s_{1, 0}} \cdot \frac{\partial s_{1, 0}}{\partial W_1} \end{align} where $s_{1, 0}$ is created by summing the linear functions created by multiplying $x_1$ and $x_2$ by their weights $W_1$ and $W_2$ and adding a bias $b_1$, which gives the following form: \begin{align} s_{1, 0} = W_1 \cdot x_1 + W_2 \cdot x_2 + b_1 \end{align} The form of the $\frac{\partial y_{1, 0}}{\partial s_{1, 0}}$ term is defined by the activation function $y_{1, 0}$, which in our case is ReLU that gives a gradient of 1 if $s_{1, 0} \gt 0$ and 0 if $s_{1, 0} \leq 0$. If we assume that $s_{1, 0} \gt 0$ we get the following: \begin{align} \frac{\partial (Loss)}{\partial \text{predicted}_0} \cdot \frac{\partial \text{predicted}_0}{\partial y_{1, 0}} \cdot \frac{\partial y_{1, 0}}{\partial s_{1, 0}} \cdot \frac{\partial s_{1, 0}}{\partial W_1} = -2 \cdot (\text{expected}_0 - \text{predicted}_0) \cdot (W_5) \cdot (1) \cdot (x_1) \end{align} Wrapping Up The method above is used to find the gradient for every parameter, then all the parameters are updated to move the Neural Net towards a solution with a lower loss. This process is repeated until, either, you reach an acceptable loss level, or repeat iterations are making negligible improvements to the loss function.
H: How do I interpret results from scipy pdf() function in relation to the probability density function I am working on a personal project and want to understand what the output of SciPy's pdf() function means. I understand that the PDF at a single point corresponds to one infinitesimally thin line under the distribution curve, so the probability of any single point is zero. But when working with scipy, I get a number I'm not quite understanding.

import numpy as np
from scipy.stats import norm

norm.pdf(0)                   # 0.3989422804014327
norm.pdf(0, loc=5, scale=10)  # 0.03520653267642995

r = np.random.randn(10)
norm.pdf(r)
# array([0.39739838, 0.31770265, 0.06576943, 0.252755  , 0.38386191,
#        0.31203894, 0.34385005, 0.39185893, 0.39848757, 0.38723303])

AI: Putting it simply, pdf() returns the probability density, i.e. the height of the distribution curve (a Gaussian if you use a normal distribution), at the x value and with the parameters you supply. A density is not a probability: the probability of any exact point is zero, and the probability of an interval is the corresponding area under the curve. For the standard normal, norm.pdf(0) is just $\tfrac{1}{\sqrt{2\pi}} \approx 0.3989$.
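To turn densities into probabilities you integrate, which with SciPy means using cdf (or numerically integrating pdf):

from scipy.stats import norm
from scipy.integrate import quad

# P(-1 < X < 1) for a standard normal
print(norm.cdf(1) - norm.cdf(-1))        # ~0.6827

# the same probability by integrating the density
area, _ = quad(norm.pdf, -1, 1)
print(area)                              # ~0.6827

# a density can exceed 1 for narrow distributions; it is not a probability
print(norm.pdf(0, loc=0, scale=0.1))     # ~3.989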
H: Standardised plot of multiple points with varying thresholds and values I would like to plot the points below to show which points are inside their thresholds and which are outside their thresholds. I intend to do another set of analysis with other parameters which will change the values of all columns except for Point. The reason I want to do this is so I can have a visual representation at a high level of how changing parameters results in better performance (more in bounds) or worse performance (more outside bounds). The best I can come up with is setting each point to a ratio of their bounds, but I cannot figure out what I would put on the x axis here, maybe the Point column? What other options would I have at my disposal to be able to visually represent this data? (at a high level). AI: One approach would be to put the "Point" column on the x-axis and the percent on the y-axis. Given that the "Point" column is a nominal level of measurement, the x-axis has to be consistent with that (e.g., bar chart). The y-axis / percent is a numeric level of measurement from -25% to +300%. "Inside bounds" can be represented by color (i.e., green and red).
H: I get 100% on my test set using random forest. What is wrong? I am getting 100% accuracy on my test set when trained using random forest. Is there something wrong with my model? Code:

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn import preprocessing
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

ds = pd.read_csv('census-income.test(no unk.).csv')
df = pd.read_csv('census-income.data(no unk.).csv')

X = df
y = df['income']
X_T = ds
y_T = ds['income']

categorical_preprocessor = Pipeline(steps=[
    ("onehot", OneHotEncoder(handle_unknown="ignore"))
])

preprocessor = ColumnTransformer([
    ("categorical", categorical_preprocessor,
     ['workclass','education','martial-status','occupation','relationship','race','sex',
      'native-country','income']),
], remainder='passthrough')

pipe = Pipeline(steps=[
    ("preprocessor", preprocessor),
    ("classifier", RandomForestClassifier(n_estimators=128, max_depth=7))
])

X_train = X
X_test = X_T
y_train = y
y_test = y_T

pipe.fit(X_train, y_train)
y_pred = pipe.predict(X_test)

print(classification_report(y_test, y_pred, digits=4))
print(confusion_matrix(y_test, y_pred))

Confusion matrix
[[11360     0]
 [    0  3700]]
Training data Test data
AI: You are leaking the target into the features. X = df and X_T = ds still contain the income column, and the ColumnTransformer even one-hot encodes 'income' along with the other categorical features, so the model can read the label straight from its input and trivially scores 100%. Drop the target from the feature matrices for both the training and the test data, and remove 'income' from the encoder's column list:

X = df.drop('income', axis=1)
y = df['income']
X_T = ds.drop('income', axis=1)
y_T = ds['income']

preprocessor = ColumnTransformer([
    ("categorical", categorical_preprocessor,
     ['workclass','education','martial-status','occupation','relationship','race','sex',
      'native-country']),
], remainder='passthrough')
H: How to interpret differences between 2D and 3D T-SNE visualization of similar words from Word2Vec embedding? I have created a Word2Vec model based on the transcript of The Office. I am now trying to visualize the embedding space for the top similar words of an input word with t-SNE in 2D and 3D. I additionally color each word by its similarity value to the input word (colored in red). When I plot the t-SNE output, the similar words seem to be clustered around the input word (paper in this case), which makes sense. However, when I use t-SNE again for a 3D plot, the similarity is not as clearly visible anymore. Why are the words that are close in the 2D plot and of high similarity not also very close in the 3D plot? Thanks already for the help! AI: How many iterations did you apply? Normally, the end result for 2D and 3D should be quite similar in terms of cluster groups. If there are many differences, it could be explained by several factors:
The data is not differentiable enough. In this case, you would need more data to make a better generalization.
There are not enough iterations during training. t-SNE often requires thousands of iterations to reach a good balance.
Subcategories could require starting t-SNE again. Even if the t-SNE model is well trained, it is designed to separate the main clusters, and sub-clusters may not be well differentiated. In this case, if you "zoom" in around a word, restarting t-SNE on that neighbourhood is necessary. This is already done in the TensorFlow Projector (select a point, isolate, and start t-SNE).
The initialization is not random enough. You could set a random seed to make the results more reproducible.
The perplexity is too high. In many cases, a low perplexity (around 5) can be necessary to obtain clear clusters.
According to your screenshots, I think the first or the third options are the most likely ones. Note: UMAP is also interesting because it preserves a logic between clusters, in addition to clustering well.
H: Is it sensible to use the ROC curve with an KNN model? And if so why? I am a beginner doing my first ML project. I am doing a binary supervised classification on an unbalanced dataset and want to use the ROC curve as a performance metric of my models. I am using Logistic Regression, Support Vector Machine and K Nearest Neighbors as classifiers. For Logistic Regression I understand what the ROC Curve is and how the threshold could be adjusted. When predicting the probabilities for K Nearest Neighbours however I don't understand where the ROC Curve comes in handy, as the threshold can not be varied anyways. So is there a reason to still implement a ROC Curve for K Nearest Neigbors? AI: ROC curves (and the AUC metric) are used for evaluating model performance on a classification task. If you use KNN for classifying, then you can evaluate your model on it. Probability, in the context of KNN can be the number of neighbours that correctly classify an instance (the threshold), divided by the total number of neighbours used (the k parameter). E.g., you have a KNN model f with k=5, and for data instance x, f(x) = 4 / 5 if 4 of its 5 closest neighbours belong to the same class as x. If for data instance y you have f(y) = 3 / 5, then both x and y would be assigned the same class, but x should rank higher than y because the model is more confident about it (4/5 vs 3/5). If you can implement your KNN algorithm to work like this, than ROC curve / AUC metric is straightforward to implement. I think that this is what's implemented in the predict_proba method of the sklearn's KNN classifier.
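A short scikit-learn check that KNN scores can indeed feed a ROC curve:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=1000, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=15).fit(X_tr, y_tr)
proba = knn.predict_proba(X_te)[:, 1]   # fraction of the 15 neighbours in the positive class

print(roc_auc_score(y_te, proba))       # threshold-free ranking quality
# the scores only take ~16 distinct values (0/15 ... 15/15), so the ROC curve has few points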
H: How can I Implement Dropout in SciKit-Learn? I am working on an air-gapped PC and can only access the SciKit-Learn machine learning library. Unfortunately, its MLPClassifier doesn't include a Dropout hyperparameter, which severely limits the size of network I'm able to build. How can I extend the class and add Dropout to the algorithm/model? AI: To implement this correctly you need to understand exactly how Dropout works, and where to change SciKit-Learn's MLPClassifier class to implement it. I'll start with an explanation of Dropout followed by the specifics of implementation. Skip to the end if you just want the code snippet. Understanding Dropout The image below shows dropout visually, and is taken from the canonical Dropout paper by Srivastava, Hinton, et al. The method is a form of regularisation that approximates averaging the predictions of hundreds to hundreds of thousands of smaller networks without massive computational overhead. For more details, read the paper above; for even more details read this paper and this paper. For a simple overview, watch Andrew Ng's video on the topic. Put simply, Dropout switches off nodes in a Neural Net's layers, creating a series of smaller networks for your model to train on. It does this whilst preserving the overall architecture of your Neural Net. To create the smaller networks, each node is switched off randomly with a probability, $P$, a new hyperparameter, which makes the architecture of the smaller networks random. Training on varying, randomised architectures prevents nodes from co-adapting and overfitting the noise in your data. Dropout is only applied during training; once the network is trained, it uses its complete architecture to make predictions. Technical Details From a technical point of view there are two things to consider: the Forward Pass, and Backpropagation. Forward Pass In the forward pass the smaller network is created by building a Dropout Mask that sets the output of each switched-off node to $0$, and multiplies the output of the remaining nodes by $1/(1-P)$. Multiplying the remaining nodes' outputs by $1/(1-P)$ is known as Inverted Dropout. This method ensures that the network sees the same scale of activations (nodes' output values) in training as it does in testing. Without this scaling, switching off nodes with a probability $P$ will scale the size of the activations by $(1-P)$. If this isn't counteracted, the network will be trained to make predictions on lower activation levels (node output values). This becomes an issue at test time, when the entire network is used, which makes the activations, on average, $1/(1-P)$ times larger, ruining the network's predictive ability. Once the Dropout mask has been applied, the input is passed through the network normally. Backpropagation To understand how backpropagation works with Dropout, it's worth adding an excerpt from the canonical Dropout paper. It states that backpropagation should be performed on the thinned network only, which means we have to prevent the algorithm from updating the weights of switched-off nodes' forward and backward connections, as well as their biases (intercepts). This prevents nodes being trained on predictions that they contributed nothing towards. Implementing in SciKit-Learn To implement this model, you need to understand SciKit-Learn's model design. This webpage from SciKit-Learn goes over the development of new estimators.
However, developing a whole new estimator is not necessary; instead we'll create a subclass of MLPClassifier called MLPDropout. I have assumed a base level of understanding of Python classes, objects, methods, functions, etc., so if you don't understand Python I suggest you learn that first. I will post the entire code snippet at the end of this post, so skip to the end if you just want the answer. Keep reading if you're interested in the specific changes you have to make. 1. Imports To subclass MLPClassifier you will need the following imports; add them to the top of your file or Jupyter notebook. import numpy as np from sklearn.base import is_classifier from sklearn.model_selection import train_test_split from sklearn.neural_network import MLPClassifier from sklearn.neural_network._stochastic_optimizers import AdamOptimizer, SGDOptimizer from sklearn.neural_network._base import ACTIVATIONS, DERIVATIVES, LOSS_FUNCTIONS from sklearn.utils import shuffle, gen_batches, check_random_state, _safe_indexing from sklearn.utils.extmath import safe_sparse_dot import warnings from sklearn.exceptions import ConvergenceWarning 2. Adding a Dropout Hyperparameter To control dropout in your new model you'll have to add a Dropout hyperparameter to the class's initialisation. This can be done by adding a new parameter to your subclass's initialisation function, which you can see at the bottom of def __init__(...) in the following code snippet. class MLPDropout(MLPClassifier): def __init__( self, ... ( Parameters not included, see full code below.) dropout = None, ): ''' Additional Parameters: ---------- dropout : float in range (0, 1), default=None Dropout parameter for the model, defines the percentage of nodes to remove at each layer. ''' self.dropout = dropout super().__init__( ... see parameters in full code below.) The additional parameter is set as an internal attribute of the MLPDropout estimator, and super().__init__(...) calls MLPClassifier's initialisation function with the remaining parameters, as normal. 3. Passing The Architecture (layer_units) This next step is not strictly necessary, but seems to follow SciKit-Learn's design principles. layer_units is a variable instantiated by MLPClassifier that defines the node architecture of the Neural Net. To create the Dropout mask we need to pass this variable to the forward pass and backpropagation methods. There are a couple of class methods that call the forward pass and backpropagation, but, assuming you're using Stochastic Gradient Descent, the method we're interested in changing is _fit_stochastic, which is passed the layer_units parameter. In _fit_stochastic's call to _backprop, the function used by MLPClassifier to perform the forward pass and backpropagation, we add the layer_units parameter. # (DROPOUT ADDITION) layer_units passed forward to help build dropout mask. batch_loss, coef_grads, intercept_grads = self._backprop( X_batch, y_batch, activations, layer_units, deltas, coef_grads, intercept_grads, ) accumulated_loss += batch_loss * ( batch_slice.stop - batch_slice.start ) To see this in context, skip to the complete code snippet below. You also need to edit _backprop so that it can accept layer_units: def _backprop(self, X, y, activations, layer_units, deltas, coef_grads, intercept_grads): """Compute the MLP loss function and its corresponding derivatives with respect to each parameter: weights and bias vectors. Parameters ---------- ... (Parameters above cut off, see full code below) layer_units (DROPOUT ADDITION) : list, length = n_layers The layer units of the neural net, this is the shape of the Neural Net model. This is used to build the dropout mask. ...
(Parameters below cut off, see full code below) 4. Creating the Dropout Mask To create the dropout mask you can use the layer_units variable, and the self.dropout hyperparameter to create a list of numpy arrays that mask the activation functions of the nodes. # Create the Dropout Mask (DROPOUT ADDITION) if self.dropout != None: if 0 < self.dropout < 1: keep_probability = 1 - self.dropout dropout_masks = [np.ones(layer_units[0])] # Create hidden Layer Dropout Masks for units in layer_units[1:-1]: # Create inverted Dropout Mask, check for random_state if self.random_state != None: layer_mask = (self._random_state.random(units) < keep_probability).astype(int) / keep_probability else: layer_mask = (np.random.rand(units) < keep_probability).astype(int) / keep_probability dropout_masks.append(layer_mask) else: raise ValueError('Dropout must be between zero and one. If Dropout=X then, 0 < X < 1.') This creates a dropout mask that matches the size of each layer and adds it to a list. The first mask is for the input, which is set to all 1's in this implementation. Sometimes dropout is performed on the input nodes, but usually only in tasks where loss of a percentage of input data is minimally important. For example, image recognition. For tabular data this should be avoided. 5. Applying the Mask in the Forward Pass The _forward_pass method is called from inside of _backprop and must be updated to accept a dropout_masks variable. First within _backprop: # Forward propagate activations = self._forward_pass(activations, dropout_masks) and then in the method definition: def _forward_pass(self, activations, dropout_masks=None): """Perform a forward pass on the network by computing the values of the neurons in the hidden layers and the output layer. Parameters ---------- activations : list, length = n_layers - 1 The ith element of the list holds the values of the ith layer. dropout_mask : list, length = n_layers - 1 The ith element of the list holds the dropout mask for the ith layer. """ dropout_masks has a default of None so that MLPDropout can be used exactly like MLPClassifier if you wish to have a dropout of zero. Assuming you are using Dropout, the dropout_masks are applied within _forward_pass by adding the following code to the forward pass for loop: # Apply Dropout Mask (DROPOUT ADDITION) if (i + 1) != (self.n_layers_ - 1) and dropout_masks != None: check1 = activations[i].copy() activations[i+1] = activations[i+1] * dropout_masks[i+1][None, :] _forward_pass's for loop iteratively assigns activations to a list. Multiplying the layer's activations by its corresponding mask makes the algorithm behave as if the masked nodes have been switched off. 6. Applying the mask in Backpropagation Finally, the mask must be applied again in backpropagation, as MLPClassifier includes a regularisation term that will update the weights of every node in a network regardless of their inclusion in the forward pass. The _backprop function calculates these gradients for you, and then passes them back to _fit_stochastic, which updates the model's coefficients (weights and biases). To follow the algorithm exactly, and update the thinned network only, you must zero out the coefficient gradients that correspond to switched off nodes. 
You can do this using the dropout_masks by adding the following code to the end of _backprop: # Apply Dropout Masks to the Parameter Gradients (DROPOUT ADDITION) if dropout_masks != None: for layer in range(len(coef_grads)-1): mask = (~(dropout_masks[layer+1] == 0)).astype(int) coef_grads[layer] = coef_grads[layer] * mask[None, :] coef_grads[layer+1] = (coef_grads[layer+1] * mask.reshape(-1, 1)) intercept_grads[layer] = intercept_grads[layer] * mask The first line, in the for loop, defines a new mask that no longer includes a scaling factor. The second line zeroes out the coefficient gradients for the weights connecting to switched off nodes in the current layer from the previous layer. The third line zeroes out the coefficient gradients of all connection weights passing from switched off nodes in the current layer to the next layer. The fourth line zeroes out the bias (SciKit-Learn calls it intercept) gradients of all switched off nodes in the current layer. 7. Complete Code At this stage, all the changes you need to make are complete. Please see the complete code for a new MLPDropout class below. Basic tests show that this model is marginally better than MLPClassifier. # Creating a custom MLPDropout classifier import numpy as np from sklearn.base import is_classifier from sklearn.model_selection import train_test_split from sklearn.neural_network import MLPClassifier from sklearn.neural_network._stochastic_optimizers import AdamOptimizer, SGDOptimizer from sklearn.neural_network._base import ACTIVATIONS, DERIVATIVES, LOSS_FUNCTIONS from sklearn.utils import shuffle, gen_batches, check_random_state, _safe_indexing from sklearn.utils.extmath import safe_sparse_dot import warnings from sklearn.exceptions import ConvergenceWarning class MLPDropout(MLPClassifier): def __init__( self, hidden_layer_sizes=(100,), activation="relu", *, solver="adam", alpha=0.0001, batch_size="auto", learning_rate="constant", learning_rate_init=0.001, power_t=0.5, max_iter=200, shuffle=True, random_state=None, tol=1e-4, verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True, early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-8, n_iter_no_change=10, max_fun=15000, dropout = None, ): ''' Additional Parameters: ---------- dropout : float in range (0, 1), default=None Dropout parameter for the model, defines the percentage of nodes to remove at each layer.
''' self.dropout = dropout super().__init__( hidden_layer_sizes=hidden_layer_sizes, activation=activation, solver=solver, alpha=alpha, batch_size=batch_size, learning_rate=learning_rate, learning_rate_init=learning_rate_init, power_t=power_t, max_iter=max_iter, shuffle=shuffle, random_state=random_state, tol=tol, verbose=verbose, warm_start=warm_start, momentum=momentum, nesterovs_momentum=nesterovs_momentum, early_stopping=early_stopping, validation_fraction=validation_fraction, beta_1=beta_1, beta_2=beta_2, epsilon=epsilon, n_iter_no_change=n_iter_no_change, max_fun=max_fun, ) def _fit_stochastic( self, X, y, activations, deltas, coef_grads, intercept_grads, layer_units, incremental, ): params = self.coefs_ + self.intercepts_ if not incremental or not hasattr(self, "_optimizer"): if self.solver == "sgd": self._optimizer = SGDOptimizer( params, self.learning_rate_init, self.learning_rate, self.momentum, self.nesterovs_momentum, self.power_t, ) elif self.solver == "adam": self._optimizer = AdamOptimizer( params, self.learning_rate_init, self.beta_1, self.beta_2, self.epsilon, ) # early_stopping in partial_fit doesn't make sense early_stopping = self.early_stopping and not incremental if early_stopping: # don't stratify in multilabel classification should_stratify = is_classifier(self) and self.n_outputs_ == 1 stratify = y if should_stratify else None X, X_val, y, y_val = train_test_split( X, y, random_state=self._random_state, test_size=self.validation_fraction, stratify=stratify, ) if is_classifier(self): y_val = self._label_binarizer.inverse_transform(y_val) else: X_val = None y_val = None n_samples = X.shape[0] sample_idx = np.arange(n_samples, dtype=int) if self.batch_size == "auto": batch_size = min(200, n_samples) else: if self.batch_size < 1 or self.batch_size > n_samples: warnings.warn( "Got `batch_size` less than 1 or larger than " "sample size. It is going to be clipped" ) batch_size = np.clip(self.batch_size, 1, n_samples) try: for it in range(self.max_iter): if self.shuffle: # Only shuffle the sample indices instead of X and y to # reduce the memory footprint. These indices will be used # to slice the X and y. sample_idx = shuffle(sample_idx, random_state=self._random_state) accumulated_loss = 0.0 for batch_slice in gen_batches(n_samples, batch_size): if self.shuffle: X_batch = _safe_indexing(X, sample_idx[batch_slice]) y_batch = y[sample_idx[batch_slice]] else: X_batch = X[batch_slice] y_batch = y[batch_slice] activations[0] = X_batch # (DROPOUT ADDITION) layer_units passed forward to help build dropout mask. 
batch_loss, coef_grads, intercept_grads = self._backprop( X_batch, y_batch, activations, layer_units, deltas, coef_grads, intercept_grads, ) accumulated_loss += batch_loss * ( batch_slice.stop - batch_slice.start ) # update weights grads = coef_grads + intercept_grads self._optimizer.update_params(params, grads) self.n_iter_ += 1 self.loss_ = accumulated_loss / X.shape[0] self.t_ += n_samples self.loss_curve_.append(self.loss_) if self.verbose: print("Iteration %d, loss = %.8f" % (self.n_iter_, self.loss_)) # update no_improvement_count based on training loss or # validation score according to early_stopping self._update_no_improvement_count(early_stopping, X_val, y_val) # for learning rate that needs to be updated at iteration end self._optimizer.iteration_ends(self.t_) if self._no_improvement_count > self.n_iter_no_change: # not better than last `n_iter_no_change` iterations by tol # stop or decrease learning rate if early_stopping: msg = ( "Validation score did not improve more than " "tol=%f for %d consecutive epochs." % (self.tol, self.n_iter_no_change) ) else: msg = ( "Training loss did not improve more than tol=%f" " for %d consecutive epochs." % (self.tol, self.n_iter_no_change) ) is_stopping = self._optimizer.trigger_stopping(msg, self.verbose) if is_stopping: break else: self._no_improvement_count = 0 if incremental: break if self.n_iter_ == self.max_iter: warnings.warn( "Stochastic Optimizer: Maximum iterations (%d) " "reached and the optimization hasn't converged yet." % self.max_iter, ConvergenceWarning, ) except KeyboardInterrupt: warnings.warn("Training interrupted by user.") if early_stopping: # restore best weights self.coefs_ = self._best_coefs self.intercepts_ = self._best_intercepts def _backprop(self, X, y, activations, layer_units, deltas, coef_grads, intercept_grads): """Compute the MLP loss function and its corresponding derivatives with respect to each parameter: weights and bias vectors. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The input data. y : ndarray of shape (n_samples,) The target values. activations : list, length = n_layers - 1 The ith element of the list holds the values of the ith layer. layer_units (DROPOUT ADDITION) : list, length = n_layers The layer units of the neural net, this is the shape of the Neural Net model. This is used to build the dropout mask. deltas : list, length = n_layers - 1 The ith element of the list holds the difference between the activations of the i + 1 layer and the backpropagated error. More specifically, deltas are gradients of loss with respect to z in each layer, where z = wx + b is the value of a particular layer before passing through the activation function coef_grads : list, length = n_layers - 1 The ith element contains the amount of change used to update the coefficient parameters of the ith layer in an iteration. intercept_grads : list, length = n_layers - 1 The ith element contains the amount of change used to update the intercept parameters of the ith layer in an iteration. 
Returns ------- loss : float coef_grads : list, length = n_layers - 1 intercept_grads : list, length = n_layers - 1 """ n_samples = X.shape[0] dropout_masks = None # Create the Dropout Mask (DROPOUT ADDITION) if self.dropout != None: if 0 < self.dropout < 1: keep_probability = 1 - self.dropout dropout_masks = [np.ones(layer_units[0])] # Create hidden Layer Dropout Masks for units in layer_units[1:-1]: # Create inverted Dropout Mask, check for random_state if self.random_state != None: layer_mask = (self._random_state.random(units) < keep_probability).astype(int) / keep_probability else: layer_mask = (np.random.rand(units) < keep_probability).astype(int) / keep_probability dropout_masks.append(layer_mask) else: raise ValueError('Dropout must be between zero and one. If Dropout=X then, 0 < X < 1.') # Forward propagate # Added dropout_makss to _forward_pass call (DROPOUT ADDITION) activations = self._forward_pass(activations, dropout_masks) # Get loss loss_func_name = self.loss if loss_func_name == "log_loss" and self.out_activation_ == "logistic": loss_func_name = "binary_log_loss" loss = LOSS_FUNCTIONS[loss_func_name](y, activations[-1]) # Add L2 regularization term to loss values = 0 for s in self.coefs_: s = s.ravel() values += np.dot(s, s) loss += (0.5 * self.alpha) * values / n_samples # Backward propagate last = self.n_layers_ - 2 # The calculation of delta[last] here works with following # combinations of output activation and loss function: # sigmoid and binary cross entropy, softmax and categorical cross # entropy, and identity with squared loss deltas[last] = activations[-1] - y # Compute gradient for the last layer self._compute_loss_grad( last, n_samples, activations, deltas, coef_grads, intercept_grads ) inplace_derivative = DERIVATIVES[self.activation] # Iterate over the hidden layers for i in range(self.n_layers_ - 2, 0, -1): deltas[i - 1] = safe_sparse_dot(deltas[i], self.coefs_[i].T) inplace_derivative(activations[i], deltas[i - 1]) self._compute_loss_grad( i - 1, n_samples, activations, deltas, coef_grads, intercept_grads ) # Apply Dropout Masks to the Parameter Gradients (DROPOUT ADDITION) if dropout_masks != None: for layer in range(len(coef_grads)-1): mask = (~(dropout_masks[layer+1] == 0)).astype(int) coef_grads[layer] = coef_grads[layer] * mask[None, :] coef_grads[layer+1] = (coef_grads[layer+1] * mask.reshape(-1, 1)) intercept_grads[layer] = intercept_grads[layer] * mask return loss, coef_grads, intercept_grads def _forward_pass(self, activations, dropout_masks=None): """Perform a forward pass on the network by computing the values of the neurons in the hidden layers and the output layer. Parameters ---------- activations : list, length = n_layers - 1 The ith element of the list holds the values of the ith layer. dropout_mask : list, length = n_layers - 1 The ith element of the list holds the dropout mask for the ith layer. 
""" hidden_activation = ACTIVATIONS[self.activation] # Iterate over the hidden layers for i in range(self.n_layers_ - 1): activations[i + 1] = safe_sparse_dot(activations[i], self.coefs_[i]) activations[i + 1] += self.intercepts_[i] # For the hidden layers if (i + 1) != (self.n_layers_ - 1): hidden_activation(activations[i + 1]) # Apply Dropout Mask (DROPOUT ADDITION) if (i + 1) != (self.n_layers_ - 1) and dropout_masks != None: check1 = activations[i].copy() activations[i+1] = activations[i+1] * dropout_masks[i+1][None, :] # For the last layer output_activation = ACTIVATIONS[self.out_activation_] output_activation(activations[i + 1]) return activations
H: What will happen if we apply Gradient Ascent? I have built a simple neural network on MNIST, but instead of moving in the opposite direction of the gradients, I moved in the same direction as them, just by applying (PyTorch): for p in model.parameters(): p.grad *= -1 I just wanted to know what will happen at prediction time. I got these results as a confusion matrix. Are we able to interpret the results? AI: Well, let's think it through: You're calculating the gradient at each step, which points you in the direction of lower loss. Then, you're reversing the direction, going in the opposite direction, away from whatever minimum the gradient was pointing you towards. If you're near the global minimum, this means that you're moving further from the best solution with every step. If you're near local minima, you'll move away from them, which may or may not move you in the direction of the global minimum. It doesn't really matter, though, because if it does move you such that the nearest minimum is now the global minimum, at the next step you'll start moving away from it. To put it simply, you're designing a model that is as wrong as possible. This is even more clear looking at the confusion matrix, where you see that the model is returning the same class for every sample, and not even the highest-frequency class (you get 982 correct, the largest class is 1135). The model is performing worse than the simplest possible baseline, selecting the majority class.
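For reference, a minimal self-contained sketch of where that sign flip sits in a training loop (the tiny linear model and the random batch are placeholders for the real MNIST network and data loader); negating every gradient between backward() and step() turns the update into gradient ascent on the loss:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # stand-in for the MNIST network
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.randn(64, 1, 28, 28)            # placeholder batch instead of a real MNIST loader
labels = torch.randint(0, 10, (64,))

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
for p in model.parameters():
    p.grad *= -1        # flip every gradient: the optimizer now climbs the loss surface
optimizer.step()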
H: Best image format for face detection and face recognition with the DeepFace library I'm using the DeepFace library for face recognition and detection. I was wondering if there is a better format (png, jpg, etc) than others to get better results. Is there a preferred image format for face recognition and face detection generally? And specifically in this library? AI: In Python, images are processed as bitmaps using the colour depth of the graphics system. Converting a PNG image to a bitmap is really fast (roughly 20 times faster) compared to JPEGs. DeepFace currently accepts only two image input formats: PNG and JPEG. There is no way to use images in other formats directly, because you are using their libraries; if you want to use another input format, you first have to convert it to PNG or JPEG before passing it to the functions, which may cost you extra execution time. If you want to improve face recognition and face detection with the DeepFace library, use some preprocessing filters. Some of the filters you can try for better results (ultimate guide): grayscale conversion, face straightening, face cropping (DeepFace does this automatically while processing, so there is no need), image resizing, normalization, image enhancement with PIL such as sharpening, and image equalization.
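A rough sketch of a few of these filters with Pillow (the file names are placeholders, and the chosen size is just an example):

from PIL import Image, ImageFilter, ImageOps

img = Image.open("face.jpg")                 # placeholder path
img = img.resize((224, 224))                 # image resizing
gray = ImageOps.grayscale(img)               # grayscale conversion
sharp = img.filter(ImageFilter.SHARPEN)      # sharpening with PIL
equalized = ImageOps.equalize(img)           # histogram equalization
sharp.save("face_preprocessed.png")          # save as PNG before handing it to DeepFace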
H: What's the advantage of He Initialization over Xavier Initialization? For weight initialization, I read that He doesn't consider linear activation of neurons as Xavier Initialization does; in this context, what does linear activation mean? AI: It is true that He initialization does not assume linear activations the way Xavier initialization does. He initialization is specifically designed to work well with ReLU activation functions, which are non-linear, whereas Xavier initialization is derived under the assumption that the activation function of the neuron is (approximately) linear. That is why He initialization can be a good choice for initializing the weights of deep neural networks with ReLU activation functions. Regarding your question about linear activation, it refers to a neuron whose output is a linear function of its inputs, i.e. there is no non-linearity applied (or the activation behaves roughly linearly around zero, as tanh does for small inputs). Maybe you can give a try reading the following: Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
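For intuition, a small NumPy sketch of the two schemes (the layer sizes are arbitrary; one common form of Xavier uses a variance of 1/fan_in, another uses 2/(fan_in+fan_out)):

import numpy as np

fan_in, fan_out = 256, 128
W_xavier = np.random.randn(fan_in, fan_out) * np.sqrt(1.0 / fan_in)   # assumes roughly linear activations
W_he = np.random.randn(fan_in, fan_out) * np.sqrt(2.0 / fan_in)       # extra factor of 2 compensates for ReLU zeroing about half the pre-activations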
H: How to train machine learning on sales forecasting problems of almost 10,000 shops? I have a dataset of almost 10,000 shops, with 'dates', 'shop ID' and 'sales amounts' as features, covering almost 2 years of data. I want to forecast, for each shop, the sales amount for the next 30 days. I want to perform this on a CPU (laptop), no cloud, and no GPU. I can do this if there are 10 to 20 shops, but I am not getting how to do this for 10,000 shops. I am adding an example dataset here. The above fig shows shop ID, date and sale amount columns. Now for each shop ID there are almost 2 years of data. And there are 10,000 such shops. Current situation: I have aggregated all the shops' data over date and predicted the next 30 days using ARIMA. But I am not getting any idea how to do it at each shop level. Please help. Thank you. AI: Predicting 30 days ahead for 10,000 shops with only a laptop CPU (no GPU, no cloud) is difficult, but you can try different approaches to get the most out of the data and make consistent predictions: Start with a correlation map to find similar shops. If you find groups of shops with similar behavior, you may only have to train a model for each group. In addition to that, you could visualize the 10,000 shops using dimensionality reduction algorithms like UMAP and detect relevant clusters. This is also interesting for extracting features and dependencies on other important variables that could help you improve predictions. I recommend starting with 1,000 shops to find the right parameters and prepare the data correctly. Once you have clear clusters, you can apply ARIMA to some of them, as they probably have seasonality (which is where ARIMA performs best). For the shops outside of clusters, you may apply another algorithm better suited to noisy data, potentially Exponential Smoothing or Random Forest. In conclusion, starting from a general approach to group shops with similar data, extracting knowledge from the groups, and applying a prediction model for each group could be a good solution in case of low computing power.
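A rough sketch of this group-then-forecast idea with pandas, scikit-learn and statsmodels; the file name, column names, number of clusters and ARIMA order are all assumptions to adapt to your data, and here each cluster's aggregate series is forecast (per-shop forecasts could then be obtained by scaling with each shop's share of the cluster):

import pandas as pd
from sklearn.cluster import KMeans
from statsmodels.tsa.arima.model import ARIMA

df = pd.read_csv("sales.csv", parse_dates=["date"])              # placeholder file/column names
pivot = df.pivot_table(index="date", columns="shop_id", values="sales").fillna(0)

# Group shops by the shape of their sales curves (one row per shop)
labels = KMeans(n_clusters=20, random_state=0).fit_predict(pivot.T)

forecasts = {}
for cluster in range(20):
    shops = pivot.columns[labels == cluster]
    series = pivot[shops].sum(axis=1)                            # aggregate sales of the cluster
    model = ARIMA(series, order=(7, 1, 1)).fit()                 # order chosen arbitrarily here
    forecasts[cluster] = model.forecast(steps=30)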
H: How does the Kalman Filter actually work? I know that the Kalman filter can be used whenever we have a measurement and transition equation. I also know that the Kalman filter can handle missing data. From my course at university, I know that Kalman filters are used all the time to make GDP forecasts. What I don't know is how the Kalman filter actually does this. I know how the HP filter works and I guess that both approaches are similar. Does someone know exactly how it works and can explain it, please? Thank you in advance for your reply. AI: The Kalman filter is a mathematical method used to estimate the state of a system based on noisy, indirect measurements. It consists of two steps: prediction and update. In the prediction step, the Kalman filter uses the system's transition model to predict the state at the next time step based on the current state and any control inputs. The transition model is a mathematical representation of how the system is expected to evolve over time. In the update step, the Kalman filter uses the measurement model to incorporate new information from the measurements into the prediction. The measurement model is a mathematical representation of how the system's state is related to the measurements. An intuitive explanation of the Kalman filter can be found here An Explanation of the Kalman Filter How a Kalman filter works, in pictures kalman filter: understanding the mathematical part
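To make the two steps concrete, here is a minimal one-dimensional sketch, assuming a random-walk transition model and arbitrary noise values; it also shows how a missing measurement is handled by simply keeping the prediction:

import numpy as np

measurements = np.array([1.1, 0.9, 1.3, np.nan, 1.2])   # np.nan marks a missing observation
x, P = 0.0, 1.0      # initial state estimate and its variance
Q, R = 0.01, 0.5     # process (transition) noise and measurement noise variances

for z in measurements:
    # Prediction step: propagate the state through the transition model (identity here)
    x_pred, P_pred = x, P + Q
    if np.isnan(z):
        x, P = x_pred, P_pred            # missing data: keep the prediction
    else:
        # Update step: blend prediction and measurement, weighted by the Kalman gain
        K = P_pred / (P_pred + R)
        x = x_pred + K * (z - x_pred)
        P = (1 - K) * P_pred
    print(x, P)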
H: Is it possible to combine models in pytorch and pytorch geometric? I am working on a node classification problem with graphical data. I've used PyTorch to classify nodes by simply applying a network to the individual nodes (e.g., ignoring graphical structure), and I've used PyTorch Geometric to classify nodes by applying a GNN (e.g., GCN). Is it possible to apply a model from PyTorch as the final layer in PyTorch Geometric? I am a bit confused about how this would work because the inputs to the torch.nn.Module are a graph for the GNN case but feature vectors for the MLP case. I want to combine a PyTorch model and a PyTorch Geometric model into a single model, which I can train. Is this possible? AI: Continuing from your comment: yes, that's correct! This is just one way of doing it. You can create 3 models: 1 in PyTorch, 1 in PyTorch Geometric, and then use each of those models as layers in a 3rd model. The parameters for each of the first 2 models will be updated while training the 3rd model, as long as you have defined the 3rd model to contain the first 2 models as submodules and called their forward methods in the forward pass of the 3rd model. You can refer to the following example (note that the sizes have to line up: since this example reuses the same MLP as the GNN's final layer, the GNN's input size matches the MLP's output size, and the GNN's last convolution outputs the MLP's input size): import torch import torch.nn as nn import torch.nn.functional as F import torch_geometric # Define the PyTorch model class MLP(nn.Module): def __init__(self, input_size, hidden_size, output_size): super(MLP, self).__init__() self.fc1 = nn.Linear(input_size, hidden_size) self.fc2 = nn.Linear(hidden_size, output_size) def forward(self, x): x = F.relu(self.fc1(x)) x = self.fc2(x) return x # Define the PyTorch Geometric model (a plain nn.Module that uses PyTorch Geometric layers) class GNN(nn.Module): def __init__(self, input_size, hidden_size, output_size, mlp): super(GNN, self).__init__() self.conv1 = torch_geometric.nn.GCNConv(input_size, hidden_size) self.conv2 = torch_geometric.nn.GCNConv(hidden_size, output_size) self.mlp = mlp def forward(self, x, edge_index): x = self.conv1(x, edge_index) x = F.relu(x) x = self.conv2(x, edge_index) x = self.mlp(x) return x # Define the combined model class CombinedModel(nn.Module): def __init__(self, mlp, gnn): super(CombinedModel, self).__init__() self.mlp = mlp self.gnn = gnn def forward(self, x, edge_index): # Apply the MLP to the feature vectors x = self.mlp(x) # Apply the GNN to the graph and feature vectors x = self.gnn(x, edge_index) return x # Create instances of the PyTorch and PyTorch Geometric models mlp = MLP(input_size=10, hidden_size=20, output_size=5) gnn = GNN(input_size=5, hidden_size=30, output_size=10, mlp=mlp) # GNN input = MLP output, GNN conv2 output = MLP input # Create an instance of the combined model combined_model = CombinedModel(mlp, gnn)
H: How can I write a Python program, that recognizes clapping sounds in an audio file, and saves them to disk? I would like to write a Python program that takes an audio file as input, recognizes clapping sounds in it, then extracts these sounds into a file. My idea is the following: Get (or create) a dataset containing many audio clips with clapping sounds in them, Train a machine learning model to recognize these sounds, Test the model on new audio files. How can I achieve point 2? I already have a dataset ready, so I just need to run a ML algorithm on them. I've seen several alternatives: Keras PyTorch Scikit-learn But I have no idea which to choose for this particular task and how I would go about writing a Python script that does the training and testing. Can someone please help? AI: Claps have specific waves that could be detected with several solutions, including ML. Here is a code using keras: https://www.kaggle.com/code/carlolepelaars/bidirectional-lstm-for-audio-labeling-with-keras In addition to that, there is an eating sound project that is quite similar and there is probably useful code like this one with wav2vec: https://github.com/m3hrdadfi/soxan/blob/main/notebooks/Eating_Sound_Collection_using_Wav2Vec2.ipynb On the other hand, non-ML could have even better results, as they follow simple rules that detect claps efficiently. For instance: https://github.com/nikhiljohn10/pi-clap https://github.com/xSparfuchs/clap-detection/blob/master/clap-detection.py https://github.com/tom-s/clap-detector
H: How can I observe my CSV files better? I'm running a lot of experiments that give their output as CSV files. An experiment might be running for hours, with a new line being added to the CSV every 10 seconds. Right now I'm opening these CSV files in a text editor, which isn't too convenient. I'm looking for a better way. Here are some features I want: View CSV files as a table. Automatically display numbers in a reasonable way (i.e. instead of 10 significant digits after the dot, show 2). Automatically tail the file, i.e. show new lines as the file is updated. Allow me to hide columns, and remember my selection for new CSV files of the same format. Allow me to show the data in a plot, and remember the plot format I used for new CSV files of the same format. Does anything like that exist? AI: You can use pandas to format your files however you want, but for interactive viewing, VisiData (https://github.com/saulpw/visidata) is a must-have! It is an open-source project that runs in the terminal, and it rocks! More info here: https://www.visidata.org
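If you prefer staying in Python for the formatting and plotting part, a small pandas sketch could look like this (the file name and column names are placeholders):

import pandas as pd

pd.set_option("display.float_format", lambda v: f"{v:.2f}")   # show 2 digits after the dot
df = pd.read_csv("experiment.csv")                            # placeholder file name
df = df.drop(columns=["debug_column"], errors="ignore")       # hide columns you don't care about
print(df.tail(10))                                            # show the newest lines
df.plot(x="step", y="loss")                                   # placeholder column names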
H: Day number as a feature in Linear regression Goal - To train a Linear regression model for climatic studies. Planned features: - Temperatures, Latitude, Longitude, Day Number (1st February = 32) Would it be correct to include day number like that? Can we assume ordinality in day numbers? 2nd January isn't greater than 1st January in any way, so would it be correct to put 2nd January as 2 and 1st January as 1. How can we have a day number feature keeping this in mind? Or something that would let us control which date for which we want to predict. AI: It is generally not correct to assume ordinality in day numbers in this context. Day number (i.e., the number of days since the beginning of the year) is not a meaningful feature for predicting temperature or other climatic variables. One way to incorporate the date into your model is to use the month and year as separate features. For example, you could use two separate categorical variables: one for the month (e.g., January, February, etc.) and one for the year. This would allow you to control which date you want to predict for. Alternatively, you could use a continuous variable for the date, such as the number of days since a reference date (e.g., 1/1/1970). This would allow you to capture any trends or patterns in the data that may depend on the elapsed time since a specific point in the past. It's also worth considering whether other variables, such as altitude, land cover, or ocean currents, might be relevant for predicting temperature or other climatic variables. These variables could be important for understanding and predicting local climate patterns.
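Coming back to the date itself, a small pandas sketch of both encodings mentioned above (the date column here is just example data):

import pandas as pd

df = pd.DataFrame({"date": pd.to_datetime(["2021-01-01", "2021-02-15", "2022-07-04"])})
df["month"] = df["date"].dt.month.astype("category")                        # categorical month
df["year"] = df["date"].dt.year                                             # year as its own feature
df["days_since_ref"] = (df["date"] - pd.Timestamp("1970-01-01")).dt.days    # continuous elapsed time since a reference date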
H: Creating new features as linear combination of others as part of a scikit-learn pipeline? I have a number of raw features that go into a scikit-learn model. I've already got a number of preprocessing steps (such as PolynomialFeatures) that create additional features as part of my pipeline. However, I know that certain linear combinations of my raw features are also likely to be important in their own right. My question is whether such hand-crafted features (such as the sum of feature1 and feature2) can be created as part of scikit-learn's pipeline setup? Obviously I can create them at the Pandas dataframe level before passing them into the pipeline, but I'm trying to figure out the cleanest way to do this in my codebase. AI: Yes, you can create custom features as part of a scikit-learn pipeline by using a custom transformer class. A transformer is a class that implements the fit() and transform() methods, and can be used to perform any custom preprocessing or feature engineering steps that you need. Here's an example of how you might create a transformer class to create custom features as part of a pipeline: from sklearn.base import BaseEstimator, TransformerMixin class CustomFeatureTransformer(BaseEstimator, TransformerMixin): def __init__(self): # Initialize any parameters you need here pass def fit(self, X, y=None): # Fit the transformer to the data (e.g. compute any necessary statistics) return self def transform(self, X): # Work on a copy so the caller's dataframe is not modified in place X = X.copy() # Create the custom features X['custom_feature1'] = X['feature1'] + X['feature2'] X['custom_feature2'] = X['feature3'] - X['feature4'] return X You can then include this transformer in your pipeline like any other preprocessing step: from sklearn.pipeline import Pipeline # Create the pipeline pipeline = Pipeline([ ('custom_features', CustomFeatureTransformer()), # Other preprocessing steps go here ('model', MyModel()) ]) When you call fit() on the pipeline, the transformer's fit() and transform() methods will be called in sequence, allowing you to create and include the custom features in your model's training data.
H: What does the ellipse in Gradient Descent describe? I saw multiple articles describe GD or SGD with the following diagram: I didn't see any explanation about the ellipses. What does each ellipse describe? What is the meaning if the ellipse has a large or small diameter? Do the weights (which form the ellipses in the image) have to have the ellipse shape? (Are there any other shapes you can show as an example?) AI: These ellipses are the contour lines of the function being minimized by gradient descent: every point on a given ellipse has the same altitude/height, i.e. the same loss value. Let's understand it with the help of a mountain analogy. The mountain is your objective function, which you want to maximize or minimize. You want to reach the top of the mountain; how is the top represented in a contour plot? It is the center of the ellipses. The diameter tells us about the shape of the function, and the distance between two ellipses tells us how quickly or slowly the slope changes. For example, let the difference in height between two successive ellipses be 5 meters: if the ellipses are close together, we can conclude that the altitude is falling very sharply, and vice versa. Other shapes are also possible; the contour shape depends on the objective function you are trying to minimize/maximize.
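You can reproduce such a picture yourself; here is a small sketch plotting the contours of a simple quadratic loss (the coefficients are arbitrary):

import numpy as np
import matplotlib.pyplot as plt

w1, w2 = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
loss = 1.0 * w1**2 + 5.0 * w2**2        # unequal curvature gives elongated ellipses
plt.contour(w1, w2, loss, levels=15)    # each contour line has a constant loss value
plt.xlabel("w1")
plt.ylabel("w2")
plt.show()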
H: IllegalArgumentException at org.apache.xbean.asm5.ClassReader.(Unknown Source) when training an ALS implementation of spark in scala I was following this tutorial trying to write a collaborative recommender system using the alternating least squares algorithm in spark. I am using the movie lens dataset which can be found here. My code is as follows: case class Rating(userId: Int, movieId: Int, rating: Int, timestamp: Long) object Main { def main(args: Array[String]): Unit = { val spark = SparkSession .builder() .appName("Alternating least squares") .master("local[*]") .getOrCreate() def parseRating(str: String): Rating = { val fields = str.split("\\t") new Rating(fields(0).toInt, fields(1).toInt, fields(2).toInt, fields(3).toLong) } import spark.implicits._ val df: DataFrame = spark.read.text("./ml-100k/u1.base") .map(row => parseRating(row.mkString)) .toDF() //"userId", "movieId", "rating", "timestamp" df.show(5, false) val Array(train, test) = df.randomSplit(Array(.8, .2)) val als = new ALS() .setMaxIter(5) .setRegParam(.01) .setUserCol("userId") .setItemCol("movieId") .setRatingCol("rating") val model = als.fit(train) // <-- error here } } Everything is running fine, until it gets to the line where the als algorithm is supposed to be training. It throws this exception: Exception in thread "main" java.lang.IllegalArgumentException at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source) at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source) at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source) at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46) at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449) at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432) at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733) at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134) at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:134) at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732) at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432) at org.apache.xbean.asm5.ClassReader.a(Unknown Source) at org.apache.xbean.asm5.ClassReader.b(Unknown Source) at org.apache.xbean.asm5.ClassReader.accept(Unknown Source) at org.apache.xbean.asm5.ClassReader.accept(Unknown Source) at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262) at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261) at scala.collection.immutable.List.foreach(List.scala:392) at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261) at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159) at org.apache.spark.SparkContext.clean(SparkContext.scala:2299) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2073) at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1364) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) at org.apache.spark.rdd.RDD.withScope(RDD.scala:363) at org.apache.spark.rdd.RDD.take(RDD.scala:1337) at org.apache.spark.rdd.RDD$$anonfun$isEmpty$1.apply$mcZ$sp(RDD.scala:1472) at org.apache.spark.rdd.RDD$$anonfun$isEmpty$1.apply(RDD.scala:1472) at org.apache.spark.rdd.RDD$$anonfun$isEmpty$1.apply(RDD.scala:1472) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) at org.apache.spark.rdd.RDD.withScope(RDD.scala:363) at org.apache.spark.rdd.RDD.isEmpty(RDD.scala:1471) at org.apache.spark.ml.recommendation.ALS$.train(ALS.scala:918) at org.apache.spark.ml.recommendation.ALS.fit(ALS.scala:674) at Main$.main(Main.scala:58) at Main.main(Main.scala) Can someone please explain what I am doing wrong, as I am pretty much following the tutorial line by line. EDIT: I am using Java SDK version 11.0.11 with Scala version 2.11.12. AI: Upon further research (searching Stack Overflow for the line org.apache.xbean.asm5.ClassReader.(Unknown Source) from the stack trace) I found multiple answers such as this one, which in a nutshell say that the reason for this error is that Spark (or Spark's MLlib) only supports Java 8. Indeed, when I switched to Java 8 in my project, the code ran as expected.
H: How to translate text automatically using Google Translate API (or any other approach) in Python I have a dataset of reviews from TripAdvisor and I would like to translate non-English reviews into English. The reviews are in many different languages: my dataset contains reviews in 44 different languages, including English reviews which are around a quarter of the total, followed by Italian, German, Spanish and French reviews which are respectively 14%, 11%, 6%, and 5.5% of the reviews. I detect the different languages using the langdetect package. from langdetect import detect language = detect(title) Then I tried to use the package googletrans to translate the text. However, when I tried to perform a small test I got the following error: from googletrans import Translator translator = Translator() translator.translate('Buongiorno mi chiamo Alberto') Does anyone know a way to translate reviews (whether or not the language is known) to English using Python for a dataset of around 10k reviews? AI: This seems to be a known error in the package that can be solved by installing googletrans==4.0.0rc1 as mentioned in the github issue. This indeed seems to solve the issue when trying it myself: from googletrans import Translator translator = Translator() translator.translate('Buongiorno mi chiamo Alberto').text # Good morning my name is Alberto
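For the full dataset, a rough sketch assuming a pandas DataFrame with a 'review' column (file and column names are placeholders) could look like this; translate() auto-detects the source language, so the langdetect step is optional:

import pandas as pd
from googletrans import Translator

df = pd.read_csv("reviews.csv")          # placeholder: must contain a 'review' column
translator = Translator()

def to_english(text):
    try:
        return translator.translate(text, dest="en").text   # source language auto-detected
    except Exception:
        return text                                          # keep the original text on failure

df["review_en"] = df["review"].apply(to_english)

Since googletrans relies on the public web endpoint, expect occasional failures and rate limiting over 10k requests; the official (paid) Google Cloud Translation API is the more robust alternative for this volume.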
H: NLP Question: Where can I find a list of all open compound words (with spaces in them), like "peanut butter" or "high school"? I already have a list of "1-gram" words, which includes closed compound words like "skyscraper" or "weatherman." However, I'm also interested in compiling a list of "2-gram" words that take on a meaning completely different from their two halves, such as "rubber band." Using any modules/resources/methods, how can I accomplish this? (I tried downloading Google's Ngram files, but lots of the "2-grams" were nonsensical, like "and I" or "what was.") AI: A common way to frame this type of task is as collocation extraction, i.e., extracting bigrams or trigrams and then filtering them by the mutual information between their constituent tokens (this filtering solves the issue of noise you described when talking about the Google n-gram dataset). You can find more information on collocation extraction using nltk here. Note, though, that choosing the right mutual information cut-off threshold to separate collocations from non-collocations (=n-grams) is not trivial, will require tweaking, and you will probably never be fully happy with the results :D Generally speaking, you want it to be as high as possible, as long as that doesn't cause too many candidates to be discarded. You will probably only manage to keep about 5%-20%, which may or may not be enough for your particular needs. Note, though, that some equally-highly-collocated non-compound noise will persist, e.g. n-grams containing verbs, phrasal verbs, some frequent adverbial phrases, and so on. So, the options below are probably better, either on their own or combined with the information-theoretical approach (they are not mutually exclusive, for the most part; a small code sketch combining option #1 with the mutual-information filter follows the list): Part of speech tagging: Run a part-of-speech tagger on your data and retain only candidate n-grams whose tokens are tagged as sequences of nouns or adjectives (NN(S) and JJ tags), or where the n-gram, at the very least, starts and ends with one such part of speech (if you're really interested only in compounds, then I think that you'll be fine with sequences like {("NN", "NN"), ("NN", "NNS"), ...} only). With this additional condition, you could lower the mutual information threshold substantially and still get informative terms. Stopword removal: It's always easier to compile a list of all stopwords (or use/extend nltk's) than a list of all compounds, so you could also try generating all n-grams, then filtering out those that i) only contain stopwords, ii) start with one, or iii) end with one. You would still end up with collocations containing verbs, so this option is not as good as #1, but it might be a working compromise. Using gensim's Phrases phrase extractor. The documentation is great so I won't expand on it here. "Good old scraping": download a knowledge base, e.g. a dump of Wiktionary (https://en.wiktionary.org/wiki/peanut_butter) and parse it to get all dictionary lemmas that a) contain an underscore (i.e. a blank) and b) have at least one dictionary entry listed as a Noun. In this way, you can filter out verbs, while at the same time validating any n-gram hypothesis as a noun against the external ground truth. Of course, this approach is limited to the coverage of the ground truth dataset (at least initially), but this may already be enough for your purposes.
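As promised above, here is a small nltk sketch combining option #1 with the PMI (pointwise mutual information) filter; the one-sentence corpus is just a placeholder, and you may need nltk.download('punkt') and nltk.download('averaged_perceptron_tagger') first:

import nltk
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

tokens = nltk.word_tokenize("He ate a peanut butter sandwich at high school every day.")  # placeholder corpus
tags = dict(nltk.pos_tag(tokens))

finder = BigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(1)   # raise this on a real corpus (e.g. 5 or more occurrences)

# Keep only bigrams whose two tokens are tagged as nouns or adjectives
allowed = {"NN", "NNS", "JJ"}
finder.apply_ngram_filter(lambda w1, w2: tags.get(w1) not in allowed or tags.get(w2) not in allowed)

print(finder.nbest(BigramAssocMeasures().pmi, 20))   # top candidates ranked by PMI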
For a comprehensive overview, motivation and how-to of methods #1 and #3, you can check this excellent post: https://towardsdatascience.com/word2vec-for-phrases-learning-embeddings-for-more-than-one-word-727b6cf723cf Hope any of this helps!
H: What is the right processing order when working with a dataset that already consists of test and train data? I want to work on the following task: Text Classification using Deep Learning models and a Transfer Learning model. The notebook that I'm creating should include the following steps: Data Preparation Cleaning EDA Embedding/Preprocessing Training Evaluation The dataset that I want to use is yahoo_answer_topics from Hugging Face, which comes with an already defined train and test split. And here comes the point I'm not sure about: how should I deal with this predefined train and test split? Consulting other similar questions about when to do the train and test split, it is stated that the cleaning and EDA can be performed on the entire dataset. However, other resources mention that the EDA should only be done on the training data, and that the validation and test data should be used to evaluate the quality of any decisions made on the train dataset. What is considered the correct way? If the first is the way to go, would this mean it's best to combine the two datasets at the start, clean, preprocess, and do the EDA before splitting them again into train, validation, and test? If the second is the way to go, would this mean I have to split the train set into validation and train as a first step, then do the cleaning and EDA only on the train dataset, and then apply the same cleaning/transformation steps to the validation and test data right before training the model? Would it, therefore, be a violation if I look at the validation and test datasets before the training (e.g. checking the distribution of the different classes in the validation and test set), and what if I want to do K-fold cross validation? Thank you already for your help! AI: In my opinion, it's simpler to always assume that the model is meant to be applied in the future on some unknown data: In theory, this is always the goal of training a supervised ML model. Even if this is done only for educational purposes, it makes sense to practice with realistic constraints. Practically, this requires designing the system in a way such that any required preprocessing can be re-run with new data. Doing things this way prevents any risk of data leakage and ensures that we properly evaluate on the test set. Of course it's common to apply some steps on the whole dataset to save time, but the proper way is always to split first (if needed) and work only with the training set until the evaluation step, considering the test set as unknown future data. The preprocessing steps should be coded in a function which can be applied to any subset of data. There's no need to study the distribution of the validation or test set: in general, these sets should have the same distribution as the training set, so they don't bring any new information. Occasionally the test set is purposefully built with a slightly different distribution because this corresponds to a realistic application: in this case it would clearly be a mistake to use this information before training (data leakage), since the goal is to evaluate the robustness of the model. K-fold cross validation would be a different setting, since it relies on evaluating with different subsets of data. For example, you could use CV on the training set in order to tune some hyper-parameters, and later evaluate the final model with its best parameters on the test set.
H: Sequence labels with prior knowledge about valid labels I am working on a text classification problem where, at inference time, there is a known set of valid classifications which is smaller than the set of all possible labels. Example: There are 5 labels: L1, L2, ..., L5. At inference time, the user wants to classify text T1, but we have prior knowledge that T1 can only have labels L2, L3, or L4. In another case, we want to classify T2 with prior knowledge that T2 can only have labels L1 and L2. Do I have to train a separate model / filter results for each combination of applicable labels, or is there a way to use one model that can take in the label set and output the likelihood for only the valid labels? AI: "train a separate model ... for each combination of applicable labels" You are getting at model performance. You don't have to do anything, but if you train different kinds of models, that is, models drawn from distinct hypothesis sets, you may observe various performance levels. It sounds like you have an existing model that outputs five raw scores. And then you incorporate "extra" knowledge, which the model is ignorant of, to zero out some scores before scaling them into probabilities and choosing the argmax (the maximum-likelihood label). Training many models is certainly one possible refinement, but augmenting the width of the input training vector is another. One obvious augmentation is to tack on five indicator variables, saying whether each one-hot encoded label is {allowed, disallowed}. Logistic regression, decision trees, and other modeling techniques will learn from such hints. But perhaps the business constraint comes from something simpler, like the stage within the customer journey. If you have labels like {prospect, initial customer, repeat customer} which induce {allowed, disallowed} over the labels, consider one-hot encoding those instead, to reduce the model complexity. No matter which approach you choose, or how much of the SME's business rules you make available during training, you can always post-filter; there's nothing wrong with that. And simple filters can be implemented very quickly, without the cost of retraining a model to learn new business rules. tl;dr: If you can expose Bayesian priors at training time, do so; expose as much signal within the noise as feasible.
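For the post-filter option, a minimal sketch of masking invalid labels before taking the argmax (the scores here are placeholders for your model's raw outputs):

import numpy as np

labels = ["L1", "L2", "L3", "L4", "L5"]
scores = np.array([0.1, 2.3, 0.7, 1.9, 3.2])        # placeholder raw scores from the model
valid = {"L2", "L3", "L4"}                          # prior knowledge for this text

masked = np.where([l in valid for l in labels], scores, -np.inf)
probs = np.exp(masked - masked.max())
probs /= probs.sum()                                # softmax over the valid labels only
print(labels[int(np.argmax(masked))], probs)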
H: Combining sentence embeddings of two different models (sBERT and mBERT) I am working on a chatbot that helps students. So, I wanted to make use of a BERT model which has better performance on mathematics, which led me to math-bert, but the paper on it said that it was trained only on a mathematical corpus, which means it won't have great performance on general sentences (example in image). So is there a method to combine sentence-bert and math-bert? Or is the only way to train a BERT model from scratch using the corpora used for sentence-bert and math-bert? AI: Another solution is to directly use GPT for maths. Currently GPT-3 is one of the best models for answering maths problems, but it requires powerful hardware. https://openai.com/blog/grade-school-math/ Nevertheless, you can have good results using GPT-2: https://github.com/openai/grade-school-math https://github.com/openai/grade-school-math/blob/master/grade_school_math/train.py I recommend starting with the smallest model (124M parameters). It should be enough if it is just a chatbot to solve classic maths problems. https://huggingface.co/gpt2
H: Explain notation in Bishop Eq 1.60 I'm starting to work through Bishop's "Pattern Recognition and Machine Learning" book, and have run into unfamiliar notation. Eq. 1.60 is prefaced by saying "For this purpose, we shall assume that, given the value of $x$, the corresponding value of $t$ has a Gaussian distribution with a mean equal to the value [of the polynomial curve $y(x, \mathbf{w}) = \sum_{j=0}^M w_j x^j$]. Thus we have" $$p(t|x, \mathbf{w}, \beta) = {\cal N}(t|y(x, \mathbf{w}), \beta^{-1}) \tag{1.60}$$ I understand the LHS to read "the probability density as a function of $t$, given $x$, $\mathbf{w}$ and $\beta$". The RHS is indicating a normal distribution with variance $\beta^{-1}$. The bit I don't understand is what is going on with the mean of this distribution - how should I interpret $\mu = t|y(x, \mathbf{w})$? Why isn't this just written as ${\cal N}(y(x, \mathbf{w}), \beta^{-1})$? What is the bar signifying? I only know its meaning within a probability. (This notation continues to be used, e.g. in Eqs. 1.61, 1.64, 1.69, so I'd like to make sure I understand it!) (I don't know if there are multiple editions of this book; the section I am reading here is 1.2.5 "Curve fitting re-visited", and Eq. 1.60 is the first equation in this section.) AI: I expect this notation is just meant to emphasize which variables are random and which are fixed. Earlier in the book Bishop uses the notation $N(x|\mu, \sigma^2)$ for a generic Gaussian distribution (1.46), where $x$ is the random variable, and $\mu$ and $\sigma^2$ are the fixed parameters of the distribution. So it's not that the mean is being written as $t|y(x,w)$, but rather that "$t|$" gives a name to the random variable. (I'm looking at the 2006 edition)
H: Clustering on Raw Image Pixel Array I have an array in the shape of (105, 105, 3). When I do plt.imshow(array) it outputs: How can I run a clustering algorithm directly on this image? Do I need to convert the pixels to cartesian coordinates and if so, how? I have tried: labels = mixture.GaussianMixture(n_components=2).fit(array).predict(array) However this outputs an error. Please help! AI: You are right, a clustering algorithm such as a Gaussian Mixture Model would require cartesian coordinates. You can get the coordinates of the dark pixels as follows:
import numpy as np
binary = np.mean(array, axis=2) < 0.5
X = np.stack(np.where(binary)).transpose()
You should now be able to cluster X. Note: For images, segmentation is more common than clustering. Both types of processing are related and segmentation can be seen as a special kind of clustering, but it often aims for slightly different outcomes. Not knowing your desired outcome, I just want to point you to that kind of algorithm in case it fits your task.
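Continuing that sketch, clustering the extracted coordinates with scikit-learn might look like this; the random array only stands in for your image, and two clusters are assumed as in your question:

import numpy as np
from sklearn import mixture

# Toy stand-in for the (105, 105, 3) image array from the question
array = np.random.rand(105, 105, 3)

binary = np.mean(array, axis=2) < 0.5        # dark pixels
X = np.stack(np.where(binary)).transpose()   # (n_pixels, 2) row/column coordinates

gmm = mixture.GaussianMixture(n_components=2, random_state=0)
labels = gmm.fit_predict(X)                  # one cluster label per dark pixel
print(X.shape, np.bincount(labels))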
H: Models that are good for long answer generation given context and question and what datasets would be the best for training? Basically I am trying to create a context-needing question and long answer model and I was wondering what model would be best for such tasks, currently I am leaning towards T5, or GPT-NeoX-20B. Additionally for such tasks what datasets would be the most suitable, as of right now I have looked at CoQA, and SQuAD but none provide long answers. AI: After the comment from Nicolas Martin, I found gpt2 for qa pair generation which gave reasonable steps on how to utilize Question and Answering for GPT models and then I can specify min_length and max_length to create a long answer.
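For anyone leaning towards T5 instead, here is a hedged sketch with transformers; the model name, prompt format, and length limits are only illustrative, and an off-the-shelf t5-small would still need fine-tuning on a long-answer dataset:

from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

context = "..."   # the passage the answer should be grounded in
question = "..."  # the context-needing question
inputs = tokenizer(f"question: {question} context: {context}", return_tensors="pt")
outputs = model.generate(**inputs, min_length=64, max_length=256, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))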
H: Should I remove French special characters and apostrophes I am working on a French text preprocessing task, in order to prepare the data to train an NLP model, but I do not know whether it is better to remove French special characters and apostrophes or keep them. Example: Malgré que j'ai tellement aimé ce boulot je veut démissionner Becomes Malgre que jai tellement aime ce boulot je veut demissionner I have also noticed that most lemmatization libraries for French text are not efficient at all, so I was wondering if I could skip this step, and also skip the stopwords removal step. In general the preprocessing steps will be:
Remove URLs and emails
Demojize emojis
Transform numbers into text (6 -> six)
Removal of all special characters, including French special characters
AI: It depends on the data volume you have. As far as I know, there are two cases for building good NLP models: Either you have plenty of data (>10 GB as a rough order of magnitude), so that you can build accurate models even if there are special characters. Or you don't have a lot of data (~1 GB or less) and you have to simplify it as much as possible, and even improve it (for instance, replace ; by ,). In other words, you compensate for the quantity with quality. Keep in mind that data complexity is correlated with data quantity: the more complex the data, the more data you need. In conclusion, if you have a lot of data, you should keep the accents, as they are necessary to distinguish words; some words in French differ only by their accents (e.g. tâche vs. tache), although any model would also differentiate them according to their context (cf. the attention mechanism). If you don't have a lot of data, removing accents would be better, because it would reduce the vocabulary size and hence improve learning. Note: There are very good NLP spell checkers available to recover the correct spelling with accents.
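If you end up in the low-data case and do strip accents, a small sketch using only the Python standard library (note that it keeps apostrophes; dropping those is a separate decision):

import unicodedata

def strip_accents(text: str) -> str:
    # Decompose accented characters, then drop the combining marks
    nfkd = unicodedata.normalize("NFKD", text)
    return "".join(c for c in nfkd if not unicodedata.combining(c))

print(strip_accents("Malgré que j'ai tellement aimé ce boulot je veut démissionner"))
# Malgré -> Malgre, aimé -> aime, démissionner -> demissionner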
H: What does the random seed influence in transformers? I've been doing some experiments with setting different seeds on transformer training, and wanted to understand why I see so much variance. Am I correct in thinking the random seed only influences two things when transformer training: The random sorting of the training data; Which connections get removed by dropout. Oh, and the initial value of the randomly-initialized weights. If so, that implies there is no stochastic element at all when using the model for inference? Maybe my question would be better phrased as: what functions use randomness, so I can search a codebase for them, and confirm where they are used? (I'm mainly looking at PyTorch implementations such as fairseq and huggingface; but I am assuming tensorflow implementations handle random numbers the same way.) AI: Your understanding is correct, both regarding the stochastic elements at training time (weight initialization, training data ordering, dropout) and that there are no inherent stochastic elements at inference time in the model itself. Now, assuming we are talking about either a full transformer with encoder and decoder (e.g. for machine translation, like the original formulation) or a mere transformer decoder (e.g. a causal language model like GPT-3): note that the model itself does not define the decoding strategy (i.e. how to generate tokens), so you can choose to use a deterministic strategy (e.g. greedy decoding, beam search) or to use a stochastic decoding strategy (e.g. normal sampling from the output multinomial distribution, nucleus sampling, top-k sampling). Also, there are some practical nuances to take into account, like some non-deterministic behaviors of specific implementations in CUDA (see this).
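For reference, a hedged sketch of the seeding that PyTorch-based training scripts typically do up front; exact calls vary by codebase, and libraries such as transformers wrap the same idea in a set_seed helper:

import random
import numpy as np
import torch

def set_seed(seed: int = 42):
    random.seed(seed)                 # Python-level randomness (e.g. data shuffling)
    np.random.seed(seed)              # NumPy randomness
    torch.manual_seed(seed)           # weight initialization and dropout masks on CPU
    torch.cuda.manual_seed_all(seed)  # the same on all GPUs

set_seed(42)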
H: Would adding Elastic Net as an additional Benchmark add any value when LASSO is already an included benchmark? I am currently engaged in a research project with a collaborator in which he is proposing a novel learning algorithm for optimal variable selection and exploring its computational, statistical, and asymptotic properties, while I am proposing and running the several benchmark methods against which its performance is compared; his procedure and my benchmarks are all run on the same set of 260k synthetic datasets, which my collaborator has generated via Monte Carlo simulation. I have so far decided on three benchmarks: BM1 - LASSO Regression, BM2 - Backward Stepwise Regression, and BM3 - Forward Stepwise Regression. I have been considering also adding Elastic Net as a fourth benchmark, but something tells me it wouldn't be worth the extra coding and debugging hours. Would including it add any significant value over just including LASSO? AI: No. The reason is that Elastic Net, as a cross between the L1 and L2 penalties, would only ever select a subset of the variables that LASSO would select; or, if the weight on its L1 penalty is set extremely close to 1, it might select exactly the same set of variables as LASSO does. However, what it will never do is select one or more variables that LASSO does not select, because it minimizes the same objective (loss) function, uses the same type of penalty, and is fit on the same dataset.
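If you want an empirical check rather than arguing from theory, a quick sketch with scikit-learn comparing which coefficients each method zeroes out; the generated data and penalty values here are placeholders, not your collaborator's datasets:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, ElasticNet

X, y = make_regression(n_samples=200, n_features=50, n_informative=10,
                       noise=5.0, random_state=0)

lasso = Lasso(alpha=0.5).fit(X, y)
enet = ElasticNet(alpha=0.5, l1_ratio=0.5).fit(X, y)

lasso_vars = set(np.flatnonzero(lasso.coef_))
enet_vars = set(np.flatnonzero(enet.coef_))
print("LASSO selects:", sorted(lasso_vars))
print("Elastic Net selects:", sorted(enet_vars))
print("Selected by Elastic Net but not LASSO:", sorted(enet_vars - lasso_vars))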
H: If I'm comparing performance between two different datasets should sample and class size be uniform? If I'm comparing performance between classification models on two different datasets should the number of samples per class, the number of classes, and features per sample be the relatively the same between the datasets? Please let me know if you require any additional information. AI: From the comments, OP aims to compare two different kinds of biological data in terms of which one allows the best classification performance. To compare anything reliably, it's always best to keep invariant as many variables as possible and vary only the main objects being compared. So ideally, this comparison would be made between the two different kinds of data for the same subjects, same number and distribution of classes. In my opinion the number of features per sample does not have to be identical, since it's related to the 'kind of data'. If this ideal comparison is not possible (for example if the data is not obtainable), it's crucial at the very least to compare the two options on the same classification problem. This means that the classes (especially number and distribution) must be identical, or at least very close. Otherwise one of the problems might be easier to solve than the other, so there would be no way to be sure that the result of the comparison is due to the 'kind of data' being compared or the different setting of the problems.
H: A question about overfitting and SMOTE So I understand that overfitting is when you have for example a good accuracy for the training dataset and bad one for the testing dataset, but why would I even check the accuracy for the training dataset? If I have a good accuracy on the testing dataset that means I'm most likely not overfitting, right? (Assuming that we make sure that the model doesn't train on any testing data) I have another question: Can oversampling using SMOTE cause overfitting (good accuracy on the testing dataset but in reality it is overfitting?) SMOTE draws a line and makes new points on it so it doesn't duplicate the data. AI: First point: in general it's risky to use accuracy in order to measure performance, especially if there is class imbalance. F1-score would be a better option in general. Significantly lower performance on the test set compared to the training set is indeed the main indication of overfitting. It's worth keeping in mind that Overfitting means that the model captures too many details which happen by chance in the training set, a problem more likely to happen if the training set is too small and/or the model is too complex. but why would I even check the accuracy for the training dataset? Simply to detect overfitting, because it's a very common issue. If I have a good accuracy on the testing dataset that means I'm most likely not overfitting, right? (Assuming that we make sure that the model doesn't train on any testing data) This is correct, but very often we don't know how to define "good": for a very hard task, a F1-score of 0.5 could be very good for instance, but for some very easy task an accuracy of 0.99 might be bad. In other words, unless there is some benchmark performance to compare against, there's no easy way to know if the performance is good. I have another question: Can oversampling using SMOTE cause overfitting (good accuracy on the testing dataset but in reality it is overfitting?) SMOTE draws a line and makes new points on it so it doesn't duplicate the data. Yes, totally: the points artificially created by SMOTE are not real data points, so there is a high chance that they do not have the same variations that real data points would have (whether it's noise or significant patterns). But the number of points can mislead the model, which is likely to assume that there's an important regular pattern, even though it's actually only caused by very few (possibly random) points artificially inflated by SMOTE.
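Whatever you conclude about SMOTE, one practical safeguard is to resample only the training split, so the test set stays untouched real data. A minimal sketch with imbalanced-learn; the synthetic dataset and the classifier are placeholders:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)  # training data only
model = LogisticRegression(max_iter=1000).fit(X_res, y_res)

print("train F1:", f1_score(y_res, model.predict(X_res)))
print("test  F1:", f1_score(y_test, model.predict(X_test)))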
H: How to Train Q&A model using Bert for multiple comma separated values in a given data I'm using an entire textbook's data, scraping the information of each chapter. How do I get spaCy spancat NER or BERT Q&A based models to treat multiple comma-separated values in the text as important? For each chapter this behavior is recurring, so how do I train the model to detect that it is important and that this section is the important part which lists the different topics for each chapter? E.g. after scraping chapter 1: there is one paragraph that describes what topics will be covered in this chapter, like x,y,z,a,b,c,d,e. Similarly in chapter 2, there is one paragraph that describes what topics will be covered in this chapter, like f,g,h,i,j,k. How do I train this model in such a way that if I move to the next chapter, or even take the next book, it will recognize these patterns as the topics in that chapter, or get all the important topics discussed in the entire book? So, it will be the sum of all such comma-separated values in the book. AI: If the paragraphs containing plenty of comma-separated values are easy to detect, I wouldn't search for a very complex algorithm: after separating each paragraph of each chapter, look for the ones that have a higher relative frequency of commas, slashes, and/or carriage returns. You can compare them easily to other paragraphs. Normally, topic paragraphs should have a much higher quantity of commas than other ones. Then, you can extract the content between separators using the split function. Here is an example:
s = "element1,element2,element3,element4"
# Split the string on the ',' character
elements = s.split(',')
print(elements)
If it is more complex than that, you can use doc2vec to classify paragraphs into the targets [normal, topic], but you have to train with many samples so that the model can differentiate normal and topic paragraphs correctly.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report

# Doc2VecEmbeddings is the helper class from the tutorial linked as "Source" below
doc2vec_embs = Doc2VecEmbeddings()
x_train_tokens = doc2vec_embs.build_vocab(documents=x_train)
doc2vec_embs.train(x_train_tokens)
x_train_t = doc2vec_embs.encode(documents=x_train)
x_test_t = doc2vec_embs.encode(documents=x_test)

model = LogisticRegression(solver='newton-cg', max_iter=1000)
model.fit(x_train_t, y_train)
y_pred = model.predict(x_test_t)

print('Accuracy:%.2f%%' % (accuracy_score(y_test, y_pred)*100))
print('Classification Report:')
print(classification_report(y_test, y_pred))
Source
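A hedged sketch of the comma-frequency heuristic described above; the paragraph splitting rule and the threshold are assumptions you would tune on your own book text:

def topic_paragraphs(chapter_text: str, min_commas_per_100_chars: float = 2.0):
    """Return paragraphs whose comma density suggests a list of topics."""
    paragraphs = [p.strip() for p in chapter_text.split("\n\n") if p.strip()]
    selected = []
    for p in paragraphs:
        density = 100.0 * p.count(",") / max(len(p), 1)
        if density >= min_commas_per_100_chars:
            selected.append(p)
    return selected

chapter = "Some intro text.\n\nThis chapter covers x, y, z, a, b, c, d, e.\n\nMore prose."
for p in topic_paragraphs(chapter):
    print([t.strip() for t in p.split(",")])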
H: pandas doesn't recognize missing values in CSV In the CSV there are missing values in many columns, but the code shows there are no missing values. How can I fix this? AI: As can be seen in the pandas documentation, the value that is used in your CSV to indicate missing values (i.e. "None") is not in the default list of values used to mark missing values: Additional strings to recognize as NA/NaN. If dict passed, specific per-column NA. By default the following values are interpreted as NaN: ‘’, ‘#N/A’, ‘#N/A N/A’, ‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’, ‘1.#IND’, ‘1.#QNAN’, ‘<NA>’, ‘N/A’, ‘NA’, ‘NULL’, ‘NaN’, ‘n/a’, ‘nan’, ‘null’. Simply providing the value used in your file for the na_values argument should solve the issue: pandas.read_csv("file.csv", na_values="None")
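A quick sketch of the fix; the file name and the extra markers are whatever your CSV actually uses:

import pandas as pd

df = pd.read_csv("file.csv", na_values=["None", "none", "N.A."])
print(df.isna().sum())   # missing values are now counted per column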
H: How to decide the most suitable technique to handle Class Imbalance For handling an imbalanced dataset, we have a variety of techniques like adjusting class weights, oversampling, undersampling, SMOTE and its different variations (RCSMOTE, GSMOTE, DBSMOTE). My question is how can I identify the most optimal technique favorable for my model? Is it only by applying each of them and see which one gives the best metrics? Are there any conditions/scenarios where a given technique is preferred over the other ones? AI: In almost all circumstances, sound statistics disputes that class imbalance is a problem. Consequently, you probably should not do anything besides keep doing good data science that is grounded in correct statistics. (After all, this is what you were doing the whole time, right?) Imbalance often appears to be a problem because models can achieve high accuracy scores by always or almost always classifying as the majority class. The trouble with such a statement is that most models do not do classification. Despite what software methods might exist, models like logistic regressions and neural networks do not do classification. They output predictions on a continuum that can be binned into categories, but they do not have to be, and the way to separate the continuous predictions into discrete classes could involve a threshold other than the usual software-default of a probability of $0.5$ or even more discrete categories than there are classes. By evaluating models on statistically sound values like log-loss (“crossentropy loss” or “negative log likelihood” in some circles) or Brier score, almost any apparent issue related to class imbalance turns out not to be a problem. Since class imbalance then is not a problem, there is no need to use methods like oversampling, undersampling, or synthesis of new points in order to solve a non-problem. In fact, since synthesis of new points need not produce reasonable new points, creating synthetic points might create problems just because the user tried to solve a non-problem. We have an entire statistics meta post with oodles of good links related to class imbalance and why it is not the problem it appears to be.
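If you want to follow this advice in practice, the proper scoring rules mentioned are available directly in scikit-learn; a small sketch, where the labels and predicted probabilities are just placeholders:

from sklearn.metrics import log_loss, brier_score_loss

y_true = [0, 0, 1, 1, 0, 1]
y_prob = [0.1, 0.4, 0.8, 0.65, 0.2, 0.9]   # predicted probability of class 1

print("log loss   :", log_loss(y_true, y_prob))
print("Brier score:", brier_score_loss(y_true, y_prob))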
H: revenue forecast using regression - what is the input for the future? I have a dataset with quarter-wise revenue for the past 3 years, from Jan 2020 to Dec 2022, and 4642 customers. Each customer has 1 row of data which includes features based on his purchase frequency, avg revenue, total revenue, min revenue and max revenue etc. for each quarter. So, from Jan 2020 to Dec 2022, we have 12 quarters and data for all those quarters. Our objective is to predict the revenue for 2023Q1. As most of our customers have zero revenue, I use a Zero-Inflated Regressor (ZIR). So, I built a model with a training set (2020Q1 to 2022Q3) and a validation set (2022Q4). The training was done after some feature selection, which resulted in the selection of important features such as Avg_revenue_2021_Q4, AVG_REVENUE_2022_Q2_trans_rate_2022_Q1, Total_revenue_2022Q3. Now that the model is built and validated, I get an R2 of 89% and 87% on the train and test sets respectively, and decent MAE scores. But my question is: how do I predict the revenue for 2023Q1? 2023Q1 hasn't happened yet (and we need to predict in advance for it), so I don't have data for 2023Q1. Should I build my final model by combining my training set (2020Q1 to 2022Q3) and validation set (2022Q4)? Once I build this final model, what is the input that I should pass to get predicted values for 2023Q1? There is no test set because 2023Q1 hasn't happened yet. So, to predict 2023Q1 revenue (for each of the 4642 customers), should I again pass the values of Avg_revenue_2021_Q4, AVG_REVENUE_2022_Q2_trans_rate_2022_Q1, Total_revenue_2022Q3? If I pass the same values again, what is the difference between the prediction for 2022Q4 (test set) and 2023Q1 (inference)? So, I am stuck on what I should pass as input to predict for 2023Q1. Can you help me please? AI: I would try to formally design the model without any reference to any specific time period, in order to avoid the problem that you mention. The idea is to make the target period relative to the features period, whatever this period is. For example, you could consider that the model is trained with 2 years of data and predicts the next quarter. This way for every customer you build a training set like this:
features     target
Q1 .. Q8     Q9
Q2 .. Q9     Q10
(I'm using Q1=2020Q1 .. Q5=2021Q1 .. Q12=2022Q4) and as test set (for evaluation) you could have instances like this:
features     target
Q3 .. Q10    Q11
Q4 .. Q11    Q12
After evaluating, the final model could be trained with the whole data, like so:
features     target
Q1 .. Q8     Q9
Q2 .. Q9     Q10
Q3 .. Q10    Q11
Q4 .. Q11    Q12
And finally, when you are ready to apply the final model in order to predict future data, i.e. Q13, you build the test set instances like this:
features     target
Q5 .. Q12    ?
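A rough sketch of building such relative-period instances from a customer-by-quarter revenue table with pandas; the column names, the random data, and the window length are assumptions, not your actual features:

import numpy as np
import pandas as pd

# Hypothetical wide table: one row per customer, one column per quarter Q1..Q12
rng = np.random.default_rng(0)
rev = pd.DataFrame(rng.poisson(2.0, size=(4642, 12)),
                   columns=[f"Q{i}" for i in range(1, 13)])

window = 8          # quarters of history used as features
rows = []
for start in range(0, 12 - window):           # targets Q9..Q12
    feats = rev.iloc[:, start:start + window].to_numpy()
    target = rev.iloc[:, start + window].to_numpy()
    rows.append((feats, target))

X = np.vstack([f for f, _ in rows])
y = np.concatenate([t for _, t in rows])
print(X.shape, y.shape)   # (4642 * 4, 8) feature rows, with aligned targets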
H: What kind of model can be suitable for predicting 100 values from 25 values by using a Neural Network in TensorFlow? Previously I used Sequential model for this problem, but later I read from the TensorFlow website that it is not suitable to use it. I'd like to have a NN model to predict 100+ values from the given 25 values. The code below is my previous implementation. What kind of changes should I make to achieve better results? import pandas as pd import numpy as np from sklearn.model_selection import train_test_split import tensorflow X_train, X_test, Y_train, Y_test = train_test_split( inputdata, outputdata, test_size=0.2) model = tensorflow.keras.models.Sequential() model.add(tensorflow.keras.Input(shape=(25))) model.add(tensorflow.keras.layers.Dense(64, activation='relu')) model.add(tensorflow.keras.layers.Dense(103)) model.compile(loss="mean_absolute_error", optimizer="adam", metrics=["mean_squared_error"]) model.fit(X_train, Y_train, epochs=20) predictions = model.predict(predict) AI: In the Tensorflow documentation that you linked, "multiple inputs/outputs", refers to multiple input/output tensors. Here you have one input tensor (of size 25) and one output tensor (of size 100+).
H: Is machine learning synonymous with neural networking? What I mean is, when referring to machine learning, does this necessarily mean the use of a neural network? Or are there other ways to implement machine learning that do not use neural networks? These terms seem largely synonymous on the surface. AI: No. Neural nets are one class of machine learning models/algorithms, but not the only one by any measure. The two ideas are roughly the same age — the perceptron dates back to the late 1950s, around the time the term "machine learning" was coined — but neural nets were not actually viable for non-trivial tasks until roughly 15 years ago, when people figured out you could use GPUs to make training practical. Even today there are ML algorithms/models being worked on and used that are not neural nets: graph learning, clustering, pattern extraction... You could in theory find a NN to perform all of these tasks, but generally, if you have the option to use an algorithm that isn't based on NN architectures and get the same results, you're going to want the non-NN approach, since neural nets are resource-intensive and not easy to get right.
H: Which specific AWS service to use for running Benchmark Regressions on datasets far too large to run locally on my laptop I am in the middle of a research project with a collaborator in which he has proposed a novel statistical learning procedure for optimal variable selection, and I am running the 3 benchmark variable-selection algorithms (LASSO, Backward Elimination Stepwise Regression, and Forward Selection Stepwise Regression) in RStudio on a set of 260,000 synthetic datasets the principal author has created via Monte Carlo methods. How well his new procedure performs is partially evaluated in comparison to the three benchmarks. My problem is that I am just not able to run any of these benchmark methods on the set of 260k synthetic datasets on my laptop without it taking days, even though my laptop has 32 GB of RAM, a quad-core i5 processor, and a nice big 1 TB SSD I put in it as well. So, I am well aware of cloud computing services and parallel processing at a higher conceptual level, but I have never used AWS before. I just created an AWS account for myself, but the number of different service or "widget" options is extremely overwhelming at first. And even if I did find a suitable one, the whole way it works is very alien to me. Any tips, links, pointers, or advice will be much appreciated! AI: One option is creating an EC2 instance. Choose a prebuilt AMI (Amazon Machine Image) with RStudio Server installed.
H: Random Forest Classifier is giving me an array of zeroes I used VGG16 as a feature extractor on a dataset with 9 classes and trained a Random Forest classifier on the feature vectors. I tried to make predictions on the test feature vectors, but the prediction is an array of zeroes. What am I doing wrong? Notebook Link AI: Your model is not learning; the result is constant predictions on the test dataset. There could be many reasons for not learning. A couple of the most common ones:
Not enough data
A machine learning algorithm that is not expressive enough
Incorrect hyperparameter choices
Not training long enough
H: Are there any practical advantages of LSTMs over transformers? There are a number of articles noting that transformers have significant advantages over "traditional" RNNs like LSTMs. And the industry as a whole have been moving away from LSTMs. My question is, in the domain of NLP, do LSTMs (or GRUs) have any practical benefits/advantages over transformers? AI: LSTMs/GRUs have lower computational and memory requirements than transformers. Depending on the case, using an LSTM instead of a Transformer may make sense due to those factors. For instance, using Transformers on mobile devices or embedded devices with CPU and memory limitations is not easy. Also, in machine translation, having a Transformer encoder and an LSTM decoder may not significantly impact the resulting translation quality.
H: How can I use XGBoost in SciKit-Learn on an Air-gapped Computer? I'd like to use XGBoost in SciKit-Learn; however, I'm on an air-gapped computer and can't install it normally using pip. How can I install XGBoost on an air-gapped computer? AI: If you're on an air-gapped computer there are a few things to consider. What version of Python are you using? What Python packages do you have available to you? What are your Python package versions? For example, XGBoost has dependencies; if you don't have any of those dependencies you'll have to repeat the process I detail below for all those packages too. If you follow the process below and you're missing a package, the pip installer will let you know! As far as I'm aware, the only packages XGBoost relies on are: Numpy Scipy Assuming you have all dependencies Assuming your air-gapped computer has all of XGBoost's dependencies, this is what you need to do. Figure out which of XGBoost's dependencies is the most out of date. To do this you can check each package's version using pip list on the command line, which prints a table of installed packages and their versions. Go onto PyPI and search for the package's name. On the package's page look for its release history (in the left-hand sidebar) and see what date your version was published. E.g. for numpy I can see the most recent release was published on 26/12/2022. Once you know which of your packages is the oldest, go to the PyPI page for XGBoost, look at its release history, and click on the release that falls closest to your oldest package. Say your oldest package is a version released around January 2022; you would click on XGBoost 1.5.2 in the release history. You do this in case later versions of XGBoost rely on improvements to its dependent packages that don't exist in your current versions. Once you've selected the right release you can go to its download section. You're presented with the option to download from source or download a built distribution. The built distributions are .whl (wheel) files, which are Python's way of simplifying package distribution. If you want to know more about wheels, Real Python have a great overview. You don't need to understand the naming scheme, but it does help you select the right distribution. However, if you don't want to go too deep into wheels, download the amd64 distribution if your air-gapped PC is a Windows PC; if you have an air-gapped Linux PC, find out your computer's architecture (x86_64 vs aarch64) and download the corresponding manylinux distribution; and if you have an air-gapped Mac, download the macosx distribution (newer versions have two options, one for the Mac's arm64 chipset and another for its x86_64 chipset). Finally, put your .whl file onto a memory stick and transfer it to your air-gapped computer. Load up a terminal, locate your file, and type out the following command: > pip install your-wheel-distribution-here This should install the package and you're ready to use XGBoost! Troubleshooting If the last part of the process fails, try asking questions on Stack Overflow. If you still run into issues you might have to install XGBoost from its source distribution. The process is exactly the same: transfer the source file to the air-gapped machine and use pip install your-source-distribution-here. This will take longer as pip has to build the wheel file itself, but should still work. If even that fails, then you're left with building XGBoost from source yourself, which is discussed in the XGBoost documentation (good luck!).
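Alternatively, if you have access to an internet-connected machine with the same Python version and platform as the air-gapped one, pip itself can collect the wheel and its dependencies for you; a hedged sketch (the directory name is arbitrary):

# On the connected machine (same Python version and platform as the target):
> pip download xgboost -d ./offline_pkgs

# Copy ./offline_pkgs to the air-gapped machine, then:
> pip install --no-index --find-links ./offline_pkgs xgboost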
H: Average 5 DataFrames with same index and columns? I have 5 DataFrames with the same index and columns, but the numbers in them are different. I want to average them out. Is there any way to do that? AI: Since in pandas addition aligns by index and columns, it is enough to simply add up all the dataframes and divide by their number.
import pandas as pd

a = pd.DataFrame([[1, 2, 3], [4, 5, 6]])
b = pd.DataFrame([[10, 20, 30], [40, 50, 60]], index=[1, 0])

list_of_dataframes = [a, b]
mean_df = sum(list_of_dataframes) / len(list_of_dataframes)
print(mean_df)
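An equivalent alternative, which may be preferable if some frames can have missing entries (mean skips NaN, while a plain sum would propagate it):

import pandas as pd

a = pd.DataFrame([[1, 2, 3], [4, 5, 6]])
b = pd.DataFrame([[10, 20, 30], [40, 50, 60]], index=[1, 0])

# Stack all frames, then average cells that share the same index label
mean_df = pd.concat([a, b]).groupby(level=0).mean()
print(mean_df)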
H: Why to Train Q-function in Reinforcement Learning? I have recently been learning RL, and I see that most algorithms estimate the Q function. I would like to know why we don't simply train a model that takes current states as input and outputs actions. Take an ANN for example: perhaps we can record the action taken and calculate discounted rewards for every iteration, and let the optimizer adjust the weights of the ANN model. Is this impossible? Why not, or why is fitting the Q function better? AI: It is not true that most RL algorithms estimate the Q function. A famous RL algorithm is the policy-gradient method, which does exactly what you mentioned. There is a neural net that takes in the state and outputs a distribution over what action should be taken next. The parameters of this network are then trained via policy gradient estimates, so this network is never explicitly modeling the Q-function or the predicted rewards at all. That said, the variance of these policy gradients tends to be extremely high, so in practice people use Actor-Critic training, which optimizes both the policy network and a Q-network jointly to stabilize training.
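A toy sketch of that policy-gradient idea on a 2-armed bandit, using PyTorch; everything here is illustrative (no environment, no discounting), just to show that no Q-network is involved:

import torch

# Policy: state-independent probabilities over 2 actions (a 2-armed bandit)
logits = torch.zeros(2, requires_grad=True)
optimizer = torch.optim.Adam([logits], lr=0.1)
true_mean_reward = torch.tensor([0.2, 0.8])   # arm 1 pays more on average

for step in range(200):
    probs = torch.softmax(logits, dim=0)
    dist = torch.distributions.Categorical(probs)
    action = dist.sample()
    reward = true_mean_reward[action] + 0.1 * torch.randn(())
    loss = -dist.log_prob(action) * reward    # REINFORCE update: no Q-function anywhere
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(torch.softmax(logits, dim=0))  # should now favour the second arm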