H: BIO tagging software I would like to label character data with BIO tags as part of an active learning process on unlabelled data. I am assuming there are open source GUI tools available which I can use to make this easier - i.e. present the string to be labeled and some way of tagging characters from a predefined set of tags (and probably allow new tags to be added). I have not been able to find anything though - ideally cross-platform (Linux and macOS at a minimum). It could be a cloud service or AWS product but I'd prefer something which is locally installed. Does anyone know of such a thing? Even something as simple as a search term would help. AI: Prodigy is a tool for data annotation; I think it shines in NLP tasks. I've also heard about Amazon Mechanical Turk, which can be used for annotation, but it's more of a general-purpose crowdsourcing service.
H: When to One-Hot encode categorical data when following CRISP-DM I have a dataset that contains 15 categorical features (2 and 3 level factors which are non-ordinal) and 3 continuous numeric features. Seeing as most machine learning algorithms require numerical data as input features, and some actually One-Hot encode them automatically on the fly (random forest, glmnet etc.), should you not perform One-Hot encoding during data pre-processing to allow exploration of the relationships of the encoded feature data? Or is it best to rather explore relationships between raw categorical data and then only encode before running algorithms? Basically my question revolves around data exploration and data understanding, and whether this needs to be performed on the raw or encoded categorical features? AI: To me it depends, because I would distinguish a few types of categorical variables: Categorical variables with few classes: one-hot encode them right away. Categorical variables with some highly represented classes and some rarely represented classes: you can pre-process and regroup the rare classes into a single "Other" class, then one-hot encode and get a reasonable number of variables. Categorical variables with a lot of rarely represented classes: if you one-hot encode directly you'll create a lot of variables, which is impractical. Instead you can compute, for each class, the rate of positive ("1") targets on your X_train, and replace the class by this number, which is continuous between 0 and 1, so it carries information and is accepted by all models. This is called target encoding, and some packages built to be compatible with sklearn exist to do it automatically (like TargetEncoder, LeaveOneOut, WeightOfEvidence or JamesStein). These are the kinds of transformations you can apply; whether to one-hot encode directly or pre-process first depends on the variable. If your question is, for example, whether to do feature selection before or after one-hot encoding, I'd suggest mainly doing it after: remove useless variables (with no information), then one-hot encode/pre-process the remaining ones, and then run feature selection again. Let's take an example: a variable Age with classes like [0;10], [10;20], ... is often significant when the value is >80 or <20, but matters little at 35 or 45, so feature selection on the encoded columns will keep only Age_[0;10], Age_[10;20], Age_[80;90] and Age_90+.
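As an illustration of the target-encoding idea described above, here is a minimal pandas sketch; the column and city names are invented for the example, and it assumes a binary target y_train aligned with X_train:

import pandas as pd

# Hypothetical training data: one high-cardinality categorical column "city"
X_train = pd.DataFrame({"city": ["Paris", "Lyon", "Paris", "Nice", "Lyon", "Paris"]})
y_train = pd.Series([1, 0, 1, 0, 1, 0])

# Rate of positive targets per class, computed on the training set only
rates = y_train.groupby(X_train["city"]).mean()
global_mean = y_train.mean()

# Replace each class by its positive rate; classes unseen in training fall back to the global mean
X_train["city_encoded"] = X_train["city"].map(rates)
X_test = pd.DataFrame({"city": ["Paris", "Marseille"]})
X_test["city_encoded"] = X_test["city"].map(rates).fillna(global_mean)

The category_encoders package wraps the same idea in an sklearn-style fit/transform interface (TargetEncoder, LeaveOneOutEncoder, JamesSteinEncoder), with smoothing options that make it less prone to overfitting on rare classes.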
H: How to extract true positives data (complete row with data) after training and testing from test dataset? How do you extract true positive data from testing data after training and testing? For example, in the test data, I have two rows and one row is true positives and the other is false negatives. However, I would like rows which only have true positive values. How do you extract the complete row from the testing data after training and testing? AI: One thing you can do is browse your prediction vector, get the indexes of "1" responses, and then check those indexes in y_test. if your y_test[index] is also a "1" class, then select the row by index in X_test I tested this, it works for me. In my case, my X and y are pandas.DataFrame. import pandas as pd from sklearn.linear_model import LogisticRegression import numpy as np X_train = pd.read_csv("saves/cv_sets/X_train1.csv", sep=";", encoding="latin1") X_test = pd.read_csv("saves/cv_sets/X_test1.csv", sep=";", encoding="latin1") y_train = pd.read_csv("saves/cv_sets/y_train1.csv", sep=";", encoding="latin1") y_test = pd.read_csv("saves/cv_sets/y_test1.csv", sep=";", encoding="latin1") clf = LogisticRegression(class_weight="balanced", solver='lbfgs', C=0.1) model = clf.fit(X_train, y_train) pred = model.predict(X_test) pred1 = np.where(pred==1) TP_Indexes = [] for k in pred1[0]: if(y_test.iloc[k][0] == 1): TP_Indexes.append(k) X_test_TP = X_test.iloc[TP_Indexes]
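Equivalently, assuming pred is a 1-D array of predicted labels and y_test is a one-column DataFrame aligned row-by-row with X_test (as in the snippet above), the true-positive rows can be selected with a single boolean mask:

import numpy as np

y_true = y_test.values.ravel()          # flatten the one-column DataFrame to a 1-D array
tp_mask = (pred == 1) & (y_true == 1)   # true positives: predicted 1 and actually 1
X_test_TP = X_test[tp_mask]             # keep only the true-positive rows of the test set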
H: Is CNN permutation equivariant? If I use stacked CNN layers with 3x3 kernels, zero padding, and with no pooling layers, the output feature map will consist of feature vectors, each vector of which is related directly to the original 3x3 block of the input image, right? Therefore, for example, I could send the output feature map vectors as timesteps to LSTM and it would supposedly learn dependencies between clusters of the image given the image clusters have a temporal dependency. AI: If you use a single layer CNN, then each vector in the resulting activation maps would be related to the original 3x3 block. However, if you stack multiple CNN layers, you increase the receptive field of each resulting vector, as shown in the image below (taken from here): After the CNNs, you can certainly compute an LSTM. There are, however, some design decisions you would need to take: disregarding the batch dimension, the LSTM takes as input a sequence of vectors (2D tensor) but you have a 3D tensor (height $\times$ width $\times$ num.channels) so, how are you going to make the 3D tensor fit as input to the LSTM? You could compute the LSTM of columns/rows of the image independently, in forward or reverse direction. You could also "collapse" one dimension (e.g. by averaging or taking the maximum value).
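For instance, here is a minimal Keras sketch of the "rows as timesteps" option mentioned above; the layer sizes, input shape and number of classes are arbitrary choices for illustration:

from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(32, 32, 3))                            # height x width x channels
x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)     # stacking increases the receptive field
x = layers.Reshape((32, 32 * 16))(x)                               # each row becomes one timestep of width*channels features
x = layers.LSTM(64)(x)                                             # scan the feature map top to bottom
outputs = layers.Dense(10, activation="softmax")(x)
model = keras.Model(inputs, outputs)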
H: What is lagrangian? I'm watching an SVM tutorial. At 6:38 he mentions lagrangian, which is a term I'm not familiar with. So I googled it, hoping to find the Wikipedia article about it, but it seems like this term is actually ambiguous, and Wikipedia suggests several articles. Which of the suggested articles should I read in order to understand the meaning of lagrangian in this context? AI: He is referring to Lagrangian Multipliers, an optimization technique for problems with equality constraints.
H: Comparing multi-class vs. binary classifiers in predicting a single class I've pretty much read the majority of similar questions, but I haven't yet found the answer to my question. Let's say we have n samples of four different labels/classes namely A, B, C, and D. We train two classifiers: First classifier: we train a multi-class classifier to classify a sample in data to one of four classes. Let's say the accuracy of the model is %x. Second classifier: now let's say all we care about is that if a sample is A or not A. And we train a binary classifier for classifying samples to either A or non-A. Let's say the accuracy of this models is %y. My question is, can we compare x and y as a way to measure the performance of classifiers on classifying A? In other words, does a high performance in a multi-class classifier mean that the classifier is capable of recognizing the single classes with high performance as well? The real-world example of this is that I've read papers that trained multi-class classifiers on a dataset that contains four different types of text. They achieved pretty high performance. But all I care about is for a model to be able to correctly classify one specific type of text. I trained a binary classifier that achieves a lower accuracy. Does this show that my model is working poorly on that type of text and the multi-class classifier is doing better? Or shouldn't I compare these two? AI: In general we can't compare the performance of a multiclass classifier with the performance of a binary classifier since the former expresses how good the classifier is at classifying any instance of any class. So if there are $n_A$ samples labelled A, there's only a proportion of $n_A/n$ of the global accuracy of the multiclass classifier which is about A. In particular a multiclass classifier usually tends to favor the largest classes, so if class A happens to be a small proportion of the data then the global performance will not reflect how good it is at classifying A: for example it might have 90% accuracy simply because class B is 90% of the data, this doesn't prove anything about class A. By contrast the performance of the binary classifier is by definition solely about class A. However if one has access to the detailed evaluation of the multiclass classifier, typically the confusion matrix, then it becomes possible to calculate the performance of the classifier for a single class, say class A. Actually by merging all the B,C,D rows together and all the B,C,D columns together in the confusion matrix one obtains exactly a binary classification confusion matrix, and from that one can calculate a performance which can be compared against another binary classifier. But in this setting the multiclass classifier is at a disadvantage for the reason mentioned above: it also has to deal with the other classes and this can cause it to "sacrifice" a class, whereas the binary classifier doesn't have this issue.
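To illustrate that last paragraph, here is a small numpy sketch (the counts are invented) that merges the B, C, D rows and columns of a 4-class confusion matrix into a single "not A" class and computes binary precision and recall for class A:

import numpy as np

# Rows = true class, columns = predicted class, in the order A, B, C, D
cm = np.array([[50,  3,  2,  1],
               [ 4, 80,  5,  2],
               [ 1,  6, 70,  3],
               [ 2,  1,  4, 60]])

a = 0                                 # index of class A
tp = cm[a, a]                         # true A, predicted A
fn = cm[a, :].sum() - tp              # true A, predicted as another class
fp = cm[:, a].sum() - tp              # another class, predicted as A
tn = cm.sum() - tp - fn - fp          # everything else
binary_cm = np.array([[tp, fn], [fp, tn]])
precision_A = tp / (tp + fp)
recall_A = tp / (tp + fn)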
H: Creating a valid dataset for obtaining results I have created a domain-specific dataset, lets say it is relating to python programming topic posts. I have taken data from various places specific to this topic to create positive examples in my dataset. For example, python related subreddits, stack exchange posts tagged with python, twitter posts hashtagged with python or python specific sites. The data points taken from these places are considered positive data points and then I have retrieved data points from the same sources but relating to general topics, searched if they contain the word python in them and if they do discard them to create the negative examples in my dataset. I have been told that I can use the training set from the dataset as is, but that I need to manually annotate the test set for the results to be valid, otherwise they would be biased. Is this correct? How would they be biased? To be clear the test set contains different entries to the training set. There are close to 200,000 entries in the test set which makes manual annotation difficult. I have seen similar methods been used in papers I have previously read without mention of manual annotation. Is this technique valid or do I have to take some extra steps to ensure the validity of the test sets? AI: There are two potential biases: With this automatic method, you might have a few erroneous labels. For example it happens regularly here on DataScienceSE that a user tags a question "python" but actually the question is not specific to python at all. Same thing for the opposite case: for example it's possible that some content contains some python code but doesn't mention "python" anywhere. The distribution between positive and negative classes is arbitrary. Let's assume you use 50% positive / 50% negative: if later you want to apply your classifier on a new data science website where only 10% of the content is about python, it's likely to predict a lot of false positive cases so the true performance on this data will be much lower than on your test set. It's rare to have a perfect dataset, so realistically in my opinion the first issue is probably acceptable because the noise in the labels should be very limited. The second issue could be a bit more serious but this depends on what is the end application. Keep in mind that a trained model is meant to be applied to the same kind of data as it was trained/tested on.
H: What are the true error and the sample error? I am a student and I am studying machine learning. I am focusing on the concept of evaluation of a hypothesis. What I have seen is that there are two types of error: true error and sample error. The true error of a hypothesis $h$ with respect to a target function $f$ and a distribution $D$ is the probability that the hypothesis $h$ misclassifies an instance $x$ drawn according to $D$, and it is computed as: $error_D(h)=Pr_{x\in D}[f(x)\neq h(x)]$ while the sample error of a hypothesis $h$ with respect to a target function $f$ and data sample $S$ is the proportion of examples that $h$ misclassifies: $error_S(h)=\frac{1}{n}\sum _{x\in S}\delta (f(x)\neq h(x))$ where $\delta (f(x)\neq h(x))=1$ if $f(x)\neq h(x)$ and $0$ otherwise. I ask this question because it is not clear to me what these errors are. Moreover, I have seen that the true error cannot be computed, while we can compute only the sample error. I don't understand why. Can somebody please help me understand? AI: The true error is the probability that a randomly drawn instance from the entire distribution is misclassified, while the sample error is the fraction of a given sample which is misclassified. Since the true error is defined over the entire population (distribution), it cannot be computed directly; hence we use a sample to evaluate our hypothesis and use statistical evaluation methods to attach a confidence level to that estimate. A sample might not be a true representation of the population, and the resulting difference between the two quantities is the sampling error. We use different sampling methods, such as randomised and stratified sampling, so that there is no bias in choosing the sample. For more you can refer to this.
H: Why my weights are being the same? To understand how neural networks, and backpropagation are actually working, I've built a small program to do the calculations, but something is definitely wrong, as my weights are the same after gradient descent. In this example the inputs will have two neurons, the outputs will also have two neurons, there will be two hidden layers, with 3 neurons each. I'll work with two observations. I'll initialise both weights, and biases with 1. (I'm new at machine learning, and super confused with the different notations, and formatting, indexes, so to clarify, in case of activations, the upper index will show the layers index which is in, the lower index before will show the index of the observation, and the lower index after will show the index of it in the layer, so the third neuron in the second layer in the first observation will look like $_0a^1_2$ (indexes are 0 starting). In weights, the upper index will show which is the index of layer it's "going to", the first index of the lower index will show which is the index of neuron it's "coming from", and the second index of the lower index will show which is the index of neuron it's "going to", so the third layers first neurons weight, which connects it with the second layers first neuron will look like $w^1_{0,0}$.) As far as I learned: $a_n = \sigma (a^{n-1} * w^{n-1} + b^{n-1})$ where $n$ is the index of the layer, and where $*$ is actually matrix multiplication. In other visualisation, calculating the second activation from the input is: $$\sigma(\begin{pmatrix}1 & _0a_{0}^0 & _0a_{1}^0\\\ 1 & _1a_{0}^0 & _0a_{1}^0\end{pmatrix} * \begin{pmatrix}b^0_0 & b^0_1 & b^0_2\\\ w^0_{0,0} & w^0_{0,1} & w^0_{0,2}\\\ w^0_{1,0} & w^0_{1,1} & w^0_{1,2}\end{pmatrix}) = \begin{pmatrix}_0a_{0}^1 & _0a_{1}^1 & _0a_{2}^1\\\ _1a_{0}^1 & _1a_{1}^1 & _1a_{2}^1\end{pmatrix}$$ The input of the first observation will be [0.8, 0.4], the second will be [0.3, 0.3] (they are completely random - just as the expected outputs). So based on the above equation (sorry couldn't find other solutions to display it) $n^1$ (the neuron value, before the activation function) is: +-----+-----+-----+ | 1 | 1 | 1 | | 1 | 1 | 1 | | 1 | 1 | 1 | +---+-----+-----+-----+-----+-----+ | 1 0.8 0.4 | 2.2 | 2.2 | 2.2 | | 1 0.3 0.3 | 1.6 | 1.6 | 1.6 | +---+-----+-----+-----+-----+-----+ After taking the sigmoid, $a^1$ is: [[0.9002495, 0.9002495, 0.9002495], [0.8320184, 0.8320184, 0.8320184]] This will get a "column" of 1s to get the bias added alone, will get matrix-multiplied with its weights ($w^1$) (containing the bias as well, in the top "row"), and "sigmoided", where we get $a^2$, and the same process continues until we get the output, $a^3$. They look like: $a^2$: [[0.9758906, 0.9758906, 0.9758906], [0.9705753, 0.9705753, 0.9705753]] $a^3$: [[0.9806908, 0.9806908], [0.9803864, 0.9803864]] So far it all makes sense, as I initialised the weights with 1. Now the hard part. 
Calculate how much each weight affected the cost ($\sum C$), by following the chain rule: $$\frac{\partial\sum C}{\partial w^n} = \frac{\partial n^{n+1}}{\partial w^{n}}\frac{\partial a^{n+1}}{\partial n^{n+1}}\frac{\partial\sum C}{\partial a^{n+1}}$$ Where if $a^{n+1}$ in $\frac{\partial\sum C}{\partial a^{n+1}}$ is the output activation layer, because it has direct access to the expected output ($y$), it can be solved by (3 is just the index of the layer here as well): $$\frac{\partial\sum C}{\partial a^{3}} = 2(a^{3} - y)$$ I don't care about splitting the cost, because the overall cost ($\sum C$) consists of $_0C_0$, that can only be affected by $_0a_0$, and so on, so for example $$\frac{\partial\sum C}{\partial a^{3}_0} = \frac{\partial C_0}{\partial a^{3}_0} + \frac{\partial C_1}{\partial a^{3}_0} = 2(a^3_0 - y_0) + 0$$ Because our expected output is (random as well): [[0.2, 0.4], [1 , 0 ]] It will end up in: $$\begin{pmatrix}0.9806908 & 0.9806908\\\ 0.9803864 & 0.9803864\end{pmatrix} - \begin{pmatrix}0.2 & 0.4\\\ 1 & 0\end{pmatrix} = \begin{pmatrix}0.7806908 & 0.5806908\\\ -0.0196136 & 0.9803864\end{pmatrix} * 2 = \begin{pmatrix}1.5613816 & 1.1613816\\\ -0.0392272 & 1.9607728\end{pmatrix}$$. Next step is $\frac{\partial a^{n+1}}{\partial n^{n+1}}$, where I get how $n$ affected $a$. Because $a$ came from applying the sigmoid function to $n$, and knowing that the derivative of the sigmoid function is: $$\sigma (x)(1 - \sigma (x)) = (\sigma (x) - 1)\sigma (x) * -1$$ In our case is (the transformation is just to make it easier to work with it with TensorFlow): $$\frac{\partial a^{3}}{\partial n^{3}} = (\sigma (n^{3}) - 1)\sigma (n^3) * -1 = (a^3 - 1)a^3 * -1$$ We can transform $\sigma (n^{3})$ to $a^3$ because that's exactly how we made it. In our case it will end up in: $$\begin{pmatrix}0.9806908 & 0.9806908\\\ 0.9803864 & 0.9803864\end{pmatrix} - 1 = \begin{pmatrix}-0.0193092 & -0.0193092\\\ -0.0196136 & -0.0196136\end{pmatrix} * \begin{pmatrix}0.9806908 & 0.9806908\\\ 0.9803864 & 0.9803864\end{pmatrix} * -1 = \begin{pmatrix}0.01893635 & 0.01893635\\\ 0.01922891 & 0.01922891\end{pmatrix}$$ Third step is $\frac{\partial n^{n+1}}{\partial w^{n}}$, which is the previous activation (we multiplied $a^n$ with $w^n$ to get $n^{n+1}$). The goal is to find how much effect a weight layer had on the cost, so the shape of the result must match the weight layers shape. The shape of the weights is: it must have as much number of columns, as the next layers number of neurons it must have as much number of rows as the previous layers number of neurons plus 1 for bias In the equation of: $$\frac{\partial\sum C}{\partial w^n} = \frac{\partial n^{n+1}}{\partial w^{n}}\frac{\partial a^{n+1}}{\partial n^{n+1}}\frac{\partial\sum C}{\partial a^{n+1}}$$ the first part will give the "rows" of the weights, because it's related to the previous activations, and the second, and third part will give the "columns", because it's related to the current activations/neurons. They are connected through the weights, what we're investigating, how much costs they have, so with matrix multiplication, we will get back how the weights should be modified. 
So first we do: $$\frac{\partial a^{n+1}}{\partial n^{n+1}}\frac{\partial\sum C}{\partial a^{n+1}} = \begin{pmatrix}1.5613816 & 1.1613816\\\ -0.0392272 & 1.9607728\end{pmatrix} * \begin{pmatrix}0.01893635 & 0.01893635\\\ 0.01922891 & 0.01922891\end{pmatrix} = \begin{pmatrix}0.02956687 & 0.02199233\\\ -0.0007543 & 0.03770352\end{pmatrix}$$ So far, it's promising that all the values seem different. For $\frac{\partial n^{n+1}}{\partial w^{n}}$ (previous activation layer) we have to add 1s (for biases), as an extra first column, and transpose it (in order to do the matrix multiplication). And the problem: +-------------+-------------+ | 0.02956687 | 0.02199233 | | -0.0007543 | 0.03770352 | +-----------+-----------+-------------+-------------+ | 1 1 | 0.0288126 | 0.0596958 | | 0.9758906 0.9705753 | 0.028122 | 0.0580562 | | 0.9758906 0.9705753 | 0.028122 | 0.0580562 | | 0.9758906 0.9705753 | 0.028122 | 0.0580562 | +-----------+-----------+-------------+-------------+ Because I've initialised the weights with the same values, as you can see above as well, all the activations (for each observation) are the same at each level. Not sure if the whole problem is caused by the same initial weights, but I'm expecting the network to adjust itself to the right direction, no matter what the previous weights are. You could think that it's just the first layer, just the first iteration, but trust me. It goes like this over and over, no matter how much iterations. Probably this is the reason why it produces the output for the above example after 1000 iterations: [[0.5992708, 0.1997655], [0.5993174, 0.2004307]] Expected: [[0.2, 0.4], [1 , 0 ]] So. What am I doing wrong? AI: As I mentioned in my comment this is a likely a result from initializing your network's weights to the same values. This is why it's important to use random weights as it breaks the symmetry. See also this post or this more in-depth deeplearning.ai post on weights initialization methods for neural networks.
H: Preparing Dataset Minority Class vs Majority Class I'm currently doing a binary classification for sentiment prediction. Currently I have the majority class (~90% of the data) as my positive class (labelled 1) and the minority class (~10% of the data) as my negative class (labeled 0). What I'd like to maximize in this experiment is the detection of negative sentiments, hence I'd like to maximize the precision (and recall) of my minority class. However, in many similar datasets (in terms of prioritizing the detection of minority class) out there like credit card fraud detection, cancer detection, usually the minority class is set as the positive class and the majority class set as the negative class. My question is: Does it matter if the minority class is set as the positive or negative label in relation to performance of training a model or affecting a loss function such as cross entropy? AI: My question is: Does it matter if the minority class is set as the positive or negative label in relation to performance of training a model or affecting a loss function such as cross entropy? No it doesn't. However in binary classification it's customary to call "positive" the main class of interest, so be careful to be clear about which one is positive/negative when/if you present your results to other people. Also be careful that precision and recall are usually calculated for whatever is called the positive class, so don't inadvertently use the results of the majority class instead of the one you're interested in.
H: Which GUI library to use with Deep Learning I have completed basics Deep Learning course from coursera using Tensorflow and Keras. Now I want to apply GUI to it. So which library should i learn: 1.PyQt 2.Kivy 3.Tkinter Are there libraries which can help to easily create deep learning projects. AI: It really depends on how far you want to go. If you are very serious about GUI apps, PyQt is the only way to go. Qt5 is the gold standard for cross-platform GUI right now. But, for basic applications, you are good to go with Tkinter. I have never used Kivy, and I don't know many people who use it.
H: Calculating distance between data points when there are more than 3 features in KNN algorithm I've been reading about K-nearest neighbors algorithm and want to clarify few things. If we have 2 features we could simply plot it on 2-d plane and calculate distance by using euclidean distance or Manhattan distance. When there are more than 3 features, exceeding 3-d going into 4-d and more I've read that we use PCA to reduce dimension to 2-D and then calculate distance on PCA plot. But my question is, is that only way? so in order to use KNN for more than 3 features we must use PCA? AI: No, you can definitely search for k-NN with more than 2-dimension data. Here is an example based on sklearn: X = [[0, 0, 0], [3, 3, 3], [1, 2, 3]] from sklearn.neighbors import NearestNeighbors neigh = NearestNeighbors(n_neighbors=2) neigh.fit(X) print( neigh.kneighbors([[2,2,2]]) ) PCA is used to reduce the input dimensionality but this is not mandatory before searching k nearest neighbors (it is often used in tutorials so the data could be visualized on a 2-d plot). One thing to know/understand about k-NN is that if you plan to use it for classification, it will handle features with a lot of information the same way as the features with no information (if you normalize them). PCA could be used to handle this problem (but this is not the only way and would not always work, but I think this is another question :) ).
H: What are "downstream models"? In the ResNeSt paper they say on page 4: "despite their great success in image classification, the meta network structures are distinct from each other, which makes it hard for downstream models to build upon." What are downstream models in this context? Paper can be found at https://arxiv.org/abs/2004.08955 AI: Downstream models are simply models that come after the model in question, in this case ResNet variants. Models for various topics within the computer vision domain often use a backbone to extract features from images, after which a downstream model is used to help to fit the model better to the task at hand. Tables 5, 6, and 7 in the linked paper give a good overview of the different ways backbones are often combined for topics such as object detection and segmentation.
H: How can I compare the grammatical complexity between two texts using their sentences dependency length? This is a continuation to the following thread. I have two texts, common English texts such as news articles and informative texts versus a technical textbook. I want to compare the grammatical complexity between those texts using their sentences dependency length in order to conclude whether they both have the same level of complexity or not. I was thinking about using the p-value as an evidence against the null hypothesis. Here is how the data would look like: Text 1 ID Dependency Length Sentence Length 0 13 7 1 5 3 2 20 8 Text 2 ID Dependency Length Sentence Length 0 8 5 1 10 7 2 14 7 By the way, I am using python. AI: If your goal is only to determine whether the level of complexity is the same between these two sources based on these particular features, yes you can simply compare the distributions with a Student's t-test if the distributions are normal or a Wilcoxon test if they are not. Spoiler alert: it's very likely that they are different. A statistical significance test doesn't give you much information, it's usually much more useful to try to quantify the level of complexity, but it's also much harder. Based on your previous linked question, you might be interested to read about text complexity/readability, quite a lot of research has been done on this topic (e.g. here, here, or there, among many others). There are also a few general tutorials apparently (e.g. this or that). I also know that there have been a number of readability/complexity metrics proposed in the literature, but I don't have references. Note that looking at the features from the text is unlikely to be very useful on its own. Probably you will need either some kind of corpus annotated by complexity, or to use a metric which has been proved to correlate with sentence complexity.
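For the significance test mentioned above, a minimal scipy sketch; it assumes the two dependency-length columns have already been loaded into arrays or pandas Series named dep_len_1 and dep_len_2:

from scipy import stats

# Check normality first, e.g. with Shapiro-Wilk, then pick the test accordingly
print(stats.shapiro(dep_len_1), stats.shapiro(dep_len_2))

# If roughly normal: Welch's t-test (does not assume equal variances)
t_stat, p_t = stats.ttest_ind(dep_len_1, dep_len_2, equal_var=False)

# Otherwise: Mann-Whitney U, the unpaired rank-based alternative
u_stat, p_u = stats.mannwhitneyu(dep_len_1, dep_len_2, alternative="two-sided")
print(p_t, p_u)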
H: My first Neural Network not working I have just started deep learning and neural networks and when I try the following code, it does not work: #Import Keras for deep learning import tensorflow as tf from tensorflow import keras #Store the data into a variable data = keras.datasets.fashion_mnist #Split the data and create classes (train_images, train_labels), (test_images, test_labels) = data.load_data() class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] #Standardize the data train_images = train_images/255 test_images = test_images/255 #Create the model model = keras.Sequential([ keras.layers.Flatten(input_shape=(28,28)) keras.layers.Dense(128, activation = 'relu') keras.layers.Dense(10, activation = 'softmax') ]) #Compile and fit the model model.compile(optimizer='adam', loss = 'sparse_categorical_crossentropy', metrics = ['accuracy']) model.fit(train_images, train_labels, epochs=5) #Test the model test_loss, test_acc = model.evaluate(test_images, test_labels) print(test_acc) For some reason this is not working.This is the error: File "<ipython-input-6-90a7b393ff3f>", line 15 keras.layers.Dense(128, activation = 'relu') ^ SyntaxError: invalid syntax Thanks!! AI: You missed a comma at the end of line 15, replace the part with this code: model = keras.Sequential([ keras.layers.Flatten(input_shape=(28,28)), # comma here keras.layers.Dense(128, activation = 'relu'), # comma here keras.layers.Dense(10, activation = 'softmax') ]) Then it works
H: Data visualization on three factors I have three factors, Income(from source A),continuous variable Income(from source B), continuous variable Happiness index,continuous variable Suppose I have 500 samples. My goal is to show the influence of both Income(from source A) and Income(from source B) on Happiness index. Meanwhile, I want to show the distribution of Happiness index. I think a boxplot on Happiness index is a good choice. I can use color to represent Income(from source A), however, how can I represent another continuous variable, i.e, Income(from source B) here? AI: What you can do is discretize happiness index using cut() and assign labels to it. For plotting then you can use groupby boxplot which will show effect of both income sources on a particular happiness index bin. Refer GroupbyBoxplot If you want distribution of happiness index then you can try: Color Scales
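A rough sketch of that approach with pandas and matplotlib; the DataFrame df and the column names income_a, income_b and happiness are assumptions, and the number of bins is arbitrary:

import pandas as pd
import matplotlib.pyplot as plt

# Discretize the happiness index into labelled bins
df["happiness_bin"] = pd.cut(df["happiness"], bins=4, labels=["low", "mid-low", "mid-high", "high"])

# Distribution of both income sources within each happiness bin (grouped boxplots)
df.boxplot(column=["income_a", "income_b"], by="happiness_bin")

# Alternatively, show the second continuous variable as a colour scale
df.plot.scatter(x="income_a", y="happiness", c="income_b", colormap="viridis")
plt.show()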
H: Normal equation for linear regression is illogical Currently I'm taking Andrew Ng's course. He gives the following formula to find the solution for linear regression analytically: $θ = (X^T * X)^{-1} * X^T * y$ He doesn't explain it so I searched for it and found that $(X^T * X)^{-1} * X^T$ is actually a formula for the pseudoinverse in the case where our columns are linearly independent. And this actually makes a lot of sense. Basically, we want to find such $θ$ that $X * θ = y$, thus $θ = X^{-1} * y$, so if we replace $X^{-1}$ with our pseudoinverse formula we get exactly $θ = (X^T * X)^{-1} * X^T * y$. What I don't understand is why nobody mentions that this verbose formula is just $θ = X^{-1} * y$ with a pseudoinverse. Okay, Andrew Ng's course is for beginners and he didn't want to throw a bunch of math at students. But Octave, where the assignments are done, has the function pinv() to find a pseudoinverse. Even more, Andrew Ng actually mentions the pseudoinverse in his videos on the normal equation, in the context of $(X^T * X)$ being singular so that we can't find its inverse. As I mentioned above, $(X^T * X)^{-1} * X^T$ is a formula for the pseudoinverse only in the case of the columns being linearly independent. If they are dependent (e.g. some features are redundant), there are other formulae to consider, but anyway Octave handles all these cases under the hood of the pinv() function, which is more than just a macro for $(X^T * X)^{-1} * X^T$. And Andrew Ng, instead of saying to use pinv(X) * y, gives this: pinv(X' * X) * X' * y, so basically we use a pseudoinverse to find a pseudoinverse. Why? AI: Hello Oleksii and welcome to DSSE. The formula you are asking about is not for a pseudoinverse. $\theta = (X^TX)^{-1}X^Ty$ where $\theta$ is your regressor, $X$ is a matrix containing stacked vectors (as rows) of your features/independent variables, and $y$ is a matrix containing stacked vectors (or scalars) of your predictions/dependent variables. This equation is the solution to the linear set of equations $Ax = B$ that occurs when trying to minimize the least squares loss. The reason why you see pinv() in the code is that if X does not have enough linearly independent rows, $X^TX$ (also known as $R$, the autocorrelation matrix of the data, whose inverse is called the precision matrix) will be a singular (or near singular) matrix, whose inversion might not be possible, even if the singularity only arises because of the working precision of your computer/programming language. Using pinv() is usually not recommended, because even though it allows you to compute a regressor, that regressor will tend to overfit the training data. An alternative when working with a singular matrix is adding $\delta~I$ to $R$, where $\delta$ is a small constant (usually 1 to 10 eps) and $I$ is the identity.
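To make the relationship concrete, here is a small numpy sketch on random data (purely illustrative) comparing the explicit normal-equation formula, np.linalg.pinv, and the regularised solve mentioned at the end of the answer:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                                      # 100 samples, 5 features
y = X @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + 0.1 * rng.normal(size=100)

theta_normal = np.linalg.inv(X.T @ X) @ X.T @ y                    # (X^T X)^{-1} X^T y
theta_pinv = np.linalg.pinv(X) @ y                                 # Moore-Penrose pseudoinverse
delta = 1e-8
theta_reg = np.linalg.solve(X.T @ X + delta * np.eye(5), X.T @ y)  # (R + delta*I) theta = X^T y

# With a full-column-rank X the three solutions coincide up to numerical precision
print(np.allclose(theta_normal, theta_pinv), np.allclose(theta_normal, theta_reg))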
H: Suspiciously good accuracy using neural network I have a dataset from EEG data that is 24 features (24 electrodes) and 88000 samples with 3 classes, it is normalised and everything and had some noise filtered out via bandpassing. When I classify with anything but a neural network the accuracy is pretty bad and I am using a 60/40 training/test split just to make sure I can trust the result. For example: Gaussian naive Bayes: 42% Logistic Regression: 52% Linear Discriminant Analysis: 51% However I played around with a neural network achieving 95%+ averaged with: 3 hidden layers: 100, 200, 100 Activation: relu Learning rate: adaptive I think this is super fishy so I did a PCA analysis and plotted it with dimension reduction to two dimensions. As you can see, there is nothing significantly separable, which really confuses me. I am definitely using the test set to run the cross validation which is 40% of the sample data. Can someone please advise on what's happening and whether I can trust this result? And what further steps can I take to make this result more concrete? I don't want to celebrate too early!!! AI: As you can see, there is nothing significantly separable, which really confuses me. I am definitely using the test set to run the cross validation which is 40% of the sample data. Well, your interpretation is wrong: there is nothing linearly separable in the first two principal directions. The other PCA components may be more relevant. Remember that PCA gives you the directions where the data has most variance, which doesn't necessarily mean they are the directions in which it is most easily separated/classified. I see you tried LDA, which is built on exactly that idea, namely that the most variable direction is not necessarily the most discriminative one, but since it is linear we can only conclude that your data is not linearly separable. Your model seems pretty large; it is considerably more powerful than LDA, LR, etc. If you are not confident in the results, try other validation methods such as k-fold or leave-one-out.
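A minimal sketch of the k-fold validation suggested above, using scikit-learn; the feature matrix X and label vector y are placeholders for the EEG data, and MLPClassifier stands in for whatever network was actually used:

from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier

# X: shape (88000, 24), y: shape (88000,) with 3 classes
clf = MLPClassifier(hidden_layer_sizes=(100, 200, 100), activation="relu",
                    learning_rate="adaptive", max_iter=200)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(scores.mean(), scores.std())   # consistently high scores across folds make the 95% figure more trustworthy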
H: Different representations of dendrograms I have a dendrogram represented in a format I don't understand: (K_5:1.000030e+00,((K_1:2.000000e-05,(K_2:1.000000e-05,K_3:1.000000e-05):1.000000e-05):1.000000e-05,K_4:3.000000e-05)0.806:1.000000e+00):0.000000e+00; I am not sure how to interpret the above. It is an output of hierarchical clustering. K_1, K_2, K_3, K_4, K_5 are the data points. I have other dendrograms represented in the following format (we start with one big cluster and split a cluster at each step):
[x_1,x_2,x_3,x_4,x_5]
[x_1,x_2][x_3,x_4,x_5]
[x_1,x_2][x_3,x_5][x_4]
[x_1][x_2][x_3,x_5][x_4]
[x_1][x_2][x_3][x_5][x_4]
I want a way to convert between these two representations. AI: This output represents the dendrogram as a tree (this is the Newick format). The innermost parentheses represent the deepest parts of the tree. For instance the top (root) of the tree starts with K_5 and a subtree, then this subtree is made of another subtree and K_4, and so on. If we ignore the numerical values (distances I assume?) we have this:
(K_5, ( (K_1, ( K_2, K_3 ) ), K_4 ) )
Which represents this tree (internal merge nodes shown as *):
              *
            /   \
           *     K_5
         /   \
        *     K_4
      /   \
    K_1    *
         /   \
       K_2    K_3
Then it can be converted to the desired format:
[K_1 , K_2 , K_3 , K_4 , K_5]
[K_1 , K_2 , K_3 , K_4] [K_5]
[K_1 , K_2 , K_3] [K_4] [K_5]
[K_1] [K_2 , K_3] [K_4] [K_5]
[K_1] [K_2] [K_3] [K_4] [K_5]
H: Which colour channel from a TIFF image do I have to use? I'm going to use the following dataset to do semantic segmentation with a U-Net network. LGG Segmentation Dataset This dataset contains brain MR images together with manual FLAIR abnormality segmentation masks. The images were obtained from The Cancer Imaging Archive (TCIA). They correspond to 110 patients included in The Cancer Genome Atlas (TCGA) lower-grade glioma collection with at least fluid-attenuated inversion recovery (FLAIR) sequence and genomic cluster data available. Tumor genomic clusters and patient data is provided in the data.csv file. I have found that the brain images are in TIFF format, and they are in RGB (with three channels). I have opened one with Gimp: I don't know if there is any special information in each colour channel; this is confusing to me. Which channel do I have to use if I want to use greyscale images? Or maybe I can convert them into greyscale. AI: From the readme of your Kaggle link: All images are provided in .tif format with 3 channels per image. For 101 cases, 3 sequences are available, i.e. pre-contrast, FLAIR, post-contrast (in this order of channels). For 9 cases, post-contrast sequence is missing and for 6 cases, pre-contrast sequence is missing. Missing sequences are replaced with FLAIR sequence to make all images 3-channel. Masks are binary, 1-channel images. They segment FLAIR abnormality present in the FLAIR sequence (available for all cases). So the three channels carry different MR sequences rather than colour information, and you could use all three channels.
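If you do want a single-channel image, here is a minimal sketch (the file path is a placeholder) that loads one of these TIFFs and keeps only the FLAIR channel, which according to the readme ordering above is index 1:

from skimage import io

img = io.imread("TCGA_case_slice.tif")   # shape (H, W, 3): pre-contrast, FLAIR, post-contrast
flair = img[..., 1]                      # single-channel FLAIR image
grey = img.mean(axis=-1)                 # or average the three channels into a generic greyscale image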
H: How would you encode missing pixels in image data? I am working through an example on the MNIST dataset, and was just curious, if your image input data were missing some pixels, how would you encode it. Since the values are always positive, and normalized between 0 and 1, would it make sense just to encode it as -1 or something. Thanks AI: You'd substitute a statistical aggregator, such as the median or mean, for the missing data. Calculating the aggregator for a neighborhood region would "smooth out" the missing pixels. You wouldn't set it to -1 because lost pixels are effectively noise, which you want to suppress, not exaggerate.
H: Can someone explain to me the structure of a plain Recurrent Neural Network? I have seen pictures of RNNs and LTSMs, and they usually look like this: Here the task is to take a sentence and make a prediction of some sort. What are each of the green squares? Are each of them layers, or does each green square have it's own set of layers and neurons? Additionally, in such a scenario, the input data is the entire sequence of words. When dealing with time series data, would the input be the entire data set since the entire dataset is a sequence? If you have 1million observations, you would need to construct 1million green squares? AI: The architecture of a RNN is called recurrent because it applies the same function at each step. So all the cells on the graph actually represent the same computation, but not the same state. Each green square in your figure represent the computation. $$ s^{(t)} = f(s^{(t-1)}, x^{(t)}, \theta) $$ Where $f$ is the function of the RNN, $\theta$ are parameters, $s^{(t)}$ is the RNN state at step $t$ and $x^{(t)}$ is the input at step $t$ in the input sequence. What you see represented in the figure is actually what is called the unfolded representation, that is the same RNN cell applied to the input sequence one input vector at a time. I recommend you to read the chapter 10 on RNN Deep Learning book. The following figure is from this book and summarize the idea.
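A bare-bones numpy sketch of this unfolding, assuming a simple Elman-style RNN cell with tanh activation and made-up dimensions, to emphasise that the same parameters $\theta$ = (W_s, W_x, b) are reused at every step:

import numpy as np

rng = np.random.default_rng(0)
state_dim, input_dim, seq_len = 4, 3, 5
W_s = rng.normal(size=(state_dim, state_dim))   # shared across all timesteps
W_x = rng.normal(size=(state_dim, input_dim))
b = np.zeros(state_dim)

x_seq = rng.normal(size=(seq_len, input_dim))   # one input vector per timestep
s = np.zeros(state_dim)                         # initial state s^(0)
for x_t in x_seq:                               # the unfolded loop: same f, new state each step
    s = np.tanh(W_s @ s + W_x @ x_t + b)        # s^(t) = f(s^(t-1), x^(t), theta)
print(s)                                        # final state after reading the whole sequence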
H: Tensorflow take ages for tf.cond and eval() - python code (sorry but i asked on Stackoverflow but none answer me) I got a problem with TensorFlow and need your help. My need is calculating tensordot between a vector: 1x512 named face in my code and a faces data: N x 512 named input_faces_data. The code will return the max_value_index if the max_value >= 0.1. I printed out time stamp to timing each step of function i use: tf.tensordot() tf.math.argmax() tf.cond() tf.Session() .eval() -> return last value My questions: Why tf.tensordot() and tf.math.argmax() take just 1ms or 5ms with any length of faces data array (3.000 or 1.000.0000 - my examples) but time cost a lot with .eval and tf.cond()? Why the duration of tf.cond() and .eval() is more longer with longer face data array? I'm using TensorFlow 1.13.1 and my GPU is GTX 2080 (11GiB). My Python code: sess = tf.Session() with tf.device(Config.GPU.GPU_DEVICE): start = time.time() dot_array = tf.tensordot(input_faces_data, face, axes=1) print("Data length {}".format(len(faces_data))) print("Compatition time {}".format(time.time()-start)) start_max_index = time.time() max_index = tf.math.argmax(dot_array) print("get max_value_index time {}".format(time.time()-start_max_index)) start_condition_time = time.time() new_max_index = tf.cond(dot_array[max_index] > tf.constant(0.1), lambda: max_index,lambda: tf.constant(-1,dtype=tf.int64)) print("tf.cond time {}".format(time.time()-start_condition_time)) temp_max_index = -1 start_seesion = time.time() with sess: print("Session time {}".format(time.time() - start_seesion)) start_eval_time = time.time() temp_max_index = new_max_index.eval() print("Eval time {}".format(time.time() - start_eval_time)) print("Total time {}, max_index {}".format(time.time()-start,temp_max_index)) And my outputs: AI: The reason why tf.tensordot(), tf.math.argmax() doesn't change running regardless of the input size is because they don't make the operations, but rather declare it. It is similar to passing a partial function using functools.partial, the function is not really executed, it just determines it's inputs (or part of it). from functools import partial def tensordot(a,b): return partial(lambda a,b: a*b,a,b) While running eval will run the previously mounted model dot = tensordot(a,b) #dot here is a function dot() # <- done inside eval Actually, on tensorflow these "partials" are objects and when you eval one of then, the previously declared objects that it depends of will be evaluated
H: Combining Two CSV's in Jupyter Notebook I want to combine two CSV files based on Column1; when combined, each element of Column1 of both CSVs should match row by row. Please suggest how to reorder Column1 according to another CSV, in a Jupyter Notebook. Thank You! AI: You can try the code below to merge the two files: import pandas as pd df1 = pd.read_csv('first.csv') df2 = pd.read_csv('second.csv') df = df1.merge(df2, on='Column1')
H: How to interpret skimage orientation to straighten images? I have a bunch of images that I am trying to straighten so the images are horizontal (major axis is horizontal) but I don't understand the orientation output from regionprops method in skimage. How to convert it into degrees ? What is the axis reference for the output angle ? Here is the skimage doc: orientation : float. Angle between the 0th axis (rows) and the major axis of the ellipse that has the same second moments as the region, ranging from -pi/2 to pi/2 counter-clockwise. My question: Given the orientation, how do I calculate the angle in degrees to rotate the image so that the major axis is horizontal with skimage? Code sample Basically, the main major and minor axis of this object belong to index 1 of the pandas dataframe. The orientation of the object is -1.184075 and should belong to the major axis. from skimage.measure import label, regionprops_table # connected pixels of same label get assigned a value label_img = label(binary_image_here) props = regionprops_table(label_img, properties=('centroid', 'bbox', 'orientation', 'major_axis_length', 'minor_axis_length')) df = pd.DataFrame(props) AI: Based on the doc you provide, orientation is in radians, ranging from -pi/2 to pi/2 counter-clockwise: orientation : float. Angle between the 0th axis (rows) and the major axis of the ellipse that has the same second moments as the region, ranging from -pi/2 to pi/2 counter-clockwise. Moreover, as it is said in the regionpropos doc, since 0.16.0, they use "rc" corrdinates everywhere. They say for the majority of computation you could find the old results with a simple transformation, but for others, it is more complicated, such as the orientation: For example, the new orientation is π/2 plus the old orientation. Which means that the right formula to get the angle you want is this one: angle_in_degrees = orientation * (180/np.pi) + 90 And the orientation refers to this angle on the image: Now: If you want your major axis and the 0th axis align, then rotate your image by -angle_in_degrees: from skimage.transform import rotate rotate(image, -angle_in_degrees, resize=True)
H: Pandas - Avoid boolean result when using groupby() I have this script: sectors = df.groupby(['company_sector']).mean()['investment_in_millions'] Output: I wanted to keep the same groupby() but have the result in the "investment_in_millions" column filtered as mean > 10 or another value. If I apply this: sectors = df.groupby(['company_sector']).mean()['investment_in_millions']>10 I keep the groupby() but it returns booleans in the investment column. If I use: filtered = df[df['investment_in_millions']>10] I get the filtered values mean>10 but the groupby() is not there anymore and I get all the other columns in the Excel file. How can I get the groupby() together with the mean>10 without getting a boolean result? Thanks! AI: First filter your results: filtered_df = df[df['investment_in_millions']>10] And then group it by company_sector: import numpy as np sectors = filtered_df.groupby(['company_sector']).agg({"investment_in_millions":np.mean}) You can do it in one line: sectors = df[df['investment_in_millions']>10].groupby(['company_sector']).agg({"investment_in_millions":np.mean})
H: How to best read large dataset from disk I want to solve a task using a ResNet in keras and tensorflow. My Dataset is big, and right now I'm considering my data loading options and trying to determine which one suits the task best. About the Dataset: x: arrays of 200x700 cells in range -1.0...1.0, I don't want to downsample them; they are currently saved as matlab or npz file y: the label consists of two floats per x. I have 1.2Million of these (x, y) which are currently saved in 1000 npz files, each with 1GB, totalling to 1TB of data. Problem: I don't have 1TB RAM in my system, so I can't keep all the data in memory. Thus I need a suitable solution to read my data from disk while training my neural network. Solutions that I found so far: save these files as images and use keras dataset io "load_images_from_directory", downside: I need to save the images on disk which would probably take even more than 1TB. And what about the labels? Plus probably additional preprocessing to from range 0..1 to -1..1 tfrecords which feels like an overkill, since my dataset is not really a structured one but it's just (array, label) hdf files which is also more for structured/hierarchical data. Things that I also want to take into account: Do I save my data as is, or do I need to save shuffled batches? But according to this I should also shuffle the mini-batches in each epoch new. This would mean that the order and filesizes (e.g. one file is one mini-batch) whith which I save my files is not important - the mini-batches should be shuffled anyways. Later, I will most likely also need to transfer the whole project to pytorch, so a data storage which is supported by both (kears/tensorflow and pytorch) can save me some time later. If I store each (x,y) sample as one small .bin file, this file is smaller than the block size of my disk, thus using more disk size than necessary. So the question is: What are the pros and cons specific to my dataset/task, thus which dataloading should I use? Are there more options that I haven't discovered yet? AI: A common way is to create a class inheriting tf.keras.utils.Sequence. This class implements a function __getitem__ which is called when you use model.fit() method. In this method, you can simply load one batch at a time, so no need to load the whole dataset. See the documentation. You can also directly use the .npz files when calling __getitem__. Shuffle data Do I save my data as is, or do I need to save shuffled batches? You would implement the method on_epoch_end() that does the shuffling of a list of indices. Then when you load a .npz file, use something like data[indices[i]] which load the index i in the shuffled list of indices. on_epoch_end is called at the end of each epoch by .fit() method, and you can also use it in __init__ of your Sequence class to initialize the shuffle. From Keras to Pytorch Pytorch has similar module called torch.utils.data.Dataset. The conversion is straightforward. See this tutorial.
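A minimal sketch of such a Sequence subclass; the file layout (each .npz holding arrays "x" and "y") and the one-batch-per-file sampling scheme are assumptions made for illustration:

import numpy as np
from tensorflow import keras

class NpzSequence(keras.utils.Sequence):
    def __init__(self, npz_files, batch_size=32):
        self.files = npz_files              # list of .npz paths, each holding arrays "x" and "y"
        self.batch_size = batch_size
        self.on_epoch_end()                 # initialize the shuffled file order

    def __len__(self):                      # number of batches per epoch (here: one batch drawn per file)
        return len(self.files)

    def __getitem__(self, i):               # called by model.fit() to get batch i
        data = np.load(self.files[self.order[i]])
        idx = np.random.permutation(len(data["x"]))[: self.batch_size]   # shuffle within the file
        return data["x"][idx], data["y"][idx]

    def on_epoch_end(self):                 # reshuffle the file order after every epoch
        self.order = np.random.permutation(len(self.files))

model.fit(NpzSequence(list_of_npz_paths)) would then stream batches from disk instead of holding 1 TB in memory, and the same __len__/__getitem__ structure translates almost directly into a torch.utils.data.Dataset later on.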
H: Class label prediction in keras sequential model showing different results in confusion matrix With Keras Sequential Model Prediction To get Class Labels we can do yhat_classes1 = Keras_model.predict_classes(predictors)[:, 0] #this shows deprecated warning in tf==2.3.0 WARNING:tensorflow:From <ipython-input-54-226ad21ffae4>:1: Sequential.predict_classes (from tensorflow.python.keras.engine.sequential) is deprecated and will be removed after 2021-01-01. Instructions for updating: Please use instead:* `np.argmax(model.predict(x), axis=-1)`, if your model does multi-class classification (e.g. if it uses a `softmax` last-layer activation).* `(model.predict(x) > 0.5).astype("int32")`, if your model does binary classification (e.g. if it uses a `sigmoid` last-layer activation). or yhat_classes2 = np.argmax(Keras_model.predict(predictors), axis=1) With the first class labels if i create confusion matrix, i get matrix = confusion_matrix(actual_y, yhat_classes1) [[108579 8674] [ 1205 24086]] But with the second class labels with the confusion matrix, i get 0 for True Positive and False Positive matrix = confusion_matrix(actual_y, yhat_classes2) [[117253 0] [ 25291 0]] May I know whats the issue here? AI: Model.predict_classes gives you the most likely class only (highest probability value), therefore it is of dimension (samples,), for every input in sample there is one output - the class. To be more precise Model.predict_classes calls argmax on the output of predict. See the code (of predict_classes below) def predict_classes(self, X, batch_size=128, verbose=1): proba = self.predict(X, batch_size=batch_size, verbose=verbose) if self.class_mode == 'categorical': return proba.argmax(axis=-1) else: return (proba > 0.5).astype('int32') As the warning suggest, Please use instead: np.argmax(model.predict(x), axis=-1), if your model does multi-class classification (e.g. if it uses a softmax last-layer activation). (model.predict(x) > 0.5).astype("int32"), if your model does binary classification (e.g. if it uses a sigmoid last-layer activation).
H: Why does training of a neural network require multiple iterations? I can't understand why training of a neural network requires multiple iterations (theoretically). Can anyone explain why, please? AI: Solving optimisation problems is difficult, and for the cost functions used in neural networks there is generally no closed-form solution that directly gives the optimal parameters. Consequently, such optimisation problems are solved with iterative methods: at each step the parameters are updated in a way that decreases the cost or objective function. This is the idea behind gradient descent and its variants, which is why training a neural network takes many iterations.
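As a toy illustration of this iterative idea, here is a gradient-descent sketch minimising a simple quadratic cost (the function, starting point and learning rate are arbitrary examples):

def cost(w):                      # toy cost function with its minimum at w = 3
    return (w - 3.0) ** 2

def grad(w):                      # its derivative
    return 2.0 * (w - 3.0)

w = 10.0                          # initial guess
lr = 0.1                          # learning rate
for step in range(50):            # each iteration nudges w towards the minimum
    w -= lr * grad(w)
print(w, cost(w))                 # close to 3 and 0 only after enough iterations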
H: Getting 0 accuracy and NaN mae for all epochs training my NN Background: I made a simple game using python library 'Turtle' in which there is a long plank with a ball balanced on top of it. I can press right or left arrow keys to rotate the plank (either clockwise or anticlockwise) which makes the ball roll to the either side. Issue: I wanted to train a neural network to make this whole process automated(to balance the ball on plank). So I myself played the game and recorded the states(conditions) and my corresponding actions(buttons pressed) for a period of 2 minutes in the form of a csv. At the end, I got a dataset of 1300 conditions( eg. velocity of ball, direction of velocity, inclination of plank, etc) and the corresponding 1300 actions i took to balance the ball on the plank. Now when I try to train my neural network on this data, I get 0 accuracy and 'nan' mae for all the epochs. How do I resolve this? I've attached the code below #all necessary imports df=pd.read_csv('train.txt') model= Sequential([ Dense(units=16,activation='relu',input_shape=(5,)), Dense(units=32,activation='relu'), Dense(units=10,activation='relu'), Dense(units=1,activation='softmax') ]) model.compile(optimizer=Adam(learning_rate=0.0001),loss='sparse_categorical_crossentropy',metrics=['accuracy','mae']) X_train=df[['x','y','theta','u','v']] Y_train=df['action'] X_train= X_train.astype('float32')/10 Y_train= Y_train.astype('float32')/10 model.fit(X_train,Y_train,batch_size=40,epochs=20) And this is what I'm getting for all the epochs : loss: nan - accuracy: 0.0000e+00 - mae: nan AI: Maybe I'm not interpreting this correctly (I don't use Keras), but why does your softmax layer only have a single unit? The softmax function normalizes a vector to represent a probability distribution: $ f : ℝ^n \rightarrow (0,1)^n $ such that $\sum f(\vec x)=1 $. I would have assumed the softmax layer to use the same number of units as the previous, but again, maybe I'm not understanding Keras. Usually, gradient explosions cause nan. If you are unfamiliar with it, you could read about the related vanishing gradient problem here. However, in your case, I think the final layer is more suspicious.
H: Bidirectional vs. Traditional LSTM I'm working on an image captioning problem, where I need an encoder for the image and a decoder for caption generation. Regarding the decoder, I've found a reference that uses a PyTorch LSTM where the bidirectional parameter is False. However, I know that a bidirectional LSTM is more accurate. So, what do you think about this comparison? AI: Bidirectional LSTMs are still traditional, so I believe you are referring to unidirectional LSTM models. Concept Unidirectional LSTM layers only preserve information from the past, as inputs are processed at each time point in a sequential forward pass. This means that, at each time point, the sequence model only reads information from the past. Bidirectional LSTM layers, however, are able to process inputs from the future in a backward pass, in addition to the forward pass. In this way, the sequence model is able to preserve and "memorise" information not only from the past but also from the future. Hidden state The hidden state of a bidirectional LSTM layer is double in size (forward and backward pass), which allows it, at any point in time, to carry information from both the past and the future. In short In essence, bidirectional LSTM models generally show better results as they can understand context better.
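A small PyTorch sketch (arbitrary sizes) showing the doubled hidden dimension of a bidirectional LSTM compared with a unidirectional one:

import torch
import torch.nn as nn

x = torch.randn(8, 15, 32)                     # batch of 8 sequences, 15 timesteps, 32 features
uni = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
bi = nn.LSTM(input_size=32, hidden_size=64, batch_first=True, bidirectional=True)

out_uni, _ = uni(x)
out_bi, _ = bi(x)
print(out_uni.shape)   # torch.Size([8, 15, 64])   forward states only
print(out_bi.shape)    # torch.Size([8, 15, 128])  forward and backward states concatenated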
H: How can it be proved that the softmax output forms a probability distribution and the sigmoid output does not? I was reading Nielsen's book and in this part of chapter 3 about the softmax function, he says, just before the following Exercise, that the output of a neural network with a softmax output layer forms a probability distribution whereas the sigmoid output does not always form one. Now I've been wondering about the output of a neural network: if I have a sigmoid output layer, say for one observation the output is 0.7 for class 0, should the probability for class 1 be 0.3? Or, in this binary classification example, using a softmax output, the first output neuron would be 0.7 for class 0 and 0.3 for class 1 in that particular observation? AI: Softmax maps $ f:ℝ^n\rightarrow (0,1)^n$ such that $\sum f(\vec x) =1$. Therefore, we can interpret the output of softmax as probabilities. With sigmoidal activation, there are no such constraints on the summation, so even though $ 0<S(\vec x)<1$, it is not guaranteed that $\sum S(\vec x)=1$. The sigmoidal function does not normalize the outputs, so in your example where class 0 has output $0.7$, class 1 could have any value in $(0,1)$, which might not be $0.3$. Here's an example: $\vec x=[-5,\pi,\frac{1}{3},0]$, $ f(\vec x)\approxeq [2.638\times10^{-4},\,0.9060,\,0.0546,\,0.0392]$, $ S(\vec x)\approxeq [6.693\times10^{-3},\,0.9586,\,0.5826,\,0.5] $. Because $0<f(\vec x)<1$ and $\sum f(\vec x)=1$, the softmax output vector can be interpreted as probabilities. On the other hand, $ \sum S(\vec x) > 1$, so you cannot interpret the sigmoidal output as a probability distribution, even though $ 0<S(\vec x)<1$ (I chose the above $\vec x$ arbitrarily to demonstrate that the inputs need not be negative, non-negative, rational, etc., hence $\vec x\in ℝ^n$).
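A quick numpy check of this example (values rounded):

import numpy as np

x = np.array([-5.0, np.pi, 1.0 / 3.0, 0.0])

softmax = np.exp(x) / np.exp(x).sum()
sigmoid = 1.0 / (1.0 + np.exp(-x))

print(softmax, softmax.sum())   # [2.6e-04 0.906 0.0546 0.0392], sums to 1.0
print(sigmoid, sigmoid.sum())   # [0.0067 0.9586 0.5826 0.5   ], sums to about 2.05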
H: Understanding declared parameters in my Conv2d layer of my convolutional neural network I am trying to understand the architecture of my keras model implemented by the sequential model. Here is a piece of the code : model = Sequential([ #block1 layers.Conv2D(nfilter,(3,3),padding="same",name="block1_conv1",input_shape=(64,64,3)), layers.Activation("relu"), layers.BatchNormalization(), ....... My question is why the two parameters input_shape and name are declared in the layer conv2D while they are not included in the set of defined parameters for Conv2D() in this link https://keras.io/api/layers/convolution_layers/convolution2d/ AI: Both name and input_shape come from the base Layer class which Conv2D inherits from. In the doc you linked, they are passed implicitly through **kwargs.
H: multiple linear regression with 5 records and 25 features
          X1   X2  X[...]  X25    Y
Q1_2019   23   65      18   32  1,6
Q2_2019   87   32      23   46  1,2
Q3_2019   34   15      63   78  3,2
Q4_2019   85   45      43   65  3,9
Q1_2020   85   43      78   35  1,1
Q2_2020   37   78      54   78  1,5
I have a very expensive dataset which shows aggregated survey data. These are probably means. I am trying to get the individual data, but at the moment that is all I have. The shape of the data frame is 5x26. The Y data so far is collected data calculated at the end of each quarter via other means; the survey is done at the beginning of the quarter. Y is my dependent variable and I would like to derive a polynomial to predict the exact number based on future X data, or at least the probable trend it will take in the next quarter once new survey data is available. Up, down, stable would be enough. I have done correlation analysis (all vs all) and there are strong pairwise correlations between several X and Y. Questions:
Y comes as one digit before the comma and one digit after the comma. Since all other values have 2 digits before the comma, I would like to multiply it by 10 to transform it into 2 digits before the comma. Is that ok from a math/data science perspective?
5 records is not much but there are a lot of features. I would like to do multiple linear regression. Do you think this is feasible with this data set? What would be the objections and risks of doing that?
Would upsampling the dataset help me with anything here? Or could I just work with the five records?
With the strange shape of the dataset, especially the low number of records, do you think that sufficient precision can be reached?
How can I calculate the maximum possible precision/discriminative power possible with this dataset? (I am looking for strong arguments why they should give me access to the complete dataset)
AI: Y comes as one digit before the comma and one digit after the comma. Since all other values have 2 digits before the comma, I would like to multiply it by 10. Is that ok from a math/data science perspective? As far as I can tell there's no reason to do that, and why multiply by 10? 5 records is not much but there are a lot of features. I would like to do multiple linear regression. Do you think this is feasible with this data set? What would be the objections and risks of doing that? The fact that there are lots of features makes it harder to work with few instances, not easier. There is a very high risk of overfitting, that is of the model catching patterns which appear by chance in the features. This leads to predictions being also affected by chance, so bad performance. Would upsampling the dataset help me with anything here? Or could I just work with the five records? Upsampling is unlikely to work since it's going to reproduce the patterns in the small dataset, so it's also going to reproduce patterns which appear by chance. With the strange shape of the dataset, especially the low number of records, do you think that sufficient precision can be reached? It depends what the data represents: if the features happen to be really good predictors for the dependent variable and are not affected by chance, it might work. But these are very optimistic assumptions; in general it's not reasonable to expect good predictions from such a small set of instances. How can I calculate the maximum possible precision/discriminative power possible with this dataset? 
(I am looking for strong arguments why they should give me access to the complete dataset) In general I would suggest doing a leave-one-out experiment: use 4 instances as training set, 1 instance as test set, repeat 5 times with a different instance as test set every time. Measuring the average performance should give you an idea how far off the predictions are going to be (you could use a very simple evaluation measure such as mean absolute error). However what you have is actually a time series apparently, so it might be worth looking at methods which take time evolution into account.
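A minimal sketch of such a leave-one-out experiment with scikit-learn (the frame below is randomly generated and the column names are made up; with so few rows the point is only to quantify how far off the predictions are):
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import mean_absolute_error

# Hypothetical frame: 6 quarterly rows, 25 survey features plus the target Y
df = pd.DataFrame(np.random.rand(6, 26),
                  columns=[f"X{i}" for i in range(1, 26)] + ["Y"])
X, y = df.drop(columns="Y"), df["Y"]

# Each row is predicted by a model trained on all the other rows
preds = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
print("Mean absolute error:", mean_absolute_error(y, preds))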
H: Can a linear regression model without polynomial features overfit? I've read in some articles on the internet that linear regression can overfit. However is that possible when we are not using polynomial features? We are just plotting a line trough the data points when we have one feature or a plane when we have two features. AI: It sure can! Throw in a bunch of predictors that have minimal or no predictive ability, and you’ll get parameter estimates that make those work. However, when you try it out of sample, your predictions will be awful. set.seed(2020) # Define sample size # N <- 1000 # Define number of parameters # p <- 750 # Simulate data # X <- matrix(rnorm(N*p), N, p) # Define the parameter vector to be 1, 0, 0, ..., 0, 0 # B <- rep(0, p)#c(1, rep(0, p-1)) # Simulate the error term # epsilon <- rnorm(N, 0, 10) # Define the response variable as XB + epsilon # y <- X %*% B + epsilon # Fit to 80% of the data # L <- lm(y[1:800]~., data=data.frame(X[1:800,])) # Predict on the remaining 20% # preds <- predict.lm(L, data.frame(X[801:1000, ])) # Show the tiny in-sample MSE and the gigantic out-of-sample MSE # sum((predict(L) - y[1:800])^2)/800 sum((preds - y[801:1000,])^2)/200 I get an in-sample MSE of $7.410227$ and an out-of-sample MSE of $1912.764$. It is possible to simulate this hundreds of times to show that this wasn't just a fluke. set.seed(2020) # Define sample size # N <- 1000 # Define number of parameters # p <- 750 # Define number of simulations to do # R <- 250 # Simulate data # X <- matrix(rnorm(N*p), N, p) # Define the parameter vector to be 1, 0, 0, ..., 0, 0 # B <- c(1, rep(0, p-1)) in_sample <- out_of_sample <- rep(NA, R) for (i in 1:R){ if (i %% 50 == 0){print(paste(i/R*100, "% done"))} # Simulate the error term # epsilon <- rnorm(N, 0, 10) # Define the response variable as XB + epsilon # y <- X %*% B + epsilon # Fit to 80% of the data # L <- lm(y[1:800]~., data=data.frame(X[1:800,])) # Predict on the remaining 20% # preds <- predict.lm(L, data.frame(X[801:1000, ])) # Calculate the tiny in-sample MSE and the gigantic out-of-sample MSE # in_sample[i] <- sum((predict(L) - y[1:800])^2)/800 out_of_sample[i] <- sum((preds - y[801:1000,])^2)/200 } # Summarize results # boxplot(in_sample, out_of_sample, names=c("in-sample", "out-of-sample"), main="MSE") summary(in_sample) summary(out_of_sample) summary(out_of_sample/in_sample) The model has overfit badly every time. In-sample MSE summary Min. 1st Qu. Median Mean 3rd Qu. Max. 3.039 5.184 6.069 6.081 7.029 9.800 Out-of-sample MSE summary Min. 1st Qu. Median Mean 3rd Qu. Max. 947.8 1291.6 1511.6 1567.0 1790.0 3161.6 Paired Ratio Summary (always (!) much larget than 1) Min. 1st Qu. Median Mean 3rd Qu. Max. 109.8 207.9 260.2 270.3 319.6 566.9
H: Does mini-batch gradient descent nullify the effect of stratification on the training data set? In data pre-processing, stratified shuffle is used to ensure that the distribution of the original dataset is reflected in the training, test and validation dataset. Mini-batch gradient descent uses random shuffling to ensure randomness in the mini-batches. My doubt is: why should we implement stratified shuffle on our dataset if it is going to be shuffled in a random manner later during training? AI: It doesn't; a typical workflow when training a model looks like this:
1. Create 10 evenly distributed splits from the dataset using stratified shuffle
2. train set = 8 splits; validation set = 1 split; test set = 1 split
3. Shuffle the train set and the validation set and create mini-batches from them
4. Train for one epoch using the batches
5. Repeat from step 3 until all epochs are over
6. Evaluate the model using the test set
If we skip the stratified shuffling in step 1, the class proportions of the train set, validation set and test set won't reflect those of the original dataset. If we skip the shuffling before each epoch in step 3, the mini-batches in each epoch will be the same. The proportions of the train set, validation set and test set can of course vary.
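A small sketch of steps 1 and 3 with invented data (the training loop body is only indicated):
from sklearn.model_selection import train_test_split
import numpy as np

# Hypothetical data: X features, y class labels
X, y = np.random.rand(1000, 10), np.random.randint(0, 3, 1000)

# Step 1: stratified split keeps the class proportions in every subset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Step 3: shuffle the training set before building mini-batches each epoch
batch_size = 32
for epoch in range(5):
    idx = np.random.permutation(len(X_train))
    for start in range(0, len(X_train), batch_size):
        batch = idx[start:start + batch_size]
        X_batch, y_batch = X_train[batch], y_train[batch]
        # ... one gradient step on (X_batch, y_batch)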
H: NER evaluation metric I'm trying to compare two NER tools on an annotated corpus and I'm not sure which is the best metric to use, as I haven't worked with NER models before. To be more specific, I'm interested in one class only, so I want to evaluate them on that particular class. AI: A good starting point is to look at the evaluation measures used in the NER shared tasks: https://nlpprogress.com/english/named_entity_recognition.html. Generally the F1-score can be used for one specific class, but there are different options regarding what is counted as an instance: every occurrence of the full NE. In this case any difference between the predicted and the gold is counted as false, even if it's only one token difference. every token in an entity. In this case a partially matched entity counts as "partially correct": if a word is predicted outside instead of inside, it's a false negative and conversely. Other variants: count only unique entities, in order to observe the diversity of the entities recognized. count only entities which didn't appear in the training set, to observe the generalization power. (writing this from memory, I could miss something)
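For the "every token" variant, a rough sketch with scikit-learn restricted to the class of interest (the tag names here are invented for illustration):
from sklearn.metrics import classification_report

# Token-level BIO tags for the gold standard and one of the NER tools
gold = ["O", "B-DRUG", "I-DRUG", "O", "B-DRUG", "O", "O"]
tool1 = ["O", "B-DRUG", "O",      "O", "B-DRUG", "O", "O"]

# Restrict the report to the class of interest
print(classification_report(gold, tool1, labels=["B-DRUG", "I-DRUG"]))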
H: What's Joint Training in Neural Networks? I'm having a hard time trying to find a good explanation of the process of Joint Training in Neural Networks. I already understand the concepts of Fine Tuning and Feature Extraction, and I know it has to do with the practice of taking a network model that has already been trained for a given task, and making it perform a second similar task. AI: What you are describing is not Joint Training; that is Transfer Learning, which focuses on storing knowledge gained while solving one problem and applying it to a different but related problem. Joint Training is when a neural network is trained on several different tasks simultaneously, so it optimizes more than one loss function at the same time, rather than a single loss function as is the case with Transfer Learning. It is also called Multi-Task Learning. Refer to this article to know more.
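For intuition, here is a hypothetical Keras sketch of joint training: one shared trunk, two task-specific heads, and one loss per head optimized simultaneously (layer sizes and loss weights are arbitrary):
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(64,))
shared = layers.Dense(128, activation="relu")(inputs)

# Two heads: a classification task and a regression task share the trunk
class_head = layers.Dense(10, activation="softmax", name="classes")(shared)
value_head = layers.Dense(1, name="value")(shared)

model = keras.Model(inputs, [class_head, value_head])
model.compile(
    optimizer="adam",
    loss={"classes": "sparse_categorical_crossentropy", "value": "mse"},
    loss_weights={"classes": 1.0, "value": 0.5},  # relative task weighting
)
# model.fit(x, {"classes": y_class, "value": y_value}, epochs=10)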
H: How does keras.layers.Embedding learn word embeddings? I was trying some TensorFlow tutorials and saw that in all of them they use layers.Embedding to learn these word embeddings, but how are these learned? With a NN? Which architecture? Or word2vec? Thanks AI: The Keras embedding layer is initialized with random weights and learns an embedding for all of the words in the training dataset. The output vectors are not computed from the input using any mathematical operation; instead, each input integer is used as the index to access a lookup table that contains all possible vectors. That table is just a weight matrix of the layer, so it is updated by backpropagation together with the rest of the network, driven by whatever loss the model is trained on; no separate word2vec step is involved. You could also initialize the layer with pre-trained word2vec embeddings. Refer to this to know more.
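A minimal sketch (vocabulary size and dimensions are arbitrary) showing that the embedding matrix is just another trainable weight of the model:
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, embedding_dim, seq_len = 10_000, 64, 50

model = keras.Sequential([
    layers.Embedding(vocab_size, embedding_dim, input_length=seq_len),
    layers.GlobalAveragePooling1D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# After training, the learned vectors can be read out:
# vectors = model.layers[0].get_weights()[0]  # shape (vocab_size, embedding_dim)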
H: NotFittedError says this StandardScaler instance is not fitted yet while using inverse_transform() I have a dataset and I have used Support Vector Regression, so I needed to use the StandardScaler module from sklearn.preprocessing for feature scaling. After training my model, when I came to predict, it was giving a prediction which was feature-scaled. That's why I used inverse_transform from StandardScaler() and am getting an error saying NotFittedError: This StandardScaler instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator. I have tried several solutions but it's getting the same error. What can I do now? My dataset Here is my code :
import numpy as np
import pandas as pd
import seaborn as sbn
import matplotlib.pyplot as plt
df = pd.read_csv('Position_Salaries.csv')
x = df.iloc[:,1:2].values
y = df.iloc[:,2:].values
from sklearn.preprocessing import StandardScaler
x = StandardScaler().fit_transform(x)
y = StandardScaler().fit_transform(y)
from sklearn.svm import SVR
regressor = SVR(kernel = 'rbf')
regressor.fit(x,y)
y_pred = regressor.predict(StandardScaler().fit_transform(np.array([[6.5]])))
y_pred = StandardScaler().inverse_transform(y_pred)
Error log :
AI: You are trying to scale just one record with a brand-new, unfitted scaler, so you need to save the scaler fitted on the training data and reuse it:
sc = StandardScaler()
sc.fit(x)
x = sc.transform(x)
y_pred = regressor.predict(sc.transform(np.array([[6.5]])))
Do the same for y: keep the scaler that was fitted on y (e.g. sc_y) and call sc_y.inverse_transform(y_pred) on the prediction. Calling inverse_transform on a freshly created StandardScaler() is exactly what raises the NotFittedError, because that new instance has never been fitted.
Make sure that the number of features is the same in both cases, otherwise you will get other errors.
A working example:
from sklearn.preprocessing import StandardScaler
import numpy as np
from sklearn import datasets
iris = datasets.load_iris()
X = iris.data
sc = StandardScaler()
sc.fit(X)
x = sc.transform(X)
#On new data, though data count is one but Features count is still Four
sc.transform(np.array([[6.5, 1.5, 2.5, 6.5]]))
H: How does TF-IDF classify a document based on "Score" alloted to each word I understand how TF-IDF "score" is calculated for each word in a document, but I do not get how can it be used to classify a test document. For example, if the word "Mobile" occurs in two texts, in the training data, one about Business (like the selling of Mobiles) and the other about Tech, then how does the "score" for word "Mobile", in both training and test document over the given dataset, help the algorithm to classify whether the text (a new test document) belongs to "Business" category or "Tech" category? I'm new to NLP, thanks in advance! AI: It's not a single TFIDF score on its own which makes classification possible, the TFIDF scores are used inside a vector to represent a full document: for every single word $w_i$ in the vocabulary, the $i$th value in the vector contains the corresponding TFIDF score. By using this representation for every document in a collection (the same index always corresponds to the same word), one obtains a big set of vectors (instances), each containing $N$ TFIDF scores (features). Assuming we have some training data (labelled documents), we can use any supervised method to learn a model, for instance Naive Bayes, Decision Trees, SVM, etc. These algorithms differ from each other but they are all able to take into account all the features for a document in order to predict a label. So in the example you give maybe the word "mobile" only helps the algorithm eliminate the categories "sports" and "literature", but maybe some other words (or absence of other words) is going to help the algorithm decide between categories "Business" and "Tech".
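To make that concrete, here is a minimal sketch (with invented documents and labels) where every document becomes a vector of TF-IDF scores and a single classifier learns from all of those features at once:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set
docs = ["new mobile phone sales rise", "mobile chip architecture announced",
        "quarterly sales beat forecast", "new GPU architecture released"]
labels = ["Business", "Tech", "Business", "Tech"]

# Each document -> vector of TF-IDF scores -> classifier weighs all features together
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(docs, labels)

print(clf.predict(["mobile sales forecast for next quarter"]))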
H: Multiple linear regression for multi-dimensional input and output? Assume that I have $N$ points $x_i,i=1,...,N$ in some $A>1$-dimensional space $\mathbb{R}^A$ with pointwise evaluations of some function $f:\mathbb{R}^A \rightarrow \mathbb{R}^B$, i.e. $f(x_i),i=1,...,N$ where $f(x_i) \in \mathbb{R}^B$. It is my goal to find a multiple linear regression between $x_i$ and $f(x_i)$. Now sklearn has a function (sklearn.linear_model.LinearRegression) for a multiple linear regression for functions of the type $f:\mathbb{R}^A \rightarrow \mathbb{R} $, but my output is $B$-dimensional. I assume that I could make independent multiple linear regressions for each output dimension and then combine the results, but there must be an easier way of achieving this. Do you know of a more efficient way? AI: You are asking about multioutput regression. The class you talked about sklearn.linear_model.LinearRegression supports this out of the box. import numpy as np from sklearn.linear_model import LinearRegression # features A = 10 # number of values to predict B = 15 # number of rows in dataset m = 100 x = np.ones((m, A)) y = np.ones((m, B)) model = LinearRegression() model.fit(x, y) sklearn.linear_model.LinearRegression actually just creates B models. However it optimises calculations using vectorisation. It actually is exactly the same as a fully connected layer in a neural network which has no activation function. You can read more about it here: https://machinelearningmastery.com/multi-output-regression-models-with-python/
H: Is Flatten() layer in keras necessary? In CNN transfer learning, after applying convolution and pooling,is Flatten() layer necessary? I have seen an example where after removing top layer of a vgg16 ,first applied layer was GlobalAveragePooling2D() and then Dense(). Is this specific to transfer learning? This is the example without Flatten(). base_model=MobileNet(weights='imagenet',include_top=False) #imports the mobilenet model and discards the last 1000 neuron layer. x=base_model.output x=GlobalAveragePooling2D()(x) x=Dense(1024,activation='relu')(x) #we add dense layers so that the model can learn more complex functions and classify for better results. x=Dense(1024,activation='relu')(x) #dense layer 2 x=Dense(512,activation='relu')(x) #dense layer 3 preds=Dense(3,activation='softmax')(x) #final layer with softmax activation This example is with Flatten(). vgg = VGG16(input_shape=IMAGE_SIZE + [3], weights='imagenet', include_top=False) # don't train existing weights for layer in vgg.layers: layer.trainable = False # useful for getting number of classes folders = glob('Datasets/Train/*') # our layers - you can add more if you want x = Flatten()(vgg.output) # x = Dense(1000, activation='relu')(x) prediction = Dense(len(folders), activation='softmax')(x) # create a model object model = Model(inputs=vgg.input, outputs=prediction) What is the difference if both can be applied? AI: Although the first answer has explained the difference, I will add a few other points. If the model is very deep(i.e. a lot of Pooling) then the map size will become very small e.g. from 300x300 to 5x5. Then it is more likely that the information is dispersed across different Feature maps and the different elements of one feature map don't hold much information. So you are reducing the dimension which will eventually reduce the number of parameters when joined with the Dense layer. Excerpt from Hands-On Machine Learning by Aurélien Géron the global average pooling layer outputs the mean of each feature map: this drops any remaining spatial information, which is fine because there was not much spatial information left at that point. Indeed, GoogLeNet input images are typically expected to be 224 × 224 pixels, so after 5 max pooling layers, each dividing the height and width by 2, the feature maps are down to 7 × 7. Moreover, it is a classification task, not localization, so it does not matter where the object is. Thanks to the dimensionality reduction brought by this layer, there is no need to have several fully connected layers at the top of the CNN (like in AlexNet), and this considerably reduces the number of parameters in the network and limits the risk of overfitting. Is this specific to transfer learning? You can apply this concept to your own model too and test the result/parm count for different cases i.e. small/large Model. Anyway, Transfer learning is just a special case of Neural Network i.e. you are continuing to use an already trained model. Adding a depiction for the difference in approach
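To see the effect on the parameter count, a small hedged sketch with an arbitrary 7x7x512 feature map feeding a 256-unit Dense layer:
from tensorflow.keras import layers, models

def head(pooling_layer):
    # Same classification head, only the pooling/flattening step differs
    m = models.Sequential([
        layers.Input(shape=(7, 7, 512)),
        pooling_layer,
        layers.Dense(256, activation="relu"),
        layers.Dense(3, activation="softmax"),
    ])
    return m.count_params()

print(head(layers.Flatten()))                 # roughly 6.4M parameters
print(head(layers.GlobalAveragePooling2D()))  # roughly 0.13M parameters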
H: Predicting financial data (choosing a model) it is my first time doing something with financial data. I have a dataset with account numbers and some other information about each client (some clients span more than one row since we have info for each month in a different row). I managed to clean and create some models, here are the confusion matrices, the classification reports and AUC: Logistic regression [[185847 62897] [ 1 1061]] precision recall f1-score support not buy 1.00 0.75 0.86 248744 buy 0.02 1.00 0.03 1062 accuracy 0.75 249806 macro avg 0.51 0.87 0.44 249806 weighted avg 1.00 0.75 0.85 249806 AUC train = 0.9168592981611143 AUC test = 0.9150300677458543 Random Forest Classifier: [[245503 3241] [ 960 102]] precision recall f1-score support not buy 1.00 0.99 0.99 248744 buy 0.03 0.10 0.05 1062 accuracy 0.98 249806 macro avg 0.51 0.54 0.52 249806 weighted avg 0.99 0.98 0.99 249806 AUC train = 0.9996866568080237 AUC test = 0.9139101966925902 Gradient Boosting Classifier: [[184940 63804] [ 3 1059]] precision recall f1-score support not buy 1.00 0.74 0.85 248744 buy 0.02 1.00 0.03 1062 accuracy 0.74 249806 macro avg 0.51 0.87 0.44 249806 weighted avg 1.00 0.74 0.85 249806 AUC train = 0.8800353734759541 AUC test = 0.8657829269466372 Voting Classifier (from all the three above): [[211316 37428] [ 213 849]] precision recall f1-score support not buy 1.00 0.85 0.92 248744 buy 0.02 0.80 0.04 1062 accuracy 0.85 249806 macro avg 0.51 0.82 0.48 249806 weighted avg 0.99 0.85 0.91 249806 AUC train = 0.9987531510931085 AUC test = 0.9160262741936392 Since I do not have any experience I am not sure which model is producing better results. Can you help me understand which one and why? Thank you! AI: Well, evaluating your model just by comparing matrixes can be pretty hard. In your case, your matrixes are not done with the same number of row classified 1 or 0, so comparing them is really hard. Let's take an example : your logistic regression classifies around 64000 as 1, while your RandomForest only classifies around 4500, so comparing these data by this matrix is pretty hard. I'd suggest you to use the ROC AUC metric, that is really useful to compare models. You might find many info on this subject in the Internet. The closer your AUC is than 1, the better your model is. If it's 0.5 or less, your model is less efficient than a random classifier.
H: How to serialize/pickle a spacy ner model? I have trained a custom SpaCy named entity recognition model. I saved the model to disk using: nlp.to_disk() which results in keeping the model in a folder. Is it possible to make the nlp object to a pickle file? AI: Yes - Here is how to pickle in Python: import pickle pickle.dump(nlp, open( "nlp.p", "wb" ))
H: Improving misclassification for one class in a multi-class classification task Here I am trying to use 3 convolution layer neural network to classify a set of images (train data: (3249) , validation data: (487), test data: (326)) I have one class which is misclassified and I cannot understand what to do next. I have tried to reduce the value of dropout layer, but results got worst. I know that the next solutions could be useful if I had bad classification for all classes : Get more data Try New model architecture, try something better. Decrease number of features (you may need to do this manually) Introduce regularization such as the L2 regularization Make your network shallower (less layers) Use less number of hidden units What do you thing could be a good choice if I have only misclassifcation of one class? Number of total samples per class : Black rot: 1180 Esca: 1383 healthy: 423 leaf blight: 1076 I had split the two datasets as follow: x_train, _x, y_train, _y = train_test_split(x,y,test_size=0.2, stratify = y, random_state = 1) x_valid,x_test, y_valid, y_test = train_test_split(_x,_y,test_size=0.4, stratify = _y, random_state = 1) AI: The main problem is that there are too many Escas in your dataset. If you look at the confusion matrix, the Esca column gets predicted (wrongly and correctly) much more that the others. This is clearly a symptom of a skewed data set. Try augmenting your images to generate a larger 'super-dataset'. Then sample a 'sub-dataset' from that 'super-dataset', such that it has an equal distribution across all classes. Train on the 'sub-dataset'. If you can't augment your data to have a better distribution across the 4 classes, here are some ideas: Modify the loss function to more aggressively penalize Black Rots classified as Escas. Split into two networks; the first differentiates between Leaf Blight-Healthy-Black Rot/Esca, and the second differentiates between Black Rot-Esca.
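One simple way to implement the "penalize more aggressively" idea, without rewriting the loss, is class weighting; here is a hedged sketch reusing the variable names from your code (and assuming y_train holds integer class labels rather than one-hot vectors):
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

classes = np.unique(y_train)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=y_train)
class_weight = dict(zip(classes, weights))

# Rare classes (e.g. "healthy" with 423 images) get a larger weight in the loss
model.fit(x_train, y_train,
          validation_data=(x_valid, y_valid),
          epochs=30, class_weight=class_weight)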
H: When a dataset is huge, what do you do to train with all the images on i t? I'm using Python 3.7.7. I'm trying to load a lot of NIFTI images using SimplyITK and Numpy from the [BraTS 2019 dataset][1]. This is the code I use to load the images into a numpy array. import SimpleITK as sitk def read_nifti_images(images_full_path): """ Read nifti files from a gziped file. Read nifti files from a gziped file using SimpleITK library. Parameters: images_full_path (string): Full path to gziped file including file name. Returns: SimpleITK.SimpleITK.Image, numpy array: images read as image, images read as numpy array """ # Reads images using SimpleITK. images = sitk.ReadImage(images_full_path) # Get a numpy array from a SimpleITK Image. images_array = sitk.GetArrayFromImage(images) # More info about SimpleITK images: http://simpleitk.github.io/SimpleITK-Notebooks/01_Image_Basics.html return images, images_array This code works fine with smallest dataset but here I'm trying to load 518 nii.gz files with 155 images each file. To run the code I'm using PyCharm latest version on a Windows 7. How do you do it to train with all the images if all of them can't be in memory because memory limits? AI: When you use Keras, you can use the generator function, which essentially loads images in batches. See this post for a discussion on how to use (and predict) with the data generator: https://stackoverflow.com/questions/52270177/how-to-use-predict-generator-on-new-images-keras/55991598#55991598 See this code snippet for a full implementation of binary image classification using a pre-trained model and incorporating a data generator function: https://github.com/Bixi81/Python-ml/blob/master/keras_pretrained_imagerec_binaryclass.py More details can be found in the Keras docs: https://keras.io/api/preprocessing/image/
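Since NIfTI volumes are not plain image folders, an alternative to ImageDataGenerator is a custom keras.utils.Sequence that loads one batch of files at a time; here is a rough sketch reusing the read_nifti_images helper above (train_paths and train_labels are placeholders, and all volumes are assumed to have the same shape):
import numpy as np
from tensorflow import keras

class NiftiSequence(keras.utils.Sequence):
    """Loads one mini-batch of NIfTI files at a time instead of the whole set."""
    def __init__(self, file_paths, labels, batch_size=4):
        self.file_paths, self.labels, self.batch_size = file_paths, labels, batch_size

    def __len__(self):
        return int(np.ceil(len(self.file_paths) / self.batch_size))

    def __getitem__(self, idx):
        paths = self.file_paths[idx * self.batch_size:(idx + 1) * self.batch_size]
        y = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size]
        x = np.stack([read_nifti_images(p)[1] for p in paths])  # arrays only
        return x, np.asarray(y)

# model.fit(NiftiSequence(train_paths, train_labels), epochs=10)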
H: Isn't (steps_per_epoch = total training data/batch size)? Suppose i have 1000 dog images and my batch size is 10. It will take 1000/10=100 steps to complete 1 epoch. So doesn't it mean steps_per_epoch=100 ? Then why do we have to specify it separately in keras while applying .fit(). AI: As clearly mentioned in the documentation: Steps_per_epoch is total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. The default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. It is a optional parameter and is useful when passing an infinitely repeating dataset.
H: How can access to modify feature_importances of Random Forest Classifier model? My goal is to extract the feature importances from already trained random forest classifier and transfer them to another classifier. How this can be done? and How can access to modify feature_importances of Random Forest Classifier model? using scikit learn. AI: I'm not sure it's possible to "transfer" the feature importances from model to model in a Random Forest Classifier. Although I think there are two work-arounds you may be able to use. The first option you have is to retrain the second model on the exact same training data. This will have the effect of both models having close to the same feature importances, though I think doesn't guarantee they will be exactly the same. Another way would be to try knowledge distillation (https://arxiv.org/abs/1503.02531). Here you're basically making a target, or student, model train to not only match the hard labels, but also a set of soft labels, generated from a teacher network. In this case, the model that you want to transfer the knowledge to would be the student network. However, I believe this will involve writing a custom loss function, which I don't think is possible in sklearn. The paper I linked is in the context of neural networks so I also don't know if there would be any gaps in the performance between model types, as presumably, you would doing this with a random forest. Just to confirm, I ran this short script that trains two models on the same data and it seems to be doing what you want it to: from sklearn.ensemble import RandomForestClassifier from sklearn.datasets import make_classification X, y = make_classification(n_samples=1000, n_features=4, n_informative=2, n_redundant=0, random_state=0, shuffle=False) clf = RandomForestClassifier(max_depth=2, random_state=0) clf.fit(X, y) print('model 1 feature importances: ', clf.feature_importances_) clf2 = RandomForestClassifier(max_depth=2, random_state=0) clf2.fit(X, y) print('model 2 feature importances: ', clf2.feature_importances_) The results I got were: model 1 feature importances: [0.14205973 0.76664038 0.0282433 0.06305659] model 2 feature importances: [0.14205973 0.76664038 0.0282433 0.06305659] Here you can see that the results were the exact same. I would imagine that the integrity of the transfer may change as the data changes. Make sure both models are initialized with the same random state if you want the feature importance values to match exactly.
H: What are bias and variance in machine learning? I am studying machine learning, and I have encountered the concept of bias and variance. I am a university student and in the slides of my professor, the bias is defined as: $bias = E[error_s(h)]-error_d(h)$ where $h$ is the hypotesis and $error_s(h)$ is the sample error and $error_d(h)$ is the true error. In particular, it says that we have bias when the training set and the test set are not independent. After reading this, I was trying to get a little deeper in the concept, so I searched on internet and found this video , where it defines the bias as the impossibility to capture the true relationship by a machine learning model. I don't understand, are the two definition equal or the two type of bias are different? Together with this, I am also studying the concept of variance, and in the slides of my professor it is said that if I consider two different samples from the sample, the error may vary even if the model is unbiased, but in the video I posted it says that the variance is the difference in fits between training set and test set. Also in this case the definitions are different, why? AI: What are Bias and Variance? Let's start with some basic definitions: Bias: it's the difference between average predictions and true values. Variance: it's the variability of our predictions, i.e. how spread out your model predictions are. They can be understood from this image: (source) What to do about bias and variance? If your model suffers from a bias problem you should increase its power. For example, if the prediction of your neural network is not good enough, add more parameters, add a new layer making it deeper, etc. If your model suffers from a variance problem instead, the best possible solution is coming from ensembling. Ensembles of Machine Learning models can significantly reduce the variance in your predictions. The Bias-Variance tradeoff If your model is underfitting, you have a bias problem, and you should make it more powerful. Once you made it more powerful though, it will likely start overfitting, a phenomenon associated with high variance. For that reason, you must always find the right tradeoff between fighting the bias and the variance of your Machine Learning models. (source) Learning how to do that is more an art than a science!
H: Why is Regularization after PCA or Factor Analysis a bad idea? I have done Factor Analysis on my data and applied various machine learning models on it. I particularly find it giving high MSE value for Ridge and Lasso Regression compared to other models. I want to know the reason why this happens. AI: In principle, PCA is unsupervised and therefore label agnostic. That means the down projections forced into the PCs may as well not be related to what the model is trying to predict. That may be able to measure with the amount of variance your PCs are capturing. In essence, PCA shall never be used as a means for regularisation but rather for dimensionality reduction. You could alternatively try a VIF approach, although not sure what your exact goal is.
H: Are my features enough? I am trying to fit a regression model on a non linear data. The features I have are around 12 and around 800 samples. With the help of PyCaret, i tried to fit the data on to around 22 model, and then selected the best one (Ada Boost) and then tried further to tune it to get better result. However, none of the models gave a positive R2 score, Ada Boost was the least worst performing algorithm. This is the test (red) and predicted test output (green) from the selected algorithm. After trying all various techniques and still not getting a decent result, can we infer that the features are not enough to account for the variation of the target variable ? In other words the provided features don't explain best the target variable. It may sound silly but am a beginner in Data Sciences, so please dont mind. AI: I don't know much about R2 score, but having it negative all the time seems pretty strange to me. Maybe you should try to use AUC as a metric ( <0.5 classifier is worse than a random classifier, and the closer you're to 1, the best is your algorithm). If it appears that you still can't find a model giving decent results, the direct conclusion is not that your features are not enough, because there can be plenty of other reasons. I'd suggest you to try SMOTE, which will create new data based on the ones you already have, and try applying your models again. Sometimes, this is a way to tackle the issue of not having enough data
H: Why does it have a constant val_loss? I am working on some dataset and am implementing a deep neural network. There are some typos that I am not familiar with. AI: Change the last layer's neuron count to 2: model.add(keras.layers.Dense(2, activation="softmax")) OR change your last layer's activation to sigmoid and keep y_train as a single column: model.add(keras.layers.Dense(1, activation="sigmoid"))
H: Which is better: Cross validation or a validation set for hyperparameter optimization? For hyperparameter optimization I see two approaches: Splitting the dataset into train, validation and test, and optimize the hyperparameters based on the results of training on the train dataset and evaluating on the validation dataset, leaving the test set untouched for final performance estimation. Splitting the dataset into train and test, and optimize the hyperparameters using crossvalidation on the train set, leaving the test set untouched for final performance estimation. So which approach is better? AI: The $k$-fold cross-validation (CV) process (method 2) actually does the same thing as method 1, but it repeats the steps on the training and validation sets $k$ times. So with CV the performance is averaged across the $k$ runs before selecting the best hyper-parameter values. This makes the performance and value selection more reliable in general, since there is less risk to obtain the best result by chance. However it takes much longer (since repeating $k$ times), so if the training process is long it's not always practical to use CV.
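As an illustration of approach 2 with scikit-learn (the dataset, model and parameter grid below are arbitrary): hyperparameter values are selected with cross-validation on the training portion only, and the test set is used exactly once at the end:
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Model selection: 5-fold CV over the hyperparameter grid, training data only
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=5)
search.fit(X_train, y_train)
print("Best params:", search.best_params_, "CV score:", search.best_score_)

# Model assessment: the untouched test set is used once, at the very end
print("Test score:", search.score(X_test, y_test))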
H: KNN Regression: Distance function and/or vector representation for datetime features Context: Trying to forecast some sort of consumption value (e.g. water) using datetime features and exogenous variables (like temperature). Take some datetime features like week days (mon=1, tue=2, ..., sun=7) and months (jan=1, ..., dec=12). A naive KNN regressor will judge that the distance between Sunday and Monday is 6, between December and January is 11, though it is in fact 1 in both cases. Domains hours = np.arange(1, 25) days = np.arange(1, 8) months = np.arange(1, 13) days >>> array([1, 2, 3, 4, 5, 6, 7]) type(days) >>> numpy.ndarray Function A custom distance function is possible: def distance(x, y, domain): direct = abs(x - y) round_trip = domain - direct return min(direct, round_trip) Resulting in: # weeks distance(x=1, y=7, domain=7) >>> 1 distance(x=4, y=2, domain=7) >>> 2 # months distance(x=1, y=11, domain=12) >>> 2 distance(x=1, y=3, domain=12) >>> 2 However, custom distance functions with Sci-Kit's KNeighborsRegressor make it slow, and I don't want to use it on other features, per se. Coordinates An alternative I was thinking of is using a tuple to represent coordinates in vector space, much like we represent the hours of the day on a round clock. def to_coordinates(domain): """ Projects a linear range on the unit circle, by dividing the circumference (c) by the domain size, thus giving every point equal spacing. """ # circumference c = np.pi * 2 # equal spacing a = c / max(domain) # array of x and y return np.sin(a*domain), np.cos(a*domain) Resulting in: x, y = to_coordinates(days) # figure plt.figure(figsize=(8, 8), dpi=80) # draw unit circle t = np.linspace(0, np.pi*2, 100) plt.plot(np.cos(t), np.sin(t), linewidth=1) # add coordinates plt.scatter(x, y); Clearly, this gets me the symmetry I am looking for when computing the distance. Question Now what I cannot figure out is: What data type can I use to represent these vectors best, so that the knn regressor automatically calculates the distance? Perhaps an array of tuples; a 2d numpy array? Attempt It becomes problematic as soon as I try to mix coordinates with other variables. Currently, the most intuitive attempt raises an exception: data = df.values Where df is: The target variable, for simple demonstration purposes, is the categorical domain variable days. TypeError Traceback (most recent call last) TypeError: only size-1 arrays can be converted to Python scalars The above exception was the direct cause of the following exception: ValueError Traceback (most recent call last) <ipython-input-112-a34d184ab644> in <module> 1 neigh = KNeighborsClassifier(n_neighbors=3) ----> 2 neigh.fit(data, days) ValueError: setting an array element with a sequence. I just want the algorithm to be able to process a new observation (a coordinate representing the day of the week and temperature) and find the closest matches. I am aware the coordinate is, of course, a direct representation of the target variable, and thus leaks the answer, but it's about enabling the math of the algorithm. Thank you in advance. AI: I like your idea of converting to 2d (the unit circle), 2d numpy array would be the way to go here Specifically, try puting the days_x and days_y into separate columns if you take the unit circle approach. An alternative idea - it looks like there's a 'precomputed' option for distance, which will let you use the distance you "really" want, and should not be slow, since there's no computation to be done.
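Building on that idea, here is a minimal sketch (with invented data) that places the sine and cosine of the day in two ordinary columns next to a temperature feature, so the default Euclidean metric already respects the wrap-around:
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsRegressor

df = pd.DataFrame({
    "day": np.random.randint(1, 8, 200),           # 1 = Monday ... 7 = Sunday
    "temperature": np.random.uniform(0, 30, 200),
    "consumption": np.random.uniform(50, 150, 200),
})

# Project the cyclic feature onto the unit circle, as two separate columns
angle = 2 * np.pi * df["day"] / 7
df["day_x"], df["day_y"] = np.sin(angle), np.cos(angle)

X = df[["day_x", "day_y", "temperature"]]
y = df["consumption"]

knn = KNeighborsRegressor(n_neighbors=5).fit(X, y)
# The distance between Sunday and Monday is now as small as between Monday and Tuesday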
H: Significance of Object-Oriented Programming (OOP) in Data Science Can someone please explain to me the role of Object-Oriented Programming (OOP) and Object-Oriented Design (OOD) in Data Science? I am from a non-computer science background. Do I need to learn these as well to become a Data Scientist? Also, please tell me if I should be learning Python or R for the same. AI: It actually depends on the role you get as a Data Scientist. If you have to write production-quality code at a large software company, then you need to be knowing the basics of Object-Oriented Programming (OOP). Object-Oriented Design (OOD), however, is something you need not necessarily know in a Data Science role. Learning OOD in case you plan a switch to software engineering roles in the future is one thing you can consider, though. Regarding the choice of language for doing Data Science, I would suggest you prefer Python over R as it's more versatile and a much more general-purpose language. While R, on the other hand, is limited only to applications in statistical programming.
H: Tensorflow-keras Image Classifier error while fitting I was building an image classifier with TensorFlow but I got stuck while fitting the model. Can somebody help me out? python import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D X = pickle.load(open("X.pickle", "rb")) y = pickle.load(open("y.pickle", "rb")) X = X/255.0 model = Sequential() model.add(Conv2D(64, (3,3), input_shape = X.shape[1:])) model.add(Activation("relu")) model.add(MaxPooling2D(pool_size= (2,2))) model.add(Conv2D(64, (3,3))) model.add(Activation("relu")) model.add(MaxPooling2D(pool_size= (2,2))) model.add(Flatten()) model.add(Dense(64)) model.add(Dense(1)) model.add(Activation(('sigmoid'))) model.compile(loss="binary_crossentropy", optimizer="adam", metrics=['accuracy']) model.fit(X, y, batch_size=32, epochs=3, validation_split=0.1) Error message: ValueError Traceback (most recent call last) <ipython-input-47-497337dde332> in <module> ----> 1 model.fit(X, y, batch_size=32, epochs=3, validation_split=0.1) ~/Downloads/yes/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs) 106 def _method_wrapper(self, *args, **kwargs): 107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access --> 108 return method(self, *args, **kwargs) 109 110 # Running inside `run_distribute_coordinator` already. ~/Downloads/yes/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing) 1038 (x, y, sample_weight), validation_data = ( 1039 data_adapter.train_validation_split( -> 1040 (x, y, sample_weight), validation_split=validation_split)) 1041 1042 if validation_data: ~/Downloads/yes/lib/python3.7/site-packages/tensorflow/python/keras/engine/data_adapter.py in train_validation_split(arrays, validation_split) 1374 raise ValueError( 1375 "`validation_split` is only supported for Tensors or NumPy " -> 1376 "arrays, found following types in the input: {}".format(unsplitable)) 1377 1378 if all(t is None for t in flat_arrays): ValueError: `validation_split` is only supported for Tensors or NumPy arrays, found following types in the input: [<class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, 
<class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>, <class 'int'>] AI: The error appears to be related to the type of your input data, may be worth checking for it with type(X). 
I would suggest loading pickle with pandas import pandas as pd X = pd.read_pickle(r'filepath') X = X.astype('uint8') Also for info, in your code above you are using the Keras API which is meant to be a high-level API for TensorFlow.
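A more direct fix, assuming y was pickled as a plain Python list of integer labels (which is what the long list of <class 'int'> entries in the traceback suggests): convert it to a NumPy array before calling fit, since validation_split only supports tensors or NumPy arrays:
import numpy as np

# X should already be a NumPy array (the division by 255.0 worked);
# y is currently a plain Python list of ints, so convert it
y = np.asarray(y)

model.fit(X, y, batch_size=32, epochs=3, validation_split=0.1)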
H: Creating an "unclassified" class in Random Forest I am trying to classify satellite-based images by creating a region of interest and then classifying according to it. I am using a Jupyter notebook with Python to do that. I used a Random Forest classifier and got a nice model and result, but the problem is that the image is "over-classified", meaning that all the pixels get a value and are forced to be classified. I would like to define a level of similarity that a pixel has to have in order to be classified; otherwise, it will not get any class. For example, the black is supposed to be asphalt: However, in the RGB, you can see it's not asphalt: Is there any way to define in random forest or any other algorithm a "level of similarity"? (For example something similar to the n-D angle to match pixels to a reference like used in SAM, but under random forest, or another algorithm that allows defining that) SAM- https://www.harrisgeospatial.com/docs/SpectralAngleMapper.html My end goal: to get "unclassified" values based on similarity level to the calibration data AI: You can use RandomForest to get the probability of each class by using predict_proba(X). You could take those probabilities, keep the highest one (which is the class currently assigned to your data), and manually set a threshold, for example deciding that if the most probable class is at less than 75%, then you manually classify the pixel as "Unknown".
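A rough sketch of that thresholding, assuming clf is the fitted RandomForestClassifier and X holds the pixels to classify:
import numpy as np

proba = clf.predict_proba(X)                      # shape (n_pixels, n_classes)
best_class = clf.classes_[proba.argmax(axis=1)]   # currently assigned class
best_proba = proba.max(axis=1)

threshold = 0.75
labels = best_class.astype(object)
labels[best_proba < threshold] = "unclassified"   # low-confidence pixels get no class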
H: Should I keep common stop-words when preprocessing for word embedding? If I want to construct a word embedding by predicting a target word given context words, is it better to remove stop words or keep them? the quick brown fox jumped over the lazy dog or quick brown fox jumped lazy dog As a human, I feel like keeping the stop words makes it easier to understand even though they are superfluous. So what about for a Neural Network? AI: In general stop-words can be omitted since they do not contain any useful information about the content of your sentence or document. The intuition behind that is that stop-words are the most common words in a language and occur in every document independent of the context. Therefore they contain no valuable information which could hint to the content of the document.
H: Is convergence of the loss function always guaranteed? Which of the following is true, given the optimal learning rate? (i) For convex loss functions (i.e. with a bowl shape), batch gradient descent is guaranteed to eventually converge to the global optimum while stochastic gradient descent is not. (ii) For convex loss functions (i.e. with a bowl shape), stochastic gradient descent is guaranteed to eventually converge to the global optimum while batch gradient descent is not. (iii) For convex loss functions (i.e. with a bowl shape), both stochastic gradient descent and batch gradient descent will eventually converge to the global optimum. (iv) For convex loss functions (i.e. with a bowl shape), neither stochastic gradient descent nor batch gradient descent are guaranteed to converge to the global optimum Which option is correct and why? AI: (iii), if you add the clause: provided the learning rate is optimal or smaller than optimal and the training dataset is shuffled. Why: When we compute the gradient of the full batch, it points towards the global minimum, so with a controlled learning rate you will reach it. With stochastic GD, the individual gradients will not point towards the global minimum; each one only does so for its own small set of records. Obviously, the path will look a bit zig-zag. For the same reason, it might miss the exact minimum and bounce around it. In a theoretical worst case, if the dataset is sorted on class, then it will move in the direction of one class and then the other, and most likely miss the global minimum. Reference excerpt from Hands-On Machine Learning: "On the other hand, due to its stochastic (i.e., random) nature, this algorithm is much less regular than Batch Gradient Descent: instead of gently decreasing until it reaches the minimum, the cost function will bounce up and down, decreasing only on average. Over time it will end up very close to the minimum, but once it gets there it will continue to bounce around, never settling down (see Figure 4-9). So once the algorithm stops, the final parameter values are good, but not optimal." When using Stochastic Gradient Descent, the training instances must be independent and identically distributed (IID) to ensure that the parameters get pulled toward the global optimum, on average. A simple way to ensure this is to shuffle the instances during training (e.g., pick each instance randomly, or shuffle the training set at the beginning of each epoch). If you do not shuffle the instances—for example, if the instances are sorted by label—then SGD will start by optimizing for one label, then the next, and so on, and it will not settle close to the global minimum.
H: Autoencoder feature extraction plateau I am working with a large dataset (approximately 55K observations x 11K features) and trying to perform dimensionality reduction to about 150 features. So far, I tried PCA, LDA, and autoencoder. The autoencoder that I tried was 12000-8000-5000-100-500-250-150-, all layers were Dense with sigmoid activation, except the final layer, which had a linear activation in order to reproduce the continuous data from the input. The autoencoder loss effectively plateaus after 10-15 epochs, regardless of the learning rate (here, I used the ReduceLROnPlateau feature in Keras). For the record, I am normalizing each feature by z-score prior to the training. I'm not sure how to get this loss to stop reaching a plateau. Should my next attempt be to use a convolutional neural network on this dataset to see if I can reduce the dimensionality more successfully? Are there any pre-trained convolutional autoencoders that I could use? Training a convolutional autoencorder from scratch seems to require quite a bit of memory and time, but if I could work off of a pre-trained CNN autoencoder this might save me memory and time. AI: A convolutional autoencoder will only make sense if you work with images (2D signals) or time series (1D signals). Convolutions identify local patterns in data, if this is not the case in your data it will most likely not solve your problem. Using pre-trained AE will only help, if it was trained on similar data. Similar data in this case does not refer to the data type, but rather to what the data represents. If you have an AE which was trained to compress images of cats it will not work well on images of chairs, because cats and chairs do not share the same features. Although if you like to compress images of dogs you could use the weights of the AE for cats as a starting point (Transfer Learning). What kind of loss are you using? MSE or Cross-entropy? Speaking from my experience, using cross-entropy yields better results (although this is problem dependent). Another issue could be vanishing gradients which can happen in very deep networks and with activation functions like sigmoid. What you can do is reduce the depth of your network, replace the sigmoid with ReLU and maybe try a different optimizer. In any case PCA is a safe bet. It's linear, deterministic, well studied and quicker to use than to train a NN. Whatever method you use, you can use PCA as benchmark to see if your method beats it. Although with the size of your data you may run into memory issues.
H: Classifiers and accuracy I would like to ask you how to use classifiers and determine the accuracy of models. I have my dataset and I already cleaned the text (removed stopwords, punctuation, removed empty rows, ...). Then I split it into train and test. Since I want to determine if an email is spam or not, I have used the common classifiers, i.e. Naive Bayes, SVM and logistic regression. Here I just included my train and test datasets: nothing else! I am using Python to run this analysis. My question is: should it be enough or should I implement new algorithms? If you could provide me with an example of how an already existing algorithm was improved it would also be good. I read a lot of literature regarding accuracy of text classification and in all the papers the authors use SVM, Naïve Bayes, logistic regression to classify spam. But I do not know if they built their own classifier or just used the existing one in Python. Any experience on this? AI: The question mixes two different notions: models (or algorithms) and accuracy. Let me clarify them. A model (or algorithm) is a classification technique and 'accuracy' is one of the ways to evaluate the performance of the models. You can choose any model (Naive Bayes, SVM or other deep learning techniques) to implement your classifier. They are independent of 'accuracy' or 'F1' or any other measure by which you want to test the performance. At first, you shall pre-process the text (remove stopwords, punctuation, etc.), and pre-processing is a choice that says how the data should look before getting into the model. It does influence the model's performance, but not to a great extent when done right. Usually, pre-processing is applied to both the train and test set. Model performance: Once you implement your model, you may want to see how well it generalises (performance on unseen data). So, you shall split the dataset into two halves: training and test set. (Usually most authors split into 3 portions: training set, validation set (to avoid over-fitting) and test set.) You shall train the model with the training set, and the test set is used to evaluate the performance of the model. Model evaluation: Once the model is trained on the training data, predict the labels on the test set. So, you have two sets of labels on the test set: 1: ground truth (the actual labels indicated by the test set) and 2: predicted labels (the labels predicted by the model). Now, use the evaluation metric of your choice (let's assume that you want to choose 'accuracy' as the evaluation metric). Accuracy can simply be calculated as: (#Number of correctly predicted samples / #Total number of samples) * 100, where #Number of correctly predicted samples is the count of samples for which the ground truth label and the predicted label are the same.
H: Not enough memory for operations with Pandas Wes McKinney, the author of Pandas, writes in his blog that "... my rule of thumb for pandas is that you should have 5 to 10 times as much RAM as the size of your dataset. So if you have a 10 GB dataset, you should really have about 64, preferably 128 GB of RAM if you want to avoid memory management problems." I frequently use Pandas with datasets not much smaller than my RAM (16GB). So I wonder, what are some practical implications of these "memory management problems"? Could anyone provide more insights into this? Does it mean it will store data in virtual memory on disk and therefore be very slow?
AI: When Pandas hits its maximum RAM limit it will freeze and kill the process, so there is no performance degradation, just a SIGKILL signal that stops the process completely. Speed of processing has more to do with the CPU and RAM speed, i.e. DDR3 vs DDR4, latency, SSD vs HDD, among other things. Pandas has a strict memory limit, but there are options other than just increasing RAM if you need to process large datasets.
1.- Dask
There are certain limitations in Dask. Dask cannot parallelize within individual tasks. As a distributed-computing framework, Dask enables remote execution of arbitrary code, so Dask workers should be hosted within a trusted network only. A Dask tutorial: https://medium.com/swlh/parallel-processing-in-python-using-dask-a9a01739902a
2.- Jax
3.- Feather Format
Language agnostic, so it's usable in R and Python, and it can reduce the memory footprint of storage in general.
4.- Decreasing memory consumption natively in Pandas
Reducing the number of bits of memory used to encode a column helps, especially when you use tree-based algorithms to process the data later. This is a script popularized by Kaggle (the numpy import is needed and was missing from the original snippet):

import numpy as np
import pandas as pd

def reduce_mem_usage(df, verbose=True):
    numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
    start_mem = df.memory_usage().sum() / 1024**2
    for col in df.columns:
        col_type = df[col].dtypes
        if col_type in numerics:
            c_min = df[col].min()
            c_max = df[col].max()
            if str(col_type)[:3] == 'int':
                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
                    df[col] = df[col].astype(np.int8)
                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
                    df[col] = df[col].astype(np.int16)
                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
                    df[col] = df[col].astype(np.int32)
                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
                    df[col] = df[col].astype(np.int64)
            else:
                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
                    df[col] = df[col].astype(np.float16)
                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
                    df[col] = df[col].astype(np.float32)
                else:
                    df[col] = df[col].astype(np.float64)
    end_mem = df.memory_usage().sum() / 1024**2
    if verbose:
        print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
        print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
    return df
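As a minimal sketch of the Dask option (the file pattern and column name are placeholders): the dataframe is read lazily in partitions and only materialized when you call compute(), so it can process data larger than RAM:

import dask.dataframe as dd

df = dd.read_csv("big_dataset_*.csv")                 # lazy, reads the files in partitions
result = df.groupby("some_column").mean().compute()   # runs out-of-core and in parallel
print(result)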
H: Is a test set necessary after cross validation on training set? I'd like to cite a paragraph from the book Hands-On Machine Learning with Scikit-Learn and TensorFlow by Aurelien Geron regarding evaluating on a final test set after hyperparameter tuning on the training set using k-fold cross validation: "The performance will usually be slightly worse than what you measured using cross validation if you did a lot of hyperparameter tuning (because your system ends up fine-tuned to perform well on the validation data, and will likely not perform as well on unknown datasets). It is not the case in this example, but when this happens you must resist the temptation to tweak the hyperparameters to make the numbers look good on the test set; the improvements would be unlikely to generalize to new data." -Chapter 2: End-to-End Machine Learning Project I am confused because he said that when the test score is WORSE than the cross validation score (on the training set), you should not tweak the hyperparameters to make the testing score better. But isn't that the purpose of having a final test set? What's the use of evaluating a final test set if you can't tweak your hyperparameters when the test score is worse?
AI: In "The Elements of Statistical Learning" by Hastie et al. the authors describe two tasks regarding model performance measurement:
Model selection: estimating the performance of different models in order to choose the best one.
Model assessment: having chosen a final model, estimating its prediction error (generalization error) on new data.
Validation with CV (or a separate validation set) is used for model selection, and a test set is usually used for model assessment. If you did not do model assessment separately you would most likely overestimate the performance of your model on unseen data.
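A minimal scikit-learn sketch of the two roles (X and y are placeholders for your data): the CV score inside the search drives hyperparameter choice, while the held-out test set is scored once at the very end:

from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestRegressor

X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# model selection: hyperparameters are chosen via cross validation on the training data only
search = GridSearchCV(RandomForestRegressor(random_state=0),
                      {"n_estimators": [100, 300], "max_depth": [None, 10]}, cv=5)
search.fit(X_trainval, y_trainval)
print("CV score of best model:", search.best_score_)

# model assessment: the untouched test set is used once to estimate generalization error
print("Test score:", search.score(X_test, y_test))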
H: Is there any model agnostic way to calculate the weight importance for neural networks given a set of inputs? I was curious if it's possible to calculate which weights are important and which weights are redundant (or have high redundancy) for separate tasks in neural networks? And if this is doable in a model agnostic way (maybe also differentiable..)?
AI: You should look into Fisher information matrices. Elastic weight consolidation (EWC) finds the weights which are important for dataset 1 when training on dataset 2 by computing the Fisher matrix, and it slows down the tuning of those weights by adding a regularization term to the loss function. It's non-trivial though, and I had a lot of trouble implementing it in PyTorch. The paper which proposed elastic weight consolidation is here: https://arxiv.org/pdf/1612.00796.pdf
H: High Cross Validation Score on Training Set, High Score on Test Set, But Low Score on Kaggle? I've been trying to complete this regression task on Kaggle. As usual they gave a train.csv (with the response variable) and a test.csv (without the response variable) file for us to train the model and compute our predictions, respectively. I further split the train.csv file into a train_set and a test_set. I use this subsequent train_set to train a list of models which I then shortlist to one model only, based on 10-fold cross validation scores (RMSLE) and after hyperparameter tuning. Now I have one best model, which is Random Forest (with the best hyperparameters) with an average RMSLE score of 0.55. At this point I have NOT touched the test_set. Consequently, when I train the same exact model on the train_set data, but evaluate its result on the test_set (in order to avoid overfitting the hyperparameters I have tuned), it yields an RMSLE score of 0.54. This is when I get suspicious, because my score on the test_set is slightly better than the average score on the train_set (test_set results are supposed to be slightly worse, since the model hasn't seen the test_set data, right?). Finally, I proceed to submit my results using the same model but with the test.csv file (without the response variable). But then Kaggle gave me an RMSLE score of 0.77, which is considerably worse than my cross validation scores and my test_set scores! I am very frustrated and confused as to why this would happen, since I believe I've taken every measure to anticipate overfitting my model. Please give a detailed but simple explanation, I'm still a beginner so I might not understand overly technical terms.
AI: The train/test split you made into train_set and test_set is not guaranteed to be clean or even balanced. When your test set has better performance than your training set, that might mean that you have data leakage (some examples in the test set are equal to examples in the training set), or simply that your test set is slightly easier than the training set.
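As a quick, hedged check for the leakage case, you can count how many rows of your local test split also appear feature-wise in the training split (column and variable names below are placeholders for your own data):

import pandas as pd

feature_cols = [c for c in train_set.columns if c != "target"]   # "target" stands for the response column
dupes = train_set.merge(test_set, on=feature_cols, how="inner")
print(len(dupes), "rows of test_set also appear (feature-wise) in train_set")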
H: Dropping one category for regularized linear models While reviewing sklearn's OneHotEncoder documentation (attached below) I noticed that when applying regularization (e.g., lasso, ridge, etc.) it is not recommended to drop the first category. While I understand why dropping the first category prevents collinearity, I am unsure why it is not recommended for regularized regression. Wouldn't this add an additional dimension that will need to be regularized? drop{‘first’, ‘if_binary’} Specifies a methodology to use to drop one of the categories per feature. This is useful in situations where perfectly collinear features cause problems, such as when feeding the resulting data into a neural network or an unregularized regression. However, dropping one category breaks the symmetry of the original representation and can therefore induce a bias in downstream models, for instance for penalized linear classification or regression models.
AI: When you do plain linear regression with the full set of one-hot columns, you have to leave out one column because the design matrix is singular: the columns are linearly dependent, so we cannot calculate the inverse. But when you apply regularization, it takes care of the singularity: the penalized matrix is almost surely nonsingular. Hence we don't need to drop a column; moreover, if you drop a different column for each feature it could lead to different predictions, as it would introduce a bias. Refer to this.
H: Does finding the slope/intercept using the formula for m, b always give the best fit line in linear regression? In linear regression we have to fit different lines and choose the one with minimum error, so what is the motive of having a formula for m, b that gives the slope and intercept of the regression line when it cannot give the best fit line directly?
1. Consider that I applied the values in the dataset to the formula for m, b and found the regression line yhat = 17.5835x + 6, and for example just assume the error calculated for this line was 3.
2. Consider that I fit another line randomly (I am not using the formula for m, b to find the values of m, b; assume the m, b values for this random line were 16, 3), so my 2nd regression line is yhat = 16x + 3, and for example just assume the error calculated for this line was 1.5.
Linear regression goal: to choose the best fit line that has minimum error, so my second line is better than the first line in this case. What is the point of having a formula which gives values for the slope m and intercept b when it cannot give the best fit line directly? OR is my understanding incorrect: does finding the slope/intercept using the formula for m, b always give the best line?
If yes, then there is no need to try multiple lines, calculate the error, and choose the line with minimum error.
If no, then what is the point of having a formula for the slope m and intercept b when it cannot give the best fit line? Does that mean the maths/stats community needs to change this formula for the slope and intercept?
AI: The formulas you mentioned give the coefficients of the line of best fit. The values are derived using the least squares method, where the goal is to minimize the sum of squared errors. Following is the derivation for the values of m and b.
Let the line of best fit be $$\hat{y} = m*x + b$$
We then try to find the coefficients m and b which minimize the sum of squared errors between the actual value y and the observed value $\hat{y}$.
\begin{align} SSE &= \sum_{i=1}^{n}(y_{i}-\hat{y_{i}})^2 \\ &=\sum_{i=1}^{n}(y_{i}-m*x_{i}-b)^2 \end{align}
Taking the first derivative of SSE with respect to b and equating it to zero:
\begin{align} \frac{\partial SSE}{\partial b} &= \sum_{i=1}^{n}-2*(y_{i}-m*x_{i}-b)\\ 0 &= \sum_{i=1}^{n}-2*(y_{i}-m*x_{i}-b) \end{align}
Therefore we get b as $$ b = \bar{y} - m*\bar{x}$$
Similarly, in order to find m we take the partial derivative of SSE with respect to m and equate it to zero.
\begin{align} \frac{\partial SSE}{\partial m} &= \sum_{i=1}^{n}-2x_{i}*(y_{i}-m*x_{i}-b)\\ 0 &= \sum_{i=1}^{n}-2x_{i}*(y_{i}-m*x_{i}-b)\\ 0 &= \sum_{i=1}^{n}x_{i}*(y_{i}-m*x_{i}-b)\\ 0 &= \sum_{i=1}^{n}x_{i}*y_{i} - \sum_{i=1}^{n}m*x_{i}^2 - \sum_{i=1}^{n}b*x_{i} \end{align}
Substituting b and solving for m we get
$$m = \frac{n\sum xy - \sum x\sum y}{n\sum x^2 - (\sum x)^2}$$
So yes, the closed-form formula does give the best fit line in the least squares sense; no other line can have a lower sum of squared errors. If a randomly chosen line appears to do better, the error must have been measured with a different criterion or computed incorrectly.
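A small numerical check of this closed-form solution (toy data, not from the question), comparing it against numpy's own least squares fit:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # toy data
y = np.array([2.1, 4.3, 6.2, 8.1, 9.9])
n = len(x)

# closed-form least squares estimates
m = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x ** 2) - np.sum(x) ** 2)
b = np.mean(y) - m * np.mean(x)

print(m, b)
print(np.polyfit(x, y, 1))   # numpy's least squares fit returns the same slope and intercept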
H: How to insert two features in a model when a feature only applies to a certain group in the model I'm building a machine learning model in Python to predict soccer player values. Consider the following feature columns of the dataframe:
[features]
position | goals | goals_conceded
---------|-------|---------------
Forward  | 23    | NaN
Defender | 2     | NaN
Defender | 4     | NaN
Keeper   | NaN   | 20
Keeper   | NaN   | 43
Since keepers don't usually score goals, they'll almost always have null values in the "goals" column, but they still can have this statistic, so it would be fine to fill the NaNs with 0. On the other hand, since line players can't have "goals_conceded" stats, they'll also have null values in that column, but in this case, players will never have this statistic, since this is a keeper-only stat. How do I build a machine learning model considering these two columns as features? I thought about putting them together in one single column, but that can't happen, since for a line player, the more goals he scores the better, while for goalkeepers it's the opposite: the fewer goals he concedes the better. I also can't fill the columns with zeros, since it would affect the model prediction in the "goals_conceded" column, for example, since 98% of the rows contain line players' info. This happens with many of the columns in my dataframe, such as "clean sheets" (only keepers will have this stat) and "shots on target" (only line players will have this stat). How do I deal with them?
AI: To me the data in its current form seems wrong for training a single model for all players; it is like trying to tell whether an apple is better than a tennis ball. They have completely different characteristics. What you could do instead is group players with similar features into different sets and train separate models to predict their scores relative to their own feature sets. So, for example, goalkeepers will be compared against other goalkeepers and assigned scores accordingly. After that, you need to set a baseline value for each set and scale the scores of the different classes accordingly.
H: What is the major difference between different dimensionality reduction algorithms? I find many algorithms are used for dimensionality reduction. The more commonly used ones (e.g. on this page) are:
Principal component analysis (PCA)
Factor Analysis
Independent component analysis (ICA)
Non-negative matrix factorization (NMF or NNMF)
Latent Dirichlet Allocation (LDA)
How does one choose between these algorithms?
AI: A detailed answer would require many pages of explanation, but I think a brief answer may point to the right direction for further research. First of all, the choice of dimensionality reduction algorithm depends on the problem and data at hand. There is no golden standard. Your problem requirements dictate the best option(s) to try out.
The main concept of dimensionality reduction is to find alternative representations for the data that have fewer "dimensions" while at the same time keeping most of the original information contained in the data. Right from the start, one sees that there is some trade-off between the space/dimensions used and the information kept. Less space/dimensions also means that less of the original information contained in the data is kept. The trick is to try to leave out redundant and useless information for the problem at hand. This is why the choice of algorithm depends crucially on the problem and data at hand. Then one can indeed reduce the space/dimensions of the data while at the same time keeping all relevant information. In order to do that, and depending on the nature and characteristics of the data, one can try some approaches:
1. PCA and variants
This factors the data into "principal/decorrelated components" and keeps those with most variance and discards the rest (as irrelevant and noise). This decomposes the data according to statistical variance (i.e. 2nd-order statistics are used), usually performed by variants of EVD/SVD on the correlation matrix of the data. PCA is the best, in the mean-square error sense, linear dimension reduction technique.
2. ICA
This factors the data into "independent components", which means that it uses higher-order statistics than 2nd-order, which would imply decorrelation only. However, depending on the algorithm, some data may not work well with ICA, since the sources must not be normally distributed (since for normal random variables decorrelation implies independence as well). Note: PCA is a pre-processing step in most ICA algorithms (e.g. JADE, ..). ICA is a higher-order method that seeks linear projections, not necessarily orthogonal to each other, that are as nearly statistically independent as possible. Statistical independence is a much stronger condition than uncorrelatedness.
3. Dictionary and variants
Note that the above algorithms result in a set of "basic" components which can form the basis for all the data. Like the basis vectors of a vector space, each datum is a specific combination/sample of the basic components. Or, like a dictionary, each datum is a combination/sample of elements from this dictionary. But what if one knows the basic dictionary beforehand for a certain problem? Then one can try to find the optimum representation of each datum wrt this basic dictionary. Or one can try to learn this basic dictionary adaptively using some adaptive learning method.
4. Factor analysis
Also note that the first 2 approaches extract a set of basic factors from the data (i.e. they are equivalent to data factors). But what if one assumes a more general probabilistic setting for extracting (linear or non-linear) data factors (factor analysis)? For example, PCA/ICA can be seen as specific examples of factor analysis where the factors are required to be uncorrelated/independent. This approach is the generic probabilistic form.
5. Other data factorisation methods
One can get the idea: if the data have certain properties that can be exploited in learning the optimum minimum dimension for representation, one can try variations of data factorisation methods exploiting those data properties.
6. Unsupervised clustering methods
Being able to find optimum representations of data automatically is a great advantage. Unsupervised clustering algorithms can be used to this end, in the sense that they can try to cluster the data in an unsupervised manner (no prior information given), and then the cluster representatives can be chosen as the dictionary or basis factors that best represent the data as a whole. This results in dimensionality reduction (e.g. k-means, vector quantisation, ..).
References for further research:
A survey of dimension reduction techniques
Advances in data collection and storage capabilities during the past decades have led to an information overload in most sciences. Researchers working in domains as diverse as engineering, astronomy, biology, remote sensing, economics, and consumer transactions face larger and larger observations and simulations on a daily basis. Such datasets, in contrast with smaller, more traditional datasets that have been studied extensively in the past, present new challenges in data analysis. Traditional statistical methods break down partly because of the increase in the number of observations, but mostly because of the increase in the number of variables associated with each observation. The dimension of the data is the number of variables that are measured on each observation. High-dimensional datasets present many mathematical challenges as well as some opportunities, and are bound to give rise to new theoretical developments. One of the problems with high-dimensional datasets is that, in many cases, not all the measured variables are "important" for understanding the underlying phenomena of interest. While certain computationally expensive novel methods can construct predictive models with high accuracy from high-dimensional data, it is still of interest in many applications to reduce the dimension of the original data prior to any modeling of the data.
A survey of dimensionality reduction techniques
Experimental life sciences like biology or chemistry have seen in the recent decades an explosion of the data available from experiments. Laboratory instruments become more and more complex and report hundreds or thousands of measurements for a single experiment, and therefore the statistical methods face challenging tasks when dealing with such high-dimensional data. However, much of the data is highly redundant and can be efficiently brought down to a much smaller number of variables without a significant loss of information. The mathematical procedures making this reduction possible are called dimensionality reduction techniques; they have widely been developed by fields like Statistics or Machine Learning, and are currently a hot research topic. In this review we categorize the plethora of dimension reduction techniques available and give the mathematical insight behind them.
H: Semantic networks: word2vec? I have some doubts on how to represent the relationships between words in texts. Let’s suppose I have two sentences like these: Angela Merkel is a German politician who has been Chancellor of Germany since 2005. What I would expect is a connection between name Angela and Merkel (Angela is the name, Merkel the surname) who is German and politician and Chancellor. I read about the use of word2vec to determine the semantic structure of a sentence. My question is therefore if this model can allow me to determine such semantic structure or if another method would be better. AI: There are a few models that are trained to analyse a sentence and classify each token (or recognise dependencies between words). Part of speech tagging (POS) models assign to each word its function (noun, verb, ...) - have a look at this link Dependency parsing (DP) models will recognize which words go together (in this case Angela and Merkel for instance) - check this out Named entity recognition (NER) models will for instance say that "Angela Merkel" is a person, "Germany" is a country ... - another link
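If you want to try these out quickly, spaCy bundles all three kinds of model in one pipeline (the en_core_web_sm model must be downloaded separately; swap in a model for your own language if it is not English):

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Angela Merkel is a German politician who has been Chancellor of Germany since 2005.")

for token in doc:
    print(token.text, token.pos_, token.dep_, token.head.text)   # POS tag and dependency arc
for ent in doc.ents:
    print(ent.text, ent.label_)                                   # named entities, e.g. PERSON, GPE, DATE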
H: Estimating average daily consumption with samples randomly scattered in time I want to estimate my daily water consumption. I have taken pictures of the water meters (total m3 used since last reset) every now and then, but without any regularity. There can be a difference of a few days to several weeks between samples. What would be the best way to estimate this? I have thought of the following approaches: Create a double-entry table with the sample dates in the column and in the row headers. Each cell is the average consumption per day between the corresponding two dates. Finally, average all the cells in the table. Calculate the daily consumption between every two consecutive samples. Finally, average all of them. It seems to me that the first one would give a better estimation given the higher number of samples compared, but I am not sure if this is valid. AI: If you only want the average daily consumption over the whole period of time, you can simply calculate the difference between the last and first reading and divide by the total number of days. As far as I understand your explanation: your method 1 would not give you the true average over the period, unless you multiply each individual average by a weight corresponding to the number of days in the period. your method 2 gives the true average, since it would represent every day individually (if I understand correctly) However if you want to be able to observe the variations across time while smoothing some of the irregularities, you could: calculate the daily average for every day like in method 2. calculate a rolling average over a fixed number of days for every day. This is not perfect if there are long periods with no reading, but it should show the general trend better. Plot a graph based on rolling average.
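A hedged pandas sketch of both the overall average and the rolling view (the readings and dates below are made up):

import pandas as pd

readings = pd.Series(
    [1203.4, 1210.1, 1225.8, 1231.0],   # meter values in m3
    index=pd.to_datetime(["2021-01-01", "2021-01-05", "2021-01-20", "2021-01-24"]))

# overall average: last minus first reading, divided by the number of days in between
overall = (readings.iloc[-1] - readings.iloc[0]) / (readings.index[-1] - readings.index[0]).days
print(overall)

# daily series by linear interpolation, then a smoothed rolling average of daily usage
daily = readings.resample("D").mean().interpolate()
smoothed = daily.diff().rolling("14D").mean()
print(smoothed.tail())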
H: Predicting time series data I have a dataset as follows (this is test case 1). My goal is to fill in the missing years of data. Since the age, sex and smoking values do not change, I have to predict the condition and percent data for years 0 all the way to 54. I found a high correlation between the condition and percent variables. This seems easy, but I am a bit confused now. Do I have to use multivariable regression? What would be the best method to approach this?
AI: The best approach would be to perform data preparation first:
Remove features (columns) with no variance in them (you could use sklearn's feature_selection)
One-hot-encode the categorical features
Insert a lag column of -t steps
If you have more than one explanatory variable, the process is called multiple linear regression. Instead of using a regression model you could also use other learners like XGBoost or LSTMs.
H: Train a model to determine that the probability of an event given a set of features is higher than when given a different set of features I have a data set of attempted phone calls. I have a set of features, say, hour of day, and zip code. I have a label indicating whether the callee picked up the phone or not. I want a model to predict the probability of a phone pick up given a instance's feature set My difficulty is that I'm not interested in predicting whether the phone will be picked up or not, which would fit into a standard binary classification model, because I do not expect there to be very strong correlation between the features and the event. I'm merely hoping to discover that there is some boost in probability in a pick up for instance given its feature set. Then I could use that to prioritize phone numbers to attempt calling. I don't think this fits neatly into a binary classifier model. What techniques/models can I look into for this problem. Specifically, I'm looking for a model type to train with the data, that I can evaluate on a test set, and that will hopefully get better with more data. I'm pretty new to this, as I'm sure you can tell, so any help would be greatly appreciated. AI: I'm not sure you need to depart from the binary classification paradigm. If you train a binary classification model using whether or not the phone is picked up as a label, then the trained model will end up sending inputs in the feature space into a mostly monotonic transform of the "actual" probability distribution (the transform will depend on your loss function and sample size). As long as you only care about ordinal optimization (i.e., you aren't bound by significant constraints in the feature space), then you could just utilize an optimization package, using your trained model as the function to be optimized and your feature space as the support. Maybe consider Scipy's optimize package for python and JuMP for julia. If you want to optimize according to a relevant constraint (maybe one of your features costs money, etc), things might get tricky — you probably would need to use a |y_true-y_est| loss function along with a large sample size in order to push the implicit transform image toward the actual probability distribution, and the former could make convergence difficult. If this is the case, I'm not sure trying a distribution-based approach (like you seem to be hinting at) would be worth it — you may just want to bake your constraints into the ML loss function.
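In practice the ranking itself can come straight from a standard classifier's probability estimates; a hedged sketch (X_train, y_train and X_candidates are placeholders for your call records):

from sklearn.ensemble import GradientBoostingClassifier

# y_train is 1 if the callee picked up, 0 otherwise
clf = GradientBoostingClassifier().fit(X_train, y_train)

# even if the absolute probabilities are only weakly calibrated, their ordering can prioritize calls
scores = clf.predict_proba(X_candidates)[:, 1]
priority_order = scores.argsort()[::-1]   # call the highest-scoring numbers first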
H: What is an autoencoder? I am a student and I am studying machine learning. I am focusing on deep generative models, and in particular on autoencoders and variational autoencoders (VAE). I am trying to understand the concept, but I am having some problems. So far, I have understood that an autoencoder takes an input, for example an image, and wants to reduce this image into a latent space, which should contain the underlying features of the dataset, with an operation of encoding; then, with an operation of decoding, it reconstructs the image, which has lost some information due to the encoding part. After this, with a loss function, it reconstructs the latent space and so gets the latent features. About the VAE: it uses a probabilistic approach, so we have to learn the mean and covariance of a Gaussian. So far this is what I have understood. What I really have unclear is what we are trying to learn with autoencoders and VAEs. I have seen examples where an image goes from a non-smiling to a smiling face, or from a black and white image to a colored image. But I don't understand the main concept, which is: what does an autoencoder do? I add here some sources where I studied, so that whoever needs them can see them: https://hackernoon.com/autoencoders-deep-learning-bits-1-11731e200694 https://www.youtube.com/watch?v=yFBFl1cLYx8 https://www.youtube.com/watch?v=9zKuYvjFFS8
AI: what does an auto-encoder do? The simplest auto-encoder takes a high dimensional image (say, 100K pixels) down to a low-dimensional representation (say, a vector of length 10) and then uses only those 10 features to try to reconstruct the original image. You can imagine an analogy with humans: I look at someone, describe them ("tall, dark-haired, ...") then after I've forgotten what they look like, I try to sketch them using only my notes.
what are we trying to learn? In other words, why bother? A few reasons:
dimensionality reduction: 10 features are a lot more convenient than 100K pixels. For example, I can perform classification by clustering in the 10-dimensional space (while clustering in the 100K-dimensional space would be intractable).
semantic meaning: if all goes well, each of the 10 features will have some obvious "explanation" -- e.g., tweaking one value will make the subject look older (though it's normally not so simple). As opposed to pixel values, which are impacted by translation, rotation, etc.
Exception recognition: if I train my auto-encoder on dogs, it should normally do a good job encoding and decoding pictures of dogs. But if I put a cat in, it will probably do a terrible job -- which I can tell because the output looks nothing like the input. So, looking for places where an auto-encoder does a bad job is a common way to look for anomalies.
I have seen examples where an image goes from a non-smiling to a smiling face, or from a black and white image to a colored image.
There are many different types of auto-encoders. What I described above is the simplest kind. Another common type is a "denoising" auto-encoder -- instead of reconstructing the original image, the goal is to construct an image that is related to the original image, but different. The classic example of this is denoising (hence the name): you can take a clean image, add a bunch of noise, run it through an auto-encoder, and then reward the auto-encoder for producing the clean image. So, the input (noisy image) is actually different from the desired output (clean image). The examples you give are similar.
The challenge in designing these types of auto-encoders is normally the loss -- you need some mechanism to tell the auto-encoder whether it did the right thing or not. about the VAE, it uses a probabilistic approch, so we have to learn the mean and covariance of a gaussian. A VAE is a third type of auto-encoder. It's a bit special because it is well-grounded mathematically; no ad-hoc metrics needed. The math is too complicated to go through here, but the key ideas are that: We want the latent space to be continuous. Rather than assigning each class to its own corner of the latent space, we want the latent space to have a well-defined, continuous shape (i.e., a Gaussian). This is nice because it forces the latent space to be semantically meaningful. The mapping between pictures and latent spaces should be probabilistic rather than deterministic. This is because the same subject can produce multiple images. So, the workflow is this: You start with your image as before As before, your encoder determines a vector (say, length 200). But that vector is not a latent space. Instead, you use that vector as the parameters to define a latent space. For example, maybe you choose your latent space to be a 100-dimensional Gaussian. A 100-dimensional Gaussian will require a mean and a standard deviation in each dimension -- this is what you use your length-200 vector for. Now you have a probability distribution. You sample one point from this distribution. This is your image's representation in the latent space. As before, your decoder will turn this vector into a new "output" (say, a vector of length 200K). But, this "output" is not your output image. Instead, you use these 200K parameters to define a 100K-dimensional Gaussian. Then you sample one point from this distribution -- that's your output image. Of course, there's nothing special about a Gaussian, you could just as easily use some other parametric distribution. In practice, people usually use Gaussians. This sometimes gives better results than other auto-encoders. Further, you sometimes get interesting results when you look between the classes in your latent space. An image's distance in the latent space from the cluster center is sometimes related to uncertainty. Moreover, there is the nice property that these high-dimensional Gaussians are probability distributions in a rigorous mathematical sense. They approximate the probability that a given image belongs to a given class. So, there is some thought that VAEs will be able to overcome the "hand waving" of deep learning and put everything back on a firm Bayesian probabilistic grounding. But of course, it is only an approximation, and the approximation involves a lot of deep neural networks, so there is still plenty of hand waving at the moment. By the way, I like to use this question during interviews -- an astonishing number of people claim to have experience with VAEs but in fact do not realize that VAEs are different than "regular" AEs.
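To make the "sample one point from this distribution" step concrete, here is a minimal, illustrative sketch of the reparameterization trick that VAE implementations typically use so that sampling from the encoder's Gaussian stays differentiable (function name is my own):

import tensorflow as tf

def sample_latent(mu, log_var):
    # z = mu + sigma * eps, with eps ~ N(0, I); gradients can flow through mu and log_var
    eps = tf.random.normal(shape=tf.shape(mu))
    return mu + tf.exp(0.5 * log_var) * eps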
H: Categorical cross-entropy works wrong with one-hot encoded features I'm struggling with a categorical_crossentropy problem with one-hot encoded data. The problem is the unchanging output of the code presented below:

inputs = keras.Input(shape=(1190,), sparse=True)
lay_1 = layers.Dense(1190, activation='relu')
x = lay_1(inputs)
x = layers.Dense(10, activation='relu')(x)
out = layers.Dense(1, activation='sigmoid')(x)
self.model = keras.Model(inputs, out, name='SimpleD2Dense')
self.model.compile(
    optimizer=keras.optimizers.Adam(),
    loss=tf.losses.categorical_crossentropy,
    metrics=['accuracy']
)

Epoch 1/3 1572/1572 - 6s - loss: 5.7709e-08 - accuracy: 0.5095 - val_loss: 7.0844e-08 - val_accuracy: 0.5543
Epoch 2/3 1572/1572 - 6s - loss: 5.7709e-08 - accuracy: 0.5095 - val_loss: 7.0844e-08 - val_accuracy: 0.5543
Epoch 3/3 1572/1572 - 7s - loss: 5.7709e-08 - accuracy: 0.5095 - val_loss: 7.0844e-08 - val_accuracy: 0.5543

A few words about the data: 1190 features (10 actual features with 119 categories each). The inputs are dataframe rows with 1190 values per sample. The output is a binary value, 0 or 1. Attempts made before: binary_crossentropy was used with satisfying results; however, the number of samples is not enough to get good results on the validation data. I tried to use different activations and layer sizes. The main question is why categorical_crossentropy is not working and how to use it the right way. Also, one concern appears about the data representation: is it right to feed it as one sparse row of straightforward one-hot encoded data?
AI: For it to work:
Change the output neuron count to 2
Change the activation of the output layer to softmax
Keep the full one-hot-encoded vectors for the output
This is how Keras is designed internally. The same is written on the official documentation page:
BinaryCrossentropy class: Use this cross-entropy loss when there are only two label classes (assumed to be 0 and 1). For each example, there should be a single floating-point value per prediction. In the snippet below, each of the four examples has only a single floating-point value, and both y_pred and y_true have the shape [batch_size].
CategoricalCrossentropy class: The shape of both y_pred and y_true is [batch_size, num_classes].
And we know that to keep the classification multi-class you need to make all the num_classes outputs relative to each other, so we use softmax.
Ref: Keras official page, Similar SE thread, Similar SE thread
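As a sketch of the two consistent setups (reusing the tensors from the question; treat this as illustrative rather than a drop-in fix):

# Option 1: two softmax outputs + categorical cross-entropy, labels one-hot encoded as [1, 0] / [0, 1]
out = layers.Dense(2, activation='softmax')(x)
model = keras.Model(inputs, out)
model.compile(optimizer=keras.optimizers.Adam(),
              loss=keras.losses.CategoricalCrossentropy(),
              metrics=['accuracy'])

# Option 2: keep the single sigmoid output, but switch the loss; labels stay plain 0/1
out = layers.Dense(1, activation='sigmoid')(x)
model = keras.Model(inputs, out)
model.compile(optimizer=keras.optimizers.Adam(),
              loss=keras.losses.BinaryCrossentropy(),
              metrics=['accuracy'])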
H: Setting sparse=True in Scikit Learn OneHotEncoder does not reduce memory usage I have a dataset that consists of 85 feature columns and 13195 rows. Approximately 50 of these features are categorical features which I encoded using OneHotEncoder. I was reading this article about sparse data sets and was intrigued to see how changing the value of the sparse parameter when defining a OneHotEncoder object may reduce memory usage for my dataset. Before applying OneHotEncoding to the categorical features in my dataset, I have a memory usage of 9.394 MB. I found this by running this code:

BYTES_TO_MB_DIV = 0.000001
def print_memory_usage_of_data_frame(df):
    mem = round(df.memory_usage().sum() * BYTES_TO_MB_DIV, 3)
    print("Memory usage is " + str(mem) + " MB")

print_memory_usage_of_data_frame(dataset)

Setting OneHotEncoder sparse=True:

from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline

numeric_transformer = Pipeline(steps=[
    ('knnImputer', KNNImputer(n_neighbors=2, weights="uniform")),
    ('scaler', StandardScaler())])

categorical_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
    ('onehot', OneHotEncoder(handle_unknown='ignore', sparse=True))])

preprocessor = ColumnTransformer(transformers=[
    ('num', numeric_transformer, selector(dtype_exclude="object")),
    ('cat', categorical_transformer, selector(dtype_include="object"))
])

Z = pd.DataFrame(preprocessor.fit_transform(X))
print_memory_usage_of_data_frame(Z)

Memory usage is 25.755 MB
Then running the same code above but setting sparse=False like so: OneHotEncoder(handle_unknown='ignore', sparse=False)
Memory usage is 25.755 MB
According to the linked article, which used the sparse option in pandas get_dummies, this should result in reduced memory storage; is this not the same for Scikit Learn's OneHotEncoder?
AI: Based on @BenReiniger's comment, I removed the numeric portion from the ColumnTransformer and ran the following code:

from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline

categorical_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
    ('onehot', OneHotEncoder(handle_unknown='ignore', sparse=True))])

preprocessor = ColumnTransformer(transformers=[
    ('cat', categorical_transformer, selector(dtype_include="object"))
])

X = pd.DataFrame(preprocessor.fit_transform(X))
print_memory_usage_of_data_frame(X)

The result was: Memory usage is 0.106 MB.
Running the same code above but with the sparse option set to False: OneHotEncoder(handle_unknown='ignore', sparse=False) resulted in: Memory usage is 20.688 MB.
So it is clear that changing the sparse parameter in OneHotEncoder does indeed reduce memory usage.
H: Understanding how convolutional layers work After working with a CNN using Keras and the Mnist dataset for the well-know hand written digit recognition problem, I came up with some questions about how the convolutional layer work. I can understand what the convolution process is. My first question is: What are the filters? I can understand their purpose. They are used to map edges, shapes, etc. on an image. But how are they getting initialized? Do they have random initial value or there are standard image filters that are getting used? If they are getting initialized with random value then the values should get changed on the training process of the network. If that's the case then a new question is created, how does someone backpropagate the filter of the convolutional layer? What is the algorithm behind this process? Secondly, I have noticed that I can add an activation function to the convolutional layer in Keras. Is the entire matrix of the output getting passed through the activation function? How does the usage of an activation function changes the learning process of the convolutional layer? Last but not least, does a convolutional layer have weight and biases like a dense layer? Do we multiply the output matrix after the convolution process with a weight matrix and add some biases before passing it through the activation function? If that's true, then do we follow the same process as we do with the dense layers to train these weights and biases? AI: What are the filters? A filter/kernel is a set of learnable weights which are learned using the backpropagation algorithm. You can think of each filter as storing a single template/pattern. When you convolve this filter across the corresponding input, you are basically trying to find out the similarity between the stored template and different locations in the input. But how are they getting initialized? Do they have random initial value or there are standard image filters that are getting used? Filters are usually initialized at a seemingly arbitrary value and then you would use a gradient descent optimizer to optimize the values so that the filters solve your problem. There are many different initialization strategies. Sample from a distribution, such as a normal or uniform distribution Set all values to 1 or 0 or another constant There are also some heuristic methods that seem to work very well in practice, a popular one is the so-called glorot initializer named after Xavier Glorot who introduced them here. Glorot initializers also sample from distribution but truncate the values based on the kernel complexity. For specific types of kernels, there are other defaults that seem to perform well. See for example this article. If they are getting initialized with random value then the values should get changed on the training process of the network. If that's the case then a new question is created, how does someone backpropagate the filter of the convolutional layer? What is the algorithm behind this process? Consider the convolution operation just as a function between the input image and a matrix of random weights. As you optimize the loss function of your model, the weights (and biases) are updated such that they start forming extremely good discriminative spacial features. That is the purpose of backpropogation, which is performed with the optimizer that you defined in your model architecture. 
Mathematically there are a few more concepts that go into how the backprop happens on a convolution operation (full conv with 180 rotations). If you are interested then check this link. Is the entire matrix of the output getting passed through the activation function? How does the usage of an activation function change the learning process of the convolutional layer? Let's think of activation functions as just non-linear "scaling" functions. Given an input, the job of an activation function is to "squish" the data into a given range (example -> Relu 'squishes' the input into a range(0,inf) by simply setting every negative value to zero, and returning every positive value as is) Now, in neural networks, activations are applied at the nodes which apply a linear function over the input feature, weight matrix, and bias (mx+c). Therefore, in the case of CNN, it's the same. Once your forward-pass takes the input image, does a convolution function over it by applying a filter (weight matrix), adds a bias, the output is then sent to an activation function to 'squish' it non-linearly before taking it to the next layer. It's quite simple to understand why activations help. If I have a node that spits out x1 = m0*x0+b0 and that is then sent to another node which spits out x2 = m1*x1+b1, the overall forward pass is just x2 = m1*(m0*x0+b0)+b1 which is the same as x2 = (m1*m0*x0) + (m1*b0+b1) or x2 = M*x0 + B. This shows that just stacking 2 linear equations gives another linear equation and therefore in reality there was no need for 2 nodes, instead I could have just used 1 node and used the new M and B values to get the same result x2 from x0. This is where adding an activation function helps. Adding an activation function allows you to stack neural network layers such that you can explore the non-linear model space properly, else you would only be stuck with the y=mx+c model space to explore because all linear combinations of linear functions is a linear model itself. Does a convolutional layer have weight and biases like a dense layer? Yes, it does. Its added after the weight matrix (filter) is applied to the input image using a convolution operation conv(inp, filter) Do we multiply the output matrix after the convolution process with a weight matrix and add some biases before passing it through the activation function? A dot product operation is done between a section of the input image and the filter while convolving over the larger input image. The output matrix, is then added with bias (broadcasting) and passed through an activation function to 'squish'. If that's true, then do we follow the same process as we do with the dense layers to train these weights and biases? Yes, we follow the exact same process in forward pass except that there is a new operation added to the whole mix, which is convolution. It changes the dynamics especially for the backward pass but in essence, the overall intuition remains the same. The crux for intuition is - Do not confuse a feature and a filter. A filter is what helps you to extract features (basic patterns) from the input image using operations such as dot, conv, bias and activations Each filter allows you to extract a 2D map of some simple pattern that exists over the image (such as an edge). If you have 20 filters, then you will get 20 feature maps for a 3 channel image, that are stacked as channels in the output. 
Many such features, which capture different simple patterns, are learnt as part of the training process and become the base features for the next layer (which could be another CNN or a dense) Combinations of these features allow you to perform your modeling task. The filters are trained by optimizing towards minimizing a loss function using backprop. It follows the backward reasoning: - How can I minimize my loss? - How can I find the best features that minimize the loss? - How can I find the best filters that generate the best features? - What are the best weights and biases which give me the best filters? Here's a good reference image to keep in mind whenever working with CNNs (just to reinforce the intuition) Hope that answers your questions.
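To reinforce the point that filters are just learnable weight tensors with one bias per filter, here is a small Keras check (the layer sizes are arbitrary):

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Conv2D(20, (3, 3), activation='relu', input_shape=(28, 28, 3))
])
kernel, bias = model.layers[0].get_weights()
print(kernel.shape)   # (3, 3, 3, 20): twenty 3x3x3 kernels, randomly initialized (Glorot uniform by default)
print(bias.shape)     # (20,): one bias per filter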
H: Build Deep Belief Autoencoder for Dimensionality Reduction I'm working with a large dataset (about 50K observations x 11K features) and I'd like to reduce the dimensionality. This will eventually be used for multi-class classification, so I'd like to extract features that are useful for separating the data. Thus far, I've tried PCA (performed OK with an overall accuracy in Linear SVM of about 70%), LDA (performed with very high training accuracy of about 96% but testing accuracy was about 61%), and an autoencoder (3 layer dense encoder with 13000 - 1000 - 136 units, respectively, which performed about the same as PCA). I've been asked to try a Deep Belief Network (stack of Restricted Boltzmann Machines) on this problem. Thus far, I foresee two challenges. First, I have access to a GPU that can be used, but I don't see many implementations of DBNs from the major players in the neural net community (e.g., TensorFlow/Keras, PyTorch), which means that this will need to be implemented on a CPU, bringing up challenge number two. Second, implementing this will require significant memory and will be pretty slow. This brings up my question: Are there any implementations of DBN autoencoders in Python (or R) that are trusted and, optimally, utilize the GPU? If not, what is the preferred method of constructing a DBN in Python? Should I use sklearn?
AI: Unlike autoencoders, Boltzmann machines (restricted or not) do not have an output layer and are thus classified as deep generative models. There is a variety of implementations in PyTorch. This one is GPU compatible (https://github.com/GabrielBianconi/pytorch-rbm) and I have found it particularly helpful in the past. RBMs can come in quite handy in a variety of tasks such as:
Dimensionality reduction
Collaborative filtering for recommender systems
Feature learning
and others. This was an interesting read in case you want to find out more about RBMs: https://heartbeat.fritz.ai/guide-to-restricted-boltzmann-machines-using-pytorch-ee50d1ed21a8
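Regarding the sklearn question: scikit-learn does ship a BernoulliRBM (CPU-only, so it will be slow at this scale), and two of them can be stacked in a pipeline as a rough, greedily trained DBN-style feature extractor. A hedged sketch with placeholder layer sizes:

from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

dbn_features = Pipeline([
    ('scale', MinMaxScaler()),   # BernoulliRBM expects inputs in [0, 1]
    ('rbm1', BernoulliRBM(n_components=1000, learning_rate=0.01, n_iter=20, verbose=True)),
    ('rbm2', BernoulliRBM(n_components=136, learning_rate=0.01, n_iter=20, verbose=True)),
])
X_reduced = dbn_features.fit_transform(X)   # X stands for your 50K x 11K matrix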
H: What common/simple problem would work well as a web app? Context I'm currently writing a simple tutorial to demonstrate a tool to data scientists and analysts that turns Jupyter Notebooks into web apps. Basically, it discusses setting up the web app as a front end, running some code in the notebook and then returning data to the web app. Question My question is, what is a small interesting problem in data science that I could solve in the notebook? I'm looking for something more interesting than doubling an input but smaller/simpler than building a computer vision model. Additional information As you can probably tell, I am new to data science. Apologies if this is the wrong forum for this type of question. Here is the version with a non-interesting problem being solved in the notebook, it may provide more context if needed. Thanks in advance for any help. AI: Maybe you could try solving easy classification problems like with Iris Dataset or Titanic Dataset. You'll find many tutorials dealing with those subjects, and they are basic and famous exercices for someone starting in Data Science.
H: Reduce the risk of numerical underflow We use the log-likelihood (called lambda) to reduce the risk of numerical underflow (in the context of sentiment analysis using Naive Bayes). What does "reduce the risk of numerical underflow" mean?
AI: Arithmetic underflow can happen if the result of a calculation is a number smaller in absolute value than the computer can actually represent as a fixed-length (fixed-precision) binary number. Instead of returning the actual result of the calculation, the computer returns a zero. Arithmetic underflow can happen in many statistical and machine learning models when many small likelihoods are multiplied together. Taking the log of each likelihood (and summing the logs instead of multiplying the likelihoods) keeps the values in a representable range, so arithmetic underflow is less likely to happen.
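A tiny numerical illustration of the difference (made-up likelihood values):

import numpy as np

probs = np.full(1000, 1e-5)      # 1000 small word likelihoods
print(np.prod(probs))            # 0.0 -> underflow; the true value 1e-5000 is too small for float64
print(np.sum(np.log(probs)))     # about -11512.9, still perfectly representable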
H: best approach to embed random length sequences of words as a fixed size vector without having a maximum length? I have a dataset of sentences in a non-English language like: word1 word2 word3 word62 word5 word1 word2 and the length of each sentence is not fixed. Now, I want to represent each sentence as a fixed-size vector and give it to my model, and I want to keep as much information as possible in the embedding, and I don't want to have a maximum length for sentences because important information might appear at the end. The only two approaches I can think of so far are:
Convert the words to one-hot vectors and add them
Convert the words to word embeddings and then add them
Is there any better way? What is the best approach to represent a variable-length sentence without losing information from it (unlike having a maximum length for each sentence; I want all the words in the sentence to affect the embedding)?
AI: Maybe the Universal Sentence Encoder might work for you, if it can encode words in the language you want. https://tfhub.dev/google/universal-sentence-encoder/4
H: Python - accessing dictionary values for math operations I have this dictionary:

stocks = {'FB': 255, 'AAPL': 431, 'TSLA': 1700}

and this script:

shares = input('How many shares? ')
stock = input('What\'s the stock? ')

for name in stocks.keys():
    ticker = (stocks[name])
    if name == stock:
        print('The price for', stock, 'is', ticker)
amount = int(ticker) * int(shares)
print(amount)

I wanted to access the value of the key selected in the "stock" input and multiply the chosen ticker by the number of shares, but it always multiplies TSLA's value 1700 by the number of shares selected, even if I choose FB as the stock input. The right output should be:
How many shares? 10
What's the stock? FB
2550
Instead I get 17000. I'd like to understand why. Thank you!
AI: The reason is that ticker = (stocks[name]) is outside the if-statement. Since the loop does not stop when name == stock, it continues to loop until the end of stocks, so after the loop name is the last key in the dict and ticker is its value. One of multiple solutions is to move it within the if-statement:

shares = input('How many shares? ')
stock = input('What\'s the stock? ')

for name in stocks.keys():
    if name == stock:
        ticker = (stocks[name])
        print('The price for', stock, 'is', ticker)
        amount = int(ticker) * int(shares)
        print(amount)

This gives:
How many shares? 10
What's the stock? FB
The price for FB is 255
2550
And you get the same output even without a loop:

shares = input('How many shares? ')
stock = input('What\'s the stock? ')

ticker = (stocks[stock])
print('The price for', stock, 'is', ticker)
amount = int(ticker) * int(shares)
print(amount)

This gives:
How many shares? 10
What's the stock? FB
The price for FB is 255
2550
H: Does LSTM without delayed inputs work as a deep net? I want to predict a multivariate time series. My time series is $a_1(t),...,a_k(t)$ and I want to predict $a_k(t)$. I use the following Keras LSTM:

model = Sequential()
model.add(LSTM(90, return_sequences=True, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(LSTM(90))
model.add(Dense(1))

I use $a_1(t),...,a_{k-1}(t)$ as input and $a_k(t)$ as output to train it. So I don't use delayed inputs like $a_s(t-l)$. My question is: in this situation, does the LSTM work as a deep neural network? I.e., is it the same as the following Keras net?

model.add(Dense(90, input_dim=12))
model.add(Dense(90))
model.add(Dense(1))

AI: LSTMs are RNNs with a memory cell. It can be difficult to train standard RNNs to solve problems that require learning long-term temporal dependencies. LSTM units include a 'memory cell' that can maintain information in memory for long periods of time. A set of gates is used to control when information enters the memory, when it's output, and when it's forgotten. This architecture lets them learn longer-term dependencies. So without the delayed inputs an LSTM is simply an RNN. As you can see here, an RNN has a recurrent connection on the hidden state. This looping constraint ensures that sequential information is captured in the input data. Check this for more details.
H: Avoiding Overfitting with a large LSTM net on a small amount of data I'm reposting this question from AI.SE here as I think it was maybe off-topic for AI.SE...
1. Context
I'm studying health-monitoring techniques, and I practice on the C-MAPSS dataset. The goal is to predict the Remaining Useful Life (RUL) of an engine given sensor measurement series. There's a wide literature on the C-MAPSS dataset, including both classical (non-DL) ML techniques and DL-based approaches. A few years ago, LSTM-based networks showed promising results (see Long Short-Term Memory Network for Remaining Useful Life estimation, Zheng et al, 2017), and I'm trying to reproduce these results. The C-MAPSS dataset contains a low amount of data. The FD001 subset has, for instance, only 100 run-to-failure series. When I pre-process it to get fixed-length time series, I can get up to ~20 000 framed series. In the article mentioned above using LSTM, they use two hidden LSTM layers with 64 units each, and two fully-connected layers with 8 neurons each (~55 000 parameters).
2. Problem
LSTMs induce a great number of parameters, so overfitting may be encountered when training such a network. I can use L1 or L2 regularization or dropout, but the net will still be largely oversized relative to the dataset. Keeping the same architecture, I can't reach the scores and RMSE of the paper on the validation set, and overfitting is always there. However, one thing that works is reducing the number of units of the LSTM layers. Expectedly, with only 24 units instead of 64 per layer, the net has far fewer parameters (~9000), and it presents no overfitting. The scores and RMSE are a bit worse than the ones in the paper, but it's the best I can get so far. Although these results are fine for me, I'm curious about how it was possible for the authors of the paper to avoid overfitting on their LSTM(64,64) net.
3. Question
LSTMs are great, but they induce a lot of parameters that hinder correct learning on a small dataset: I wonder if there is any method to tackle this specific issue. Would you have any advice on how to avoid overfitting with an LSTM-based net on a small dataset?
4. Infos
I provide here below more infos about my net and results:
Network architecture

model = keras.models.Sequential([
    keras.layers.LSTM(24, return_sequences=True, kernel_regularizer=keras.regularizers.l1(0.01),
                      input_shape=input_shape),
    keras.layers.Dropout(0.2),
    keras.layers.LSTM(24, return_sequences=False, kernel_regularizer=keras.regularizers.l1(0.01)),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(8, activation='relu', kernel_regularizer=keras.regularizers.l2()),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(8, activation='relu', kernel_regularizer=keras.regularizers.l2(),
                       bias_regularizer=keras.regularizers.l2()),
    keras.layers.Dense(1, activation='relu')
])

Scores (validation set)
Paper: Score = 16.14; RMSE = 338
My LSTM(64, 64): Score = 26.47; RMSE = 3585 (overfits)
My LSTM(24, 24): Score = 16.82; RMSE = 515
Edit: Results for the solution proposed by @hH1sG0n3
LSTM(64, 64) with recurrent_dropout=0.3: Score = 16.36; RMSE = 545
AI: You may want to check a couple of hyperparameters that it appears you are not testing for in your code above:
Gradient clipping: large updates to weights during training can cause a numerical overflow or underflow, often referred to as "exploding gradients."

# configure your optimizer with gradient norm clipping
opt = SGD(lr=0.01, momentum=0.9, clipnorm=1.0)

Recurrent dropout: dropout that is applied to the recurrent input signal of the units of the LSTM layer.

keras.layers.LSTM(24, kernel_regularizer=keras.regularizers.l1(0.01), ..., recurrent_dropout=0.3)

Stateful: is it clear from the paper whether the model retains its state with each training iteration? You can experiment with this as well.

keras.layers.LSTM(24, kernel_regularizer=keras.regularizers.l1(0.01), ..., stateful=True)
H: Image multi class classifier CNN I have a problem: I'm designing a multi-class classifier to classify medical images by disease grade. There are 6 grades; at each grade the joint deforms a little more, so my original dataset was imbalanced, with each class having roughly 16 to 200 images that are very similar except for small details. I applied contrast adjustments and flips to reach about 500 images per class. I used a VGG16 architecture to train the model, but it doesn't work well (about 30% accuracy). With a simple model with 2 convolution layers I get about 70% accuracy, but my recall is very low (0.001 to 0.3), and when I predict on an image, all images go to the incorrect class. I don't know how to fix my model: should I use other metrics or another architecture for this type of image? Here's a sample of the code: https://github.com/Franciscogtu/OARSI-IMAGE-CLASSIFIER.git Thank you for the help. AI: Have you tried a batch normalization layer between the Conv2D layers and the dense top? I'd also keep the number of filters low, maybe 32 or 64. Same for the number of nodes in the top; there is no point in having a huge dense top with hundreds of nodes.
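A minimal sketch of the kind of small network suggested above, with a BatchNormalization layer between the convolutional blocks and the dense top. The input size (224, 224, 3), the 6 output grades, and the optimizer/loss are assumptions for illustration, not the asker's actual configuration:

from tensorflow import keras

model = keras.models.Sequential([
    keras.layers.Conv2D(32, 3, activation='relu', input_shape=(224, 224, 3)),
    keras.layers.MaxPooling2D(2),
    keras.layers.Conv2D(64, 3, activation='relu'),
    keras.layers.MaxPooling2D(2),
    keras.layers.BatchNormalization(),           # between the conv blocks and the dense top
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(64, activation='relu'),   # modest head, no huge dense layers
    keras.layers.Dense(6, activation='softmax')  # one output per disease grade
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])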
H: Regression and Classification, which is better in financial market price prediction? I want to use a model to trade in the financial market. I have several features, like MACD, RSI, or other common indicators, and my target is to make a tradeable prediction at every time point. So my target can be: the yield over a fixed horizon, e.g. 30 min later: yt = close(t+ws) - close(t); or the future price direction, which can only be 1 (price goes up) or -1 (price goes down). This is the difference between regression and classification. Which one do you think is better, and do you have any suggestions on this problem? Thanks. AI: From my personal experience, I think what matters the most in terms of return is how good your risk management is. You can use both regression or classification, but all approaches have some errors associated with them, so in the long run it comes down to how you manage incorrect predictions. So you can start with any model; say you go for classification and you achieve 70% accuracy. Then you use your model to predict on some historical data (that you have certainly not used during training) and you build some trading strategy using your forecasts. Then you can use something like Python's Backtrader and check how you are doing in terms of return on historical data. Sometimes it's not enough to correctly predict price movement in 9 out of 10 cases, because your one loss can be bigger than all the profitable bets, and you also need to take into account fees that will eat into your profit. That's a lot of issues you need to think about before putting this model into production. And even with this approach you should be aware that historical success of your model plus strategy does not guarantee the same in the future. To sum it up: I think it comes down to the strategy and not the model itself.
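Whichever framing you pick, it helps to be explicit about how the target is built. Here is a minimal sketch, assuming a hypothetical pandas DataFrame df with a 'close' column sampled at your bar frequency (the column name and the 30-bar horizon are assumptions, not taken from the question's data):

import numpy as np
import pandas as pd

ws = 30  # prediction horizon in bars, e.g. 30 minutes ahead

# Regression target: future price change over the horizon
df['y_reg'] = df['close'].shift(-ws) - df['close']

# Classification target: direction only (+1 up, -1 down); ties mapped to +1
df['y_clf'] = np.sign(df['y_reg']).replace(0, 1)

# The last `ws` rows have no known future price, so drop them
df = df.iloc[:-ws]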
H: How to model the probability of detecting an image, given it is seen multiple times Are there any existing methods/models describing the probability of an object being detected by a computer vision algorithm given it is seen $n$ times at similar angles and orientations? I know that an autonomous car may, for example, have trouble recognizing a stop sign; as a result, the bounding box around the stop sign may continuously appear and disappear, signaling that the object recognition algorithm only detects the stop sign a fraction of the time it sees it. I would like to understand this phenomenon in a more general sense. More precisely, I would like to model the following: given that an object is detected with probability $p$ when it is seen 'once' (i.e. during one moment in time), what is the probability of it being detected if it is seen a second time, a third time, etc.? Each 'moment in time' may be characterized by a single video frame/image or some other unit of time. Each time the image is seen it may be in a similar but not identical orientation. AI: An approach that is not specific to the image domain is to use a probabilistic data structure like a Count-Min Sketch. A Count-Min Sketch data structure can accumulate information to estimate the observed frequency of an input value, based on the past set of input values, by using multiple hashing functions over the input.
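For illustration, here is a minimal, non-production sketch of a Count-Min Sketch in Python; the item key is just a hypothetical string identifier for a detected object:

import hashlib

class CountMinSketch:
    def __init__(self, width=1000, depth=4):
        self.width = width
        self.depth = depth
        self.table = [[0] * width for _ in range(depth)]

    def _hashes(self, item):
        # Derive `depth` hash positions from salted md5 digests of the item
        for i in range(self.depth):
            digest = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.width

    def add(self, item):
        for row, col in enumerate(self._hashes(item)):
            self.table[row][col] += 1

    def estimate(self, item):
        # Taking the minimum over rows bounds the over-count caused by hash collisions
        return min(self.table[row][col] for row, col in enumerate(self._hashes(item)))

cms = CountMinSketch()
for _ in range(5):
    cms.add("stop_sign_42")          # hypothetical detection key
print(cms.estimate("stop_sign_42"))  # -> 5 (an upper-bound estimate)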
H: Which policy gradient method is used for continuous action spaces? Which policy gradient methods can deal with continuous action spaces? AI: You can use Soft Actor-Critic (SAC) or PPO from the same website. Most PG methods can be used in discrete and/or continuous action spaces.
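As a concrete, hedged example, the stable-baselines3 library implements both SAC and PPO behind a common interface. This sketch assumes stable-baselines3 and the classic Gym API are installed, and uses Pendulum-v1 purely as a stand-in continuous-control task:

import gym
from stable_baselines3 import SAC

env = gym.make("Pendulum-v1")          # environment with a continuous action space
model = SAC("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)    # short run just to illustrate the API

obs = env.reset()
action, _states = model.predict(obs, deterministic=True)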
H: Rescale parameter in data augmentation I'm a little bit lost about the rescale parameter in the ImageDataGenerator class. I know that the rescale argument by itself does not augment my data and that by doing rescale=1./255 it will convert the pixels in range [0, 255] to range [0, 1]. Currently, I'm only using random flips (vertical and horizontal) and 90-degree rotations, but I'm pondering whether or not I should add the rescale parameter. My question is: what is the advantage of doing that? AI: As you rightly pointed out, rescale=1./255 will convert the pixels from the range [0, 255] to the range [0, 1]. This process is also called normalizing the input. Scaling every image to the same range [0, 1] makes each image contribute more evenly to the total loss. Without scaling, images with high pixel values would have a larger say in how the weights are updated. For example, a black/white cat image could have a higher pixel range than a pure black cat image, but that doesn't mean the black/white cat image is more important for training. The neural network also has a higher chance of converging, since coefficients in the range [0, 1], as opposed to [0, 255], help the model train faster.
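A minimal sketch of adding rescale alongside flips and rotations in the same ImageDataGenerator; the directory name, target size, and batch size are placeholders, not the asker's actual settings:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1./255,       # map pixel values from [0, 255] to [0, 1]
    horizontal_flip=True,
    vertical_flip=True,
    rotation_range=90,    # adjust to match your current augmentation
)
# "train_dir" is a hypothetical directory of per-class subfolders
train_gen = datagen.flow_from_directory("train_dir", target_size=(224, 224),
                                        batch_size=32, class_mode="categorical")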
H: Suitable metric choice for imbalanced multi-class dataset (classes have equal importance) What type of metrics should I use to evaluate my classification models, given that I have two imbalanced multi-class datasets (21 and 16 classes, respectively) where all classes have equal importance? I am somewhat convinced by macro-averaged metrics such as macro F1, macro TNR, etc. Are macro-averaged metrics suitable for my problem, given these inputs? AI: Yes, a macro-average measure is the standard choice in this context: a macro-average score is simply the mean of the individual scores for every class, so it treats every class equally. With a strongly imbalanced dataset, this means that a small class which has only a few instances in the data is given as much weight as the majority class. Since the former is generally harder for a classifier to identify correctly, the macro-average performance value is usually lower than a micro-average one (this is normal, of course).
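In scikit-learn this is just the average parameter of the metric functions; a minimal sketch, assuming y_true and y_pred are your hypothetical label arrays:

from sklearn.metrics import f1_score, recall_score, classification_report

macro_f1 = f1_score(y_true, y_pred, average="macro")
macro_recall = recall_score(y_true, y_pred, average="macro")

# The report prints per-class scores plus the macro and weighted averages
print(classification_report(y_true, y_pred))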
H: Is it a good idea to combine fine tuning and feature extraction techniques? I have a dataset of normal/tumor medical images and, for the same patients, the corresponding genomic data, and my goal is to predict whether a patient has a tumor by combining all the information. To achieve this, I am using a ResNet50 with ImageNet weights to extract features from the images, and other methods to extract features from the genes. I join the two sets of features and use an SVM to make a prediction. The accuracy isn't extremely bad, but I wanted to know if it can be increased by fine-tuning on the same images, instead of using a network trained only on the ImageNet dataset as it is. I have searched the literature but found literally nothing, neither in favor of nor against this hypothesis. Is there some known contraindication? AI: The concept of multimodal learning is relevant here: in this case, combining data from two modalities: 1) the image signal using ResNet50 and 2) genomic features extracted from the genes. Multimodal learning for extending state-of-the-art performance of pre-trained unimodal models is currently an area of active research in the literature. In the NLP domain, the paper on Multimodal BERT describes combining a pre-trained language model with additional signals from visual and auditory inputs for improved classification performance. In the specific scenario mentioned here, combining the image and non-image signals into a single deep neural network graph would allow for additional fine-tuning. It is possible to first train the layer combining signals from both modalities, and later fine-tune all weights in the graph (with a lower learning rate) for improved performance. This procedure is discussed a bit in the Keras Guide to Transfer Learning.
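A rough sketch (not a validated architecture) of what such a joint graph could look like in Keras, so the ResNet50 backbone can first be frozen and later fine-tuned end to end. The input shapes, layer sizes and learning rates are assumptions for illustration only:

from tensorflow import keras

image_in = keras.Input(shape=(224, 224, 3))
backbone = keras.applications.ResNet50(include_top=False, weights="imagenet",
                                       pooling="avg")
backbone.trainable = False                 # stage 1: train only the new head
img_feat = backbone(image_in)

gene_in = keras.Input(shape=(500,))        # hypothetical number of genomic features
gene_feat = keras.layers.Dense(64, activation="relu")(gene_in)

merged = keras.layers.Concatenate()([img_feat, gene_feat])
out = keras.layers.Dense(1, activation="sigmoid")(merged)

model = keras.Model([image_in, gene_in], out)
model.compile(optimizer=keras.optimizers.Adam(1e-3), loss="binary_crossentropy")

# stage 2 (fine-tuning): unfreeze the backbone and recompile with a lower learning rate
# backbone.trainable = True
# model.compile(optimizer=keras.optimizers.Adam(1e-5), loss="binary_crossentropy")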
H: KFold cross validation ambiguity I just studied the K-Fold cross-validation technique for finding model parameters and something seems very confusing. Every tutorial I follow says that for K-Fold validation the whole dataset is split into K portions and K models are fit, each time with a different portion as the validation dataset. So, my doubt is that this method generates K models. Which model are we supposed to use during actual inference? Or is it that the same model is trained K times with a different portion of the data as the hold-out set? AI: What you want to achieve with this validation strategy is a robust estimate of which combination of hyperparameters is good enough for your final model, so: for each combination of hyperparameters, you carry out k trainings (as you ask in your last question), following this schema: source of info. Once you have these k trained models (i.e. for one hyperparameter combination), you compute the mean and standard deviation of the evaluation metric over the k sub-models. You repeat this process for all the hyperparameter combinations you want to try out. Once you select the hyperparameters which provided the best mean value (and std) of the desired evaluation metric, you retrain on the whole training dataset (the green section in the image) and evaluate with the never-seen-before blue test data. Luckily for you, this whole process is automated with helpers like this one from scikit-learn, which already retrains the final model for you, accessible via the best_estimator_ attribute. If you want more detailed documentation about this cross-validation, you can have a look at this link.
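A minimal sketch using scikit-learn's GridSearchCV, one helper of this kind (it exposes the best_estimator_ attribute mentioned above): it runs k-fold CV for every hyperparameter combination and then refits the best one on the whole training set. The estimator, grid, and X_train/y_train names are placeholders:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {"n_estimators": [100, 300], "max_depth": [5, 10, None]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)            # runs 5-fold CV for each combination

final_model = search.best_estimator_    # already refit on all of X_train
test_score = final_model.score(X_test, y_test)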
H: Max Pooling in first Layer of CNN I am seeing, in all the notebooks that I have found, that Max Pooling is never used in the first layer of a CNN. Why is this? Is it a convention among data scientists not to use max pooling in the first layer? Or is it an error to use it in the first layer? AI: The purpose of the max pooling operation is to decrease the spatial dimensions of the input while remaining robust by only keeping the maximum values. Generally, as you might have noticed, most CNNs aim at decreasing the spatial dimensions of the input while increasing its depth. Very broadly speaking, you can think of this as trying to encode the information enclosed in the spatial image into channels representing different aspects of the input. To answer your question, it would not have been an error to max pool the input right away; however, in that case the input would not have gone through enough layers to extract the information it encloses. Having said that, max pooling that early might result in losing spatial information while not gaining anything valuable depth-wise.
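A tiny sketch of the shape argument, assuming a hypothetical 224x224 RGB input: pooling first only downsamples raw pixels, whereas convolving first lets the network extract features before the resolution is reduced:

from tensorflow import keras

x = keras.Input(shape=(224, 224, 3))
pooled_first = keras.layers.MaxPooling2D(2)(x)           # shape (112, 112, 3): raw pixels, downsampled
conv_first = keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
then_pooled = keras.layers.MaxPooling2D(2)(conv_first)   # shape (112, 112, 32): learned features, downsampled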
H: Warning when plotting confusion matrix with all sample of one class I have two arrays: the first one with all the correct labels (they are all set to zero since each sample belongs to the same class) and another one with all the labels predicted by my neural network. What I want to do is plot a simple confusion matrix to show the results. This is my code: from sklearn.metrics import confusion_matrix import matplotlib.pyplot as plt import seaborn as sns test_labels = [0, 0, 0, ...., 0] #all set to zero predict_labels = [0, 0, 1, ...., 1] matrix = confusion_matrix(test_labels, predict_labels) fig = plt.figure() colors = ['orange', 'green'] labels = ["Normal", "Anomaly"] sns.heatmap(matrix, xticklabels=labels, yticklabels=labels, cmap=colors, annot=True, fmt="d") plt.title("Confusion matrix") plt.ylabel("Actual") plt.xlabel("Predicted") plt.close() The problem is that when I run this code I get the following warnings: UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use zero_division parameter to control this behavior. _warn_prf(average, modifier, msg_start, len(result)) RuntimeWarning: invalid value encountered in true_divide recall = tps / tps[-1] UndefinedMetricWarning: No positive samples in y_true, true positive value should be meaningless UndefinedMetricWarning) How can I solve it? AI: If you keep providing no positive examples, it will keep having the same problem. As the error message states, the tool wants to compute the recall. The recall metric is given by: $$recall = \frac{tp}{tp+fn}$$ $tp$ (true positives) is the number of positive cases predicted as positive, which is $0$ in your case. $fn$ (false negatives) is the number of positive cases predicted as negative, which again is $0$. What you ask is to compute $\frac{0}{0}$, which is undefined. You can add positive cases, or you should study the tool (I'm not too familiar with the Python stack) and perhaps you will find an option to not compute the recall metrics for you.
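If you do keep an all-negative test set, one hedged workaround is to pass the full label set explicitly so the confusion matrix stays 2x2, and to set zero_division=0 in any report that computes recall. This sketch assumes the warnings come from scikit-learn metric calls like these (the confusion matrix call alone would not raise them):

from sklearn.metrics import confusion_matrix, classification_report

# labels=[0, 1] keeps both rows/columns even though y_true only contains 0
matrix = confusion_matrix(test_labels, predict_labels, labels=[0, 1])

# zero_division=0 silences the ill-defined recall/F-score warning
print(classification_report(test_labels, predict_labels,
                            labels=[0, 1], zero_division=0))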
H: How to use random forest with large number of categorical features and categories? I have 2 features, productName and productCategory, both of them strings. I also have a categorical feature named supplier. There are 4,000 suppliers and 500,000 items in the test data. I don't think one-hot encoding will be a good approach for such large data. How should I handle these using a random forest? AI: You have a few options at hand: Option 1 - Augment your dataset with additional supplier-related features. Do you have additional features about the supplier that make sense to merge into your dataset? For example, you might be able to replace the Supplier field with something like "Years in Business", "Primary Industry", "Reputation Score", etc. That way you convert the categorical variable into something more meaningful that can be compared across the universe of suppliers. Go this route if and only if you have meaningful and easily accessible features for your problem domain. Option 2 - Encode each supplier with a unique integer. I know this sounds stupid, but it really works for Random Forest models (it wouldn't work for simple Linear Regression, for sure). Random Forest models are scale invariant. They won't get biased towards suppliers with a larger numeric supplier id. They'll figure out the best way to split your supplier ids to meet your objective function. I would recommend Option 2 to start with, since it is the simplest.
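A minimal sketch of Option 2 on a hypothetical pandas DataFrame df with the columns from the question: each string column is mapped to integer codes, which a Random Forest can split on directly (y is whatever target you are predicting):

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

for col in ["productName", "productCategory", "supplier"]:
    df[col + "_code"] = df[col].astype("category").cat.codes  # unique integer per level

X = df[[c + "_code" for c in ["productName", "productCategory", "supplier"]]]
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)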
H: Difference between bagging and pasting? I found the definition: Bagging is to use the same training algorithm for every predictor, but to train them on different random subsets of the training set. When sampling is performed with replacement, this method is called bagging (short for bootstrap aggregating). When sampling is performed without replacement, it is called pasting. What is "replacement" in this context? AI: When a sampling unit is drawn from a finite population and is returned to that population, after its characteristic(s) have been recorded, before the next unit is drawn, the sampling is said to be “with replacement”. In the contrary case the sampling is “without replacement”. Source: OECD Glossary of Statistical Terms
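In scikit-learn the only difference between the two is the bootstrap flag of BaggingClassifier; a minimal sketch:

from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                            max_samples=0.8, bootstrap=True)   # sampling with replacement
pasting = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                            max_samples=0.8, bootstrap=False)  # sampling without replacement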
H: How to best visualise two sets of data over several years? I have a set of data on crime statistics by police force and year, and another set of data showing police workforce numbers over the same period. I am trying to determine the best way to show this data. I have tried three alternatives but am looking for a preference out of these three, or alternatives that would show the data better (using Tableau, but it's the visualisation more than the tool that matters). First, I created a pair of charts shown one above the other, so the X axis is common with separate Y axes. Then I combined these two into a dual-axis chart. And my last attempt was to use a KPI used by the police in some of their reports, the crime rate per 1,000 population, and create a similar metric for police staff per 1,000 population. So for 2020, this shows that on average across England and Wales there were 86 crimes per 1,000 people and 3.34 police staff. Dropdowns on the dashboard allow changing this to a specific police force and to specific crime groups. The crime rate per 1,000 doesn't match the police-force-reported numbers, as they seem to use just the census 2011 figures, whereas I have rolled up the population estimates per year (census 2011 + births + immigration - deaths - emigration). It's still an estimate but should be more accurate than using 10-year-old figures. Any suggestions on how to better present this data? Thanks in advance. The full dashboard (still work in progress and draft, so apologies for the performance until I determine the right data vis) is on Tableau Public: https://public.tableau.com/profile/trevor.north#!/vizhome/police-reported-crime-workforce-population-public-v1_12/PoliceWorkforceandReportedCrime-EnglandWales-2007-2020 After some further feedback, I have created two more versions: one without the map, and one retaining the map but removing one of the panes (which showed % change of workforce). There is a problem with the map. I used colour to show the number of offences, so London shows as a darker colour. Without modifying the data, all the rest were at the other end of the scale. To rectify this, I used a LOG function on the number of offences. This then shows more colour variation between the different police force regions. But I now cannot show the legend, since it shows a range up to about 7,000 whereas the number of offences is over a million in London. I think I may have to create an information button and an extra layer and manually include a legend on that for the map. Is there a better way? AI: Statistics of this nature are commonly normalised with respect to population size by presenting them as per X, in this case per 1,000 or 100,000 people. The absolute number of crimes makes no sense and lacks context. In terms of display, as both KPI ranges are similar (3 vs 19), a common X and Y axis is possible with two different lines. If you want to communicate trends relative to a baseline year (e.g. 2014) you could divide all values by that year's value and express it as a % change, i.e. crime went up 10% but police numbers went down by 5%.
H: Encode the days of week as numeric variable I would like to understand if there is a possibility to encode the days of the week as a single numerical column to preserve the ordinal relationship between the days. My task is a classification task. So, something like this: Monday: 0 Tuesday: 1 Wednesday: 2 ... Sunday: 6 I wouldn't use one-hot encoding for this scenario because I would like to preserve the ordinal relationship between the days. However, I don't know whether this is an allowed approach or not. AI: The answer is as follows: If you are using a Linear Regression model, it will assume that Sunday (6) is bigger (or carries more weight) than all other days in the week. If that assumption makes sense for your problem domain, you are fine. If you are using a model such as Random Forests, then it doesn't really matter (meaning you are good). Random Forests are typically scale invariant: they do not assume a higher weight for large values; they will figure out how best to split the numeric values (0-6) to meet your objective.
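A minimal sketch of that ordinal mapping with pandas, assuming a hypothetical DataFrame df with a 'day_of_week' string column:

import pandas as pd

day_order = {"Monday": 0, "Tuesday": 1, "Wednesday": 2, "Thursday": 3,
             "Friday": 4, "Saturday": 5, "Sunday": 6}
df["day_of_week_ord"] = df["day_of_week"].map(day_order)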
H: Hi, I'm currently working for a company that has some inventory control problems First, I was asked by the manager to make a plot showing produced vs received items. It's a multistage process and we are only in charge of one of the steps, which is designing. I made the plot comparing received cases vs produced here in my country, produced out of the country, total produced, and % of advancement. Later on, in a meeting, she asked me to show the graph and table I made to the production supervisors, and she explained that somehow they could predict inventory with it. After the meeting I've been thinking a lot about it. I made another 2 graphs, one of received items by day of the week and another of received cases by time of day, but now I'm thinking of doing a regression analysis to be able to predict inventory. This is not my field (I am a chemist), but I know that if I somehow do a regression analysis and, while doing so, successfully take into account all the variables that affect inventory, I should be able to come up with some equation that gives me the answer I need, right? What do you think of my approach? Any advice? AI: Okay, so since the visuals are done in Excel, I think you can start with simple linear regression in Excel. I'm not an Excel expert in this regard, but I think this article will be a good starting point: https://www.ablebits.com/office-addins-blog/2018/08/01/linear-regression-analysis-excel/ So, what you can do: Get your data and fit a linear regression on it. You need to choose the dependent variable y (what you want to predict); this will be your inventory variable. All other features (basically the other data at your disposal) will act as independent Xi variables. You fit a linear model of the form y = bx + a (simplified). After you have fit the model, you can access its coefficients. One of them will be the intercept; the others will be coefficients for your input X features. Each coefficient is the effect of that X on your y (inventory). So, for example, if the coefficient is 0.1 for X1, then increasing X1 by 1 will lead to a 0.1 increase in y. At this point you have the dependency between the input data and the inventory change. There is a lot more to consider: how good your model is (for this you use metrics, for example MAE, and test your model on unseen data), whether you have a data leak, and so on. You will also need to pre-process your data into a viable format, for example extracting the day/week number from the date to try to capture seasonality. You should also avoid correlation between variables; there is really a lot to think about. But most importantly, I think you should try to implement it even if you don't know all the details and all the rules, just to get a feeling for whether this task fits you; after that you can go on and learn more about all this stuff. (I would definitely recommend going through this series one day if you are interested: https://www.youtube.com/playlist?list=PLOg0ngHtcqbPTlZzRHA2ocQZqB1D_qZ5V ) Hope this helps.
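If you later move beyond Excel, the same fit takes only a few lines in Python. This is a minimal sketch (not part of the Excel workflow above), assuming a hypothetical DataFrame df where 'inventory' is the target column and the remaining columns are numeric features:

import pandas as pd
from sklearn.linear_model import LinearRegression

X = df.drop(columns=["inventory"])
y = df["inventory"]

model = LinearRegression().fit(X, y)
print(model.intercept_)                   # the intercept a
print(dict(zip(X.columns, model.coef_)))  # one coefficient b_i per feature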