H: Predict_proba on a binary classification problem I have a binary classification task on my hands: I have a bunch of people that I need to classify as being ones or zeros, and then use predict_proba to estimate how confident my prediction was on the samples used for inference. My understanding is that predict_proba for most classification algorithms isn't accurate and needs to be calibrated. Is there a common approach to get objectively accurate class probabilities? Algorithm names, techniques and some code if possible. Thanks! Note: my classes are imbalanced 80/20. AI: sklearn provides two methods to calibrate a probabilistic classifier via its CalibratedClassifierCV class: one using Platt's scaling (sigmoid) and one using isotonic regression. Another method is a Venn-Abers predictor, which is not implemented in sklearn; however, you can find a custom implementation here. As for imbalanced datasets, make sure to exploit the class_weight argument of an sklearn classifier to increase the weight of under-represented classes.
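Since the question asks for code, below is a minimal sketch of how CalibratedClassifierCV could be used; the random-forest base model, the synthetic 80/20 data and all hyperparameter values are assumptions made purely for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV

# Toy imbalanced data (roughly 80/20), only for illustration
X, y = make_classification(n_samples=5000, weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Base classifier; class_weight="balanced" up-weights the minority class
base = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)

# Wrap it in a calibrator: method="sigmoid" is Platt scaling, "isotonic" is isotonic regression
calibrated = CalibratedClassifierCV(base, method="isotonic", cv=5)
calibrated.fit(X_train, y_train)

# Calibrated class probabilities for the positive class
proba = calibrated.predict_proba(X_test)[:, 1]
```

A reliability curve (e.g. sklearn.calibration.calibration_curve) on held-out data is a simple way to check whether the calibrated probabilities are actually close to the observed frequencies.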
H: Keras - understanding ImageDataGenerator dimensions I'm trying to implement custom object detection by taking a trained YOLOv2 model in Keras, removing the last layer and retraining it with new data. I'm confused about how to feed the data to Keras, though. I have annotated a bunch of pictures with bounding boxes using the YOLO annotation, and put them in two separate folders (images where the .jpgs reside and annots where the .txt annotations are). I also removed the last layer from the model and added a custom one (I'm trying to predict bounding boxes for 2 classes). I'm trying to pass my data with an ImageDataGenerator, as my dataset is quite small. I have the following input objects: np.shape(train_images) # this contains RGB data from 79 pictures (79, 1, 608, 608, 3) np.shape(train_y) (79,) I'm trying to pass these to the ImageDataGenerator, but I get an error: train_datagen = ImageDataGenerator( rotation_range=20, width_shift_range=0.2, height_shift_range=0.2, horizontal_flip=True, fill_mode='nearest') train_generator = train_datagen.flow( train_images, train_y) ValueError: `x` (images tensor) and `y` (labels) should have the same length. Found: x.shape = (1, 608, 608, 3), y.shape = (79,) I don't understand what the problem is. Somehow the first dimension of my images data is completely gone and thus does not match... What's wrong with it? AI: I think your train_images array should have shape (79, 608, 608, 3). The generator works through the first dimension of that array, so with the extra singleton dimension it is passing a batch of 4d numpy arrays instead of a batch of 3d numpy arrays. You can remove that extra dimension with numpy.squeeze(), like this: In [1]: import numpy as np In [2]: a = np.random.randint(0, 10, (2, 1, 10, 10, 3)) In [3]: np.squeeze(a, axis=1).shape Out[3]: (2, 10, 10, 3) So be sure to set the axis=1 argument. Then just pass the updated array as you were doing.
H: Making sense of indices in 2D convolution operations in convolutional neural networks Referring to the answer here: https://www.quora.com/Why-are-convolutional-nets-called-so-when-they-are-actually-doing-correlations, the equation for a discrete 2D convolution is specified as: $$C(x,y)=\sum_{m=1}^M\sum_{n=1}^NI(m,n)K(x-m,y-n)$$ or $$C'(x,y)=\sum_{m=1}^M\sum_{n=1}^NI(x-m,y-n)K(m,n)$$ where $I$ is the image and $K$ is the kernel or filter. I can't understand how the indices work. Let's say I have the image: $\begin{bmatrix}I_{11} & I_{12} & I_{13} & I_{14} & I_{15}\\ I_{21} & I_{22} & I_{23} & I_{24} & I_{25}\\ I_{31} & I_{32} & I_{33} & I_{34} & I_{35}\\ I_{41} & I_{42} & I_{43} & I_{44} & I_{45}\\ I_{51} & I_{52} & I_{53} & I_{54} & I_{55} \end{bmatrix}$ and kernel $\begin{bmatrix}K_{11} & K_{12} & K_{13}\\ K_{21} & K_{22} & K_{23}\\ K_{31} & K_{32} & K_{33}\end{bmatrix}$ Now by the above definition (in this case $M=3$ and $N=3$) $$C_{11} = I_{11}K_{00}+I_{12}K_{0,-1}+I_{13}K_{0,-2}\\ +I_{21}K_{-1,0}+I_{22}K_{-1,-1}+I_{23}K_{-1,-2}+\\ +I_{31}K_{-2,0}+I_{32}K_{-2,-1}+I_{33}K_{-2,-2}$$ or $$C'_{11} = I_{00}K_{11}+I_{0,-1}K_{12}+I_{0,-2}K_{13}\\ +I_{-1,0}K_{21}+I_{-1,-1}K_{22}+I_{-1,-2}K_{23}+\\ +I_{-2,0}K_{31}+I_{-2,-1}K_{32}+I_{-2,-2}K_{33}$$ Even if I assume that the indices for $C$ or $C'$ run from $2$ to $4$ (instead of $1$ to $3$), then $$C_{22} = I_{11}K_{11}+I_{12}K_{1,0}+I_{13}K_{1,-1}\\ +I_{21}K_{0,1}+I_{22}K_{0,0}+I_{23}K_{0,-1}+\\ +I_{31}K_{-1,1}+I_{32}K_{-1,0}+I_{33}K_{-1,-1}$$ or $$C'_{22} = I_{11}K_{11}+I_{1,0}K_{12}+I_{1,-1}K_{13}\\ +I_{0,1}K_{21}+I_{0,0}K_{22}+I_{0,-1}K_{23}+\\ +I_{-1,1}K_{31}+I_{-1,0}K_{32}+I_{-1,-1}K_{33}$$ So no matter how the indices are defined, the indices for either $I$ or $K$ go out of bounds in the expression for convolution. How do I make sense of this? What's meant by terms with negative indices like $I_{-1,-2}$ or $K_{0,-1}$? Follow-up doubt: So assuming zero-padding, all terms with non-positive indices are assumed to be $0$. From that, given the two formulas for $C_{22}$ and $C'_{22}$ above, they evaluate to just $I_{11}K_{11}$, since all terms involving non-positive indices vanish. But that doesn't sound right, since from my understanding, it should evaluate to: $$\begin{bmatrix}I_{11} & I_{12} & I_{13}\\ I_{21} & I_{22} & I_{23}\\ I_{31} & I_{32} & I_{33}\end{bmatrix}: \begin{bmatrix}K_{33} & K_{32} & K_{31}\\ K_{23} & K_{22} & K_{21}\\ K_{13} & K_{12} & K_{11}\end{bmatrix}$$ (where $:$ represents Frobenius inner product) since convolution is the same as cross-correlation with a flipped kernel. So I still can't make sense of the formulas for $C_{22}$ and $C'_{22}$ as I wrote above. AI: From http://www.deeplearningbook.org/contents/convnets.html, The only reason to flip the kernel is to obtain the commutative property. While the commutative property is useful for writing proofs, it is not usually an important property of a neural network implementation. Instead, many neural network libraries implement a related function called the cross-correlation, which is the same as convolution but without flipping the kernel. The discrete cross correlation function with 1-based indexing for $$I = U*V\ and\ K=M*N$$ is given by $$C(x,y)=\sum_{m=1}^M\sum_{n=1}^NI(x+m-1,y+n-1)K(m,n)$$ where $$x\in(U-M+1)\ and\ y\in(V-N+1)$$ When applying 2-D convolutions, many neural network implementations will reduce the size of the output. 
To retain the original size in output and to retain information at the borders, many practical use cases add a padding of zeros to the input image. Coming back to your specific question, in the case where indices go out of bounds, either on the positive side or on the negative side, (which happens when you try to retain the size of the input in your output), values corresponding to them are usually taken as zeros, which is akin to zero-padding in implementation. Discrete convolutions are actually given by $$(f * g)[n] = \sum_{m=-\infty}^\infty f[n-m] g[m]$$ In practice the limits are made finite because of the same assumption of zero amplitude signal for out of bound indices.
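To make the relationship above concrete, here is a small scipy sketch comparing zero-padded cross-correlation with convolution (cross-correlation with a flipped kernel); the random image and kernel are placeholders.

```python
import numpy as np
from scipy.signal import correlate2d, convolve2d

I = np.random.rand(5, 5)   # image of size U x V
K = np.random.rand(3, 3)   # kernel of size M x N

# "valid" keeps only fully-overlapping positions: output is (U-M+1) x (V-N+1)
corr_valid = correlate2d(I, K, mode="valid")
# "same" zero-pads the image so the output keeps the input size U x V
corr_same = correlate2d(I, K, mode="same")

# Convolution equals cross-correlation with the kernel flipped along both axes
conv_full = convolve2d(I, K)                       # mode="full" by default
flipped_corr_full = correlate2d(I, K[::-1, ::-1])  # also mode="full"
print(np.allclose(conv_full, flipped_corr_full))   # True
```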
H: Are OneHotEncoder and keras to_categorical the same? The length of human_vocab is 18377. The length of input X is 1000. I'm trying to run to_categorical: np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), X))) Is this the same as if I apply: onehot_encoder = OneHotEncoder(sparse=False) onehot_encoder.fit_transform(X) The output of onehot_encoder is (1000, 9739); shouldn't it be (1000, 18377)? human_vocab contains corpus data that X doesn't contain. I'm trying to find a replacement for to_categorical due to memory issues when creating the one-hot vectors. AI: To answer your question ("the output of onehot_encoder is (1000, 9739); shouldn't it be (1000, 18377)?"): the OneHotEncoder instance will create a dimension per unique word seen in the training sample. Here you are only showing it 9739 different words at training, so it does not need more dimensions to perform one-hot encoding. One way to have it accommodate the entire vocabulary is to pass the vocabulary explicitly via the categories parameter (note that categories expects a list with one array-like of allowed values per column of X; assuming X is a 2-D array): onehot_encoder = OneHotEncoder(sparse=False, categories=[list(human_vocab)] * X.shape[1]) onehot_encoder.fit_transform(X) Also, since memory is the stated concern, keeping the default sparse=True makes fit_transform return a sparse matrix instead of a dense array, which is far more memory-friendly.
H: How to export PCA to use in another program I'm trying to write a random forest classifier for a very large dataset, so as part of the pre-processing I have applied PCA to reduce from 643 features to 5 PCs. Is it possible to export these settings so I can pca.transform(data) in another program? I have been able to do this for the scaler using pickle, but when I run the transform I get the error: Traceback (most recent call last): File "<ipython-input-35-68c9849c2acc>", line 1, in <module> pca.transform(data) TypeError: transform() missing 1 required positional argument: 'X' AI: Ideally PCA should not be used as a part of pre-processing feature reduction. Anyhow, regarding saving and reusing a PCA model, here is a basic code snippet which works fine in my case (I was not able to reproduce the error; that particular TypeError usually means transform was called on the PCA class itself, or on an object other than the fitted instance you pickled, rather than on the reloaded instance). from sklearn.decomposition import PCA import pickle as pk pca = PCA(n_components=2) result = pca.fit_transform(X) # assume X has more than 2 dimensions pk.dump(pca, open("pca.pkl", "wb")) . . . # later, reload the pickle file pca_reload = pk.load(open("pca.pkl", 'rb')) result_new = pca_reload.transform(X) # result and result_new are the same in my case
H: what is the main difference between GAN and autoencoder? what is the main difference between GAN and other older generative models? what were the characteristics of GAN that made it more successful than other generative models? AI: The main differences are the philosophy that drives the loss metric, and consequently the architecture (the latter goes without saying, obviously). Autoencoders The job of an autoencoder is to simultaneously learn an encoding network and decoding network. This means an input (e.g. an image) is given to the encoder, which attempts to reduce the input to a strongly compressed encoded form, which is then fed to the decoder. The network learns this encoding/decoding because the loss metric increases with the difference between the input and output image - every iteration, the encoder gets a little bit better at finding an efficient compressed form of the input information, and the decoder gets a little bit better at reconstructing the input from the encoded form. Summarised: An autoencoder learns to represent some input information very efficiently, and subsequently how to reconstruct the input from it's compressed form. Generative Adversarial Networks Here, we have a "generator" whose job is to take some noise signal and transform it to some target space (again, images is a popular example). The other component (the adversary) is the "discriminator", whose job is to distinguish real images drawn from the desired target space from the fake images created by the generator. In this case, the network is trained in two alternating phases, each with a different loss: Firstly, the discriminator is given labelled samples from the real set of images, and fake images generated by the generator (of course, at the start, these images are just noise). Loss is calculated according to some binary classification loss metric (e.g. crossentropy). The discriminator thus begins to learn the difference between a fake image and a real image. Before it can learn too much, though, we switch over to the generator. To train the generator, we once again feed in some noise and check the output of the discriminator. This time, we want the discriminator to detect that the image (generated by the generator) is real - this means that we have successfully fooled the discriminator, and therefore the generator has begun to learn how to make an image that resembles a "real" image from the training set. Summarised: A GAN uses an adversarial feedback loop to learn how to generate some information that "seems real" (i.e. looks the same/sounds the same/is otherwise indistinguishable from some real data) The difference Hopefully that shows that there is a very fundamental structural difference between the two networks, and the goals that each of them are trying to achieve during the learning process. I don't really know if GANs are necessarily "more successful", but for certain generative problems they have been observed to give more "realistic" outputs. Why? My gut says that a GAN probably learns more about "how can I make an image look real in general" rather than "how can I memorise this particular set of images with the greatest accuracy/efficiency". But there are certainly similarities, in particular between the generator (of the GAN) and the decoder (of the autoencoder).
H: Layer notation for feed forward neural networks Apologies in advance, for I have a fairly rudimentary question on the notations for studying Feed-Forward Neural Networks. Here is a nice schematic taken from this blog-post. Here $x_i = f_i(W_i \cdot x_{i-1})$ where $f_i$ is the activation function. Let us denote the number of nodes in the $i^{\text{th}}$ layer by $n_i$ and each example of the training set being $d-$dimensional (i.e., having $d$ features). Which of the following do the nodes in the above graph represent? Each one of the $d$ features in every example in the training set. In this case, $n_0 = d$ and $x_0$ is $(d \times 1)$. Each example of the training set, which is $d-$dimensional. In this case, $n_0$ is the number of examples and $x_0$ is $(d \times n_0)$. In both cases, the weight matrix $W_i$ is $(n_i \times n_{i-1})$. On the one hand, most references like this blog-post claim it is (1), while on the other, I can also find few references such as this video which seem to claim it is (2). Which one of them carries the right interpretation? Although seems like the back propagation algorithm can be executed in both representations, I'm quite sure it only makes sense in one of them. Any help will be greatly appreciated. AI: Well, the image you sent is not nicely denominated. The first layer in the image is $x_0$ which is the input consisting of d dimensions, it is actually the first sample of the training set. Here its dimensions are $x_{01}, x_{02}, x_{03}, x_{04}$ (the green nodes in the left, hence, d equals $4$). Then the next layer which is called $x_1$ is the first hidden layer and subsequently $x_2$ is the second hidden layer and $x_3$ is the output of this feed-forward network. By this definition, $x_0$ is the input with d dimensions $x_{01}, x_{02}, x_{03}, x_{04}$ and for calculating each node in the proceding hidden layer which is called $x_1$ here, we should do: consider the most up node in hidden layer $x_1$ as the node we want to figure its value. We call it $x_{11}$, first we compute a linear computation of weights and inputs and then we apply some activation function $\sigma$ to it: $$x_{11} = \sigma(x_{01} \cdot w_{11} + x_{02} \cdot w_{12} + x_{03} \cdot w_{13} + x_{04} \cdot w_{14})$$ also, an offset might be added to this expression. consider that hidden layers can be of any size. Each one of the $d$ features in every example in the training set. In this case, $n_0 = d$ and $x_0$ is $(d \times 1)$. $n_0 = d$ and $x_0$ is $(d \times 1)$ is right and in the first layer, yes each node is depicting a single one of d features of the input. but not for the hidden layers. Each example of the training set, which is $d-$dimensional. In this case, $n_0$ is the number of examples and $x_0$ is $(d \times n_0)$. No as I mentioned, this is an architecture that depicts the process for a single training set. Hence, each node is not a sample of the training set. You set $n_0$ for the number of nodes in the first layer which is input. so $n_0$ here equals d and $x_0$ which is the input equals $(x_{00}, x_{01}, x_{02}, ...x_{0d})$, $0$ showing that this is the first sample of training set. In the backpropagation process, we have the same architecture. Then by calculating the gradient of each node, we update each weight. this process is done so many times to find the most optimal weights. there are various approaches for this weight updating thing like batch, etc updates.
H: How to handle missing date data? I have a column named GarageYrBlt which just lists the year the garage of that house was built. I have one nan value for this column. Does it make sense to fill it with the columns median (This was a common approach for the missing age values in the Titanic competition)? AI: I suspect you're working with the Ames house price dataset - one of Kaggle's introductory competitions. Replacing the missing values with the dataset mean / median is very general. I believe you have a variable which describes which neighbourhood the house is in. A more focussed approach would be to replace that missing value in GarageYrBlt with the mean / median for that neighbourhood (subject to additional sensibility checks like the garage not being built before the house).
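To make the suggested neighbourhood-based imputation concrete, here is a small pandas sketch; df stands for the asker's dataframe, and the column names ('Neighborhood', 'GarageYrBlt') follow the Ames dataset but are otherwise assumptions.

```python
import pandas as pd

# Median garage year within each neighbourhood, used to fill the missing value(s)
df['GarageYrBlt'] = df.groupby('Neighborhood')['GarageYrBlt'] \
                      .transform(lambda s: s.fillna(s.median()))

# Fallback: if a whole neighbourhood has no garages, use the overall median
df['GarageYrBlt'] = df['GarageYrBlt'].fillna(df['GarageYrBlt'].median())
```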
H: How can I find correlation between features? The problem I want to figure out how routers correlate between each other. Like, if a specific error occurred in router A, and almost at the same time the error occurs in router B, they probably have some connection with each other (are at one line). The Data Suppose I have a dataframe that looks like this: |Router|Error|Duration|Timestamp | |DB-XX |GSM |26.5374 |2019-05-01 00:20:14| |DT-XY |AUC |15.5400 |2019-05-01 01:15:01| |DR-YY |AUC |02.0333 |2019-05-01 01:17:13| |DP-YX |LOC |45.2609 |2019-05-01 00:01:10| The question What is the best way to deal with it? Regression (one vs the rest) for each router? The problem is, that there are hundreds of models and I also want to reduce computational costs... AI: Blindly Dummy Coding errors in Pandas will introduce irrational numerical relationships between different types of errors and this will not help you in finding true similarity. First and foremost you would like to convert your data into time series data of each router with sampling at equal time steps for each error. 1s for the time steps when the error occurs, 0s for the time step when it doesn't. By this, you convert each router's data into a binary vector for each specific error. Next thing, calculating a Pearson or Spearman correlation between binary vectors is not a good idea. As explained brilliantly here, Correlations arise naturally for some problems involving 0s and 1s, e.g. in the study of binary processes in time or space. On the whole, however, there will be better ways of thinking about such data, depending largely on the main motive for such a study. For example, the fact that correlations make much sense does not mean that linear regression is a good way to model a binary response. You would like to use a similarity metric designed specifically for binary vectors. For example, Jaccard Similarity which computes intersection over union (number of times when both of the vectors were one divided by number of times when either one was) is a good choice. A great summary of such similarity vectors can be found in this article. Calculations involving these similarity calculations won't be computationally intensive. Depending on the sparsity of data, it might be better to do Frequent Itemset Mining so that you know when an error occurs, which routers go down together.
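As a sketch of the approach described above, the code below builds per-router binary error vectors on a common time grid and compares them with Jaccard similarity; the 5-minute resampling frequency is an arbitrary assumption, and the tiny dataframe just reproduces the rows shown in the question.

```python
import pandas as pd
from itertools import combinations

df = pd.DataFrame({
    'Router': ['DB-XX', 'DT-XY', 'DR-YY', 'DP-YX'],
    'Error': ['GSM', 'AUC', 'AUC', 'LOC'],
    'Duration': [26.5374, 15.5400, 2.0333, 45.2609],
    'Timestamp': ['2019-05-01 00:20:14', '2019-05-01 01:15:01',
                  '2019-05-01 01:17:13', '2019-05-01 00:01:10'],
})
df['Timestamp'] = pd.to_datetime(df['Timestamp'])

# One binary vector per router: 1 if an error occurred in that 5-minute bin, else 0
binary = (df.assign(flag=1)
            .set_index('Timestamp')
            .groupby([pd.Grouper(freq='5min'), 'Router'])['flag']
            .max()
            .unstack(fill_value=0))

def jaccard(a, b):
    """Intersection over union for two 0/1 vectors."""
    both = ((a == 1) & (b == 1)).sum()
    either = ((a == 1) | (b == 1)).sum()
    return both / either if either else 0.0

similarities = {(r1, r2): jaccard(binary[r1], binary[r2])
                for r1, r2 in combinations(binary.columns, 2)}
```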
H: How to interpret this summary output in R? How do I interpret this summary output in R? Coefficients: Estimate Std. Error z value Pr(>|z|) 国家 3.976e-05 2.484e-05 1.600 0.109 就业情况 -2.471e-02 2.878e-03 -8.588 < 2e-16 *** 工作类型 1.677e-02 1.700e-03 9.865 < 2e-16 *** 家庭经济地位 -5.482e-02 5.610e-03 -9.772 < 2e-16 *** 性别 4.753e-01 1.223e-02 38.876 < 2e-16 *** 社会阶级 2.292e-02 5.172e-03 4.432 9.34e-06 *** 受教育水平 9.215e-02 2.599e-03 35.461 < 2e-16 *** AI: summary(regressor) # or whatever your model is named. Estimate contains the estimated coefficient for each feature. Std. Error is the standard error of that coefficient estimate, i.e. a measure of its sampling uncertainty. z value (reported as t value for some model types) is the estimate divided by its standard error: $z\text{-value} = \frac{Estimate}{Std.\ Error}$. The p-value $Pr(>|z|)$ indicates statistical significance: the lower it is for a feature, the stronger the evidence that the corresponding coefficient differs from zero, i.e. that the independent variable is associated with the outcome (dependent variable). Here, for instance, the features with a p-value of $2e-16$ or smaller are highly statistically significant. The $***$ shows the significance level of your features (国家, 就业情况, ...). You can have between zero and three stars, and three stars indicate high statistical significance. All of this is useful when analysing the different features.
H: Can anyone explain this code piece by piece? Function that creates a DataFrame with a column for Cluster Number def pd_centers(featuresUsed, centers): colNames = list(featuresUsed) colNames.append('prediction') # Zip with a column called 'prediction' (index) Z = [np.append(A, index) for index, A in enumerate(centers)] # Convert to pandas data frame for plotting P = pd.DataFrame(Z, columns=colNames) P['prediction'] = P['prediction'].astype(int) return P AI: def pd_centers(featuresUsed, centers): the inputs of the function are featuresUsed, the feature (column) names you already have, and centers. colNames = list(featuresUsed): the feature names are saved here as a list named colNames. colNames.append('prediction'): a new column name, prediction, is added after the existing feature names. Z = [np.append(A, index) for index, A in enumerate(centers)]: a list of the elements of centers, each with its index appended, is saved as Z. For instance, if centers equals ['CA', 'TEX', 'FLO', 'ORE'] then Z becomes [array(['CA', '0'], dtype='<U21'), array(['TEX', '1'], dtype='<U21'), array(['FLO', '2'], dtype='<U21'), array(['ORE', '3'], dtype='<U21')] and these indices finally become the values of the prediction column. P = pd.DataFrame(Z, columns=colNames): Z is converted to a pandas dataframe named P with column names colNames, for plotting. P['prediction'] = P['prediction'].astype(int): the values in the prediction column are converted to int type. return P: the resulting dataframe P is returned; its prediction column is the cluster number. In practice centers is a list of array-like cluster centres (for example the output of a KMeans fit), so in the comprehension each centre's coordinates are saved as A and its position in the list as index.
H: How to write a LSTM model with 3 dimensional X_train and Y_trains? I have X_train and Y_train with [2160,24,3] dimensions. But when I try a simple LSTM like this: model = Sequential() model.add(LSTM(24 , return_sequences = True)) model.add(Dropout(dp)) It gives me this error message: ValueError: Error when checking target: expected dropout_192 to have shape (24, 24) but got array with shape (24, 3) And when I try to change my code like this: model = Sequential() model.add(LSTM(24 , return_sequences = False)) model.add(Dropout(dp)) model.add(RepeatVector(24)) I get: ValueError: Error when checking target: expected repeat_vector_55 to have shape (24, 24) but got array with shape (24, 3) Also: model = Sequential() model.add(LSTM(24 , return_sequences = False)) model.add(Dropout(dp)) model.add(RepeatVector(24)) model.add(TimeDistributed(Dense(1))) Or: model = Sequential() model.add(LSTM(24 , return_sequences = False)) model.add(Dropout(dp)) model.add(RepeatVector(24)) model.add(LSTM(24 , return_sequences = True)) model.add(Dropout(dp)) model.add(TimeDistributed(Dense(1))) Gives me: ValueError: Error when checking target: expected time_distributed_56 to have shape (24, 1) but got array with shape (24, 3) I am confused and don't know how to fix it? What I want is train a LSTM model with X_train and Y_train of size [2160,24,3]. How can I do that? AI: In all the above cases, you are not setting the output shape properly. The last layer should create 3 features. Try this for example: model = Sequential() model.add(LSTM(24, return_sequences = True)) model.add(Dropout(dp)) model.add(Dense(3))
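Building on that answer, here is a minimal end-to-end sketch matching the (2160, 24, 3) shapes; the random stand-in data, loss, optimizer, dropout rate and the tensorflow.keras import path are placeholder choices.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

dp = 0.2                                 # placeholder dropout rate
X_train = np.random.rand(2160, 24, 3)    # stand-ins for the real data
Y_train = np.random.rand(2160, 24, 3)

model = Sequential()
model.add(LSTM(24, return_sequences=True, input_shape=(24, 3)))
model.add(Dropout(dp))
model.add(Dense(3))                      # applied per timestep -> output shape (batch, 24, 3)

model.compile(optimizer='adam', loss='mse')
model.summary()
model.fit(X_train, Y_train, epochs=5, batch_size=32)
```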
H: Pattern Recognition - Kernel Density estimators 2.5.1 Please refer page 122-123 of Pattern recognition and Machine Learning - Bishop. A few equations: Density estimate: $$ p(\mathbf{x}) = \frac{K}{NV} \tag{2.246}$$ where $K$ = #points in: $N$ regions of volume $V$ each. Kernel function: Number $K$ of points falling within a hypercube centered at origin: $$\begin{equation}k(\mathbf{u}) =\begin{cases} 1, \quad |u_i| \leq \frac{1}{2} \qquad i = 1,\dots,D,\\ 0, \quad otherwise \end{cases} \tag {2.247} \end{equation}$$ From 2.247, the quantity $k((\mathbf{x} - \mathbf{x_n})/h)$ will be one if the data point $\mathbf{x_n}$ lies inside a cube of side $h$ centered on $\mathbf{x}$, and zero otherwise. The total number of data points lying inside this cube will therefore be: $$K = \sum_{n=1}^{N} k \Big(\frac{\mathbf{x} - \mathbf{x_n}}{h}\Big) \tag{2.248}$$ Substituting this expression into (2.246) then gives the following result for the estimated density at $\mathbf{x}$ $$p(\mathbf{x}) = \frac{1}{N}\sum_{n=1}^{N} \frac{1}{h{^D}}k\Big( \frac{\mathbf{x} - \mathbf{x_n}}{h}\Big) \tag{2.249}$$ where we have used $V$ = $h^D$ for the volume of hypercube of side $h$ in $D$ dimensions. Using the symmetry function $k(\mathbf{u})$, we can now re-interpret this equation, not as a single cube centered on $\mathbf{x}$ but as the sum over $N$ cubes centered on $N$ data points $\mathbf{x_n}$ I am struggling to follow explanation of last paragraph. Each point $\mathbf{x_n}$ is at the center of (respective) cube - total $N$ cubes. If so, then concept of boundary/ distance from a fixed point seems to be missing - ref. 2.247, 2.248 resulting all 1's. Which points will be zero, if any. Just a rough sketch: Left sketch represents 2.248 - imagine few points outside the cube (=0); points inside cube=1. Right sketch is my understanding of 2.249's explanation: 3 points, all at center of respective cube. On what basis will the data points be classified as 0 or 1? AI: Pls. refer the equations above. In case of eq 2.248 (left box image) we're calculating density at point $x$, only. There are total of 3 points, only 2 qualify as per kernel function and hence probability at $x$, $p(x) = \frac{2}{3}$. The right box diagram shows that we take the points, one by one, and calculate the probability at each point using kernel function. Thus we get probability at each of point of data set. And effectively, we get a probability distribution for the entire data set. The bar chart shows the PMF for 2 cases - left: probability based on (eq 2.248) and right: for (eq. 2.249). I use PMF to easily communicate the understanding. *Note: 1- With x_1 centered cube 2 points: x_1 & x_2 lie inside the cube; same for x_2 centered cube. However for x_1 only one point x_1 lies in the cube - out of total of 3 points. 2- d|a, b| gives distance between points a, b. 3- for a large collection of data points and cubes of small sides (h), we get a PDF distribution curve.
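To make equation (2.249) concrete, here is a small numpy sketch of a Parzen-window (hypercube kernel) density estimate; the sample data and the bandwidth h are arbitrary choices for illustration.

```python
import numpy as np

def parzen_density(x, data, h):
    """Estimate p(x) with the hypercube kernel of eq. (2.249).

    x    : query point, shape (D,)
    data : the N data points, shape (N, D)
    h    : side length of the hypercube
    """
    N, D = data.shape
    u = (x - data) / h                         # (x - x_n) / h for every data point
    inside = np.all(np.abs(u) <= 0.5, axis=1)  # k(u) = 1 if all |u_i| <= 1/2
    K = inside.sum()                           # number of points in the cube centred on x
    return K / (N * h**D)                      # p(x) = K / (N V), with V = h^D

# Toy example: 1000 samples from a 2-D standard normal, density estimated at the origin
data = np.random.randn(1000, 2)
print(parzen_density(np.zeros(2), data, h=0.5))
```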
H: Machine learning for object states I have the objects pool with histories of their states, where each transition from one state to another takes some time, ex: $$ object\ 1: A \overset{1s}{\rightarrow} B \overset{2s}{\rightarrow} C \\ object\ 2: D \overset{3s}{\rightarrow} E \overset{4s}{\rightarrow} F $$ The pool consists of many objects with right histories, where sequence of states and transition duration follows some unknown rules. Also the pool consists a small amount of object with wrong history. But which objects are right or wrong is unknown. It needs to found mistakes in the pool. Ex, "the transition from X to Y is not allowed" or "the transition is too long for such type". What machine learning models are more suitable for this problem? AI: To me this problem looks similar to language modeling: a model is trained on a large amount of sequences, and then it can predict the probability of any input sequence. In your case a low probability would indicate an abnormal sequence. My background is in NLP that's why I think language modeling, but I guess the same techniques are used for other problems as well. The fact that you have transitions and states suggests Markov Models, for which there are known methods for inference and estimation. So maybe you could design a more specific kind of model for your case and use something like the Baum–Welch algorithm.
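Following the Markov-model suggestion, a very simple baseline is to estimate transition frequencies and mean transition durations from the pool and then flag transitions that are rare or unusually slow; the data layout, threshold values and function names below are assumptions for the sake of the sketch.

```python
from collections import defaultdict

# Each history is a list of (state, next_state, duration_seconds) transitions,
# e.g. object 1 -> [("A", "B", 1), ("B", "C", 2)]
histories = [
    [("A", "B", 1), ("B", "C", 2)],
    [("D", "E", 3), ("E", "F", 4)],
]

counts = defaultdict(int)       # how often each (state, next_state) pair occurs
durations = defaultdict(list)   # observed durations per transition
out_of = defaultdict(int)       # how often each state is left at all

for history in histories:
    for state, nxt, dur in history:
        counts[(state, nxt)] += 1
        durations[(state, nxt)].append(dur)
        out_of[state] += 1

def flag(state, nxt, dur, min_prob=0.01, max_factor=3.0):
    """Flag a transition as suspicious if it is rare or much slower than usual."""
    prob = counts[(state, nxt)] / out_of[state] if out_of[state] else 0.0
    if prob < min_prob:
        return f"transition {state}->{nxt} is (almost) never seen"
    mean_dur = sum(durations[(state, nxt)]) / len(durations[(state, nxt)])
    if dur > max_factor * mean_dur:
        return f"transition {state}->{nxt} took unusually long"
    return None
```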
H: Is there a real life meaning about KMeans error? I am trying to understand the meaning of error in sklearn KMeans. In the context of house pricing prediction, the error linear regression could be considered as the money difference per square foot. Is there a real life meaning about KMeans error? AI: The K-means Error gives you, what is known as, total intra-cluster variance. Intra-cluster variance is the measure of how much are the points in a given cluster spread. The following cluster will have high intra-cluster variance In the image below, even though the number of points are same as that of the image above, the points are densely distributed and hence will have lower intra-cluster variance. K-means Error is interested in the total of such individual cluster variances. Suppose for a given data, if clustering 'A' forms clusters like the first image and clustering 'B' forms clusters like the second image, you will in most cases choose the second one. Although this does not mean that the K-means Error is a perfect objective to optimize on to form clusters. But it pretty much catches the essence behind clustering. Code used for cluster plot generation - import numpy as np from matplotlib import pyplot as plt sparse_samples = np.random.multivariate_normal([0, 0], [[50000, 0], [0, 50000]], size=(1000)) plt.plot(sparse_samples[:, 0], sparse_samples[:, 1], 'b+') axes = plt.gca() axes.set_xlim(-1000, 1000) axes.set_ylim(-1000, 1000) plt.show() dense_samples = np.random.multivariate_normal([0, 0], [[5000, 0], [0, 5000]], size=(1000)) plt.plot(dense_samples[:, 0], dense_samples[:, 1], 'r+') axes = plt.gca() axes.set_xlim(-1000, 1000) axes.set_ylim(-1000, 1000) plt.show() In both cases, a 1000 datapoints from a Bivariate Normal Distribution are sampled and plotted . In the second case, the Covariance Matrix is changed to plot a denser cluster. np.random.multivariate_normal's documentation can be found here. Hope this helps!
H: how to run my python code on google cloud without fear of getting disconnected - an absolute beginner? I have been trying to use python 3 for text mining on a 650 MB csv file, which my computer was not powerful enough to do. My second solution was to reach out to google cloud. I have set up my VMs and my jupyter notebook on google cloud, and it works perfectly well. The problem, however, is that I am in constant fear of getting disconnected. As a matter of fact, my connection with google server was lost a couple of time and so was my whole work. My question: Is there a way to have the cloud run my code without fear of getting disconnected? I need to be able to have access to my csv file and also the output file. I know there is more than one way and have read a lot of material but they are too technical for a beginner like me to understand. UPDATE: here is how I get access to my jupyter notebook on google cloud: 1- I run my instance on google cloud 2-I click on SSH 3- in the window that appears, I type the following: jupyter notebook --ip=0.0.0.0 --port=8888 --no-browser & I have seen people recommend to add nohup to the beginning of the same commend. I have tried it and got this message: nohup: ignoring input and appending output to 'nohup.out' And nothing happens. AI: There are several options to keep your session running on the host machine even if it's disconnected, see for instance this: https://www.tecmint.com/keep-remote-ssh-sessions-running-after-disconnection/ Personally I use the screen command because I'm familiar with it but I think tmux is more recent.
H: Confused with the derivation of the gradient descent update rule I have been going over some theory for gradient descent. The source I am looking at said that the change in cost can be described by the following equation: $$∆C=∇C∙∆w$$ where $∇C$ is the gradient vector/vector derivative of the cost function (MSE) and $∆w$ is the change in weights. It said that the goal is to make the change in cost negative. Good so far. My issue is with the next part. It states that $$∆w=-η∇C$$ My issue is with this, and why $∆w$ is set to this. Why would we want to change the weights by a small amount of the gradient function? AI: Upon writing this I have realised the answer to the question. I am still going to post so that anyone else who wants to learn where the update rule comes from can do so. I have come to this by studying the equation carefully. $∇C$ is the gradient vector of the cost function. The definition of the gradient vector is a collection of partial derivatives that point in the direction of steepest ascent. Since we are performing gradient 'descent', we take the negative of this, as we hope to descend towards the minimum point. The issue for me was how this relates to the weights. It does so because we want to 'take'/'travel' along this vector towards the minimum, so we add it onto the weights. Substituting $∆w=-η∇C$ into the first equation gives $∆C=∇C∙(-η∇C)=-η\|∇C\|^2\leq 0$, so the cost does not increase. Finally, we use eta ($η$), which is a small constant. It is small so that the linear approximation $∆C≈∇C∙∆w$ remains valid and the inequality $∆C<0$ is obeyed, because we want to always decrease the cost, not increase it. However, if it is too small, the algorithm will take a long time to converge. This means the value for eta must be experimented with.
H: Average reward reinforcement learning What is the bellman equation update rule for the average reward reinforcement learning? I searched a few articles, but could not find any practical answer. AI: In general, the average reward setting replaces the discounted setting in continuous tasks. It relies on there being a long term stable distribution of states under any particular policy (this is called ergodicity) - and in this will usually be true for continuous MDPs that don't have absorbing states. If you see an update rule in the discounted setting that looks like this (for Q learning): $$Q(s,a) \leftarrow Q(s,a) + \alpha(r + \gamma\text{max}_{a'}Q(s',a') - Q(s,a))$$ Then you replace the discounted TD error by the differential TD error: $$Q(s,a) \leftarrow Q(s,a) + \alpha(r -\bar{r} + \text{max}_{a'}Q(s',a') - Q(s,a))$$ where $\bar{r}$ is the mean reward per time step under the current policy. You can estimate this simply from the rewards seen so far. I searched a few articles, but could not find any practical answer. See Reinforcement Learning: An Introduction (Second Edition) chapter 10, sections 3 and 4 for a more thorough description and more examples.
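As an illustration of the differential (average-reward) update described above, here is a small tabular sketch in numpy; the environment interface, step sizes and the epsilon-greedy exploration are assumptions made for the sake of the example.

```python
import numpy as np

def differential_q_learning(env, n_states, n_actions,
                            alpha=0.1, beta=0.01, epsilon=0.1, steps=100_000):
    """Tabular Q-learning in the average-reward setting (no discount factor).

    env is assumed to expose reset() -> state and step(action) -> (next_state, reward).
    """
    Q = np.zeros((n_states, n_actions))
    r_bar = 0.0                       # running estimate of the average reward per step
    s = env.reset()
    for _ in range(steps):
        if np.random.rand() < epsilon:
            a = np.random.randint(n_actions)
        else:
            a = np.argmax(Q[s])
        s_next, r = env.step(a)
        delta = r - r_bar + np.max(Q[s_next]) - Q[s, a]   # differential TD error
        Q[s, a] += alpha * delta
        r_bar += beta * delta         # update the average-reward estimate with the same TD error
        s = s_next
    return Q, r_bar
```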
H: How does personalized machine learning work? Many services (such as Netflix, Amazon, and Google Search, Apple's Siri) are said to get better by learning the 'habits' of their users. As I understand, they somehow create a customized machine learning model for each individual because a generic model would not work well. However, I would like to know how this can be achieved in practice. For the sake of discussion, suppose I am developing a new massage chair that changes its "massage recipe" based on the respiration pattern of its user. I suppose (and I might be wrong here) that the respiration pattern varies from person to another but that the variations in a person's respiration pattern indicate what kind of message the person would like to have (again, these suppositions are wrong but they are presented for the sake of discussion). I have questions on how these models are customized. Specifically: How would I train the model for my hypothetical device? How would customize the said model? Do big companies (Amazon et.al) create a person-specific model for every user? And if this is the case, wouldn't this be impractical (Google serves billions of people; therefore, having billions of models would be hard to maintain) In the case of Google for example, when setting a new android phone for the first time, the user is requested to repeat three-times "Ok Google" to train their voice recognition model. Is this small sample of sound enough to create the voice recognition model? Or is the sample used to customize an existing generic model? And if so, how is this done? Is there any references (blog, paper, etc...) that discuss this topic in details? AI: You should google recommendation systems. A good starting point is this place https://paperswithcode.com/task/recommendation-systems where they have some papers on recommendation systems with code.
H: How do I convert a summation equation to a vector equation (backpropagation)? $$a_j^l=\sigma(\sum_{k} w_{jk}^l {a}_k^{l-1}+b_j^l)$$ $$a^l=\sigma( w^l {a}^{l-1}+b^l)$$ In a resource I have been reading, the above equations describe the activation of a neurone. They have the same function (I agree), but the first is a summation representation, whereas the second is the same equation in vector form. My questions are: what did they do to remove the subscripts, and what is the process for converting summation equations to vector equations? AI: $a_j^l$ is the $j$-th component of $a^l$. Hence, the first equation is $1$-dimensional, while the second is a vector equation. Similarly for the notation $b_j^l$ and $w_{jk}^l$. $\sigma$ denotes the sigmoid function; if the input is a vector, it is evaluated componentwise. To answer your main question, recall from the definition of the matrix-vector product that, for a matrix $A$ and a vector $x$, $Ax$ produces a vector $y$ whose $j$-th component is $y_j= \sum_k A_{jk}x_k$. That is, the vector equation $$y=Ax$$ can be written elementwise as $$y_j = \sum_k A_{jk}x_k.$$ Hence that is how we can "remove" the summation sign.
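As a quick numerical check of this equivalence, here is a small numpy sketch with arbitrary layer sizes, weights, activations and biases:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_prev, n_cur = 4, 3
W = np.random.randn(n_cur, n_prev)   # w^l, shape (n_l, n_{l-1})
a_prev = np.random.randn(n_prev)     # a^{l-1}
b = np.random.randn(n_cur)           # b^l

# Vector form: a^l = sigma(w^l a^{l-1} + b^l)
a_vec = sigmoid(W @ a_prev + b)

# Summation form: a_j^l = sigma(sum_k w_jk^l a_k^{l-1} + b_j^l)
a_sum = np.array([sigmoid(sum(W[j, k] * a_prev[k] for k in range(n_prev)) + b[j])
                  for j in range(n_cur)])

print(np.allclose(a_vec, a_sum))  # True
```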
H: Class Imbalance Problem even after Balancing Data So I am training a neural network on a binary classification problem; my Cases (1) and Controls (0) were imbalanced, so I oversampled my cases so that the training set was 0.5053 made up of controls. I did not balance my test set, which was 0.562 controls. In the beginning my train and test accuracy rise (the model is not very accurate yet, but I expect this to be the case), but then the training accuracy steeply drops while the test accuracy plateaus. They end up with accuracies of 0.5053 and 0.562 respectively, so the network is just classifying everything the same. I do not understand how this behavior arises, as I thought that balancing my training set would avoid the problem of classifying everything the same. Also, the training set begins to learn initially upwards from 50/50, but I cannot understand its reversion. Is there anything I can do to prevent this? Or should I just employ early stopping when the training accuracy begins to decrease? Any insight would be appreciated! opt = tf.keras.optimizers.SGD(lr=0.000001, momentum=0.9, decay=0, nesterov=True) model = keras.Sequential([keras.layers.Dense(100,kernel_initializer='he_uniform',bias_initializer=keras.initializers.Constant(value=0.01),activation=tf.nn.relu,kernel_regularizer=regularizers.l2(0.1)), keras.layers.Dense(100,kernel_initializer='he_uniform',bias_initializer=keras.initializers.Constant(value=0.01),activation=tf.nn.relu, kernel_regularizer=regularizers.l2(0.1)), keras.layers.Dense(1, activation=tf.nn.sigmoid)]) model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy']) AI: The idea that balancing the dataset will prevent the network from classifying everything the same (and thus give good results on the test set) works under the assumption that the model does not underfit. What may be happening is that the model is so weak that it can't perform the classification task on either the unbalanced or the balanced dataset. It's difficult to analyse the problem without more information: you may be facing convergence issues (a badly tuned learning rate, vanishing gradients), or you may be computing the accuracy incorrectly...
H: Predict Customer Next Purchase with Sequence Suppose I buy products: [1,2,3,4] Another customer X bought: [2,3] Most probably customer X next purchase will be: 4 Sequence is very important in my problem I tried association analysis using R, but it don't take under consideration the sequence Please advise what algorithms I need to solve this? Do I need to First do Clustering to find similar customers? AI: For sequence problems, I generally used Recurrent Neural Network. It has a property to learn its values, which is based upon the previous state and current input. Since in your case sequence is important, you can use RNN. You can also try LSTM(type of RNN) cell. And Additional Note: For product suggestion(where sequence is not important) you can also use Apriori algorithm. This algorithm try to build association between the products of a single order.
H: How to standardize my data (Univariate Time Series Forecasting using Keras LSTM)? Let $X = (X_1,...., X_n)$ be a univariate time series. I would like to know how to standardize my data when I split it into train and test data. Let me explain how I transform $X$ so that I can fit an LSTM neural net. From $X$ I make a new input data and its corresponding output data. So, we have: $X = ( (X_1, ..., X_m), ... , (X_{n-m}, ..., X_{n-1}) )$ $Y = (X_{m+1}, ..., X_n)$ $\text{Card}X = \text{Card}Y$ Let's set $p$ as the size of my test set. If I use Python's notation, we have: $X_{train} = X[:-p]$ $X_{test} = X[-p:]$ Idem for $Y$ ... Now, I am wondering how to standardize my data. I think that standardizing $X$ before splitting the data into train and test sets could lead to over-fitting, since we apply a transformation that involves all $X_i$. Basically, I am not sure whether the summary statistics (mean, standard deviation) computed over the whole series would leak information. In this case I think it could be better to just compute the mean and the standard deviation on the training set and use them to standardize both the train and test sets. For me it makes no sense to standardize them separately since $\text{Card}X_{test} << \text{Card}X_{train}$. But maybe I am wrong. I would also like to know whether I have to standardize both $Y$ and $X$ or just $X$. When I am working with an MLP neural net I usually just normalize the input data. So, thank you first for reading, and if you have any ideas, remarks or questions to ask, please let me know. I can explain more, it is up to you :) P.S. I didn't find a 'standardization' tag and thus used the tag named normalization. AI: Welcome kakarotto! First thing: standardize with respect to the training set only, then use those statistics to standardize the other sets. As a rule of thumb, we should keep in mind that the test set does not exist while training (even though we have access to it). When training, if the network does not see the test set, neither do we (the humans). If the test set was well selected, and so was the whole dataset, the training set and the test set should come from the same probability density function, and, if enough data is provided, the statistics you get from the training set will be close to the statistics of that pdf (assuming unbiased estimators for these statistics, which is the case for the mean and standard deviation). Based on my experience, only standardize the input data. With the output data, I usually scale it (min-max scaler) if I know the boundaries (if they even exist…). Another useful trick that has usually worked for me is to not feed the net with the raw input data but with its relative differences, for example: $X = (0, X_1 - X_0, X_2 - X_0,\dots,X_m-X_0)$
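A small sketch of "fit the scaler on the training part only" with scikit-learn; the random series and the split point p are placeholders.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

series = np.random.randn(1000, 1)        # the univariate series X, as a column vector
p = 200                                  # size of the test set (placeholder)

train, test = series[:-p], series[-p:]

scaler = StandardScaler()
train_std = scaler.fit_transform(train)  # mean/std computed on the training part only
test_std = scaler.transform(test)        # the same statistics reused on the test part

# the (samples, window, 1) tensors for the LSTM are then built from train_std / test_std
```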
H: In Naive Bayes classifier how is P(sneezing,builder|flu) = P(sneezing|flu)P(builder|flu)? Please refer to this literature: According to Naive Bayes classification algorithm: $P(sneezing,builder|flu) = P(sneezing|flu)P(builder|flu) $ where sneezing and builder are independent events. How do they arrive at the above conclusion mathematically? Is it something like: $P(sneezing,builder|flu)$ $=P(sneezing \cap builder| flu)$ $= \frac{P((sneezing \cap builder) \cap flu)}{P(flu)}$ AI: This is just a basic property of conditional independence. If two events A and B are conditionally independent, given event C then: $$Pr(A \ and\ B \ |\ C) = Pr(A \ |\ C) * Pr(B \ | \ C)$$ or equivalently, $$Pr(A \ | \ B \ and\ C) = Pr(A \ |\ C).$$ That is, $$ Pr(A \ and \ B | \ C) $$ $$ =\frac{Pr(A \ and \ B \ and \ C)}{Pr(C)} $$ (definition of conditional probability) $$= \frac{Pr(A \ | B \ and\ C) * Pr (B \ and \ C)}{Pr(C)}$$ (by Bayes rule) $$= Pr(A \ | \ C) * Pr(B\ | \ C) $$ (independence and definition of conditional probability again) The independence assumption is what makes Naive Bayes "naive" because it is a bold assumption in general to think your variables are all conditionally independent, given a class. In Naive Bayes, we seek to find an expression for: $$Pr(class \ r \ | x_1, x_2, x_3..., x_k)$$ where $x_1, x_2, x_3, ... x_k$ are the features/variables/predictors that we have. Applying Bayes rule; $$Pr(class \ r\ | x_1, x_2,...,x_k) = \frac{Pr(x_1, x_2, ...x_k\ | class \ r) * Pr(class \ r)}{\sum_{i=1}^m Pr(x_1, x_2, ...x_k\ |\ class \ i) Pr(class \ i )}$$ where m = number of classes. The expression $Pr(x_1, x_2, ...x_k\ | class \ r)$, the conditional joint distribution of the predictors, is essentially impossible to estimate directly. So we make the assumption of conditional independence in the predictors, given a class (and that the distribution of $(x \ | \ class \ i)$ follows Gaussian, typically, but not always) $$Pr(x_1, x_2, ...x_k\ | class \ r) = \prod_{j = 1}^{k}Pr(x_j \ | \ class \ r)$$ Hence: $$Pr(class \ r\ | x_1, x_2,...,x_k) = \frac{\prod_{j = 1}^{k}Pr( \ x_j | \ class \ r) * Pr(class \ r)}{\sum_{i=1}^m \left[\prod_{j = 1}^{k}Pr(x_j | class \ i) \right]Pr(class \ i )}$$. We typically estimate the prior probabilities, $Pr(class \ i )$, using the MLE = sample proportion; # of training examples in class i / # of total training samples.
H: How to transform time series data to apply supervised learning algorithms to it? Apologies in advance for what may be a very basic question. I have a dataset consisting of marketing calls to different clients, which include the timestamp for the call. My goal is to train a model to predict if whether a customer will answer a call or ignore it based on the call time, as well as other features like caller id, etc. The issue is that the outcome of the call is clearly time dependent, that is, the order of the data points per client matters for the prediction of the dependent variable at a given time. My question is, how can I transform my features, so that I can use standard classification algorithms like Logistic Regression/Random Forest to classify a new data point? Are these algorithms effective for these scenarios? If so, how should I proceed to take the previous data points into consideration? I have read that time series data can be converted to a supervised learning problem, by including lead and lag columns of the dependent variable. But since my test data will not have those columns, I am confused to how they might help me. Thank you very much. AI: You can aggregate all previous data points into a new features. For example the number of previous (un)successful call attempts or the number of days since the last call. After this aggregation (of transforming previous calls into new scalar features) you are no longer dealing with a time series but have the usual design matrix with one row per client (and one column per feature). Are these algorithms effective for these scenarios? They should be effective if you can derive good features, which shouldn't be too difficult. This is more manual than using a sequence-model (say LSTM) but if you derive good features, I would expect this too work better. I have read that time series data can be converted to a supervised learning problem, by including lead and lag columns of the dependent variable. But since my test data will not have those columns, I am confused to how they might help me. It doesn't sounds like you are dealing with a standard time-series problem. The timestamps themselves don't strike me as too important for predicting answers to a call. You can probably achieve good results by simply including the time of day and day of the week in addition to the features mentioned above.
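A possible pandas sketch of the aggregation idea: for each call, derive features from that client's previous calls (counts, answer rate, days since last call) plus simple time-of-day and day-of-week features. The tiny example frame and all column names are assumptions for illustration.

```python
import pandas as pd

# Toy stand-in for the real call log
calls = pd.DataFrame({
    'client_id': [1, 1, 1, 2, 2],
    'timestamp': ['2019-05-01 09:00', '2019-05-03 14:00', '2019-05-07 10:00',
                  '2019-05-02 11:00', '2019-05-04 16:00'],
    'answered': [0, 1, 1, 0, 0],
})
calls['timestamp'] = pd.to_datetime(calls['timestamp'])
calls = calls.sort_values(['client_id', 'timestamp'])

grp = calls.groupby('client_id')

# History-based features, using only *previous* rows to avoid leakage
calls['prev_calls'] = grp.cumcount()
calls['prev_answered'] = grp['answered'].cumsum() - calls['answered']
calls['prev_answer_rate'] = (calls['prev_answered'] / calls['prev_calls']).fillna(0)
calls['days_since_last'] = grp['timestamp'].diff().dt.days

# Time features
calls['hour'] = calls['timestamp'].dt.hour
calls['weekday'] = calls['timestamp'].dt.dayofweek

features = ['prev_calls', 'prev_answer_rate', 'days_since_last', 'hour', 'weekday']
X, y = calls[features].fillna(-1), calls['answered']
```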
H: How do I get similarity with autoencoders I have built an autoencoder to extract, from a very high dimensional (200 dimensions) space, a smaller but significant representation (16 dimensions). Now that I have these "encoded" vectors, I would like to compute some kind of similarity score, or clustering. I am not sure which notion of distance to apply at this point. Any ideas how I can get similarity/clusters considering that I have used autoencoders? AI: You can calculate the cosine similarity between the two encoded vectors you would like to compare. The cosine similarity between two vectors $u$ and $v$ is defined as $$\cos(\theta) = \frac{u \cdot v}{\|u\|\,\|v\|}$$
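A short sketch of computing pairwise cosine similarities (and a clustering) on the encoded vectors with scikit-learn; the random `encoded` array and the number of clusters are placeholders for the (n_samples, 16) output of the encoder.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans

encoded = np.random.randn(500, 16)   # placeholder for the encoder's output

sim = cosine_similarity(encoded)     # (500, 500) matrix of pairwise similarities

# Clustering directly on the encoded vectors
labels = KMeans(n_clusters=5, random_state=0).fit_predict(encoded)
```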
H: Is this the way to obtain the same individuals for x_test and y_test? x_train, x_test = train_test_split(x, test_size = 0.3,random_state=250) y_train, y_test = train_test_split(y, test_size = 0.3,random_state=250) Is this the way to coincide the same individuals in x_test and y_test as well as x_train and y_train? AI: train_test_split has this capability built-in. Just pass all of the data in the first go like: Code: x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=250) Test Code: from sklearn.model_selection import train_test_split x = list(range(100)) y = list(range(100, 200)) x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.3, random_state=250) assert(all(xx == yy - 100 for xx, yy in zip(x_train, y_train))) assert(all(xx == yy - 100 for xx, yy in zip(x_test, y_test)))
H: When I should use PCA? I have a data set with 60000 rows and 32 columns. I want to use SVM (with some more constraints that make it more complicated)and I think 32 columns are too large. So I decided to use PCA. But when I load PCA, the first 20 component describe 85% of data which 20 variables still is too large I think, but it is better than 32. I am wondering is that ok if I use PCA? when n>>m? if not what is a better option? AI: If you can, go for some non-linear dimensionality reduction technique. The most powerful are Autoencoders, but you can also use t-SNE or other manifold techniques. The problem of PCA is that it can extract only latent factors that are linearly associated with your variables. Using non-linear techniques, less variables can let you capture more of the original variance.
H: Multiply weights after using dropout in training - PyTorch I have a PyTorch regression model as follows: model = nn.Sequential(nn.Linear(n_in, n_h_1), nn.ReLU(), nn.Dropout(p=0.1), nn.Linear(n_h_1, n_h_2), nn.ReLU(), nn.Linear(n_h_2, n_out)) As you can see, I have used a Dropout regularization layer with dropout probability of 0.1. When the training is completed, we should disable the dropout. This can be achieved using model.eval(). Referring to this StackOverflow answer and other resources, we should multiply the output of the hidden layer by (1-p) when running inference with the model. My question is: do we have to do this manually using PyTorch, or does the library handle this itself after applying model.eval()? If we have to do it ourselves, how can we achieve this for the given model? AI: PyTorch handles this for you. It uses "inverted dropout": the outputs of the dropout layer are already scaled by 1/(1-p) at training time, so no additional scaling by (1-p) is needed at inference. So it handles this itself after applying model.eval(), which simply turns the dropout layer off. You can look up their code here:
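A quick way to verify this behaviour is to inspect a dropout layer directly in both modes; a minimal sketch:

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.1)
x = torch.ones(8)

drop.train()
print(drop(x))   # surviving elements are scaled to 1 / (1 - 0.1) ≈ 1.1111, dropped ones are 0

drop.eval()
print(drop(x))   # identity: all ones, no scaling applied at inference
```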
H: How to measure the correlation between categorical variables and a continuous variable I have the following list of the names of the categorical variables in my dataset: categorical_columns = ['MSSubClass', 'MSZoning', 'LotShape', 'LandContour', 'LotConfig', 'Neighborhood', 'Condition1', 'Condition2', 'BldgType', 'HouseStyle', 'RoofStyle', 'RoofMatl', 'Exterior1st', 'Exterior2nd', 'Foundation', 'Heating', 'Electrical', 'Functional', 'GarageType', 'PavedDrive', 'Fence', 'MiscFeature', 'SaleType', 'SaleCondition', 'Street', 'CentralAir'] in this function definition, I am one-hot encoding each column with the help of a for loop: def feature_encoding(df, categorical_list): # One Hot Encoding the columns gathered in categorical_columns for col in categorical_list: # take one-hot encoding OHE_sdf = pd.get_dummies(df[categorical_list]) # drop the old categorical column from original df df.drop(col, axis = 1, inplace = True) # attach one-hot encoded columns to original dataframe df = pd.concat([df, OHE_sdf], axis = 1, ignore_index = True) I don't want to use all of these columns in my training so now I am at the dimensionality reduction phase of my work. I want to measure the correlation between each of these columns and my SalePrice variable (numerical) and weed out the columns with low correlation. I've read that Chi-square test is generally used for measuring the correlation of categorical variables but I have not seen an implementation where it was a list of categorical variables vs a continuous variable. AI: The Chi-squared test measures the relationship between two categorical variables. To measure the relationship between a categorical feature and a continuous feature, you can use an ANOVA test. As an aside, you don't need to use a for loop for your encoding - get_dummies has the parameter columns which allows the user to specify which columns are encoded.
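To follow the ANOVA suggestion in code, here is a sketch using scipy's one-way ANOVA to score each categorical column against SalePrice; df, categorical_columns and 'SalePrice' are taken from the question, and the 0.05 threshold is an arbitrary choice.

```python
from scipy import stats

anova_results = {}
for col in categorical_columns:
    # group the continuous target by the levels of the categorical feature
    groups = [group['SalePrice'].values
              for _, group in df[[col, 'SalePrice']].dropna().groupby(col)]
    f_stat, p_value = stats.f_oneway(*groups)
    anova_results[col] = p_value

# keep the features whose relationship with SalePrice is significant
selected = [col for col, p in anova_results.items() if p < 0.05]
```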
H: In the context of machine learning, what is the relationship between "Normalization" and "norm" "Normalization" and "norm" are used a lot in machine learning In statistics and applications of statistics, normalization can have a range of meanings. In the simplest cases, normalization of ratings means adjusting values measured on different scales to a notionally common scale, often prior to averaging. In more complicated cases, normalization may refer to more sophisticated adjustments where the intention is to bring the entire probability distributions of adjusted values into alignment. In the case of normalization of scores in educational assessment, there may be an intention to align distributions to a normal distribution. A different approach to normalization of probability distributions is quantile normalization, where the quantiles of the different measures are brought into alignment. In linear algebra, functional analysis, and related areas of mathematics, a norm is a function that assigns a strictly positive length or size to each vector in a vector space—except for the zero vector, which is assigned a length of zero. A seminorm, on the other hand, is allowed to assign zero length to some non-zero vectors (in addition to the zero vector). A norm must also satisfy certain properties pertaining to scalability and additivity which are given in the formal definition below. in the context of machine learning, what is the relationship between "Normalization" and "norm" AI: In the context of deep learning, normalization usually refers to the process of subtracting the mean and dividing by the standard deviation: $$ \hat{x_i}=\frac{x_i - \mu}{\sigma} $$ This kind of normalization is not related with the norm of a vector. Instead, it refers to the statistical notion you referred aimed at rescaling the values. In a statistical context, this approach is sometimes referred to as "standardization" of a random variable, which makes its mean $0$ and its standard deviation $1$, assuming the original variable follows a normal distribution. There exist multiple types of normalization in deep learning, depending on what we normalize over. The most used ones are instance normalization, batch normalization and layer normalization. These are explained graphically in the image below, which is borrowed from article Group Normalization: Each subplot shows a feature map tensor, with N as the batch axis, C as the channel axis, and (H, W) as the spatial axes. The pixels in blue are normalized by the same mean and variance, computed by aggregating the values of these pixels. Each type of normalization has a different purpose, and some of them have shown their practical success but it's not clear why they work well. These are some authoritative references for the mainstream normalization strategies: Batch normalization: [Ioffe and Szegedy, 2015], [Santurkar et al., 2018] Instance normalization [Ulyanov et al., 2016] Layer normalization: [Ba et al., 2016] Note that some of the normalization variants need further processing apart from the value normalization itself. For instance, with batch normalization you need to store $\mu$ and $\sigma$ to use them at inference time.
H: Max pooling has no parameters and therefore doesn't affect the backpropagation? I feel this question has a lot of variations already posted, but they don't exactly answer my question. I understand the concept of max pooling and also the concept of backpropagation. What I cannot understand is why it is said that max pooling doesn't affect the process of backpropagation. Is it because max pooling "doesn't have any parameters"? Also, is this statement (max pooling has no parameters) right? Why or why not? AI: Pooling layers do not have parameters that are updated by backpropagation. Backpropagation is an algorithm to efficiently implement gradient descent in a neural network by using the chain rule and propagating the error from the final layers to the early ones. The parameters learned this way are often called trainable variables (in TensorFlow), and they are the weights and biases of the convolutional filters. Adding pooling layers does not change the number of them. Pooling layers are a way of performing downsampling, and they are used for the following main reasons: to decrease the computational load of the network (smaller outputs mean fewer FLOPS); to save memory during training (less memory allocation on the GPUs); to increase the receptive field of the neurons in successive layers (you can check this link about the topic: https://medium.com/@santi.pdp/receptive-fields-in-convolutional-neural-networks-6368a699d838); and to add translation invariance to the feature maps: when pooling, a small translation of a feature will still place that feature in the same position in the following pooled layer. What is true is that pooling layers may have a great impact on the learning capabilities of a network.
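One way to see that pooling adds no trainable parameters is to compare parameter counts in a small Keras model (assuming TensorFlow's Keras); a minimal sketch:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D

model = Sequential([
    Conv2D(16, (3, 3), input_shape=(32, 32, 3)),  # (3*3*3 + 1) * 16 = 448 parameters
    MaxPooling2D((2, 2)),                         # 0 parameters
])
model.summary()   # the MaxPooling2D row reports "Param #: 0"
```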
H: Seaborn subplots massive whitespace I'm trying to plot three heatmaps in a vertical column using Seaborns subplot method. import seaborn as sns initalCorr = inputX.corr() secondaryCorr = inputX_corr.corr() finalCorr = inputX[selected_columns_pvalue].corr() fig, axs = plt.subplots(3, figsize = (15,45)) fig.suptitle('Heatmaps of correlation between features selected for training', fontsize = 20) sns.heatmap(data = initalCorr, center = 0, square = True, ax = axs[0]) axs[0].set_title('Correlation between all features', fontsize = 20) sns.heatmap(secondaryCorr, center = 0, square = True, annot = True, linewidths = .5, fmt = '.2f', ax = axs[1]) axs[1].set_title('Correlation between features slected for P-Value analysis', fontsize = 20) sns.heatmap(finalCorr, center = 0, square = True, annot = True, linewidths = .5, fmt = '.2f', ax = axs[2]) axs[2].set_title('Correlation between finally selected features', fontsize = 20) The graphs themselves plot fine but the title appears out of place relative to the plots. How do I make the title directly above the heatmaps? Additional: seaborn version 0.9.0 matplotlib version 3.0.3 AI: You can move the title closer to the first figure. Just add the following two lines at the end of the code fig.tight_layout() fig.subplots_adjust(top=0.95)
H: Is it correct to join training and validation set before inferring on test-set? I would like to know if is a correct procedure to join training-set and validation-set together, in order to train the model on this new dataset, before making predictions on the test-set. AI: Yes, once you optimized your model and parameters with the validation set, it is advised to train your final model on the combination of the training and validation sets before applying it to the test set. Indeed, you can see the validation set as a subset of the training set. It is used to tweak your models and your parameters, but once it is done, it would just be a waste to not use the validation set for training during the testing phase. Remember that once you tested your model on the test set, you should not tweak it anymore.
H: Tweedie Loss for Keras
We are currently using an XGBoost model with Tweedie loss for solving a regression problem, which works very well. Now I want to move our model to Keras and experiment with neural networks. Does anybody know how I can implement the Tweedie loss for Keras? I only care about the case p=1.5, which gives us the best result in XGBoost. Thanks
AI: I ended up implementing something like this:
from tensorflow.keras import backend as K

def tweedieloss(y_true, y_pred):
    # Tweedie deviance for a fixed power p = 1.5
    p = 1.5
    dev = 2 * (K.pow(y_true, 2 - p) / ((1 - p) * (2 - p))
               - y_true * K.pow(y_pred, 1 - p) / (1 - p)
               + K.pow(y_pred, 2 - p) / (2 - p))
    return K.mean(dev)

I don't know how correct it is, but for now it seems to be working.
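For completeness, a minimal sketch of plugging the custom loss into a model; the architecture and n_features are placeholders, and a positive output activation (softplus here) is assumed because the deviance raises y_pred to negative powers:
from tensorflow.keras import layers, models

n_features = 10  # placeholder for your number of input features
model = models.Sequential([
    layers.Dense(64, activation='relu', input_shape=(n_features,)),
    layers.Dense(1, activation='softplus'),  # keeps predictions strictly positive
])
model.compile(optimizer='adam', loss=tweedieloss)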
H: How many parameters in a Conv2d Layer? I was following andrew-ng coursera course on deep learning and there's a question that has been asked there which I couldn't figure out the answer for? Suppose your input is a 300 by 300 color (RGB) image, and you use a convolutional layer with 100 filters that are each 5x5. How many parameters does this hidden layer have (including the bias parameters)? 2501 2600 7500 7600 AI: The right answer is the fourth. From this, the formula to calculate the number of parameters in a convolutional layer is (n*m*l+1)*k with n = m = 5, k = 100, l = 3 and +1 for the bias.
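If you want to verify the count programmatically, a quick Keras sketch (any framework that reports per-layer parameter counts would do) gives the same number:
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(100, (5, 5), input_shape=(300, 300, 3)),
])
model.summary()  # reports (5*5*3 + 1) * 100 = 7600 parameters for the Conv2D layer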
H: Seaborn barchart for frequency of data I'm trying to plot a bar chart to represent the frequency of two variables (Dead and Alive) in my test and training data. I want it to look like the second example described in the documentation (minus the error bars). Currently i have yfreq = pd.DataFrame(data = [Counter(Ytrain),Counter(Ytest)], index = ['Train','Test']) yfreq.columns = ['Dead', 'Alive'] print(yfreq) >>> Dead Alive Train 384 239 Test 165 103 And then for the graph sns.barplot(data = yfreq) Which correctly outputs a graph but does not split the data into test and training data. I know it has something to do with the hue kwarg but i can't work out how too do it. Thanks in advance AI: This should do the trick. You have to melt your data frame to use x,y and hue in your seaborn barplot. yfreq['type'] = yfreq.index yfreq = yfreq.melt(id_vars = 'type') sns.barplot(x = 'variable', y = 'value', hue = 'type', data = yfreq)
H: Inputs required for Random Forest Regressor and ways to improve performance I am using Random Forest Regressor to predict inventory needs. The data I am using to train the model lists the total quantity picked for each product per date, but does not include rows where total quantity picked for a product on the specified date is 0. The model considers the features that allow it to take date into consideration. For details on the data being used see the example data below: UPC day_ID month day_of_year day_of_week quantity_picked 0 0000000002554 7500.0 5 141 1 4.0 1 0000000002554 7503.0 5 144 4 2.0 2 0000000002554 7512.0 6 153 6 2.0 3 0000000002554 7527.0 6 168 9 2.0 4 0000000003082 7494.0 5 135 2 2.0 5 0000000003082 7495.0 5 136 3 2.0 6 0000000003082 7496.0 5 137 4 8.0 7 0000000003082 7497.0 5 138 5 4.0 8 0000000003082 7498.0 5 139 6 4.0 9 0000000003082 7499.0 5 140 0 9.0 10 0000000003082 7500.0 5 141 1 3.0 11 0000000003082 7501.0 5 142 2 5.0 12 0000000003082 7502.0 5 143 3 3.0 13 0000000003082 7503.0 5 144 4 8.0 14 0000000003082 7505.0 5 146 6 2.0 15 0000000003082 7506.0 5 147 3 7.0 Will the model be less accurate at predicting inventory needs because it is missing dates for items where quantity picked is 0? I have tried running the same model with the rows where quantity picked = 0 but the total number of rows changes from approximately 50k to 5 million and my computer literally can't handle it, it just freezes. Without the rows where quantity picked = 0, the model reports mean squared log error level of .39448 and runs successfully within 4 minutes 37 seconds. Any guidance on if that data is necessary or not would be very much appreciated and/or advice on how to improve performance/accuracy of such a model. AI: A random forest trained on data whose labels are all positive integers cannot produce any prediction less than 1. So yes, the model will be less accurate. I would suggest including a random subset of the rows with 0 sold, so that the forest can learn a (hopefully representative) pattern to them. I'd also probably weight them higher in the fitting algorithm, to reproduce a more faithful average of items sold (in each leaf).
H: What is the Value of X in KNN and Why? I have a dataset of 25 instances these instances are divided into 2 classes Green Circles and Blue Squares data distributed as this graph I want to predict X's class based on "Likelihood Weighted KNN with k =3" In normal KNN this is easy the nearest 3 points are 2 Blue Squares and 1 Green Circle which means X will be Blue Square there are more Blue Squares neighbours than Green Circles (2 vs 1) But What is needed is to find the Likelihood Weighted KNN with k =3 This is my try In this case we have to calculate the weight (Likelihood) for each instance Each Green Circle likelihood is $\frac{1}{5} $ , we have 5 Green Circles While for Blue Squares it is $\frac{1}{20} $ , we have 20 Blue Squares Therefore the weights around X will be $\frac{1}{5} $ Green Circle, and $\frac{2}{20} $ Blue Squares. which means $\frac{1}{5} > \frac{2}{20} $ Then X is Green Circle Well, this is wrong :( Can someone help me find the AI: Green circles and blue squares are samples of two different classes. How does it matter that "X" belongs to which one of either of these "Green circle" or "Blue Square" class? Likelihood should be done based upon the number of classes and the K value, instead of dataset samples. For me likelihood weight will be: Likelihood of being blue square ~ 0.66 i.e. 2/3, whereas likelihood of being green circle is 1/3. Prepared another example: Where I have 2 Green circle and 16 blue square, whereas K=5 If I follow your approach, likelihood of Green Square is 1/2, Blue square is 4/16 = 1/4, whereas from the image it's clear that "X" belongs to Blue Square class only. [Update] Thanks @Ben Reiniger, for correcting me. I spent more time and what I understood is, it's not always correct to put equal weights(likelihood) across different classes in dataset. My earlier observation was based upon the assumption that all classes have equal weights and I was wrong, even though data is bit skewed. Consider an another example: We have a huge dataset of patient reports, where features are created based upon the several tests done so far. Our task is to identify the possibility if a particular person suffering from cancer or not. In this case, even if a single feature point towards cancer disease, we cann't neglect it and should predict towards rare class, such that patient can go for further analysis. In such cases weights are not equivalent across different classes. According to me, wights should be defined based upon the use case. A very well explained its calculation in How does Weighted KNN works?
H: logic behind weighted KNN I am reading about KNN So I made another example to make things clearer In this example (Image attached) You can see there are in total 5 Greed Circles and 20 Blue Squares by standard KNN (k=3) , X should be a Blue Square This is obvious 2 Blue Squares vs 1 Green Circle. But in weighted KNN things are difference In this case we have to calculate the weight (Likelihood) for each instance Each Green Circle likelihood is $\frac{1}{5} $ , we have 5 Green Circles While for Blue Squares it is $\frac{1}{20} $ , we have 20 Blue Squares Therefore the weights around X will be $\frac{1}{5} $ Green Circle, and $\frac{2}{20} $ Blue Squares. which means $\frac{1}{5} > \frac{2}{20} $ Then X is Green Circle But if try to think of it logically then there are more Blue Squares than Green Circles which means X more likely to be Blue Square than Green Circle. My question is : Am I doing anything wrong here? Can someone explain why the equation is showing Green Circle while logic says Blue Square? AI: Logically speaking, I think "X is Green Circle" is a reasonable conclusion. I find the idea in the paper in your question is quite similar to this paper: KRNN: k Rare-class Nearest Neighbour Classification. Intuitively, for example, if a new data point is close to one rare class' point and one common class' point, it is more likely to belong to the rare class. There's no conflict here because a data point will be much more likely to be close to a common class' point. However, once it's already close to a rare class's data point, it's more likely to belong to this class. That said, I didn't check your calculation, I just don't think the "X is Green Circle" conclusion is illogical for this algorithm. [Update] After reconsidering this problem, I think Weighted kNN want to put an emphasis on the rare class data point because that is the class of interest (e.g. Anomaly detection). It's possible that Accuary is not the metrics here but a Weighted Accuracy metrics that penalize a misclassified rare class data point harder so that we can detect more rare class data point.
H: How to classify a document by image? I need an opens source solution to classify a document. I do not want to use NLP i need only to check the look and feel. I tried OpenCV. I have a template and i need to match it. import cv2 template = cv2.imread(template_file,0) template = cv2.normalize(template, None, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F) method = ['cv2.TM_CCOEFF_NORMED'] img = img2.copy() method = eval(meth) # Apply template Matching res = cv2.matchTemplate(img,template,method) min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res) if (max_val > threshold): print('match OK') this method doesn't seems so robust and I get a lot of false positive. AI: You might want to look at Siamese CNNs depending on the size of your dataset. A good introduction can be found here.
H: Train and predict on a varying number of inputs - time based events I have the problem where I am trying to build a model which takes in n events for a single user as input for prediction, the problem is that the number of events is not set, so for example: A user may have performed a single event: time - eventType - eventData Yet another user for whom I need a prediction may have performed many of these time based events. What architectures / models can I explore? AI: When dealing with sequence data it is usual to have sequences with different sizes. One answer for your problem might be to using padding in your events, in this way you keep a fixed number of events for all your data. It happens in the same way with NLP applications, let's set number of events to be five. So you have the following two rows in your data set: UserID | time | eventType | eventData 1 08:00:00 A X 1 08:01:00 A X 1 08:02:00 B X 1 08:03:00 C Y 1 08:03:00 A Z 1 08:00:00 A X 2 07:00:00 A Z 2 07:01:00 B X So, you have for userId = 1 six events and for userId = 2 two events. Using paddings = 5, you will add 'Null'/Zero data for user 2 and you will truncate one event for user 1. Usually, you keep the last ones, so your data will become: UserID | time | eventType | eventData 1 08:01:00 A X 1 08:02:00 B X 1 08:03:00 C Y 1 08:03:00 A Z 1 08:00:00 A X 2 07:00:00 Null Null 2 07:00:00 Null Null 2 07:00:00 Null Null 2 07:00:00 A Z 2 07:01:00 B X But what about the null values? You can use GradientBoosting algorithms that deals with null values by default. Or a missing imputation stretegy such as the most common value or a new class indicating null value. Then, to finish, you can transform your data to have each user a row in your dataset: Notice that your 'window' size is a hyperparameter and it is a good practice to define it via cross-validation. I hope it helps.
H: Why use different variations of Softmax in training and validation for neural networks with Pytorch? Specifically, I'm working on a modeling project, and I see someone else's code that looks like def forward(self, x): x = self.fc1(x) x = self.activation1(x) x = self.fc2(x) x = self.activation2(x) x = self.fc3(x) x = self.activation3(x) # use log softmax + NLLLoss in training; softmax to make predictions if self.training: x = self.log_softmax(x) else: x = self.softmax(x) return x For context, this is using PyTorch, and it is on a classification problem. The criterion is NLLLoss. What's the rationale behind using log_softmax for training but using softmax for actual predictions? AI: It's more of a pytorch implementation thing. log_softmax() outputs the raw logits and they are passed to NLL Loss in training. During inference you just need the probabilities so softmax will suffice. You don't use different algorithms for training and testing, the question is quite misleading as currently stated. You are using different implementations of the same algorithm. You can find more on the matter here: Pytorch discuss Pytorch discuss #2 Pytorch docs P.S: I remember encountering the same question on a Udacity scholarship. If you 're reading code from one of their courses, there is probably an explanation on the solution jupyter.
H: What is the hypothesis space used by this AND gate Perceptron?
Per this post The hypothesis space used by a machine learning system is the set of all hypotheses that might possibly be returned by it. Per this post, the Perceptron algorithm makes prediction $$ \begin{equation} \hat y = \begin{cases} 1 & wx+b \ge 0\\ 0 & wx+b<0 \end{cases} \end{equation} $$ we can conclude that the model to achieve an AND gate, using the Perceptron algorithm is $x_1 + x_2 - 1.5$ In this case, what is the hypothesis space used by this AND gate Perceptron?
AI: As far as I understand: A hypothesis is a model which is capable of predicting outputs from inputs, hence $x_1 + x_2 - 1.5$ is a hypothesis, but not the only one. The set of all models with this capability is regarded as the hypothesis space.
We know that for an AND gate:
 x1   x2   output
|---------|--------|--------|
|    0    |    0   |    0   |
|    0    |    1   |    0   |
|    1    |    0   |    0   |
|    1    |    1   |    1   |
|---------------------------|
and we have $w \cdot x + b$, based on which either $0$ or $1$ turns out as output.
$$w \cdot x + b$$
$$w_1 \cdot x_1 + w_2 \cdot x_2 + b$$
Trying all the inputs in this expression:
$w_1 \cdot 0 + w_2 \cdot 0 + b < 0$ (because the output should be 0), so $b < 0$
$w_1 \cdot 0 + w_2 \cdot 1 + b < 0$ ---> $w_2 + b < 0$, so $w_2 < |b|$
$w_1 \cdot 1 + w_2 \cdot 0 + b < 0$ ---> $w_1 + b < 0$, so $w_1 < |b|$
$w_1 \cdot 1 + w_2 \cdot 1 + b \ge 0$ ---> $w_1 + w_2 + b \ge 0$
Firstly, we initialize the weights and bias parameters and then, if needed, change them. Here, since $b < 0$, we set it as $-1$. Since $w_1 < |b|$, $w_2 < |b|$ and the weights are not negative, we set them as 1. So we would have:
$w_1 \cdot 0 + w_2 \cdot 0 + b = -1 < 0$ is right, returns 0 because it is negative.
$w_1 \cdot 0 + w_2 \cdot 1 + b = w_2 + b = 1 - 1 = 0$ wrong, it returns 1 while it should return 0
$w_1 \cdot 1 + w_2 \cdot 0 + b = w_1 + b = 1 - 1 = 0$ wrong, it returns 1 while it should return 0
$w_1 \cdot 1 + w_2 \cdot 1 + b = 1 + 1 - 1 = 1 \ge 0$ is right, returns 1.
So we set b to a smaller value like -1.5 (Note); then all the expressions work appropriately. Hence $x_1 + x_2 - 1.5$ is a hypothesis for this problem.
Note: we know that when using the perceptron algorithm, on reaching any point that does not follow the current model, the weights and bias are updated as follows: w = w + yx, b = b + y. Here, in the source that you referred to, maybe for simplicity they haven't done so and have just found a sample of a plausible model (hypothesis).
The other hypotheses should follow the previously mentioned constraints; thereby, $x_1 + x_2 - 2$ can also be another hypothesis for this problem, etc.
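A quick NumPy check of the final hypothesis against the AND truth table (a minimal sketch):
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
w, b = np.array([1.0, 1.0]), -1.5
print((X @ w + b >= 0).astype(int))  # [0 0 0 1], the AND truth table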
H: Is there a way to get y_pred values from saved Keras model? I have a Keras model saved in a .h5 file. As you know there are a y_pred and a y_act that confusion matrix creates from, at run time, it's easy to get y_pred values but my model is saved and now I need the y_pred values from that saved model. Is there a way to do that? AI: If I've figured it out correctly, the answer is no. The point is that your saved model solely contains the network architecture and the parameters it has. What you want relates to the recall phase where you have to provide input to get output. This means that you need input data to be fed to your network in order to get output. What you want can be done using another approach. First, load your network and feed your data to your model. After that, get the outputs and store the inputs and outputs alongside each other using Numpy save method or maybe .h5 format. The flow for achieving the y_pred can be like the following sequence of actions: Load your model. Feed your data to your model and get y_pred. define a Numpy array of inputs and a Numpy array of outputs. Store inputs, real outputs and y_preds using the methods which are available. Later, when you want to make your confusion matrix, you can load your inputs and outputs, and the real outputs to make your matrix.
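A minimal sketch of that flow; the file names are placeholders for wherever your model and data live:
import numpy as np
from tensorflow.keras.models import load_model

model = load_model('model.h5')      # your saved model
x = np.load('inputs.npy')           # the inputs you want predictions for (assumption)
y_act = np.load('labels.npy')       # the corresponding actual labels (assumption)

y_pred = model.predict(x)           # predicted probabilities / scores
np.savez('predictions.npz', x=x, y_act=y_act, y_pred=y_pred)
# later: data = np.load('predictions.npz') and build the confusion matrix
# from data['y_act'] and data['y_pred']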
H: Two different pytorch networks, combined loss, back propagation and optimizer step So here is my network architecture as shown in below image. I’ve two separate networks and loss_1 and loss_2 are coming from two separate nn.Modules networks and final_loss = loss_1 + loss_2. Still only one final_loss.backward() would calculate grad in both networks? I do have two separate optimizers to update grads in two separate networks respectively. If not so, then what is the correct way to do back propagation in these individual networks on combined loss value. AI: A single final_loss.backward() would calculate the gradients but there are some strange things in your scheme (these may seem strange due to lack of information in your question): The first strange this is: why having two separate optimizers? It could only be justified if you purposely want different optimization algorithms for each network. The second and most strange thing is: why would you want to have a combined loss? The only reason that comes to my mind would be that module1 and module2 share some parameters. Otherwise, I see no point in combining two totally unrelated losses because they could have different scales, leading to one of the partial losses to have little effect. Minimizing each loss separately would lead to a much better result.
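To illustrate the mechanics, here is a self-contained toy sketch (made-up linear modules and MSE losses, not your actual architecture) showing that a single backward call on the combined loss fills the gradients of both networks:
import torch
import torch.nn as nn

module1, module2 = nn.Linear(10, 1), nn.Linear(10, 1)   # stand-ins for your two networks
opt1 = torch.optim.SGD(module1.parameters(), lr=0.01)
opt2 = torch.optim.SGD(module2.parameters(), lr=0.01)
x, target = torch.randn(8, 10), torch.randn(8, 1)

loss_1 = nn.functional.mse_loss(module1(x), target)
loss_2 = nn.functional.mse_loss(module2(x), target)
final_loss = loss_1 + loss_2

opt1.zero_grad()
opt2.zero_grad()
final_loss.backward()   # populates .grad for the parameters of BOTH networks
opt1.step()
opt2.step()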
H: PicklingError: Could not serialize object: TypeError: can't pickle fasttext_pybind.fasttext objects I built a fasttext classification model in order to do sentiment analysis for facebook comments (using pyspark 2.4.1 on windows). When I use the prediction model function to predict the class of a sentence, the result is a tuple with the form below: [('__label__positif', '__label__négatif', '__label__neutre', 0.8947999477386475, 0.08174632489681244, 0.023483742028474808)] but when I tried to apply it to the column "text" I did this : from pyspark.sql.types import * from pyspark.sql.functions import udf, col import fasttext schema = StructType([ StructField("pos", StringType(), False), StructField("neg", StringType(), False), StructField("ntr", StringType(), False), StructField("pr_pos", DoubleType(), False), StructField("pr_neg", DoubleType(), False), StructField("pr_ntr", DoubleType(), False) ]) udf_label = udf(lambda words : predictClass(words), schema) df = df.withColumn("classe", udf_label(col('text'))) df.select('classe').show() I get this error: PicklingError: Could not serialize object: TypeError: can't pickle fasttext_pybind.fasttext objects AI: In FastText Users FB page a certain Maksym Kysylov answered me " It's not a FastText problem. It's a Spark problem :) When you apply function to Dataframe (or RDD) Spark needs to serialize it and send to all executors. It's not really possible to serialize FastText's code, because part of it is native (in C++). Possible solution would be to save model to disk, then for each spark partition load model from disk and apply it to the data. Something like: df.rdd.mapPartitions(func). And func should: 1. load the model; 2. for record in partition: yield ft.predict(record['text'])." It works for me and I thank him very much !!!
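A rough sketch of that mapPartitions approach (the model path is a placeholder and the 'text' column comes from the question; the model file must be readable from every executor):
import fasttext

MODEL_PATH = '/path/to/model.bin'   # placeholder: a path visible to all executors

def predict_partition(rows):
    model = fasttext.load_model(MODEL_PATH)   # load once per partition, not per row
    for row in rows:
        labels, probs = model.predict(row['text'], k=3)
        yield (row['text'], list(labels), [float(p) for p in probs])

pred_df = df.rdd.mapPartitions(predict_partition).toDF(['text', 'labels', 'probas'])
pred_df.show()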
H: 'tuple' object is not callable while reshaping training data using python I have data csv file with three inputs names temperature, humidity, wind. Here I want to predict temperature value in every 60 minute using LSTM model. Here I write the code to reshape the train . But I got an error tuple' object is not callable My code: data = pd.read_csv('data6.csv' ) data['date'] = pd.to_datetime(data['date'] + " " + data['time'], format='%m/%d/%Y %H:%M:%S') data.set_index('date', inplace=True) data = data.values scaler = MinMaxScaler(feature_range=(0, 1)) data = scaler.fit_transform(data) train_size = int(len(data) * 0.67) test_size = len(data) - train_size train, test = data[0:train_size,:], data[train_size:len(data),:] X = 1 n_out = 1 x,y=[],[] start =0 data = train.reshape(train.shape(train.shape[0] ,3, train.shape[1])) for _ in range(len(data)): in_end = start+X out_end= in_end + n_out if out_end < len(data): x_input = data[start:in_end] x.append(x_input) y.append(data[in_end:out_end,0]) start +=1 x = np.asanyarray(x) y = np.asanyarray(y) My csv file Error: data = train.reshape(train.shape(train.shape[0] ,3, train.shape1)) Error image: Can anyone help me to solve this problem? Error: AI: The problem is that train.shape is simply a tuple. So train.shape(arguments) doesn't make sense because train.shape is not callable (hence the error). Try to just replace that line by data = train.reshape((train.shape[0], 3, train.shape[1]))
H: Problem of finding best combination of features when desired feature is feature some_feature_A/some_feature_B Problem is stated: we have giant csv file with one target column and rest are inputs, we don't know these features impact target but we would like to use algorithm that besides using linear and non-linear transformations will also take into account that right solution would be some_feature_A/some_feature_B. Is there algorithm that will allow to take this case into account? One way is to craft these feature columns yourself but is there better way? AI: In theory, I think a deep neural net might be able to find features that are the product of two other columns. There exist some nice mathematical results which guarantee the ability of a neural net (with certain activation functions) to approximate any function, so there's no theoretical reason why a neural net couldn't compute the division function $f(x, y) = \frac{x}{y}$. That being said, it may be difficult to achieve in practice without a little preprocessing. If you want to give it a try, I would suggest adding additional features obtained by taking the logarithm of any numerical features in the dataset. This might make it easier for the network for learn features that are products of other features (since $\log{(x * y)} = \log{x} + \log{y}$ and $\log{x / y} = \log{x} - \log{y}$).
H: Visualize/analyze data before or after imputing missing values? My understanding is that we impute missing values in order to preserve those training examples so our Machine Learning algorithms have as many training examples as possible. To me, it would make intuitive sense to visualize/analyze the data before imputing the missing values, as imputation will skew distributions and may lead to false assumptions about the real data before imputation. I can see a case for doing both before and after, but that adds time to the feature analysis. On the other hand, I can see a case for not imputing values if the % of missing values is high enough to affect the data's distribution. In Python I imagine using something along the lines of pd.Series.dropna() to isolate the existing values. TL;DR: Should one impute missing values before or after visualizing the data and pulling insights from it?
AI: Why not do both? Like you mention, it might be worth first computing the percentage of values that are missing. Generally you might also have a percentage in mind that is acceptable, like up to 10% missing values, if they are scattered at random throughout your dataset. There are libraries built specifically for visualising missing data, such as missingno, which offers quite a few ideas, for example a heatmap of missing values across features. "missing" normally implies you have a sequential dataset, for example time-series data. If you had discrete observations e.g. of people's height versus shoe size, there is no sequential causality (autocorrelation: dependency on previous values). In this case, imputing makes little sense. So assuming you do have sequential data, whether to impute or drop time steps with missing values will really depend on your use case. Also perhaps the frequency of the data. If all the missing values appear in one chunk at either end of the time series, it is common to simply leave out that chunk. For example, if you have minute frequency data and you wish to predict a value once per day, then missing a few minutes here and there might be tolerable, and imputation of some kind (e.g. forward fill) wouldn't have a huge impact overall, but could help the model optimisation work more effectively. Some models cannot handle missing values, so imputation is necessary. In any case, I would always visualise the data before and after imputation. You can usually run the same visualisation anyway. Sure it costs a few extra minutes, but you might catch important issues. This can save a lot of time compared to only finding the issues later on while debugging your trained model.
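A small sketch of what that could look like, assuming df is your time-indexed DataFrame (forward fill is just one example imputation):
import missingno as msno
import matplotlib.pyplot as plt

msno.matrix(df)           # visual overview of where values are missing
plt.show()

df_imputed = df.ffill()   # example imputation: forward fill
msno.matrix(df_imputed)   # re-run the same visualisation after imputing
plt.show()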
H: Pattern Recognition, Bishop - (Linear) Discriminant Functions 4.1 Please refer "Pattern Recognition and Machine Learning" - Bishop, page 182. I am struggling to visualize the intuition behind equations 4.6 & 4.7. I am presenting my understanding of section 4.1.1 using the diagram: Pls. Note: I have used $x_{\perp}$ and $x_{p}$ interchangeably. Equations 4.6, 4.7 from book: $$\mathbf{x} = \mathbf{x_{\perp}} + \textit{r}\mathbf{\frac{w}{\Vert{w}\Vert}} \tag {4.6}$$ Muiltiplying both sides of this result by $\mathbf{w^{T}}$ and adding $w_{0}$, and making use of $y(\mathbf{x}) = \mathbf{w^{T}x} + w_{0}$ and $y(\mathbf{x_{\perp}}) = \mathbf{w^{T}x_{\perp}} + w_{0} = 0$, we have $$r = \frac{y(\mathbf{x})}{\Vert{\mathbf{w}}\Vert} \tag{4.7}$$ Questions: Is $y(\mathbf{x})$ the (orthogonal) projection of $(\mathbf{w^{T}x} + w_{0})$ along $\mathbf{w}$, the weight vector? Are the lengths normalized to express them as multiples of unit vector $\mathbf{\frac{{w}}{\Vert{w}\Vert}}$. If so, can the distance $r = $$\frac{y(\mathbf{x})}{\Vert\mathbf{w}\Vert}$ exceed 1. Given that, $$ y(\mathbf{x}) = \mathbf{w^{T}x} + w_{0}$$ i.e. $y(\cdot)$ has two parts: $\textit{orthogonal component above/ below}$ decision boundary, and the $\textit{bias}$. And so, I'm calculating $y(\cdot)$ as: $$y(\mathbf{x}) = \mathbf{\frac{w^{T}x}{\Vert{w}\Vert}} + \frac{w_{0}}{\Vert\mathbf{w}\Vert}$$ while the book gets it as: $$y(\mathbf{x}) = {\frac{y(\mathbf{x})}{\Vert\mathbf{w}\Vert}} + \frac{w_{0}}{\Vert\mathbf{w}\Vert}$$ I am struggling to visualize how do we get the first term in the equation above (in book, eqn. 4.7) Alternatively, presenting my doubt/argument w.r.t. to book eqns 4.6 & 4.7; by substituting $r$ (eq. 4.7) into eq. 4.6 we get: $$\mathbf{x} = \mathbf{x_{p}} + y(\mathbf{x}) \qquad {(\Vert{\mathbf{w}}\Vert^{2} = \mathbf{w})}$$ which again seems to be incorrect - by triangle rule of vector addition. Given the context, where am I losing track? Request your inputs. AI: No, $y(\mathbf{x})=\mathbf{w}^T\mathbf{x}+w_0$; it is a scalar. The dot product $\mathbf{w}^T\mathbf{x}$ is $\|\mathbf{w}\|$ times the length of the projection of $\mathbf{x}$ onto $\mathbf{w}$. $w_0$ in your figure would be negative, and has the property that $y(\mathbf{x})=0$ whenever $\mathbf{x}$ is on the decision boundary. No normalization appears to be necessary. Certainly $r$ can be arbitrarily large (either positive or negative), when $\mathbf{x}$ is far away from the decision boundary. As mentioned previously, the first term is actually a length from the origin; the bias serves to shift this so that $y$ itself is the orthogonal (scalar) component [up to scaling by $\|w\|$...But that's what we're out to show when we pass from Eq(4.6) to (4.7)]. The text's approach is to decompose $\mathbf{x}$ into components relative to the decision boundary: $\mathbf{x}_{\perp}$ on the boundary, and something perpendicular to the boundary. Being perpendicular to the boundary, it is in the direction of $\mathbf{w}$, but we don't know how far, so they introduce its length as the unknown $r$. (There's some standard geometry stuff here that could also get us to the conclusion, but I'll explain their approach.) Now, as mentioned before, $y$ is zero on the boundary, so they have $y(\mathbf{x}_{\perp})=0$. 
And now, just to fill in some of the details of what they say, $$\begin{align*} \mathbf{x} &= \mathbf{x}_{\perp} + r \frac{\mathbf{w}}{\|\mathbf{w}\|}\\ \mathbf{w}^T\mathbf{x}+w_0 &= \mathbf{w}^T\mathbf{x}_{\perp}+w_0 + r \frac{ \mathbf{w}^T\mathbf{w} }{ \|\mathbf{w}\| } \\ y(\mathbf{x}) &= y(\mathbf{x}_{\perp}) + r \frac{\|\mathbf{w}\|^2}{\|\mathbf{w}\|} \\ y(\mathbf{x}) &= 0 + r \|\mathbf{w}\|, \end{align*}$$ and so $r=y(\mathbf{x})/\|\mathbf{w}\|$ as desired. [I'm not sure what you meant in your last few lines; at least some of it seems to have been a typo? Feel free to follow up.] EDIT: Regarding your addition about substituting $r$, you should get $$\mathbf{x}=\mathbf{x}_{\perp}+y(\mathbf{x})\frac{\mathbf{w}}{\|\mathbf{w}\|^2},$$ but $\|\mathbf{w}\|^2$ is not equal to $\mathbf{w}$; the former is a scalar, and the latter a vector! Rewriting, we have $$\mathbf{x}=\mathbf{x}_{\perp}+\frac{y(\mathbf{x})}{\|\mathbf{w}\|}\frac{\mathbf{w}}{\|\mathbf{w}\|}.$$ This now looks correct: from the origin, go to $\mathbf{x}_{\perp}$, then along the unit vector $\mathbf{w}/\|\mathbf{w}\|$ for a distance of $y(\mathbf{x})/\|\mathbf{w}\|$ (which, per your bubble3, is the correct distance).
H: Triplet loss function for face recognition? In the Andrew-NG coursera course on Convnets he talked about triplet loss function for one shot face recognition. The formula given in the video is, $$\to \small \small \small ||f(A)-f(P)||^2 \;+\;\alpha \leq\;||f(A)-f(N)||^2$$ $$\to \small \small \small D(A, P) + \alpha \leq D(A, N)$$ $$L(A,P,N) = max(||f(A)-f(P)||^2 - ||f(A)-f(N)||^2 + \alpha, 0)$$ Here, $$f(A) - \small \text{ Person }A$$ $$f(P) - \small \text{Different Picture of Person }A$$ $$F(N) - \small \text{Another Person}$$ I couldn't understand why did we use $\alpha$ in the formula. I understood that the ideal loss function is to decrease $\small \small D(A, P)$ and increase $\small \small D(A, N)$ but if we add $\alpha$ to $\small \small D(A,P)$ it will increase it which is not we require right? AI: $\alpha$ is known as the margin. Not only that we want to minimize $D(A,P)$ and maximize $D(A,N)$, that is we want $D(A,P)-D(A,N)$ to be small. Not only that we want it to be non-positive, we want it to be sufficiently negative. That is not only that we want $$D(A,P)-D(A,N) \le 0$$ We want $$D(A,P)-D(A,N) \le -\alpha.$$
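Written out directly, the loss for a single triplet is just (a NumPy sketch; in practice this is computed batch-wise inside your framework):
import numpy as np

def triplet_loss(f_a, f_p, f_n, alpha=0.2):
    # f_a, f_p, f_n: embeddings of anchor, positive and negative
    d_ap = np.sum((f_a - f_p) ** 2)
    d_an = np.sum((f_a - f_n) ** 2)
    return max(d_ap - d_an + alpha, 0.0)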
H: In CNN, why do we increase the number of filters in deeper Convolution layers for complex images?
I have been doing this online course Introduction to TensorFlow for AI, ML and DL. Here in one part, they were showing a CNN model for classifying humans and horses. In this model, the first Conv2D layer had 16 filters, followed by two more Conv2D layers with 32 and 64 filters respectively. I am not sure how the number of filters is correlated with the deeper convolution layers.
AI: For this you need to understand what filters actually do. Every layer of filters is there to capture patterns. For example, the first layer of filters captures patterns like edges, corners, dots etc. Subsequent layers combine those patterns to make bigger patterns (like combining edges to make squares, circles, etc.). Now as we move forward in the layers, the patterns get more complex; hence there are larger combinations of patterns to capture. That's why we increase the number of filters in subsequent layers, to capture as many of these combinations as possible.
H: String to Data frame column I have 2 column in data frame, X and Y. And I have some string values stored in text, which I want to put in X, Y as shown in the example. Example : text=9 10 13 110 14 16 12 1 6 1 1 2 X Y 9 12 10 1 13 6 110 1 14 1 16 2 AI: If you are looking to hard-code it for only 2 columns, this can be achieved as follows: import pandas as pd df = pd.DataFrame() text = '9 10 13 110 14 16 12 1 6 1 1 2' text = text.split() df['X'] = text[:int(len(text)/2)] df['Y'] = text[int(len(text)/2):]
H: 'Feature' definition Precisely, what is a feature? Is it an attribute/property name or its value? E.g. would features examples be "name", "adress", ...? Or "Dorothy", "123 YellowBrick Road"?? If it is a property name, what do you call its corresponding values? ("Feature-values"?) If it is a value, how do you call its corresponding property name? In the texts I read, it seems the meaning is interchangeable and depends on context. AI: Feature in the data science context is the name of your variable, answering your question it would be things like name, address, price, volume, etc. It is also known as attributes, columns, variables, etc. Each row in your data set is denominated an instance, in your example again, it would be dorothy, 123 yellowbric road, U$12.3, 1000, etc. They might be called just rows, examples, etc. I hope it helps.
H: In Conditional Random Fields, is mandatory to use features related to following and preceeding tokens? I am training a CRF classifier to classify document rows as a heading (1st level), heading (2nd level) or simple text. I am using Conditional Random Fields for their ability to account sequential aspects. Reading some tutorials, I noticed that usually, among the features, there are some features related to the preceeding or following token. [...] if i > 0: word1 = sent[i-1][0] postag1 = sent[i-1][1] features.update({ '-1:word.lower()': word1.lower(), '-1:word.istitle()': word1.istitle(), '-1:word.isupper()': word1.isupper(), '-1:postag': postag1, '-1:postag[:2]': postag1[:2], }) else: features['BOS'] = True [...] I wonder if the sequential aspect is learned from these features or is connate in CRF. In other words, do we need these features related to other tokens? AI: In other words, do we need these features related to other tokens? No, these features are not needed. But they are often useful: CRFs handle sequential dependencies between the labels, however it's up to you to provide the relevant features, in particular some to represent the dependencies between (certain) features if needed. With text, this relation between consecutive tokens is very often a relevant indicator. I'd suggest that you try both version, without and with this feature, and you're likely to observe a higher performance in the latter case. In my experience it's often worth trying different combinations, including trying features which go two or three steps back.
H: Cost function - ideas I am building an xgboost model for a regression problem. By default xgboost optimizes $(y - y_{pred})^2$, so RMSE will be the best eval metric to measure performance. But my task is to build the best model for an evaluation metric which checks whether the predicted value is within $\pm 10\%$ of the true value, i.e. $y_{pred} \in [0.9*y,1.1*y]$. Do you have any idea for this, or is MSE still the best option?
AI: In my experience, it is very difficult to come up with well-behaved custom objective functions for xgboost. Custom objectives need to be differentiable so that they can provide a gradient and a non-zero hessian, which is often not the case for custom loss functions. One simpler method you could use is to define a custom validation metric based on your +/-10% range, which can be used in conjunction with early stopping to optimize the hyperparameters of the xgboost model. So the model will not directly optimize for this, but it will select hyperparameters that minimize your custom metric. However, I would suggest sticking with the RMSE objective for this problem.
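A sketch of such a custom evaluation metric (dtrain and dvalid are assumed to be xgb.DMatrix objects you already have; depending on your xgboost version the keyword is feval or custom_metric):
import numpy as np
import xgboost as xgb

def miss_10pct(preds, dtrain):
    # fraction of predictions falling OUTSIDE [0.9*y, 1.1*y]; lower is better
    y = dtrain.get_label()
    hit = np.abs(preds - y) <= 0.1 * np.abs(y)
    return 'miss_10pct', float(1.0 - hit.mean())

bst = xgb.train(
    {'objective': 'reg:squarederror'},
    dtrain,
    num_boost_round=1000,
    evals=[(dvalid, 'valid')],
    feval=miss_10pct,
    early_stopping_rounds=50,
)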
H: Mathematical formulation of Support Vector Machines? I'm trying to learn maths behind SVM (hard margin) but due to different forms of mathematical formulations I'm bit confused. Assume we have two sets of points $\text{(i.e. positives, negatives)}$ one on each side of hyperplane $\pi$. So the equation of the margin maximizing plane $\pi$ can be written as, $$\pi:\;W^TX+b = 0$$ If $y\in$ $(1,-1)$ then, $$\pi^+:\; W^TX + b=+1$$ $$\pi^-:\; W^TX + b=-1$$ Here $\pi^+$ and $\pi^-$ are parallel to plane $\pi$ and they are also parallel to each other. Now the objective would be to find a hyperplane $\pi$ which maximizes the distance between $\pi^+$ and $\pi^-$. Here $\pi^+$ and $\pi^-$ are the hyperplanes passing through positive and negative support vectors respectively According to wikipedia about SVM I've found that distance/margin between $\pi^+$ and $\pi^-$ can be written as, $$\hookrightarrow\frac{2}{||w||}$$ Now if I put together everything this is the constraint optimization problem we want to solve, $$\text{find}\;w_*,b_* = \underbrace{argmax}_{w,b}\frac{2}{||w||} \rightarrow\text{margin}$$ $$\hookrightarrow \text{s.t}\;\;y_i(w^Tx\;+\;b)\;\ge 1\;\;\;\forall\;x_i$$ Before proceeding to my doubts please do confirm if my understanding above is correct? If you find any mistakes please do correct me. How to derive margin between $\pi^+$ and $\pi^-$ to be $\frac{2}{||w||}?$ I did find a similar question asked here but I couldn't understand the formulations used there? If possible can anyone explain it in the formulation I used above? How can $y_i(w^Tx+b)\ge1\;\;\forall\;x_i$? AI: Your understandings are right. deriving the margin to be $\frac{2}{|w|}$ we know that $w \cdot x +b = 1$ If we move from point z in $w \cdot x +b = 1$ to the $w \cdot x +b = 0$ we land in a point $\lambda$. This line that we have passed or this margin between the two lines $w \cdot x +b = 1$ and $w \cdot x +b = 0$ is the margin between them which we call $\gamma$ For calculating the margin, we know that we have moved from z, in opposite direction of w to point $\lambda$. Hence this margin $\gamma$ would be equal to $z - margin \cdot \frac{w}{|w|} = z - \gamma \cdot \frac{w}{|w|} =$ (we have moved in the opposite direction of w, we just want the direction so we normalize w to be a unit vector $\frac{w}{|w|}$) Since this $\lambda$ point lies in the decision boundary we know that it should suit in line $w \cdot x + b = 0$ Hence we set is in this line in place of x: $$w \cdot x + b = 0$$ $$w \cdot (z - \gamma \cdot \frac{w}{|w|}) + b = 0$$ $$w \cdot z + b - w \cdot \gamma \cdot \frac{w}{|w|}) = 0$$ $$w \cdot z + b = w \cdot \gamma \cdot \frac{w}{|w|}$$ we know that $w \cdot z +b = 1$ (z is the point on $w \cdot x +b = 1)$ $$1 = w \cdot \gamma \cdot \frac{w}{|w|}$$ $$\gamma= \frac{1}{w} \cdot \frac{|w|}{w} $$ we also know that $w \cdot w = |w|^2$, hence: $$\gamma= \frac{1}{|w|}$$ Why is in your formula 2 instead of 1? because I have calculated the margin between the middle line and the upper, not the whole margin. How can $y_i(w^Tx+b)\ge1\;\;\forall\;x_i$? We want to classify the points in the +1 part as +1 and the points in the -1 part as -1, since $(w^Tx_i+b)$ is the predicted value and $y_i$ is the actual value for each point, if it is classified correctly, then the predicted and actual values should be same so their production $y_i(w^Tx+b)$ should be positive (the term >= 0 is substituded by >= 1 because it is a stronger condition) The transpose is in order to be able to calculate the dot product. 
I just wanted to show the logic of dot product hence, didn't write transpose For calculating the total distance between lines $w \cdot x + b = -1$ and $w \cdot x + b = 1$: Either you can multiply the calculated margin by 2 Or if you want to directly find it, you can consider a point $\alpha$ in line $w \cdot x + b = -1$. then we know that the distance between these two lines is twice the value of $\gamma$, hence if we want to move from the point z to $\alpha$, the total margin (passed length) would be: $$z - 2 \cdot \gamma \cdot \frac{w}{|w|}$$ then we can calculate the margin from here. derived from ML course of UCSD by Prof. Sanjoy Dasgupta
H: Is there any way to use (update) a pre-trained logistic regression model for data with new set of columns? I am building an insurance recommendation engine. I have used some variables, like demographics, and built the model. Now I have claims data. Is there a way to include the new data without restarting the process like this? model1 = initial variables model2 = new variables model3 = x*initial variables+y*new variables
AI: You can build a second model with the new variables and a third one which is trained on the predictions of the first two models. This is called stacking. But in your case I would try to build a new model with all the variables. This is mainly because otherwise you lose (do not use) the information from the interaction between the old and new sets of variables, which is often very useful. Usually stacking is used to combine different approaches, not to split the input space.
H: Why we add a constant value column in our DataFrame sometimes? Currently I'm learning data science and I'm in the beginners stage. I have seen many times we add a "constant" column in our data frame with all row cells of that column having value 1. I need to know why we do so. And also what will happen if we don't use it. Thank you.
AI: In linear regression you need that column so that the fitted line is not constrained to pass through the origin. Think of the linear model $y = b_1 x_1 + b_2 x_2 + ...$: if all $x_i$ are 0, then $y$ must be 0. You need an additional intercept parameter (the coefficient of the constant column) to escape that constraint.
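A small illustration with statsmodels (synthetic data; the numbers are only for demonstration):
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.random((100, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + 5 + rng.normal(scale=0.1, size=100)  # true intercept = 5

X_const = sm.add_constant(X)          # prepends the column of ones
fit = sm.OLS(y, X_const).fit()
print(fit.params)                     # roughly [5, 3, -2]; the first value is the intercept

fit_no_const = sm.OLS(y, X).fit()     # without the constant, the fit is forced through the origin
print(fit_no_const.params)            # noticeably biased coefficients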
H: Extract Domain related words I am doing a research regarding on automatic text summarizing. So in order to weighting sentences I need to get words related to a particular field or domain like shown below. Topic word - Car Related words - engine, driver, road, break, accelerator Is there any direct method that I can use like wordnet synsets. AI: One very obvious way is to use gensim's word2vec most_similar() function to get the most related words to the queried word. You can check an online version of how it works out here: https://projector.tensorflow.org/ Again it depends on what kind of relations you seek. Other useful methods include: from nltk.corpus import wordnet as wn wn.synsets('car') wn.synsets('car')[0].hyponyms()
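For example, with a pretrained embedding loaded through gensim's downloader (the model name below is just one readily available choice; the exact neighbours will vary with the corpus):
import gensim.downloader as api

model = api.load('glove-wiki-gigaword-100')   # downloads a pretrained GloVe model (~100 MB)
print(model.most_similar('car', topn=5))
# typically returns words such as 'vehicle', 'truck', 'cars', 'driver', ...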
H: Adding and subtract inbetween row inputs and value equal to the first column next row using pandas Assume I have a dataset with three inputs: x1 x2 x3 0 a b c 1 d e f 2 g h i 3 j k l 4 m n o 5 p q r 6 s t u : : 0,1,2,3 are times, x1, x2, x3 are inputs that are measured. So here x1 inputs are measured at every one hour. x2 and x3 will be measured at different time. What I need to do , I want write that what ever the measured in x1, x2, x3 it will add and subtract the values are equal to the x1 input next time value So here what I want to do is: x1 x2 x3 y 0 a b c a+b-c==d 1 d e f d+e+f==g 2 g h i g+h-i==j 3 j k l j+k-l==m 4 m n o m+n-o==p 5 p q r p+q-r==s 6 s t u s+t-u==v : : I tried shift method and it didn't work for me what I want exactly. I tried another method and it worked, but didn't came as I want. Here I upload the code. Code : data = pd.read_csv('data6.csv') i=0 j=1 while j < len(data): j=data['x1'][i] - data['x2'][i] + data['x3'][i] i+=1 j!=i print(j) This is works , but it is just showing only one data 63 In my csv file this is second input value of x1 input. I want to write this code contonously happened and read the value as I shown above. Can anyone help me to solve this problem? My csv file AI: So are you evaluating the equality of a+b-c and d for each sample? Meaning, you aim to save the boolean value? If so: import pandas as pd #a sample dictionary data = {'x1':[1,0,4,5,8,1], 'x2':[3,4,5,6,8,9], 'x3':[4,5,1,-2,4,5]} df = pd.DataFrame(data) #converting the dictionary to dataframe df['y'] = 0 #initializing the y column with 0, also you can initialize with FALSE boolean value for i in range(len(df)-1): #iterating between all the rows of dataframe df['y'].iloc[i] = df['x1'].iloc[i] + df['x2'].iloc[i] - df['x3'].iloc[i] == df['x1'].iloc[i+1] Note: for the last row, since the content of column y should be calculated based on the next row, the value cannot be calculated, that is why we have set (len(df)-1). Result: x1 x2 x3 y 0 1 3 4 True 1 0 4 5 False 2 4 5 1 False 3 5 6 -2 False 4 8 8 4 False 5 1 9 5 0 The problem with your code: First, you are not considering that subsequent row to calculate the accuracy of the expression $a+b-c ==d$ Second, if that print(j) is outside the loop, is it? then you are looping in the rows of data frame while j is less than data frame's length, So when the flow is out of this while, a single value j would be returned.
H: Coefficients of Linear regression for minimizing MSE (I asked this in mathematics site, but nobody responded, it seems the whole problem is more related to data science than math.) In a regression problem, loss function is: $$L(a,b) = {\sum_{i=1}^n (y^i - (ax^i +b))^2})$$ In order to minimize this value, we need to set the derivative of L with respect to each of its parameters, equal to zero. Hence, $\frac{dL}{db}$ would be $y^- + a \cdot x^-$ But what would $\frac{dL}{da}$ be? $$\frac{\sigma L(a,b)}{\sigma a} = {2 \cdot \sum_{i=1}^n (y^i - (ax^i +b)}) \cdot \frac{\sigma \sum(y^i - (ax^i +b))}{\sigma a}$$ $$\frac{\sigma L(a,b))}{\sigma a} = {2 \cdot \sum_{i=1}^n (y^i - (ax^i +b)}) \cdot -\sum(x^i)$$ $$\frac{\sigma L(a,b))}{\sigma a} = {2\sum(x^i) \cdot \sum_{i=1}^n (y^i) - 2\sum(x^i) \cdot \sum_{i=1}^n (ax^i) - 2\sum(x^i) \cdot \sum_{i=1}^nb}$$ $$\frac{\sigma L(a,b))}{\sigma a} = {2\sum_{i=1}^n (y^i \cdot x^i) - 2\sum_{i=1}^n (ax^i \cdot x^i) - 2\sum_{i=1}^n x^i \cdot b}$$ $$\frac{\sigma L(a,b))}{\sigma a} = {2\sum_{i=1}^n (y^i \cdot x^i) - 2\sum_{i=1}^n ((ax^i +b) \cdot x^i)} = 0$$ How this would be equal to $\frac{cov(x, y)}{\sigma^2x}$ AI: For a linear regression we have the loss function $$J(a,b)=\sum_{n=1}^N(y_n-a-bx_n)^2.$$ The partial derivatives are $$\dfrac{\partial J}{\partial a}=2\sum_{n=1}^N(y_n-a-bx_n)(-1)$$ $$\dfrac{\partial J}{\partial b}=2\sum_{n=1}^N(y_n-a-bx_n)(-x_n).$$ If we set both derivatives to zero and divide by the sample size $N$ we obtain $$0=\overline{y}-a-b\overline{x}$$ $$0=\overline{xy}-a\overline{x}-b\overline{x^2}.$$ Now, solve the first equation for $a= \overline{y}-b\overline{x}$ and plug this into the second equation $$0=\overline{xy}-\overline{x}\overline{y}+b\overline{x}^2-b\overline{x^2}$$ and solve for $b$ to obtain $$b=\dfrac{\overline{xy}-\overline{x}\overline{y}}{\overline{x^2}-\overline{x}^2}.$$ The espression in the numerator is the covariance for a sample and the expression in the denominator is the variance of $x$.
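You can also confirm the closed-form result numerically (a NumPy sketch on synthetic data, using the notation $y = a + bx$ from the answer):
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.5 * x + 1.0 + rng.normal(scale=0.3, size=200)

b = np.cov(x, y, bias=True)[0, 1] / np.var(x)   # cov(x, y) / var(x)
a = y.mean() - b * x.mean()
print(a, b)                                     # close to (1.0, 2.5)
print(np.polyfit(x, y, 1))                      # least-squares fit returns [b, a]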
H: Two different cost functions for neural networks, how they can give the same result? One is: $$J=-\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\Big[y_{k}^{i}\log\big((h_{\theta}(x^{i}))_k\big)+(1-y_{k}^{i})\log\big(1-(h_{\theta}(x^{i}))_k\big)\Big]$$ The other one is: $$J=-\frac{1}{m}\sum_{i=1}^{m}\Big[y^{i}\log(a^{i})+(1-y^{i})\log(1-a^{i})\Big]$$ As I can see those two equations are not equal. How can both be used to calculate the cost function? Also, one of them uses the $h$ function, which is the $a$ of the output layer, whereas the other uses $a$ ($a$ is $f(w*x)$ where $f$ is the activation function). When I looked at the book "Pattern Recognition and Machine Learning" by Bishop, he used $a$ for both of the equations. But in another source, which I took the equations from, $h$ is used. But using different $a$ values and using just one of them (namely $h$, which is the $a$ of the output) are totally different things. Both sources are reliable, what am I missing?
AI: They describe the same cross-entropy cost; the difference is the number of output units, not a different loss. In the first formula the network has $K$ output units, and the target for example $i$ is a vector $y^{i}\in\{0,1\}^{K}$: the inner sum over $k$ iterates over the output units, while the outer sum iterates over the $m$ training examples. The second formula is the special case of a single output unit; if you set $K=1$, the first formula reduces to the second. As for the notation, $h_{\theta}(x^{i})$ and $a^{i}$ denote the same quantity: the activation of the output layer for example $i$, i.e. the network's prediction. Different sources simply use different symbols for it.
H: How to automate ANOVA in Python I am at the dimensionality reduction phase of my model. I have a list of categorical columns and I want to find the correlation between each column and my continuous SalePrice column. Below is the list of column names: categorical_columns = ['MSSubClass', 'MSZoning', 'LotShape', 'LandContour', 'LotConfig', 'Neighborhood', 'Condition1', 'Condition2', 'BldgType', 'HouseStyle', 'RoofStyle', 'RoofMatl', 'Exterior1st', 'Exterior2nd', 'Foundation', 'Heating', 'Electrical', 'Functional', 'GarageType', 'PavedDrive', 'Fence', 'MiscFeature', 'SaleType', 'SaleCondition', 'Street', 'CentralAir'] Because its categorical vs continuous, I've read that ANOVA is the best way to go but I have never used it before and couldn't find a concise implementation of it in Python. I want to loop through and output the correlation between each element in the list and the SalePrice column. AI: I am not sure ANOVA is the best and easiest way to find correlation between these categorical features and your target. You may see this great post where they propose many other methods along with ANOVA. If you persist to use ANOVA test or Kruskal-Wallis H Test, you need to know how it works to give you that notion of correlation (variation of variance among groups of categoricals). It is nicely explained in that post: ANOVA estimates the variance of the continuous variable that can be explained through the categorical variable. One need to group the continuous variable using the categorical variable, measure the variance in each group and comparing it to the overall variance of the continuous variable. If the variance after grouping falls down significantly, it means that the categorical variable can explain most of the variance of the continuous variable and so the two variables likely have a strong association. If the variables have no correlation, then the variance in the groups is expected to be similar to the original variance. Once you understand how it works, implementing it and automating it is not difficult. In fact scipy and statsmodels have ANOVA. Check this post out, where they demonstrate in details how to perform ANOVA test on an actual dataset and estimate the correlation between categorical variable and continuous target. It is just a matter of putting these pieces together and change a bit to make it work for your own dataframe.
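A rough way to automate it (assuming df is your DataFrame containing SalePrice and the columns listed above; rows with missing values may need to be dropped first):
from scipy import stats

p_values = {}
for col in categorical_columns:
    groups = [grp['SalePrice'].values for _, grp in df.groupby(col) if len(grp) > 1]
    f_stat, p_val = stats.f_oneway(*groups)
    p_values[col] = p_val

# smaller p-values suggest a stronger association between the category and SalePrice
for col, p in sorted(p_values.items(), key=lambda kv: kv[1]):
    print(f'{col}: p = {p:.4g}')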
H: How to handle large number of categories in a dataset? I have one dataset of "Books" which contains 8 columns initially, and 3 of them contain text values which can be categorized. The 3 columns contain the "Language-code", "Author Name" and "title" of the book. As sklearn's LinearRegression doesn't take text as input, I decided to categorize these 3 columns by using pandas.get_dummies(...), but after categorizing, the number of columns exceeded 20072 from 8, which is way too high. The dataset url is: https://www.kaggle.com/jealousleopard/goodreadsbooks/downloads/goodreadsbooks.zip/6 So my queries are: What to do with the title name? Categorizing it doesn't seem right. What to do with the other 2 columns? If I leave out the title name then the number of columns still goes up to 7646. Is there any other algorithm where I can directly feed the dataset without categorization? How to handle this large number of features after categorizing?
AI: Algorithms like decision trees can also work well on ordinal (label-encoded) values, i.e. without one-hot encoding, so you can give that a try. Also, I'm not sure about the importance of the title column, so you have to decide based upon your requirements; I avoided it in my use case. Finally, when you have a large number of categories, you can try clubbing the uncommon categories together into a single category.
H: What are the exact differences between Deep Learning, Deep Neural Networks, Artificial Neural Networks and further terms? After having read some theory I am getting a bit confused about the following terms: Deep Learning Deep Neural Network Artificial Neural Network Feedforward Neural Network So, what seems clear to me is that Deep Neural Networks are Artificial Neural Networks with multiple layers (usually more than 1 hidden layer). However I have read several times that: "Deep Neural Networks are feedforward Neural Networks with many layers." I know what a feedforward Neural Network is, but to my understanding Deep Neural Networks is a term for ALL Artificial Neural Networks with multiple layers between input and output layer? Shouldn't there also be for instance Deep Recurrent Neural Networks? Is it correct that Deep Neural Networks must feedforward Neural Networks? This would in turn mean that Deep Recurrent Neural Networks cannot be referred to as Deep Neural Networks. Moreover, I see there is a large variety of deep learning architectures, such as: Convolutional Neural Networks Residual Neural Networks Deep Belief Networks Deep Boltzmann machines ... However, now also Wikipedia gives me a hard time to distinguish all the terms, saying: "Deep learning architectures such as deep neural networks, deep belief networks, recurrent neural networks and convolutional neural networks have been applied to fields including..." So, my concrete questions rising from all the thoughts above are: Does the term Deep Neural Networks ONLY belong to feedforward Neural Networks? If the answer to 1) is yes: is the Wikipedia definition correct as you can read it up there? This would mean that i.e. a Convolutional Neural Network with multiple layers must be referred to as a Deep Convolutional Neural Network, which is not a subclass of Deep Neural Networks? Is the term Deep Neural Networks a collective term for ALL Artificial Neural Networks with multiple layers, or just for all feedforward Neural Networks with multiple layers? Would it be more accurate to use the collective term "Deep Learning Architectures" with strictly separated subclasses as Wikipedia suggests? AI: I will try to explain it in the simplest way I can- Deep Learning - Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example. It is a subclass of Machine Learning where (in layman terms) we use only Neural Networks. https://www.mathworks.com/discovery/deep-learning.html Deep Neural Network (DNN) - This is just a term for a neural network with many layers and many nodes in layers, which is not usually found in a shallow network. There is no clear boundary between a shallow and deep neural network. I suggest you watch Andrew Ng's course on deep learning to get a better understanding. Artificial Neural Network (ANN) - This is just another term for a neural network, all neural networks are ANN's. Feedforward Neural Network - It is a type of neural network where there is no feedback connections. In technical terms, the information flows only in one direction (input to output) in the forward propagation stage. https://towardsdatascience.com/deep-learning-feedforward-neural-network-26a6705dbdc7 Shouldn't there also be for instance Deep Recurrent Neural Networks? There is no need for Deep Recurrent Neural Networks, it would simply be Recurrent Neural Networks. Is it correct that Deep Neural Networks must feedforward Neural Networks? 
No, RNN is a type of Deep Neural Network which is not a feedforward NN. So, to answer your questions, Does the term Deep Neural Networks ONLY belong to feedforward Neural Networks? No, the term Deep Neural Networks belong to all Neural Networks with multiple layers. (Architecture does not matter). Is the term Deep Neural Networks a collective term for ALL Artificial Neural Networks with multiple layers, or just for all feedforward Neural Networks with multiple layers? Yes, it is a collective term for ALL Artificial Neural Networks with multiple layers. Would it be more accurate to use the collective term "Deep Learning Architectures" with strictly separated subclasses as Wikipedia suggests? Deep learning architectures just refer to the different neural network architectures used for different tasks - CNN for image processing, RNN for sequence (text, audio etc) processing etc. So, when you are doing a particular task, you can use the associated architecture name. For general purpose, you can just use the term DNN. Hope this clears your confusion.
H: Can we use DecisionTreeClassifier of sklearn for continuous target variable? I have a continuous target variable named "quality" which ranges from 0 to 10. I also have 11 input variables in my dataset. When I build my model with sklearn's DecisionTreeClassifier() I get a score of 60%, but when I build it with DecisionTreeRegressor() I get an accuracy of only 3% and an RMSE of 85%. Also, with Linear Regression my R-squared value is 0.376. Is that good? Dataset link: https://archive.ics.uci.edu/ml/datasets/Wine+Quality Am I doing something wrong? I need help. Thank you. AI: In the Wine dataset you linked, the quality column is not a continuous variable but a discrete one: it takes integer values between 0 and 10. When you use the DecisionTreeClassifier, you assume your target is a multi-class variable with the values 0,1,2,...,10, so the model predicts one of these and only these values. When you use the DecisionTreeRegressor, the assumption is that any number between 0 and 10 is acceptable, like 4.52356; as a result, exact-match accuracy will be noticeably worse. If you still want to use the regressor for some reason, you can round its outcome and then compute the accuracy (see the sketch below). Keep in mind that RMSE doesn't fit your problem: you have a multi-class classification problem, not a regression problem.
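A minimal sketch of the "round the regressor output" suggestion, assuming train/test splits of the wine data already exist as X_train, X_test, y_train, y_test:
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import accuracy_score

reg = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)
y_pred = np.rint(reg.predict(X_test)).astype(int)   # snap predictions to integer quality scores
print(accuracy_score(y_test, y_pred))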
H: How to learn certain maths to understand machine learning papers? I have done the deeplearning.ai course on deep learning, but I cannot understand equations like $$\min_G \max_D V(D,G)=E_{x\sim p_{\text{data}}(x)}[\log D(x)]+E_{z\sim p_z(z)}[\log(1-D(G(z)))]$$ What kind of maths am I supposed to learn? I know basic multivariable calculus and linear algebra. Should I also learn probability (advanced probability, maybe)? AI: Mathematical expectations are a concept from probability theory. Expressions like $E_{x}[\ldots]$ denote expectations (here, expectations taken with respect to a given distribution), and I think you will have trouble understanding machine learning concepts without understanding expectations and conditional expectations (understanding them as a mathematical concept, not just on an intuitive level). My recommendation is that you look for a beginner's course in probability theory to get a grasp of the following concepts: random variables, probability distributions, stochastic independence, mathematical expectation, conditional expectation/conditional probability, the law of total probability, and Bayes' theorem. There are probably courses designed specifically for people interested in machine learning. Have fun!
H: Adding Group Average Line to Bar Chart I am trying to create a PivotChart in the form of a bar chart and I would like to add a group average line on the chart. I have this (sample) table in my PowerPivot Data Model: Group averages are A: 20, B: 50 and C: 80. My bar chart looks like this. How would I do this? Do I need to add a measure? If so, how? I am not that great with DAX functions yet. AI: While I am not sure this is what you are looking for, here is what I think you are asking. First, you are going to need an additional column. Two ways to get er done. Calculated column =CALCULATE ( AVERAGE ( Range[Value] ), FILTER ( ALLSELECTED ( Range ), Range[Group] = EARLIER ( Range[Group] ) ) ) Measure =CALCULATE ( AVERAGE ( Range[Value] ), FILTER ( ALLSELECTED ( Range ), Range[Group] = MAX( Range[Group] ) ) ) Now edit your pivot table: Now select your chart type: For future reference you should look into Power BI. It has features which do these sorts of calculations automatically. If your looking for something different it would help is you provided the expected outcome.
H: what does `a factor of K-fold` mean in GPU-based training? From the book "Deep Learning and Convolutional Neural Networks for Medical Image Computing" As we learned about the current state of research on deep learning, I was surprised to find that other investigators had used convolutional neural networks, one type of deep learning, in the past [22, 23]. But there seemed to be something different about the most recent crop of deep learning algorithms. They routinely used GPU processing to accelerate training by as much as a factor of 40-fold. They also used multiple convolution layers and multiple data reduction layers. does 40-fold mean "K-Fold Cross-Validation"? AI: The author here refers to a comparison between older and most recent implementations of deep learning algorithms regarding execution time (accelerating processing speed). It just means that "they used GPUs and achieved x40 better execution time". It has nothing to do with K-Folds.
H: How to get mean of column using groupby() and another condition For the following df = pd.DataFrame() df['1'] = 1,2,1,2,1,2 df['2'] = 3,6,5,4,7,8 df['3'] = 1,1,1,2,2,2 I want to do mean(df.groupby().loc[df['1']==df['3'],'2'].mean() which doesn't work. And simply doing this : a=df.groupby(['1','3'])['2'].mean() gives OUTPUT: 1 3 1 1 4 2 7 2 1 6 2 6 But I only want cases where column 1 and 3 have the same elements: 1 3 1 1 4 2 2 6 AI: Try this way: a=df[df['1']==df['3']].groupby(['1','3'])['2'].mean()
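For completeness, a runnable version of this answer on the sample frame from the question:
import pandas as pd

df = pd.DataFrame({"1": [1, 2, 1, 2, 1, 2],
                   "2": [3, 6, 5, 4, 7, 8],
                   "3": [1, 1, 1, 2, 2, 2]})
a = df[df["1"] == df["3"]].groupby(["1", "3"])["2"].mean()
print(a)
# 1  3
# 1  1    4.0
# 2  2    6.0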
H: What is "spatial feature encoding"? Can anyone give a concrete example? This book "Deep Learning and Convolutional Neural Networks for Medical Image Computing" mentioned a term spatial feature encoding On the other hand, CNN models have been proved to have much higher modeling capacity, compared to the previous image recognition mainstream pipelines, e.g., HAAR, SIFT, HOG image features followed by spatial feature encoding, then random forest or support vector classifiers. Given millions of parameters to fit during model training (much more than previous pipelines), CNN representation empowers and enables computerized image recognition models, with a good possibility to be able to handle more challenging imaging problems. The primary risk is overfitting since model capacity is generally high in deep learning but often very limited datasets are available (that are with good quality of labels to facilitate supervised training). The core topics of this book are represented by examples on how to address this task-critical overfitting issue with deep learning model selection, dataset resampling and balancing, and the proper quantitative evaluation protocols or setups. I googled a lot and didn't find explanation or definition of "spatial feature encoding". What is "spatial feature encoding"? Can anyone give a concrete example? AI: Chatfield et al. provides a good survey of feature encoding methods. From the introduction to that paper (emphasis my own): The typical object recognition pipeline is composed of the following three steps: (i) extraction of local image features (e.g., SIFT descriptors), (ii) encoding of the local features in an image descriptor (e.g., a histogram of the quantized local features), and (iii) classification of the image descriptor (e.g., by a support vector machine). I believe that "spatial feature encoding" refers to step (ii) in the typical pipeline. In this respect, I believe that the simplest possible example (and a standard one) would be the histogram encoding. If "spatial feature encoding" is meant to imply the preservation of some geometric information, than spatial binning is another technique to look at.
H: Better confusion matrix but higher log loss? Is that possible? I have tried 2 different versions of a GBM on a multinomial classification problem. The second model results in a better confusion matrix but in a worse log loss value (on the test sample). How is that possible? Below are the results of the two models. I thought it could be because class A is much more oversampled, and the small decrease in that class could lead to such a deterioration of the log loss? Any ideas? Thank you. AI: Log loss is a measure of the confidence of the model in its predictions: lower log loss implies higher, well-placed confidence and vice versa. A better confusion matrix with a worse log loss implies that there are fewer misclassifications, but the probability scores behind the predictions are less favourable (for example, the remaining errors are made confidently, or the correct predictions only barely clear the threshold). Since accuracy is calculated with a fixed probability threshold, you might not observe this directly. Plot the ROC curves for both models, which are based on varying the decision threshold; this gives you an idea of which model is better. A tiny numerical illustration follows below.
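To make the confidence point concrete, here is a small made-up binary example (scikit-learn assumed): model B gets every prediction right, yet its log loss is higher because its correct predictions sit barely above the 0.5 threshold, while model A makes one mistake but is otherwise confident.
import numpy as np
from sklearn.metrics import accuracy_score, log_loss

y_true = [0, 0, 1, 1]
p_a = [0.10, 0.60, 0.90, 0.90]   # model A: one mild mistake, otherwise confident and correct
p_b = [0.49, 0.49, 0.51, 0.51]   # model B: all correct, but only just

for name, p in [("A", p_a), ("B", p_b)]:
    preds = (np.array(p) >= 0.5).astype(int)
    print(name, accuracy_score(y_true, preds), round(log_loss(y_true, p), 3))
# A: accuracy 0.75, log loss ~0.31   B: accuracy 1.0, log loss ~0.67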
H: How to normalize complex-valued data? I'm taking the abs of all elements, computing the mean, and subtracting it off from the original values. I just feel that this is not correct and can change the vectors. I'm also dividing by the standard deviation, but I'm quite confident about this, knowing that this is pure rescaling of complex values. Any guidelines on how to do this? PS: My concern emanates from thinking of each complex-valued element as a vector in the 2D plane, and this subtraction could change where it is pointing to. AI: First off, it's always helpful to think geometrically about what complex numbers are, and what arithmetic operations achieve. In your function, you are using the mean and standard deviation of the absolute value of these complex numbers. That means that if you perform your operation on the absolute value of your data: (tmp - tmp.mean()) / tmp.std() you will end up with normalized data of mean 0 and standard deviation 1. Going back to thinking geometrically, when you perform your original operation: (x_source - tmp.mean()) / tmp.std() you are essentially moving your data's mean tmp.mean() units to the left, then scaling horizontally by 1/tmp.std(). Notice none of this is vertical shift or scaling, so something smells funny. What I would do: I would normalize each coordinate independently. Finding the mean is fine -- the mean of complex data points is the same as the mean of the components: $$ \bar z = (\bar x , \bar y) $$ So you can subtract the mean of the $x$ values from each input value's $x$-coordinate. Ditto for $y$. Then you divide the real component by the standard deviation of the real component, and ditto for the imaginary component. It could also be appropriate to divide by the standard deviation of the (new) norms. This would ensure good properties involving your data lying within a circle of a certain radius. Code (using numpy):
import numpy as np
real_data = np.real(x_source)
imag_data = np.imag(x_source)
real_data = (real_data - real_data.mean()) / real_data.std()
imag_data = (imag_data - imag_data.mean()) / imag_data.std()
x_source_norm = real_data + 1j * imag_data
H: Find columns with numeric values, but stored as string I need to find the columns in a data frame which have numeric values but are stored as strings. data_set = pd.DataFrame({"Number":["1","2","3","4","5"], "Char":["A","B","C","D","E"]}) data_set.dtypes In the above code, column "Number" has numeric values, but they are stored as strings. I have to iterate through the columns and convert them to int or float. I can do it by taking a column name and converting it: data_set["Number"] = data_set["Number"].astype(int) What I need is to do it dynamically. Dynamically means iterating through the columns and changing them. AI: I think this does what you want: iterate over the object (string) columns and try to convert each one to a numeric dtype, leaving genuinely non-numeric columns untouched.
for column in data_set.columns:
    if data_set[column].dtype == object:   # values stored as strings
        try:
            data_set[column] = pd.to_numeric(data_set[column])
        except ValueError:
            pass   # non-numeric strings, leave the column as-is
H: Are mainstream pre-trained models useful as discriminators? In the context of GANs I see many papers designing new discriminator networks. I'm curious about the usefulness of designing discriminators as modified versions of mainstream models like Inception, MobileNet, EfficientNet etc. My intuition is that the mentioned image classification models are way more refined and standardised than a custom discriminator for a specific paper. In addition, I think their pretrained weights should be helpful. And if they aren't useful, I'm curious why not. Any links about this topic are much appreciated. AI: Using a standard network architecture is perfectly reasonable. Most discriminator architectures are trivially different variants of well-known architectures anyway. Depending on the GAN loss, starting with a pretrained network as a discriminator may be dangerous. For instance, the classical GAN loss minimizes the Jensen-Shannon divergence, so having a powerful discriminator right from the start will mean little support overlap and no useful gradients from the discriminator. This is a major reason for training instability in GANs. Yet, more recent losses (e.g., the various WGAN losses) do not have this shared support problem. However, losses like WGAN assume certain properties of the discriminator (i.e., Lipschitz-$k$), which pretrained networks are quite unlikely to satisfy out of the box. So from a purely theoretical perspective, I can see why it's not very appealing. Another reason, I suspect, is that the discriminator's job is quite specialized and requires it to be dynamic; i.e., it has to be able to "change its mind" quickly as the generator adapts. It's not clear to me that a pretrained network, which people have worked to place in a high-quality and robust part of weight space, is good for this. Furthermore, since a discriminator's job is a little easier than e.g. ImageNet classification I suspect that the massive deep networks often used for transfer learning are simply unnecessarily large for the task (the backward or even forward passes being unnecessarily costly, I mean; GANs already take enough time to train). That being said, it is actually common to use pretrained networks for the discriminator for perceptual feature matching losses. (You may be familiar with the even more common "perceptual losses" commonly employed in autoencoders). The idea is to match feature statistics, rather than optimize a scalar probability (this idea comes from Salimans et al, I believe). Note that the pretrained networks used for this (usually VGG) are not trained or even fine-tuned most of the time though; they're usually left alone. Some relevant papers: [1], [2], [3]. I suspect that you can always use a pretrained network in a setup similar to these or in the manner employed by McGAN, and it will be helpful. Despite all this, I am fairly sure that using a pretrained network as a starting point will still work fine in many cases, though I'd suggest starting with a WGAN variant for it to work. It may even do very well, depending on the data and the other training details. Let me know if you try it out =)
H: Does the mean/median of a set sentence embedded vectors represent anything? Please bear with me as I am new to NLP. I am specifically using tensorflow's universal sentence encoder: https://tfhub.dev/google/universal-sentence-encoder-large/3 I am clustering text based on the cosine similarity of the embedding produced by the model and I want to see what cluster a new text would most likely lie in. I was going to compare the new text embedding to the mean/median of all the embeddings within a cluster to see which cluster it would most likely lie in. Would taking the mean/median of the cluster's vectors "represent" the general idea of the cluster or will the vector not represent what I am looking for? AI: Well, the mean is pretty average for all the words. These tend to all be quite similar, cluster in the center of the data, and have nearest neighbors to pretty bland, generic words. The average word vector is not a good representation of what a text is about.
H: What is the difference between the value -99 and NaN in a data column? I am new to data science. I was looking into some datasets and I saw some values like -99, which I discovered later that it means that there is a missing value. Does this mean the same thing as NaN? If it is the same thing, why do we use -99 instead of NaN? AI: No, it is not the same. It may have that meaning in that particular dataframe, but don't take that as a rule. I'd recommend you replace that magic number with actual NaNs, and then try to find the best possible way of filling up the missing values.
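A minimal sketch in pandas, assuming a frame where -99 is documented to mean "missing":
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, -99, 40, -99, 31]})
df["age"] = df["age"].replace(-99, np.nan)          # sentinel value -> real missing value
df["age"] = df["age"].fillna(df["age"].median())    # then impute however you see fit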
H: Which language to learn for Machine Learning? I am currently working in BigData with Spark-Scala framework. I want to learn Machine learning from scratch. Which language would be better to learn for machine learning, Scala or Python? AI: I have done only a little bit of scala (spark streaming for database replication), but have worked with python for about a year. I would recommend python for the following reasons: We do a lot of notebook centric development at my job. I would recommend that because the Jupyter IDE allows you to visualize/debug your data really effectively, which is important to understanding and validating concepts. Python is dynamically typed and has a powerful set of built-ins and utilities that will let you transform your data into the shape you need pretty easily. Python has first-class support for almost all ML libraries (TensorFlow, PyTorch, Sci-kit Learn).
H: How should a decision tree handle an attribute that can be anything? Say I have AttributeA that can take values A1, A2, A3, AttributeB that can take values B1, B2, B3, etc. and I know ahead of time that my classification table looks like AttributeA | AttributeB | AttributeC | Classification A1 | B1 | anything | Class 1 anything | B2 | anything | Class 2 A3 | B1 | C2 | Class 3 A2 | anything | C3 | Class 4 ... I'm curious how I would modify a decision tree to handle attributes that can take on any value. One idea I had was to change single rules with "anything" into multiple rules where every possible value of that attribute is explicitly stated. For instance, the rule A1 | B1 | anything | Class 1 could be changed into the three rules: A1 | B1 | C1 | Class 1 A1 | B1 | C2 | Class 1 A1 | B1 | C3 | Class 1 I'm sure this would work, but I'd like to see if there are any existing decision tree implementations that can handle "anything" or "does not matter" entries. AI: I think your description of anything matches the idea of missing values. Basically by stating that a value could take any value you say that you do not know the value. In standard Breiman description of CART or random forests there is a way to handle missing values. If you use Python stack, however, the implementations does not allow missing values. If that is the case one surrogate would be a different new value for categorical data. For numerical continuous variables I don’t know something better than imputing with average, but it is not quite the same thing.
H: What to do when the target variable does not correlate with any of the independent variable in a dataset? I am quite new to data science. I am trying to use Logistic Regression to predict my target (either 1 or 0). But the problem is when I use a heatmap to find the correlation between the columns and the target variable, the highest value I get is around 0.17 (out of 1). So it appears to me that my target variable doesn't correlate with any of the columns in the dataset. My question is, is it normal to have a such target variable? And what can I do to increase correlation between the target variable and other columns? AI: If your predictors have nothing to do with the outcome, you should not be able to build a model that works out-of-sample. This is a feature, not a bug, of machine learning. For instance, do you consider what time I set my alarm in the morning to be predictive whether or not you have cereal for breakfast? Features can, however, have just a small relationship with the outcome and combine to be quite predictive. Perhaps my alarm does not influence your breakfast choice, but there are a number of factors that do, each of which might be poor at predicting the outcome, but the combination of $3$ or $10$ might be very predictive. At an extreme, consider individual pixels of the MNIST digits. Does the middle pixel, on its own, have much ability to distinguish between the digits? What about some other pixel? Every individual pixel is a poor predictor of the digit, yet all $784$ combined result in strong performance.
H: how to correct mislabeled data in training, validation and test set In an image classification task, I know there are mislabeled data. should I remove/correct them in all training / validation / test set ? I saw this article https://arxiv.org/pdf/2103.14749.pdf but I am not sure if I understood the result correctly - does adding mislabel improve the performance ?! AI: I'd say yes : if you have mislabeled data in train, the model won't learn correctly, and if you have them in test/val, your test results won't represent the actual model results. Since that's directly the label, that's ok to correct/remove them, since you'd never have the label in real new cases. Moreover, if you encounter the same problem with a variable, it's more difficult to deal with, since the variable may also have misvalues in the future application of your model.
H: Which neural network is better? MNIST dataset with 60 000 training samples and 10 000 test samples. Neural network #1. Accuracy on the training set: 99.53%. Accuracy on the test set: 99.31%. Neural network #2. Accuracy on the training set: 100.0%. Accuracy on the test set: 99.19%. Which neural network is better if other parameters are unknown? I have seen how many studies focus on accuracy on a test set, and rarely write about accuracy on a training set. The first neural network is better in accuracy on the test set, but worse in accuracy on the training set. Would you say that unlearned training samples can be bad for testing a different test set? I have an idea to compare for overall accuracy: (99.53% * 60000 + 99.31% * 10000) / (60000 + 10000) = 99.499% (100.0% * 60000 + 99.19% * 10000) / (60000 + 10000) = 99.884% Or it could be a weighted multiplication. But I'm not sure about that. What do you think about this? AI: When you train a neural network, you usually use 3 sets: one for training, one for development, one for testing. Your training set is here for (obviously) training your model: the performance of your model on your training set reflects how well your model learnt "by heart" what you showed it. Your development set is used jointly during training: your model does not see this data, but monitoring its accuracy on said set allows you to stop the training before overfitting. Overfitting is when your model learns on your training set so well it can no longer generalise what it learned to new data, but can only reproduce what it has seen strictly. In general, you do not want your models to have this behavior, and you therefore stop the training at the inflexion point where your training accuracy increases while your development accuracy starts decreasing. (This is what you observe for your model 2: its training accuracy is higher, but dev accuracy lower) Your test set is only used at the end of your training to check, on yet a new set of data, that your model can still generalize. You NEVER stop your model training based on the test set, because it would introduce a bias. TLDR. The only accuracy reported in the papers is the one on the test set. In most cases, how well the model learned the data "by heart" is not important (training accuracy) nor is how you chose to stop it before overfitting (dev accuracy). What is interesting is how well your model can generalize on completely new data, used neither for training nor for monitoring training (test accuracy).
H: How to interpret .get_booster().get_score(importance_type='weight') for XGBRegressor() I am trying to do feature selection using XGRegressor(). I am doing this because I have many features to choose from over 4,000. Once I have a set of features I have a neural network I created to use these features to predict median_gross_rent. My question is the following, I have these feature important scores from feature_important = model.get_booster().get_score(importance_type='weight') keys = list(feature_important.keys()) values = list(feature_important.values()) feat_imp_df = pd.DataFrame(data=values, index=keys, columns=["score"]).sort_values(by = "score", ascending=False) Is there a threshold score that I should set to make a cutoff? What are the best practices for feature selection in determining out of large amount of features to use? Thanks! AI: If you must select features in this way, the traditional method is to pick the number of top features that you want to obtain, instead of a threshold. Normally this kind of feature selection is done only when there too many features with respect to the number of instances. This is why one tries to guess what would be a reasonable number of features $n$, then arbitrarily pick this number. A slightly more advanced method is to try for several values of $n$, e.g. 100,200,300..., train/test a model for each $n$ (on a validation set), then observe how the performance evolves using these values in order to pick the best number. Note that selecting feature based on their individual importance is not always optimal. When possible, it's better to let to the algorithm decide which features it uses.
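A rough sketch of the "try several values of n" procedure. It assumes the fitted XGBRegressor model was trained on a DataFrame (so the importance keys are column names), that X_train, y_train, X_val, y_val exist, and it uses LinearRegression purely as a cheap stand-in for the downstream neural network:
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

scores = model.get_booster().get_score(importance_type="weight")
ranked = pd.Series(scores).sort_values(ascending=False)

for n in [100, 200, 300, 500]:
    cols = list(ranked.index[:n])
    est = LinearRegression().fit(X_train[cols], y_train)
    rmse = np.sqrt(mean_squared_error(y_val, est.predict(X_val[cols])))
    print(n, rmse)   # pick the n where validation error stops improving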
H: Getting both results and probabilities running scikit learn random forest I have a scikit learn RandomForestClassifier that returns 0s and 1s: X = [ [2,1,1,1], [2,0,2,1], [3,1,1,1] , [3,1,1,1], [3,1,1,1] ] y = [ 0, 1, 1, 1, 1 ] rf = RandomForestClassifier(n_estimators=200, max_depth=5) rf.fit(X, y) X_test = [ [2, 0, 1, 0], [2,1,1,1] , [3,1,1,1] ] y_result = rf.predict(X_test) I can rerun the classifier and get probabilities instead of values replacing with y_result = rf.predict_proba(X_test) But how can I get from scikit learn BOTH the result and the probability? If I cannot get both results in the same run, does it make sense to run the probability and have a threshold, say 0.7, that if the probability is greater than the threshold then the result is 1 ? AI: how can I get from scikit learn BOTH the result and the probability? You can simply run both: The results will always be consistent because there is no randomness involved at the prediction stage, only at training stage. The computations required for predicting are not intensive, so I don't think there can be any major efficiency issue running it twice. does it make sense to run the probability and have a threshold, say 0.7, that if the probability is greater than the threshold then the result is 1 ? You can but it's not standard: in binary classification the regular threshold is 0.5 (since if $p(x)>0.5$ then $p(y)<0.5$). The use of a different threshold makes sense if you want to give more importance to either precision or recall. For example a threshold of 0.7 means requiring higher confidence for positive cases, so less instances predicted as positive therefore higher precision but lower recall.
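Concretely, with the rf and X_test from the question (assuming class labels 0/1, so that column 1 of predict_proba is the positive class):
import numpy as np

y_pred = rf.predict(X_test)                # hard labels, implicit 0.5 threshold
y_proba = rf.predict_proba(X_test)[:, 1]   # probability of class 1

# optional stricter threshold, e.g. require 0.7 confidence to call a positive
y_custom = (y_proba >= 0.7).astype(int)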
H: n_jobs=-1 or n_jobs=1? I am confused regarding the n_jobs parameter used in some models and for CV. I know it is used for parallel computing, where it includes the number of processors specified in n_jobs parameter. So if I set the value as -1, it will include all the cores and their threads for faster computation. But this article:- https://machinelearningmastery.com/multi-core-machine-learning-in-python/#comment-617976 states that using all cores for training, evaluation and hyperparameter tuning is a bad idea. The crux of the article is as follows:- 1.)When using k-fold cross-validation, it is probably better to assign cores to the resampling procedure and leave model training single core. 2.)When using hyperparamter tuning, it is probably better to make the search multi-core and leave the model training and evaluation single core. But common sense says that setting n_jobs = -1 everywhere will include all cores for faster computation and hence result in less run time. Can anyone clarify? AI: This is about distributed computing: let's say that you have 100 tasks and 10 cores available. You parallelize your tasks so that each core processes 10 of them. Now let's imagine that the task involves some subtasks and internally tries to use all the cores available: at the two levels of parallelization the processes compete for the cores, causing a loss in performance because there are more pending processes than available cores. This can usually be observed with some utilities to visualize the activity of the cores. This is why it's much more efficient to control at which level parallelization should happen, in order to minimize competition between processes. In my example in theory the top tasks could be distributed say into 5 processes while each task could still use 2 cores. However it's usually simpler and more efficient to just decide one level of parallelization: in your case either the training or the parallelization, but not both. Imho the choice is not obvious though, in particular it depends on how intensive the training process is.
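A small sketch of "parallelize at one level only" with scikit-learn (the estimator and grid are illustrative; X_train and y_train are assumed to exist):
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# single-core estimator, multi-core search: the two levels don't compete for CPUs
clf = RandomForestClassifier(n_jobs=1, random_state=0)
grid = GridSearchCV(clf,
                    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
                    cv=5, n_jobs=-1)
grid.fit(X_train, y_train)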
H: Is there a general rule for how many layers a NN should be based on the number of inputs? I have a neural network that takes 1935 inputs, so I'm wondering if there is a general rule for how many layers the network should be. Should the number of neurons be descending by a certain amount? AI: It doesn't depend on the number of inputs directly. Your network has to be large enough to capture the structure of the problem and the data. This process would involve some trial and error experimentation For challenging predictive modeling problems, deep neural networks may be a heuristic approach. For the number of neurons and how much they should descend too, use trial and error. We cannot specify the “best” number of neurons analytically. We must test. Exploring and implementing more such models would help you develop better intuition about these, still involving some trial and error. Read more here
H: Using MFCC and MFCC Delta features with a CNN A lot of studies feed MFCCs as well as MFCC delta and double deltas directly to a CNN for audio classification. My question is, are the MFCC Deltas concatenated with the MFCC matrix? Most papers simply state they used MFCC + MFCC Delta + MFCC Double Delta and the plus sign is left to interpretation! AI: Yes, the delta and delta-delta variants are concatenated. However the details may vary a bit based on model type: If the model takes a 1d (features,) input (such as a multi-layer-perceptron, logistic regression, random forest etc), then the delta coefficients are concatenated. So features is [mfcc1,mfcc2...,dmfcc1,dmfcc2... ]. For a model that takes 2d (time,features) input like a RNN, then it is concatenated on the features axis. For a model that takes 3d (time,features,channels) inputs like a CNN, then the delta coefficients are usually its own plane in the channels dimensions. This ensures that the delta MFCC coefficient is in the same time x feature position as the corresponding MFCC coefficient, which is easiest for the convolutional kernel to exploit.
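A sketch with librosa (assuming that is the extraction library in use; the file name is a placeholder) showing both the 2-D concatenation and the 3-D channel stacking described above:
import numpy as np
import librosa

y, sr = librosa.load("clip.wav")
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape (13, time)
delta = librosa.feature.delta(mfcc)                  # delta coefficients
delta2 = librosa.feature.delta(mfcc, order=2)        # delta-delta coefficients

rnn_input = np.concatenate([mfcc, delta, delta2], axis=0).T   # (time, 39) for RNN/MLP-style models
cnn_input = np.stack([mfcc.T, delta.T, delta2.T], axis=-1)    # (time, 13, 3) for CNNs (channels last)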
H: What is the difference between Multi task learning and domain generalization I was wondering about the differences between "multi-task learning" and "domain generalization". It seems to me that both of them are types of inductive transfer learning but I'm not sure of their differences. AI: Domain generalization: Aims to train a model using multi-domain source data, such that it can directly generalize to new domains without need of retraining. Focusing, Multiple domains on same task Multi-task learning (MTL): MTL is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. In other words, same domain on multiple tasks Main Difference: Domain generalization Multi-task learning Multiple domain dataset on same task Same domain dataset on multiple tasks As its a single task, no need for parallel execution Multiple tasks are executed in parallel
H: How to classify objects from a description in natural language My objective is to classify objects that all belong to a certain category, based on a textual description of these objects by humans. My problem is not specific to a certain category of objects, but for sake of clarity I am going to give examples as if the objects I wanted to classify were movies. To be precise: the description contains both a judgement of the object, and a more objective description of the various parts of the object. For example: “This movie has great lines, and the scenario is well-though. It counterbalances the poor actor performance. Still, overall I think it's a very good movie”. This both contains information about different aspects of the movie, and provides a subjective review. what I want is: a score for each object (like a movie rating), based on how appreciated it is; for a given object, "similar" objects (ie. if you liked this movie, you might also enjoy these), based on similar "features" each object has. For instance, a movie which was also well-written might be considered "similar" to the former example. I also have access to a pre-existing classification of these objects. For instance, a movie might be labeled "action/thriller". This classification is too broad for my purposes (ie. not all "action/thriller" movies are similar), but it might be a good start. I have though that to solve my problem, I could use sentiment analysis to give each object a score, and that natural language processing coupled with a feature space could do the trick for classifying objects. The point is that I am unsure on how to proceed, because I am new to machine learning, natural language processing, and data sciences in general. I have nonetheless a CS and mathematical background. Could you provide some insight on where to start? Are there libraries that already provide this kind of features? This is a repost of this question, since it was not focused enough, and this forum seems more appropriate. It has been rewritten. AI: Text vectorisation is a good way to have a reliable classification. You have several libraries like doc2vec that you can use together with logistic regression or dimensional reduction technique like tSNE or UMAP. https://radimrehurek.com/gensim/auto_examples/tutorials/run_doc2vec_lee.html On the other hand, you can also use libraries like BERT or TF-IDF: https://pypi.org/project/bert-document-classification/ https://medium.com/swlh/text-classification-using-tf-idf-7404e75565b8
H: What methods are there for predicting a signal? I have a large dataset of signals (composed of time series). All time series describe the same process, but each series has a different duration (number of points). Based on these time series, I want to train some neural network, so that then I give a new time series as input and it predicts 100 further points. I have two questions: What transformations are there to reduce all signals to one size? What are the methods for solving such problems (predict time series)? I know the popular ARMA and ARIMA models, but they work with the same time series. The goal of my task is to find patterns between time series in order to learn how to predict the further behavior of a new time series. Thanks for any help! AI: I don't think you can compress time series because there is a risk of losing valuable data. Rather than that, you can set a the max size as the default size, and set zeros to the left for smaller data. If the sampling is too high (ex: milli seconds), do not hesitate to reduce it for all data (ex: seconds) taking the average values, as long as the prediction objectives allows it. Furthermore, the further you want to predict, the worst the prediction generally is: that's why a lower sampling rate could be useful. RNN and LSTM are also good solutions, in addition to ARIMA. However, they are quite sensitive to noise: if your signals are quite noisy, try to reduce the noise to have good predictions. Keep in mind that time series prediction with NN is not an exact science: you may have to apply many modifications and improvements on your data to reach very good results. Here is a notebook that could be useful: https://github.com/ageron/handson-ml2/blob/master/15_processing_sequences_using_rnns_and_cnns.ipynb
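A minimal sketch of the left-padding suggestion, assuming the signals are held as a list of 1-D numpy arrays:
import numpy as np

series_list = [np.random.randn(n) for n in (120, 250, 300)]   # toy variable-length signals
max_len = max(len(s) for s in series_list)
padded = np.stack([np.pad(s, (max_len - len(s), 0)) for s in series_list])   # zeros on the left
print(padded.shape)   # (3, 300)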
H: Why must x and y axis be the same length? I am getting started with visualization, but right at the start I am having a serious conceptual problem. Repeatedly, I get the error ‘x and y must be the same size’ / ‘array must be the same length’ / ‘have the same shape’. I just fundamentally don’t understand why that must be so. Let me give you an example: I go bowling once a day, every day, for a month. I keep track of my scores. I want to know what day of the week I get the best scores. I want to make a chart with days of the week on the y axis, and dates of the month on the x axis. My scores go in each intersection of these two axes. There are 7 days in a week, and 28 (just to ignore a few days getting more turns) days in a month. Clearly, x and y are not the same length, and yes, I get an error. But so what? Why is that an issue? Every day and date will get a score. I would think it is much more common for x and y to not be the same length in the real world. For example, how would anyone ever be able to do a long running time series? So clearly there is something fundamental about charts that I just am not grasping. Can you tell me what it is? Thanks. @seanowen Yes, and bars, too... import altair as alt import pandas as pd data = pd.DataFrame({'y': ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'], 'x':[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26, 27,28]}) ValueError Traceback (most recent call last) /tmp/ipykernel_233189/3256713105.py in <module> .....snip... ~/.virtualenvs/PapasBook/lib/python3.9/site-packages/pandas/core/internals/construction.py in _extract_index(data) 633 lengths = list(set(raw_lengths)) 634 if len(lengths) > 1: --> 635 raise ValueError("All arrays must be of the same length") 636 637 if have_dicts: ValueError: All arrays must be of the same length AI: You aren't making a plot here. You're creating a DataFrame. A DataFrame is like a table, with rows and columns. You are specifying two columns x and y, but don't give the same number of values. It's like you're asking for 7 rows, but also 31 rows. I believe you really mean to create a DataFrame with three columns: dayOfWeek, dayOfMonth, and score. A row represents one score, along with the day of week and day of month it happened. This kind of data could be correctly used for all kinds of plots then.
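Concretely, the three-column ("long") layout described above would look like this (the scores are made up):
import pandas as pd

scores = pd.DataFrame({
    "dayOfMonth": [1, 2, 3, 4],
    "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday"],
    "score": [120, 98, 135, 110],
})
# one row per game, so every column has the same length and there is no ValueError
print(scores.groupby("dayOfWeek")["score"].mean())   # e.g. average score per weekday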
H: K-fold cross-validation - how to handle the regular parameters/hyper-parameters of the algorithms K-fold cross-validation divides the data into k bins and each time uses k-1 bins for training and 1 bin for testing. The performance is measured as the average across all the K runs, err ← err + (y[i] − y_out)^2, as demonstrated in Wikipedia and the literature:
err ← 0
for i ← 1, ..., N do
    // define the cross-validation subsets
    x_in ← (x[1], ..., x[i − 1], x[i + 1], ..., x[N])
    y_in ← (y[1], ..., y[i − 1], y[i + 1], ..., y[N])
    x_out ← x[i]
    y_out ← interpolate(x_in, y_in, x_out)
    err ← err + (y[i] − y_out)^2
end for
err ← err/N
But what about the parameters that are obtained from the training? Are they averaged across all the training runs, or do they have to be picked from the best fold in k-fold cross-validation? Do we need to run the same ML algorithm in k-fold cross-validation, or can each fold have a different algorithm? I think we need to run only one algorithm per k-fold run, and for each individual algorithm we need to run k-fold cross-validation. AI: Cross-validation is a method to obtain a reliable estimation of the performance. The performance is obtained as the average across the CV "folds" because this way it doesn't depend on a single test set, i.e. the impact of chance is minimized. In the case of hyper-parameter selection, the goal is not only to evaluate but also to select the hyper-parameter values based on this evaluation. This turns the CV process into a training stage, because it is used to determine something about the model. When the goal is to select the best hyper-parameters among a set of possible assignments of their values, the method is run across all the CV "folds" for every possible assignment, and then the average performance is also obtained for every assignment. At the end of the CV process the assignment which corresponds to the maximum average performance is selected. Now that the parameters are fixed, one still has to determine the true performance on a fresh test set, because the high performance among the parameter assignments could be due to chance. This is why a model is trained again with these parameters (usually using the whole training data), then applied to a fresh test set to obtain the final performance; a short scikit-learn sketch of this workflow is given below. Notice that everything in CV is done the same way across the "folds": the same method(s) are run for every fold, and the results are always obtained across all "folds". In particular one should never select the best model or parameters by picking the maximum "fold".
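A short scikit-learn sketch of this workflow (the estimator, grid and dataset are illustrative): the grid search picks the hyper-parameters by average CV score, refits on the whole training set, and only then is the held-out test set used once.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5)
search.fit(X_train, y_train)            # hyper-parameters chosen by average score across folds
print(search.best_params_)
print(search.score(X_test, y_test))     # final performance on a fresh test set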
H: Extra feature on test set Suppose I convert categorical data into dummy variables with get_dummies and I get these columns in the training dataset: x_A x_B x_C 0 1 0 0 0 1 1 1 0 But in the test dataset I have the following columns: x_A x_B x_C x_D 0 1 0 1 0 0 1 0 1 1 0 1 Should I create a 'D' column in the training set with all the values set to zero to apply later the model on the test set?? Or what should I do? AI: The binary features obtained from one-hot encoding a categorical feature must be obtained from the training set only. This implies that any new value in the test set cannot be used. I recommend the following method: before encoding the variable in the training set, discard all the rare values (for examples the ones which have a frequency lower than 3) and replace them with a special value unknown. Training the model with this special value has two advantages: Rare values are very likely to cause overfitting. The unknown value lets the model decide what to do with these cases, so it can be used in the test set to replace any new rare value (if a value doesn't appear in the training set it must be rare).
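A sketch of keeping the test-set encoding aligned with the training set (pandas route; scikit-learn's OneHotEncoder with handle_unknown='ignore' achieves a similar effect). Note this simply zeroes out unseen values, whereas the "unknown" bucket recommended above would map them to a dedicated column instead:
import pandas as pd

train = pd.DataFrame({"x": ["A", "B", "C"]})
test = pd.DataFrame({"x": ["A", "D", "B"]})   # "D" never seen during training

train_dummies = pd.get_dummies(train["x"], prefix="x")
test_dummies = (pd.get_dummies(test["x"], prefix="x")
                .reindex(columns=train_dummies.columns, fill_value=0))   # drops x_D, adds x_C as zeros
print(test_dummies.columns.tolist())   # ['x_A', 'x_B', 'x_C']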
H: what is the formula used in the scale_quantile function in R? I want to know what formula is being is in the package scale_quantile by the dynutils package. I want to check if I can achieve the same results as with manual calculations like this # Air quality dataset data = datasets::airquality # Manual calculation dat = data$Wind #Calcuate quantile q_995 = quantile(dat, 0.995) q_005 = quantile(dat, 0.005) #Filtering the data based on the quantile value dat = dat[dat >= q_005 & dat <= q_995] MEAN = mean(dat) SD = sd(dat) manual = (dat - MEAN)/SD This is the site I referred for documentation, Documentation of scale_quantile. AI: You can look in the source code or just look at the dynutils Github. scale_quantile <- function(x, outlier_cutoff = .05) { if (is.null(dim(x))) { sc <- scale_quantile(matrix(x, ncol = 1), outlier_cutoff = outlier_cutoff) out <- sc[,1] names(out) <- names(x) attr(out, "addend") <- attr(sc, "addend") attr(out, "multiplier") <- attr(sc, "multiplier") out } else { quants <- apply(x, 2, stats::quantile, c(outlier_cutoff, 1 - outlier_cutoff), na.rm = TRUE) addend <- -quants[1,] divisor <- apply(quants, 2, diff) divisor[divisor == 0] <- 1 apply_quantile_scale(x, addend, 1 / divisor) } }
H: Using numpy to enter noise into data I am new to data science and have to generate 200 numbers from a uniform distribution, set this as x, and generate y data using x while injecting noise from the Gaussian distribution: y = 12x - 4 + noise. My approach: x = numpy.random.rand(200) --> this will generate 200 numbers from a uniform distribution. I am not sure how to inject noise from the Gaussian distribution; probably it's something like z = numpy.random.randn(200) and y = 12 * x - 4 + z. Is that a correct way to inject noise? AI: Yes. numpy.random.randn(n) will generate an array of n random numbers drawn from the normal distribution centered at 0. So just do:
import numpy as np
x = np.random.rand(200)                   # uniform inputs
y = 12 * x - 4 + np.random.randn(200)     # Gaussian (normal) noise
Just as you put in your question.
H: configuring axis in matplotlib x = np.random.rand(200) and y = 12 * x - 4 + np.random.randn(200) After ploting the (x,y) values I get the following graph in matplotlib: How can I configure the axis so all samples are clearly visible? AI: You could try the scatter function. import matplotlib.pyplot as plt import numpy as np x = np.random.rand(200) y = 12 * x - 4 + np.random.randn(200) plt.plot(x, y, '.') plt.show() or change the last part plt.scatter(x, y, marker='o') plt.show() take a look at some great resources: Jakevdp blog and Matplotlib's own documentation
H: Run linear regression fit on 2 1D arrays On doing this-
x = np.random.rand(200)
.
.
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.20, random_state=0)
reg = LinearRegression()
reg.fit(x_train, y_train)
I got the following error. Although I do understand that I am not passing the parameters properly, I am not sure how to make the input into a 2D array for linear regression. AI: By using reshape you can transform your input x into the 2-D shape (n_samples, 1) that scikit-learn expects:
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.20, random_state=0)
x_train = x_train.reshape(-1, 1)   # (n_samples,) -> (n_samples, 1)
x_test = x_test.reshape(-1, 1)
reg = LinearRegression()
reg.fit(x_train, y_train)
H: Handling conflicting cases pandas python I have a data set where some rows are same but belong to different classes. Example - index Heading 1 Heading 2 Heading 1b Heading 2b Class/Target row -1 a b c d 0 row -2 t r f k 0 row -3 m u p l 0 row -4 a b c d 1 row -5 m u p l 1 row -6 v r z h 0 row -7 z q y o 1 row -8 w e t a 1 row-1 and row-4 are same rows but with different class. Similar case with row-3 and row-5 There are only two classes. I want to make those rows to new class say for example -2 It will look like this: index Heading 1 Heading 2 Heading 1b Heading 2b Class/Target row -1 a b c d 2 row -2 t r f k 0 row -3 m u p l 2 row -4 a b c d 1 row -5 m u p l 2 row -6 v r z h 0 row -7 z q y o 1 row -8 w e t a 1 We can see those rows are mapped to 2. And the duplicates are also kept in the same order. Previously, I use iloc and iterate. But it takes huge amount of time as the size of the data set is huge. So, I converted into dictionary, it was fine and fast. But it requires bit of manipulation and more coding work. I would like to know how can it be done in a simple way. AI: Your headings are pretty strange, since you have columns with the same names. I'll use columns Heading 1, Heading 2, Heading 1b, Heading 2b. First create a new column in your df marking duplicates rows : df['Duped'] = df.duplicated(subset=['Heading 1', 'Heading 2', 'Heading 1b', 'Heading 2b'], keep=False).astype(int) You now have a Duped column, with 1 if row is duped and 0 if not. Then Modify Class/Target according to it : df.loc[df['Duped'] == 1, 'Class/Target'] = 2 Then drop your intermediate Duped column : df = df.drop(columns=['Duped'])