H: Finding parameters of image filter using classified pairs I want to solve the problem of finding a parameter vector for an image filter (let us assume we know nothing about how the filter works, but we can feed it an input image and a set of parameters to produce an output image). Thus, having a set $\{{I_k, J_k:=F_{\alpha}(I_k)}\}_{k\in\overline{1,N}}$ of $I_k$ images together with their filtered counterparts, $J_k$, what solutions would you recommend for finding $\alpha^\ast$ such that, given $I^\ast$, the result $F_{\alpha^\ast}(I^\ast)$ is in the same "style" as that of the $N$ training correspondence pairs? I suppose one option is to use a convnet to transform $I_k$ into a feature vector, $v_k$, and then concatenate $\alpha_k$ to obtain $u_k =(v_k,\alpha_k)$. Once this is done, use a regression method to estimate the $\alpha^\ast$ part of $u^\ast$. I would like to find an alternative solution to what seems like a candidate for the style transfer approach (e.g. https://arxiv.org/pdf/1703.07511.pdf). That approach seems to solve the problem differently, and I envision situations where I need to simply use a filter rather than let a network "guess the style of that filter". Additional details and possible assumptions: Given the usual no-free-lunch concerns, let us assume, for a particular problem from this class, that $F$ is a non-linear kernel-based filter that maps $I$ to $J$ as a result of an iterative and convergent process. More specifically, let $F$ be a mean shift filter with parameters $\alpha=(\rho, \sigma_s,\sigma_r)$, using a concatenated Gaussian kernel and a Parzen window of size $\rho$. Intuitively, I would be tempted to guess that this filter is not smooth w.r.t. $\alpha$, but a formal investigation is required (I suspect it is not smooth given that infinitesimal changes in the size of the window could shift the output towards another mode, indicating step-function behaviour). In general, it is correct to assume that $\alpha \in \mathbb{R}^d$, with $d \ll N$. Since the filter action is known (via numerical computation in general, or in closed form if the filter is a Gaussian blur, for example), we can be confident that the $N$ input samples have non-constant $\alpha_k$ vector values to start with. But for the sake of generalizability, it would be more elegant to pursue a solution that does not need to know how the filter operates internally, only to apply it to an input. The first approach suggested in the comments, based on convnets, seems to fit this scenario, and its optimization problem takes the filter error into account. However, it would be interesting to hear more opinions, perhaps involving shallow approaches, even at the expense of designing the solution to address the concrete mean shift filter example from above. AI: Your parameter $\alpha$ has fairly low dimension. Therefore, I recommend that you apply optimization methods directly to try to find the best $\alpha$ (without trying to use convolutional neural networks and regression for this purpose). Define a distance measure on images, $\|I-J\|$, to represent how dissimilar images $I,J$ are. You might use the squared $L_2$ norm for this, for instance. Now, the loss for a particular parameter choice $\alpha$ is $$L(\alpha) = \sum_{k=1}^N \|F_\alpha(I_k)-J_k\|.$$ We can now formulate your problem as follows: given a training set of images $(I_k,J_k)$, find the parameter $\alpha$ that minimizes the loss $L(\alpha)$.
A reasonable approach is to use some optimization procedure to solve this problem. You might use stochastic gradient descent, for instance. Because there might be multiple local minima, I would suggest that you start multiple instances of gradient descent from different starting points: use a grid search over the starting point. Since your $\alpha$ is only three-dimensional, it's not difficult to do a grid search over this three-dimensional space and then start gradient descent from each point in the grid. Stochastic gradient descent will allow you to deal with fairly large values of $N$. This does require you to be able to compute gradients for $L(\alpha)$. Depending on the filter $F_\alpha$, it might be possible to symbolically calculate the gradients (perhaps with the help of a framework for this, such as Tensorflow); if that's too hard, you can use black-box methods to estimate the gradient by evaluating $L(\cdot)$ at multiple points. If $L_2$ distance doesn't capture similarity in your domain, you could consider other distance measures as well. I expect this is likely to be a more promising approach than what you sketched in the question, using convolutional networks and a regression model. (For one thing, there's no reason to expect the mapping from "features of $I_k$" to "features of $J_k$" to be linear, so there's no reason to expect linear regression to be effective here.)
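As a concrete illustration of the grid-plus-local-search idea, here is a minimal Python sketch. It assumes a black-box callable apply_filter(image, alpha) standing in for $F_\alpha$ (a placeholder name, not an existing library function), and uses a derivative-free local optimizer (Nelder-Mead), which sidesteps the possible non-smoothness of the mean shift filter in $\alpha$; the grid ranges are placeholders to tune.

import itertools
import numpy as np
from scipy.optimize import minimize

def loss(alpha, images, targets, apply_filter):
    # sum of squared L2 distances between filtered inputs and their targets
    return sum(np.sum((apply_filter(I, alpha) - J) ** 2)
               for I, J in zip(images, targets))

def fit_alpha(images, targets, apply_filter, grids):
    # coarse grid of starting points, each refined by a derivative-free local search
    best = None
    for start in itertools.product(*grids):  # e.g. one grid per (rho, sigma_s, sigma_r)
        res = minimize(loss, x0=np.array(start, dtype=float),
                       args=(images, targets, apply_filter),
                       method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best.x, best.fun

For large $N$, each loss evaluation can be restricted to a random subsample of the training pairs, which recovers the stochastic flavour suggested above.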
H: Data dashboard for SQL server database I am working on developing a data dashboard / app for a Microsoft SQL database. Currently, I am developing the dashboard using Shiny and R. The app is mostly for exploratory analysis to allow people to filter out some subset of data, build some plots, and export data / plots. My question is what are some other options for creating this type of dashboard? I am aware of Tableau. AI: You could consider PowerBI. It offers a couple of useful licensing scenarios (bundled with Office 365 etc.) and also has connectors to source data that already works well with SQL Server. You did not say if SQL Server is running on premises or in the cloud, but PowerBI should work in both cases.
H: Forget Layer in a Recurrent Neural Network (RNN) - I'm trying to figure out the dimensions of each variable in an RNN's forget layer, however, I'm not sure if I'm on the right track. The next picture and equation is from Colah's blog post "Understanding LSTM Networks": where: $x_t$ is the input, a vector of size $m*1$ $h_{t-1}$ is the hidden state, a vector of size $n*1$ $[x_t, h_{t-1}]$ is a concatenation (for example, if $x_t=[1, 2, 3], h_{t-1}=[4, 5, 6]$, then $[x_t, h_{t-1}]=[1, 2, 3, 4, 5, 6]$) $w_f$ is the weight matrix of size $k*(m+n)$, where $k$ is the number of cell states (if $m=3$ and $n=3$ in the above example, and if we have 3 cell states, then $w_f$ is a $3*6$ matrix) $b_f$ is the bias, a vector of size $k*1$, where $k$ is the number of cell states (since $k=3$ in the above example, $b_f$ is a $3*1$ vector). If we set $w_f$ to be: $$\begin{bmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 5 & 6 & 7 & 8 & 9 & 10 \\ 3 & 4 & 5 & 6 & 7 & 8 \\ \end{bmatrix}$$ And $b_f$ to be: $[1, 2, 3]$ Then $W_f . [h_{t-1}, x_t] =$ $$ \begin{bmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 5 & 6 & 7 & 8 & 9 & 10 \\ 3 & 4 & 5 & 6 & 7 & 8 \\ \end{bmatrix} . \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \\ 5 \\ 6 \\ \end{bmatrix} =\begin{bmatrix} 91 & 175 & 133\end{bmatrix}$$ Then we can add the bias, $W_f . [h_{t-1}, x_t] + b_f=$ $$\begin{bmatrix} 91 & 175 & 133\end{bmatrix} + \begin{bmatrix} 1 & 2 & 3\end{bmatrix}=\begin{bmatrix} 92 & 177 & 136\end{bmatrix}$$ Then we feed them into a sigmoid function: $\frac{1}{1+e^{-x}}$, where $x=\begin{bmatrix} 92 & 177 & 136\end{bmatrix}$, hence we perform this function element-wise, and get $\begin{bmatrix} 1 & 1 & 1\end{bmatrix}$. Which means for each cell state, $C_{t-1}$, (there are $k=3$ cell states), we allow it to pass to the next layer. Is the above assumption correct? This also means that the number of cell states and hidden states is the same? AI: Great question! tl;dr: The cell state and the hidden state are two different things, but the hidden state is dependent on the cell state and they do indeed have the same size. Longer explanation The difference between the two can be seen from the diagram below (part of the same blog): The cell state is the bold line travelling west to east across the top. The entire green block is called the 'cell'. The hidden state from the previous time step is treated as part of the input at the current time step. However, it's a little harder to see the dependence between the two without doing a full walkthrough. I'll do that here, to provide another perspective, but heavily influenced by the blog. My notation will be the same, and I'll use images from the blog in my explanation. I like to think of the order of operations a little differently from the way they were presented in the blog. Personally, I like starting from the input gate. I'll present that point of view below, but please keep in mind that the blog may very well be the best way to set up an LSTM computationally and this explanation is purely conceptual. Here's what's happening: The input gate The input at time $t$ is $x_t$ and $h_{t-1}$. These get concatenated and fed into a nonlinear function (in this case a sigmoid). This sigmoid function is called the 'input gate', because it acts as a stopgap for the input. It decides stochastically which values we're going to update at this timestep, based on the current input.
That is, (following your example), if we have an input vector $x_t = [1, 2, 3]$ and a previous hidden state $h_{t-1} = [4, 5, 6]$, then the input gate does the following: a) Concatenate $x_t$ and $h_{t-1}$ to give us $[1, 2, 3, 4, 5, 6]$ b) Compute $W_i$ times the concatenated vector and add the bias (in math: $W_i \cdot [x_t, h_{t-1}] + b_i$, where $W_i$ is the weight matrix from the input vector to the nonlinearity; $b_i$ is the input bias). Let's assume we're going from a six-dimensional input (the length of the concatenated input vector) to a three-dimensional decision on what states to update. That means we need a 3x6 weight matrix and a 3x1 bias vector. Let's give those some values: $W_i = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 \\ 2 & 2 & 2 & 2 & 2 & 2 \\ 3 & 3 & 3 & 3 & 3 & 3\end{bmatrix}$ $b_i = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$ The computation would be: $\begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 \\ 2 & 2 & 2 & 2 & 2 & 2 \\ 3 & 3 & 3 & 3 & 3 & 3\end{bmatrix} \cdot \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \\5 \\6 \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 22 \\ 43 \\ 64 \end{bmatrix}$ c) Feed that previous computation into a nonlinearity: $i_t = \sigma (W_i \cdot [x_t, h_{t-1}] + b_i)$ $\sigma(x) = \frac{1}{1 + exp(-x)}$ (we apply this elementwise to the values in the vector $x$) $\sigma(\begin{bmatrix} 22 \\ 43 \\ 64 \end{bmatrix}) = [\frac{1}{1 + exp(-22)}, \frac{1}{1 + exp(-43)}, \frac{1}{1 + exp(-64)}] = [1, 1, 1]$ In English, that means we're going to update all of our states. The input gate has a second part: d) $\tilde{C_t} = tanh(W_C[x_t, h_{t-1}] + b_C)$ The point of this part is to compute how we would update the state, if we were to do so. It's the contribution from the new input at this time step to the cell state. The computation follows the same procedure illustrated above, but with a tanh unit instead of a sigmoid unit. The output $\tilde{C_t}$ is multiplied by that binary vector $i_t$, but we'll cover that when we get to the cell update. Together, $i_t$ tells us which states we want to update, and $\tilde{C_t}$ tells us how we want to update them. It tells us what new information we want to add to our representation so far. Then comes the forget gate, which was the crux of your question. The forget gate The purpose of the forget gate is to remove previously-learned information that is no longer relevant. The example given in the blog is language-based, but we can also think of a sliding window. If you're modelling a time series that is naturally represented by integers, like counts of infectious individuals in an area during a disease outbreak, then perhaps once the disease has died out in an area, you no longer want to bother considering that area when thinking about how the disease will travel next. Just like the input layer, the forget layer takes the hidden state from the previous time step and the new input from the current time step and concatenates them. The point is to decide stochastically what to forget and what to remember. In the previous computation, I showed a sigmoid layer output of all 1's, but in reality it was closer to 0.999 and I rounded up. The computation looks a lot like what we did in the input layer: $f_t = \sigma(W_f [x_t, h_{t-1}] + b_f)$ This will give us a vector of size 3 with values between 0 and 1. Let's pretend it gave us: $[0.5, 0.8, 0.9]$ Then we decide stochastically based on these values which of those three parts of information to forget.
One way of doing this is to generate a number from a uniform(0, 1) distribution and if that number is less than the probability of the unit 'turning on' (0.5, 0.8, and 0.9 for units 1, 2, and 3 respectively), then we turn that unit on. In this case, that would mean we forget that information. Quick note: the input layer and the forget layer are independent. If I were a betting person, I'd bet that's a good place for parallelization. Updating the cell state Now we have all we need to update the cell state. We take a combination of the information from the input and the forget gates: $C_t = f_t \circ C_{t-1} + i_t \circ \tilde{C_t}$ Now, this is going to be a little odd. Instead of multiplying like we've done before, here $\circ$ indicates the Hadamard product, which is an entry-wise product. Aside: Hadamard product For example, if we had two vectors $x_1 = [1, 2, 3]$ and $x_2 = [3, 2, 1]$ and we wanted to take the Hadamard product, we'd do this: $x_1 \circ x_2 = [(1 \cdot 3), (2 \cdot 2), (3 \cdot 1)] = [3, 4, 3]$ End Aside. In this way, we combine what we want to add to the cell state (input) with what we want to take away from the cell state (forget). The result is the new cell state. The output gate This will give us the new hidden state. Essentially the point of the output gate is to decide what information we want the next part of the model to take into account when updating the subsequent cell state. The example in the blog is, again, language-based: if the noun is plural, the verb conjugation in the next step will change. In a disease model, if the susceptibility of individuals in a particular area is different than in another area, then the probability of acquiring an infection may change. The output layer takes the same input again, but then considers the updated cell state: $o_t = \sigma(W_o [x_t, h_{t-1}] + b_o)$ Again, this gives us a vector of probabilities. Then we compute: $h_t = o_t \circ tanh(C_t)$ So the current cell state and the output gate must agree on what to output. That is, if the result of $tanh(C_t)$ is $[0, 1, 1]$ after the stochastic decision has been made as to whether each unit is on or off, and the result of $o_t$ is $[0, 0, 1]$, then when we take the Hadamard product, we're going to get $[0, 0, 1]$, and only the units that were turned on by both the output gate and in the cell state will be part of the final output. [EDIT: There's a comment on the blog that says the $h_t$ is transformed again to an actual output by $y_t = \sigma(W \cdot h_t)$, meaning that the actual output to the screen (assuming you have some) is the result of another nonlinear transformation.] The diagram shows that $h_t$ goes to two places: the next cell, and to the 'output' - to the screen. I think that second part is optional. There are a lot of variants on LSTMs, but that covers the essentials!
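To tie the walkthrough together, here is a minimal numpy sketch of one LSTM cell step using the equations above. Note that it uses the standard deterministic formulation, where the soft gate values multiply the states directly rather than being sampled; the weight shapes follow the 3-unit example, and the random weights are only for illustration.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W_i, b_i, W_f, b_f, W_c, b_c, W_o, b_o):
    # one LSTM time step; each W_* has shape (k, m+n), each b_* has shape (k,)
    z = np.concatenate([x_t, h_prev])        # [x_t, h_{t-1}]
    i_t = sigmoid(W_i @ z + b_i)             # input gate
    f_t = sigmoid(W_f @ z + b_f)             # forget gate
    c_tilde = np.tanh(W_c @ z + b_c)         # candidate cell update
    c_t = f_t * c_prev + i_t * c_tilde       # Hadamard products combine forget and input
    o_t = sigmoid(W_o @ z + b_o)             # output gate
    h_t = o_t * np.tanh(c_t)                 # new hidden state
    return h_t, c_t

# toy usage with k = 3 cell units and m = n = 3
rng = np.random.RandomState(0)
W = {name: rng.randn(3, 6) for name in 'ifco'}
b = {name: np.zeros(3) for name in 'ifco'}
h, c = lstm_step(np.array([1., 2., 3.]), np.array([4., 5., 6.]), np.zeros(3),
                 W['i'], b['i'], W['f'], b['f'], W['c'], b['c'], W['o'], b['o'])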
H: Neo4j graph to cypher conversion Is there a way or are there tools available to generate or retrieve the Cypher queries from a Neo4j database? Do we need to store the Cypher queries along with the graph data for regeneration? AI: This has already been answered on Stack Overflow. There is no need to save the queries you used to populate the database; you only need to dump the contents of the database to a file: db1/bin/neo4j-shell -path db1/data/graph.db/ -c dump > export_data.cypher In order to load the database dump into another database, you just supply it to the Neo4j shell through the standard input: db2/bin/neo4j-shell -path db2/data/graph.db/ < export_data.cypher
H: Tensorflow ArgumentError: argument --model_dir: conflicting option string: --model_dir I am trying to execute this Tensorflow tutorial https://github.com/tensorflow/tensorflow/blob/r0.12/tensorflow/examples/learn/wide_n_deep_tutorial.py but I have the following error: ArgumentError: argument --model_dir: conflicting option string: --model_dir that appears in the line: flags.DEFINE_string("model_dir", "", "Base directory for output models.") Do you know how I can solve it? Thanks in advance, Keira AI: The example you linked is from version 0.12, but the current TensorFlow version is 1.2. The example underwent a lot of changes from 0.12 to 1.0. In fact, the very line you are referencing is no longer there in the latest version. Thus, it's possible that you are using a version of TensorFlow that is no longer compatible with the APIs used by the example. Ensure that the version of TensorFlow you are using matches the one the example was written for.
H: Detecting anomalies with neural network I have a large multi-dimensional dataset that is generated each day. What would be a good approach to detect any kind of 'anomaly' as compared with previous days? Is this a suitable problem that could be addressed with neural networks? Any suggestions are appreciated. Additional information: there are no labelled examples, so the method should detect the anomalies itself AI: From the formulation of the question, I assume that there are no "examples" of anomalies (i.e. labels) whatsoever. With that assumption, a feasible approach would be to use autoencoders: neural networks that receive as input your data and are trained to output that very same data. The idea is that the training has allowed the net to learn representations of the input data distributions in the form of latent variables. There is a type of autoencoder called a denoising autoencoder, which is trained with corrupted versions of the original data as input and with the uncorrupted original data as output. This delivers a network that can remove noise (i.e. data corruptions) from the inputs. You may train a denoising autoencoder with the daily data. Then use it on new daily data; this way you have the original daily data and an uncorrupted version of those very same data. You can then compare both to detect significant differences. The key here is which definition of significant difference you choose. You could compute the euclidean distance and assume that if it surpasses a certain arbitrary threshold, you have an anomaly. Another important factor is the kind of corruptions you introduce; they should be as close as possible to reasonable abnormalities. Another option would be to use Generative Adversarial Networks. The byproduct of the training is a discriminator network that tells apart normal daily data from abnormal data.
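A minimal Keras sketch of the reconstruction-error approach, assuming the daily data arrives as fixed-length numeric vectors that have already been scaled; the layer sizes, noise level and threshold are arbitrary choices to tune.

import numpy as np
from keras.models import Model
from keras.layers import Input, Dense

def build_denoising_autoencoder(n_features, code_size=16):
    inp = Input(shape=(n_features,))
    h = Dense(64, activation='relu')(inp)
    code = Dense(code_size, activation='relu')(h)
    h = Dense(64, activation='relu')(code)
    out = Dense(n_features, activation='linear')(h)
    model = Model(inp, out)
    model.compile(optimizer='adam', loss='mse')
    return model

# x_train: past daily data, shape (n_samples, n_features)
# autoencoder = build_denoising_autoencoder(x_train.shape[1])
# noisy = x_train + 0.1 * np.random.normal(size=x_train.shape)   # corrupted inputs
# autoencoder.fit(noisy, x_train, epochs=50, batch_size=32)
#
# on new data, flag rows whose reconstruction error exceeds a threshold
# errors = np.mean((autoencoder.predict(x_new) - x_new) ** 2, axis=1)
# anomalies = errors > threshold   # threshold chosen from the training-error distribution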
H: Why do we use +1 and -1 for marginal decision boundaries in SVM While using support vector machines (SVM), we encounter 3 types of lines (for a 2D case). One is the decision boundary and the other 2 are margins: Why do we use $+1$ and $-1$ as the values after the $=$ sign while writing the equations for the SVM margins? What's so special about $1$ in this case? For example, if $x$ and $y$ are two features then the decision boundary is: $ax+by+c=0$. Why are the two marginal boundaries represented as $ax+by+c=+1$ and $ax+by+c=-1$? AI: It's important for the optimization formulation of the SVM that $y_i \in \{-1,1\}$, which is why it makes sense to also output $y \in \{-1,1\}$. If we look at the soft-margin linear SVM, we want to minimize: $\left[\frac{1}{n}\sum_{i=1}^n\max{(0,1-y_i(w\cdot x_i+b))}\right]+\lambda\| w\| ^2$ The $y_i$ is either +1 or -1, which flips the hyperplane in the soft-margin definition of the problem. The value $1$ itself is not special: rescaling $w$ and $b$ by a positive constant rescales the right-hand side accordingly, so fixing the margins at $\pm 1$ is just a convenient normalization that pins down the scale of $w$.
H: Predict sinus with keras feed forward neural network I have a very simple feed forward neural network with keras that should learn a sine function. Why is the predictive power so bad, and what is generally the best way to pinpoint issues with a network? In the code below, I have one input neuron, 10 in the hidden layer, and one output. I would expect the network to perform much more accurately. import numpy as np from keras.layers import Dense, Activation from keras.models import Sequential x = np.arange(100) y = np.sin(x) model = Sequential([ Dense(10, input_shape=(1,)), Activation('sigmoid'), Dense(1), Activation('sigmoid') ]) model.compile(loss='mean_squared_error', optimizer='SGD', metrics=['accuracy']) model.fit(x, y, epochs=10, batch_size=1) scores = model.evaluate(x, y, verbose=0) print("Baseline Error: %.2f%%" % (100-scores[1]*100)) print(model.predict(np.array([.5]))) Output: 1/100 [..............................] - ETA: 0s - loss: 1.2016 - acc: 0.0000e+00 79/100 [======================>.......] - ETA: 0s - loss: 0.4665 - acc: 0.0127 100/100 [==============================] - 0s - loss: 0.5044 - acc: 0.0100 Baseline Error: 99.00% [[ 0.35267803]] AI: Accuracy is a metric meant for classification problems; look at the mean squared error instead. Your network is too small for the highly fluctuating function you want to learn; if you scale your x down, the function fluctuates less over the input range and is easier to learn. Second, adding another layer and having an identity activation at the end will help quite a bit. Also, taking batches bigger than 1 will make the gradient more stable. With 1000 epochs I get to 0.00167 as mean squared error. x = np.arange(200).reshape(-1,1) / 50 y = np.sin(x) model = Sequential([ Dense(40, input_shape=(1,)), Activation('sigmoid'), Dense(12), Activation('sigmoid'), Dense(1) ]) model.compile(loss='mean_squared_error', optimizer='SGD', metrics=['mean_squared_error']) for i in range(40): model.fit(x, y, nb_epoch=25, batch_size=8, verbose=0) predictions = model.predict(x) print(np.mean(np.square(predictions - y))) The biggest issue is that the signal in your original small dataset is very difficult to learn; you can see this when you plot it, and the prediction will just collapse to the mean.
H: Why is my loss negative while training an SAE? I am using loss='binary_crossentropy'. Here is my code; I tried to increase the number of training images and epochs, but that did not help me. input_img = Input(shape=(28, 28, 1)) x = Convolution2D(16, 3, 3, activation='relu', border_mode='same')(input_img) x = MaxPooling2D((2, 2), border_mode='same')(x) x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x) x = MaxPooling2D((2, 2), border_mode='same')(x) x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x) encoded = MaxPooling2D((2, 2), border_mode='same')(x) x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(encoded) x = UpSampling2D((2, 2))(x) x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x) x = UpSampling2D((2, 2))(x) x = Convolution2D(16, 3, 3, activation='relu', border_mode='valid')(x) x = UpSampling2D((2, 2))(x) decoded = Convolution2D(1, 3, 3, activation='sigmoid', border_mode='same')(x) autoencoder = Model(input_img, decoded) autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy') autoencoder.fit(x_train, x_train, nb_epoch=10, batch_size=500, shuffle=True, validation_data=(x_test, x_test), verbose=1) AI: Use a linear output and mean squared error loss, assuming you are predicting normalised pixel intensity values. Cross-entropy over sigmoid output layer activations can do odd things when the target values are not strictly in $[0, 1]$, depending on implementation.
H: Propensity Modeling, still use Test/Train Split? I'm using Sklearn to build a classifier in which my client wants a predicted probability for each row of data. The default of, let's say, Random Forest is that if > 50%, it classifies as TRUE, but using the predict_proba function I'm able to get the probability. The data I'm given has 10k rows, which are all labeled TRUE or FALSE. If it's my job to provide a predicted probability by row, should I still use a 70/30 Train/Test split, in which I create my best model on the training 70% using 10-fold CV? But then, when I need to output for all 10k rows, I would actually be making a prediction on the training data and the test data, and it doesn't seem correct to predict on training data. Any thoughts on how to approach this? AI: You are right; you shouldn't predict on the same data you have used for training your model. If your goal is to only output probabilities and you don't mind achieving this by having not a single classifier but a number of them, you can potentially use nested cross-validation to achieve that. In the outer cross-validation step, you break your data into $N_{out}$ folds, where one fold is kept for testing the model and the remaining $N_{out} - 1$ folds are used to train. You then feed this training dataset (which is similar, in nature, to that 70% you have mentioned in the OP) to the inner cross-validation step with $N_{in}$ folds. Here, you will compare different models and choose the best performing one. This chosen model is then passed to the outer fold (see above) and can be used to obtain probabilities for the one fold that was left out. You then repeat the same procedure for the other folds in the outer loop. At the end of this nested cross-validation process, you will have probabilities for all your rows (but they will have come from $N_{out}$ different classifiers, each corresponding to one of the outer folds). Note that the ultimate purpose of nested cross-validation is not to do this, but it will give you what you want as a by-product.
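A rough scikit-learn sketch of this scheme; the hyper-parameter grid and fold counts are illustrative choices, not requirements. cross_val_predict guarantees that each row's probability comes from a model that never saw that row during training.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_predict

# X, y: the 10k labeled rows
inner_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

# inner loop: pick the best hyper-parameters on the outer training folds
param_grid = {'n_estimators': [100, 300], 'max_depth': [None, 10]}
inner_search = GridSearchCV(RandomForestClassifier(random_state=0),
                            param_grid, cv=inner_cv, scoring='roc_auc')

# outer loop: out-of-fold probabilities for every row
probas = cross_val_predict(inner_search, X, y, cv=outer_cv, method='predict_proba')
prob_true = probas[:, 1]   # P(TRUE) for each of the 10k rows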
H: Keras: Built-In Multi-Layer Shortcut Problem In Keras for Python, I have to use multiple lines of code for a simple XOR neural network: from keras.models import Sequential from keras.layers.core import Dense, Dropout, Activation from keras.optimizers import SGD import numpy as np X = np.array([[0,0],[0,1],[1,0],[1,1]]) y = np.array([[0],[1],[1],[0]]) model = Sequential() model.add(Dense(8, input_dim=2)) model.add(Activation('tanh')) model.add(Dense(1)) model.add(Activation('sigmoid')) sgd = SGD(lr=0.1); model.compile(loss='binary_crossentropy', optimizer=sgd) model.fit(X, y, show_accuracy=True, batch_size=1, nb_epoch=1000) print(model.predict_proba(X)) Although the network above is quite small, the current implementation may become frustrating with deeper networks. Question Is there a built-in shortcut in Keras that quickly adds multiple layers? For example: [2,3,4,5] would be a network with 2 input neurons, 3 neurons in the first hidden layer, 4 neurons in the second hidden layer, and 5 output neurons. I use the term built-in because otherwise I can loop through the example array, and add a layer for each element. AI: There is no shortcut syntax that goes as far as accepting [2,3,4,5] as a param and create a model. However, it would be very simple for you to create this as a Python function yourself, provided in your case you already have made the decisions about activation functions, the types of layer etc. The need to make those decisions and have them available in the Keras API means that Keras itself does not offer such a short form build function. You can make the model building slightly less verbose by using a list of layers when you instantiate the model, instead of adding them afterwards: model = Sequential([ Dense(8, input_dim=2), Activation('tanh'), Dense(1), Activation('sigmoid') ]) However, if you want to try variations of a model, where the only things you change are the number and size of hidden layers, then you can write a short Python function to encapsulate that requirement. Here's an example (I'm sure you already know how to do this, just included for completeness): def build_model(hidden_layer_sizes): model = Sequential() model.add(Dense(hidden_layer_sizes[0], input_dim=2)) model.add(Activation('tanh')) for layer_size in hidden_layer_sizes[1:]: model.add(Dense(layer_size)) model.add(Activation('tanh')) model.add(Dense(1)) model.add(Activation('sigmoid')) return model If you take this approach, you may find you end up parametrising other choices such as input size (as you try some feature engineering), hidden layer activation function, whether to use a Dropout layer etc. Again, it is this need to define all the other choices in a typical network that lead to Keras' design. The best you can do is compress down the choices for your case, with a custom function. I'd like to address this comment in your question: Although the network above is quite small, the current implementation may become frustrating with deeper networks. In practice, I have not found Keras' design difficult to use for deep networks. I typically write a separate build function, and parametrise a few things, such as input dimensions. However, I typically don't loop through a list of different layer sizes as the main param of the build function. Instead, I find the more verbose approach just fine, even when trying variations of network size/shape (I guess that might change if I wanted to grid search including layer sizes). 
I think that is because I find the function names very easy to read - even with a screen full of .add() functions, I can see quite quickly what the NN structure is.
H: Neural Networks for time series I have an understanding problem. I am a beginner in machine learning and also have a little experience in modelling NNs, but not for time series. I cannot imagine how to use Neural Networks for time series. So if I want to train a Multilayer NN, what is the input? I read several papers about such ANNs to predict time series, but they did not explain how, exactly. Can I imagine it as follows: Is every input node a specific time stamp? If I want to predict the value for time t+1, and I use 15 input neurons, do I use the values at each time stamp from t-15 up to t? How do I train such a NN? I am a little confused. AI: The type of neural networks you are looking for to predict time series are called recurrent networks; the previous neuron states in the network affect the new neuron states. Examples of such recurrent networks are LSTM, GRU and NARX. If you have the data from t0 to t10, and you want to predict t11, then you need to input the values from t0 to t10 one by one into the network. You have to train the network so that the output at tn is tn+1. So after you have inputted t10, the output will be the value predicted for t11. Real life example: I want a network to keep on increasing its output until it has reached 1, and then make it output 0 to start over again: in: 0.0, out: 0.2 in: 0.2, out: 0.4 in: 0.4, out: 0.6 in: 0.6, out: 0.8 in: 0.8, out: 1.0 in: 1.0, out: 0.0 And that is what your training data should look like. If it is still too complicated, I recommend you play around with this front-end neural network library for the browser: Neataptic. It is very easy to fiddle around with. Example
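For a Python counterpart, here is a rough Keras sketch combining the two ideas in this thread: the sliding-window input construction described in the question, fed into a small LSTM that predicts the next value. The window length, layer size and toy series are arbitrary choices.

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

def make_windows(series, window=15):
    # turn a 1-D series into (samples, timesteps, 1) inputs and next-step targets
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., np.newaxis], np.array(y)

series = np.sin(np.arange(500) / 10.0)        # toy series standing in for real data
X, y = make_windows(series, window=15)

model = Sequential([
    LSTM(32, input_shape=(15, 1)),
    Dense(1)
])
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=20, batch_size=32, verbose=0)
next_value = model.predict(X[-1:])            # prediction for the step after the last window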
H: Word2Vec Alternative for Smaller Datasets I am looking to employ Word2Vec to cluster documents and classify them by topic. However, it seems I need a significant dataset [1,2] to achieve such a task. I have a dataset of thousands (not millions) of documents where my total word count is somewhere in the tens of thousands or maybe low hundreds of thousands. Is there another ML methodology that allows me to attain my goal? I see how to use TF/IDF to generate words and phrases from a corpus, but the output, as expected, is a list of common words and phrases along a flat dimension: What I am looking for is something more along the lines of a high-level cluster of vectors in space: AI: You can sidestep the paucity of training data, and indeed training altogether, by using pre-trained embeddings in numerous languages. After that you can calculate your document embeddings using one of these simple algorithms, which basically amount to running dimensionality reduction on the matrix of stacked word embeddings for each sentence using PCA/SVD: A Simple but Tough-to-Beat Baseline for Sentence Embeddings (code) Representing Sentences as Low-Rank Subspaces Note that word embeddings themselves emerge from similar calculations: Neural Word Embedding as Implicit Matrix Factorization
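The first of those papers boils down to a weighted average of pre-trained word vectors followed by removal of the first principal component. Here is a rough Python sketch of that idea; word_vectors (a word-to-vector mapping loaded from a pre-trained model) and word_freq (relative word frequencies) are assumed to be available, and the weighting constant a follows the paper's suggested order of magnitude.

import numpy as np
from sklearn.decomposition import TruncatedSVD

def sif_embeddings(sentences, word_vectors, word_freq, a=1e-3):
    # sentences: list of token lists; word_vectors: dict word -> np.array;
    # word_freq: dict word -> relative frequency (assumed precomputed)
    dim = len(next(iter(word_vectors.values())))
    emb = np.zeros((len(sentences), dim))
    for i, sent in enumerate(sentences):
        words = [w for w in sent if w in word_vectors]
        if not words:
            continue
        weights = np.array([a / (a + word_freq.get(w, 0.0)) for w in words])
        vectors = np.array([word_vectors[w] for w in words])
        emb[i] = (weights[:, None] * vectors).mean(axis=0)
    # remove the first principal component, as in the "tough-to-beat baseline"
    svd = TruncatedSVD(n_components=1)
    svd.fit(emb)
    pc = svd.components_                 # shape (1, dim)
    return emb - emb @ pc.T @ pc

The resulting document vectors can then be clustered (e.g. with k-means) to get the kind of vector-space grouping you describe.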
H: Is Stratification applicable to both Classification and Regression? If you have a not-so-large but balanced dataset and you are performing classification, it is usually better to apply stratification when splitting it into training and testing datasets, so that both are balanced as well. So there is a notion of having the training and testing datasets adequately represent the overall dataset. Could you extend this notion to regression? The idea is that instead of just shuffling and splitting the dataset, you could group the targets in bins, for example [0, 10], [10, 20], etc., and then have the training and testing datasets also be an adequate representation of the whole dataset by having targets with all kinds of values. (Otherwise, you could end up leaving out some part of the range.) Makes sense? :) AI: Yes, that makes sense. Your training and testing datasets should both have distributions similar to the whole dataset in both classification and regression, and in the regression case, binning the target variable is one way to achieve that. You need to make sure that you choose good bin sizes -- you can completely skew how the distribution looks in a histogram based on the bin sizes. Too small and you only have one or two examples per bin, resulting in a very erratic histogram. Too large and you lose a lot of information about the shape of the distribution. Binning is just one way to approximate the density function of the distribution. Depending on how your data looks, you might be able to fit a Gaussian (or any other distribution) curve to your target variable and use that instead to sample your train/test split.
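One possible sketch in scikit-learn, binning the continuous target only to drive a stratified split; the bin count of 10 is an arbitrary choice, and quantile bins keep the per-bin counts roughly equal.

import pandas as pd
from sklearn.model_selection import train_test_split

# y is the continuous target; the bins are used only for stratification
y_bins = pd.qcut(y, q=10, labels=False, duplicates='drop')

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y_bins, random_state=42)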
H: How is the salary for Data Scientist different among countries? I would like to know the salary for Data Scientists by country, but have failed to find any resources on it. I know that the US pays data scientists an insanely high amount of money (well, it might not be "insanely high" by US standards), but I rarely hear about any other countries, except possibly the UK, Canada, and Australia in some cases. Is there any research or survey that shows the variation of salary for Data Scientists by country, something like there is for software engineers? AI: The biggest one I know is O'Reilly's survey (here the last available one, from 2016). There are also polls on KDNuggets (older version) and other websites. But, in general, most of them are not very representative. You can also look at the national averages on glassdoor.com (example: India) or other job sites.
H: How to determine the complexity of an English sentence? I am working on an app to help people learn English as a second language. I have validated that sentences help in learning a language by providing extra context. I did that by conducting a small study in a classroom of 60 students. I have mined over a hundred thousand sentences from Wikipedia for various English words (including Barron's 800 words and the 1000 most common English words). The entire dataset is available at https://buildmyvocab.in In order to maintain the quality of content, I filtered out sentences which were longer than 160 characters, since they might be difficult to understand. As a next step, I want to be able to automate the process of sorting this content in order of ease of understanding. I myself am a non-native English speaker. I want to know what features I can use to separate easy sentences from difficult ones. Also, do you think this is possible? AI: Yes. There are various metrics, such as the Gunning fog index. Textacy in Python has a nice list of implementations. >>> ts.flesch_kincaid_grade_level 10.853709110179697 >>> ts.readability_stats {'automated_readability_index': 12.801546064781363, 'coleman_liau_index': 9.905629258346586, 'flesch_kincaid_grade_level': 10.853709110179697, 'flesch_readability_ease': 62.51222198133965, 'gulpease_index': 55.10492845786963, 'gunning_fog_index': 13.69506833036245, 'lix': 45.76390294037353, 'smog_index': 11.683781121521076, 'wiener_sachtextformel': 5.401029023140788}
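Most of those scores are simple functions of sentence length, word length and syllable counts. To illustrate, here is a rough standalone sketch of the Flesch-Kincaid grade level; the syllable counter is a crude vowel-group heuristic, so treat the output as approximate and prefer a library such as textacy in practice.

import re

def count_syllables(word):
    # crude heuristic: count groups of consecutive vowels
    return max(1, len(re.findall(r'[aeiouy]+', word.lower())))

def flesch_kincaid_grade(text):
    sentences = max(1, len(re.findall(r'[.!?]+', text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

print(flesch_kincaid_grade("The cat sat on the mat."))
print(flesch_kincaid_grade("Comprehensive legislation necessitates meticulous deliberation."))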
H: AUC and classification report in Logistic regression in python I have been trying to implement logistic regression in python. Basically the code works and it gives the accuracy of the predictive model at a level of 91% but for some reason the AUC score is 0.5 which is basically the worst possible score because it means that the model is completely random. Also the classification report returns error: "UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. 'precision', 'predicted', average, warn_for)". Does anyone know what should I change so it works properly? import numpy as np import pandas as pd from sklearn.cross_validation import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.metrics import accuracy_score from sklearn.preprocessing import StandardScaler from sklearn.metrics import roc_auc_score from sklearn.metrics import classification_report data_file = pd.read_csv('loan.csv', delimiter=',') # variable preprocessing data_file['loan_status'] = np.where(data_file['loan_status'].isin(['Fully Paid', 'Current']), 1, 0) loan_stat=data_file['loan_status'] loan_stat=loan_stat.astype(np.float64) m = { 'n/a': 0, '< 1 year': 0, '1 year': 1, '2 years': 2, '3 years': 3, '4 years': 4, '5 years': 5, '6 years': 6, '7 years': 7, '8 years': 8, '9 years': 9, '10+ years': 10 } emp_length=data_file.emp_length.map(m) emp_length.astype(np.float64) annual_inc=data_file['annual_inc'] delinq_2yrs=data_file['delinq_2yrs'] dti=data_file['dti'] loan_amnt=data_file['loan_amnt'] installment=data_file['installment'] int_rate=data_file['int_rate'] total_acc=data_file['total_acc'] open_acc=data_file['open_acc'] pub_rec=data_file['pub_rec'] acc_now_delinq=data_file['acc_now_delinq'] #variables combined into one dataframe X=pd.DataFrame() X['annua_inc']=annual_inc X['delinq_2yrs']=delinq_2yrs X['dti']=dti X['emp_length']=emp_length X['loan_amnt']=loan_amnt X['installment']=installment X['int_rate']=int_rate X['total_acc']=total_acc X['open_acc']=open_acc X['pub_rec']=pub_rec X['acc_now_delinq']=acc_now_delinq X['loan_stat']=loan_stat X=X.dropna(axis=0) y=X['loan_stat'] X=X.drop(['loan_stat'], axis=1) scaler=StandardScaler() X=scaler.fit_transform(X) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) model=LogisticRegression(penalty='l2', C=1) model.fit(X_train, y_train) score=accuracy_score(y_test, model.predict(X_test)) roc=roc_auc_score(y_test, model.predict(X_test)) cr=classification_report(y_test, model.predict(X_test)) Here is the link to the data: https://www.kaggle.com/wendykan/lending-club-loan-data/downloads/lending-club-loan-data.zip AI: In order to calculate the AUC, you need to have probabilities. Therefore you should use the following function: roc=roc_auc_score(y_test, model.predict_proba(X_test)[:,1]) This will give you the probability for each sample in X_test having label 1.
H: Question Regarding Multi-label probability predictions I have been doing a problem in which I have to predict probabilities for each of the labels in a multi-label (four, to be precise) classification problem. Example of a solution: Id, North, East, West, South 1, 0.71663940211, 0.037567315693, 0.03525987339, 0.0021068944991 ... The training data is of the form where each y(i) is labelled either 0, 1, 2 or 3 (encoded for N, E, W & S respectively). I will be grateful if you can just tell me how to approach this problem. Links giving direct insight into the problem will also be sufficient. AI: The question is a bit broad, as you do not specify whether you do not know how to do it in theory or how to tackle it with an ML method. Some ML methods: LogisticRegression in sklearn handles multiple classes lr = LogisticRegression() lr.fit(X, y) class_probabilities = lr.predict_proba(X) # outputs the probabilities You might also want to consider Support Vector Machines. Theory: You can do "one vs rest", where you train a single classifier per class, taking the samples of all other classes as negative examples (see the wiki article for that).
H: Is it always possible for validation accuracy to be as high as training accuracy? I have a very small dataset (40 training examples, 10 validation examples, 120 classes) for which I'm getting very high accuracies with a very simple model in Keras (batchnorm, flatten, and dense layers only). My training accuracy is 94-95% and validation accuracy is 76-78%. I know it's overfitting and I have tried a few things. The data is not images, so I cannot augment the data. I also cannot add data because it's a specific type. I'm using two dropout layers with 0.5 levels, and the architecture is very simple so I don't think I can reduce the architecture complexity. I can paste the model if anyone likes. My question is: Is there ever a situation where validation accuracy cannot be as high as the training accuracy? Is there a limitation based on the size of the dataset? Or is it ALWAYS possible for validation accuracies to match training accuracies and the network just needs the right parameters? AI: Yes, it is possible to have a situation in which validation accuracy cannot be as high as training accuracy. Any situation in which noise (as opposed to generalizable properties of the feature set) in the training set is more predictive of the target variable within the training set would produce this. Consider a situation in which a random property of the training sample considered is perfectly predictive, but this is found not to be true of all other examples outside the sample. The predictive power in the training set would be perfect, but outside the training set, less than perfect. This phenomenon is broadly referred to as "overfitting". For example, let's consider a case where you have a set of fruit data and you're trying to establish whether a given fruit is an orange or a tangerine. You have 4 features- the circumference of the fruit around the stem-medial axis, the height of the fruit across the stem, a numerical value of the hue of the fruit skin, and the first letter of the amateur baseball team whose home park is closest to the field the fruit was grown in. Let's imagine that by some baffling coincidence, the baseball team letter in the training set was perfectly predictive of whether the fruit would be a tangerine or orange. We can imagine that this would not hold true across the country or the world, which would produce a situation where the training set accuracy would be perfect, but the validation set accuracy would not be able to approach that using the same methods.
H: Extracting sub features from inside a df cell? I have a dataframe containing several features of form: Id, Acol, Bcol, Ccol, Dcol, 1, X:0232,Y:10332,Z:23891, E:1222, F:12912,G:1292, V:1281 2, X:432,W:2932 R:2392, T:292,U:29203 Q:2392 3, Y:29320,W:2392 R:2932, G:239,T:2392 Q:2391 ...about 10,000 Id's where 1,2,3 are the Id's. Acol, Bcol, Ccol, and Dcol are the feature columns, X, Y, Z, W are sub-features of feature "Acol" and so on... How can I extract the sub-features/features from this sort of dataframe? AI: You were not specific where you wanted to end up with the data in this frame, so I will simply show how to break out the features and sub features into a format that can be pivoted into a form as needed: Code: Most important element is taking a feature column and breaking out the sub features. That can be done like: def get_sub_features(feature_col): # split on commas and then colons feature_df = feature_col.str.split(',').apply( lambda feature: pd.Series( dict([sub_feature.split(':') for sub_feature in feature]), name=feature_col.name), 1) # add a feature name column to use as an index feature_df['feature'] = feature_col.name # name the columns as sub-feature for later stacking feature_df.columns.names = ['sub-feature'] # return dataframe with id/feature_name index new_index = [feature_df.index.name, 'feature'] return feature_df.reset_index().set_index(new_index) Test Code: df = pd.read_fwf(StringIO(u""" Id Acol Bcol Ccol Dcol 1 X:0232,Y:10332,Z:23891 E:1222 F:12912,G:1292 V:1281 2 X:432,W:2932 R:2392 T:292,U:29203 Q:2392 3 Y:29320,W:2392 R:2932 G:239,T:2392 Q:2391""" ), header=1).set_index(['Id']) print(df) feature_cols = ['Acol', 'Bcol', 'Ccol', 'Dcol'] stacked = pd.concat(get_sub_features(df[f]).stack() for f in feature_cols) print(stacked) Results: Acol Bcol Ccol Dcol Id 1 X:0232,Y:10332,Z:23891 E:1222 F:12912,G:1292 V:1281 2 X:432,W:2932 R:2392 T:292,U:29203 Q:2392 3 Y:29320,W:2392 R:2932 G:239,T:2392 Q:2391 Id feature sub-feature 1 Acol X 0232 Y 10332 Z 23891 2 Acol W 2932 X 432 3 Acol W 2392 Y 29320 1 Bcol E 1222 2 Bcol R 2392 3 Bcol R 2932 1 Ccol F 12912 G 1292 2 Ccol T 292 U 29203 3 Ccol G 239 T 2392 1 Dcol V 1281 2 Dcol Q 2392 3 Dcol Q 2391 dtype: object Accessing Data: Some examples: print(stacked.xs('T', level=2)) print(stacked.iloc[stacked.index.get_level_values('sub-feature') == 'T']) Results: 0 Id feature 2 Ccol 292 3 Ccol 2392 0 Id feature sub-feature 2 Ccol T 292 3 Ccol T 2392
H: Learning character embeddings with GenSim I am learning deep learning, and as a first exercise I am trying to build a system that learns a very simple task - capitalize the first letter of each word. As a first step, I tried to create "character embeddings" - a vector for each character. I am using the following code: import gensim model = gensim.models.Word2Vec(sentences) where sentences is a list of lists of chars which I took from this long Wikipedia page. For example, sentences[101] is: [' ', ' ', ' ', ' ', 'S', 'p', 'e', 'a', 'k', 'i', 'n', 'g', ' ', 'a', 't', ' ', 't', 'h', 'e', ' ', 'c', 'o', 'n', 'c', 'l', 'u', 's', 'i', 'o', 'n', ' ', 'o', 'f', ' ', 'a', ' ', 'm', 'i', 's', 's', 'i', 'l', 'e', ' ', 'e', 'x', 'e', 'r', 'c', 'i', 's', 'e', ... ] To test the model, I did: model.most_similar(positive=['A', 'b'], negative=['a'], topn=3) I hoped to get 'B' at the top, since 'A'-'a'+'b'='B', but I got: [('D', 0.5388374328613281), ('N', 0.5219535827636719), ('V', 0.5081528425216675)] (also, my capitalization application did not work so well, but this is probably because of the embeddings). What should I do to get embeddings that identify capitalization? AI: I believe that you have misunderstood the word2vec concept. Basically, for words, the feature vector of a word is learnt from the surrounding words. "You shall know a word by the company it keeps" (J.R. Firth). In your case characters have been used, so the feature vector for each character depends upon the adjacent characters present. Your example might work if you have training sentences like the following: ABCDEFGHI abcdefghi AbcDEFghi aBcdEfgHI abcDEFgHi ABcdEFGHi With these training sentences, the characters 'A', 'a', 'B', 'b' will preserve capitalization features and English alphabet-order features. Whereas, when trained with Wikipedia sentences, the characters will instead capture the probability of being present together in a meaningful word. For instance, the letters closest to 'c' would be 'a', 'o', 'e' but hardly 'x' or 'd', because there are words like 'covenant', 'country', 'cat', while almost no common words contain 'cx'.
H: Is it possible to change misclassification cost in caret? I have an imbalanced dataset to work with (with about 20-fold more positive examples than negative). I know several solutions to deal with this type of data (under/oversampling, optimizing for AUC, etc.), but I would like to try changing the misclassification costs for the two groups. However, I cannot figure out the proper way to do this in caret. Do you have any idea how to do this? Thanks for any help! AI: I would have to read more carefully: train has a weights parameter that does this.
H: Standard method to integrate tools coded in multiple languages in an analysis workflow I am trying to stitch together multiple packages and tools from multiple languages (R, Python, C etc.) in a single analysis workflow. Is there any standard way to do it? Preferably (but not necessarily) in Python. AI: Luigi is an open source Python package by Spotify that does exactly what you described: Luigi is a Python (2.7, 3.3, 3.4, 3.5) package that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization, handling failures, command line integration, and much more. Its philosophy is similar to GNU Make, letting you define tasks and their dependencies. There is also Apache Airflow (originally from Airbnb), another Python solution: Airflow is a platform to programmatically author, schedule and monitor workflows. Use airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The airflow scheduler executes your tasks on an array of workers while following the specified dependencies. You may find a complete comparative table here.
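A minimal Luigi sketch of the multi-language case: each task wraps an external tool through subprocess, so the tools themselves can be compiled C binaries, R scripts, or anything else. The script name, binary name and file paths below are placeholders for whatever makes up your pipeline.

import luigi
import subprocess

class PreprocessInC(luigi.Task):
    def output(self):
        return luigi.LocalTarget('data/preprocessed.csv')

    def run(self):
        # placeholder compiled C tool
        subprocess.check_call(['./preprocess', 'data/raw.csv', self.output().path])

class AnalyzeInR(luigi.Task):
    def requires(self):
        return PreprocessInC()

    def output(self):
        return luigi.LocalTarget('data/model_output.csv')

    def run(self):
        # placeholder R script consuming the C tool's output
        subprocess.check_call(['Rscript', 'analyze.R', self.input().path, self.output().path])

if __name__ == '__main__':
    luigi.build([AnalyzeInR()], local_scheduler=True)

Luigi resolves the dependency order and skips tasks whose output targets already exist, which is what makes it behave like Make for data pipelines.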
H: What are the "extra nodes" in XGboost? When training an XGboost model some of the information printed regards "extra nodes". I can't find an explanation of these anywhere in the documentation. What exactly are extra nodes? [14:13:09] C:\dev\libs\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 54 extra nodes, 0 pruned nodes, max_depth=5 [14:13:09] C:\dev\libs\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 58 extra nodes, 0 pruned nodes, max_depth=5 [14:13:09] C:\dev\libs\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 0 pruned nodes, max_depth=5 [14:13:09] C:\dev\libs\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 0 pruned nodes, max_depth=5 [14:13:10] C:\dev\libs\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 0 pruned nodes, max_depth=5 [14:13:10] C:\dev\libs\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 0 pruned nodes, max_depth=5 [14:13:10] C:\dev\libs\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 60 extra nodes, 0 pruned nodes, max_depth=5 [14:13:10] C:\dev\libs\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 0 pruned nodes, max_depth=5 [14:13:10] C:\dev\libs\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 0 pruned nodes, max_depth=5 [14:13:11] C:\dev\libs\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 0 pruned nodes, max_depth=5 AI: Backtracking the updater source code, it looks like "extra nodes" are calculated this way: At each boosting stage, looking at the gradient boost tree, Extra Nodes = (the total number of nodes) - (the number of start roots) - (the number of deleted nodes) At each boosting stage, there might be different starting roots (sub trees) and different deleted (so far) nodes. The extra nodes can provide some intuition into how much your processing tree is utilized. updater_prune.cc tree_model.h xgboost "train" api
H: Recommender system based on purchase history, not ratings I'm exploring options for recommender systems optimized for the insurance industry, which would take into account i) product holdings ii) user characteristics (segment, age, affluence, etc.). I want to stress that a) there are no product ratings available, thus collaborative filtering is not an option b) recommended products don't have to be similar to products that have already been purchased, thus item-to-item recommendations are most probably not relevant. Keep in mind that in insurance you rarely want to recommend products similar to already purchased ones, as someone with Car insurance is unlikely to want to buy another Motor product, but rather Home or maybe Travel, etc. That's why I want to develop recommendations based on similarities between users, using their purchase history and/or demographics. Ideally, I'd like to be able to implement it in R; if not possible, then in Python. Thanks for help and suggestions! AI: You could use content-based filtering, but then you have to intelligently preprocess the data to extract all the content attributes of the products, and that might mean leaving out some features. This article is a great head start once you have preprocessed all the data. Also, you could construct pseudo-ratings for each product-customer pair; how to do so depends on your problem statement. A few suggestions: the number of times the customer bought the particular product in the last month, or an index describing how frequently the customer buys that product, which mathematically could be last_two_purchases/interval_of_purchase, or an average over the last few purchases and intervals. After making these pseudo-ratings, you could convert all the content-based features into numerical ones and use a latent factor model for collaborative filtering. Refer to this video. Python could be used for this.
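As a concrete illustration of the pseudo-rating idea, here is a small pandas sketch; the file name and column names (customer_id, product, purchase_date) are assumptions about the schema, and the frequency definition is just one of the options mentioned above.

import pandas as pd

# purchases: one row per purchase; schema is assumed
purchases = pd.read_csv('purchases.csv', parse_dates=['purchase_date'])

# simple pseudo-rating: how many times each customer bought each product
counts = (purchases
          .groupby(['customer_id', 'product'])
          .size()
          .unstack(fill_value=0))

# alternative pseudo-rating: purchases per day of observed history
span_days = (purchases.groupby('customer_id')['purchase_date']
             .agg(lambda d: max((d.max() - d.min()).days, 1)))
frequency = counts.div(span_days, axis=0)

# `frequency` (customers x products) can now feed a latent factor model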
H: ICD-10 codes in Machine Learning Can anyone provide specific techniques for using ICD-10 codes in Machine Learning? I have usually used a simple approach of creating multiple binary columns representing ICD-10 codes… which can get extremely long. Or I have used hashing features. Are there other techniques or ways to use ICD-10 codes in ML? Can anyone provide a useful link showing various approaches to modeling such features? AI: Most machine learning around ICD codes deals with auto-encoding documents or NLP to extract ICD codes automatically from documents. Once you have them, most applications I have seen use them as is. One simple alternative would be to assign the codes to relevant categories to reduce the number of levels and make them more interpretable. For example: condition or injury type (e.g. F00-F99 for mental disorders), body part (e.g. S10-S19 for neck injuries), severity of injury, etc., depending on your application. Of course, if you don't want to define the categories beforehand, this can be done using some clustering or embedding method.
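A minimal pandas sketch of the category-mapping idea; the mapping below is deliberately partial, covering only the two ranges mentioned above plus catch-alls, so a real application would extend it to the full chapter list.

import pandas as pd

def icd10_category(code):
    # illustrative, partial mapping from ICD-10 prefix ranges to coarse categories
    letter, num = code[0], int(code[1:3])
    if letter == 'F':
        return 'mental_disorders'          # F00-F99
    if letter == 'S' and 10 <= num <= 19:
        return 'neck_injury'               # S10-S19
    if letter in ('S', 'T'):
        return 'other_injury'
    return 'other'

codes = pd.Series(['F32.9', 'S13.4', 'S72.0', 'I10'])
categories = codes.map(icd10_category)
dummies = pd.get_dummies(categories)   # far fewer columns than one per raw code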
H: How do you calculate sample difference in terms of sensor signals? A paper I read called Preprocessing Techniques for Context Recognition from Accelerometer Data refers to sample difference as the delta value between signals in a pairwise arrangement of samples that allows a basic comparison between the intensity of user activity. How would you do the pairwise arrangement? Would it require you to have different files of data representing different classes? For example, I have a CSV: 1495573445.162, 0, 0.021973, 0.012283, -0.995468, 1 1495573445.172, 0, 0.021072, 0.013779, -0.994308, 1 1495573445.182, 0, 0.020157, 0.015717, -0.995575, 1 1495573445.192, 0, 0.017883, 0.012756, -0.993927, 1 where the second, third, and fourth columns are the axes of accelerometer data. I have several files named for one gesture and several others for another gesture and would like to use this sample difference statistic to help classify the data. Also as a secondary question, this was listed as a preprocessing technique but it sounds like it's more of a feature. Could I get clarification on that as well? AI: Differencing is a common preprocessing step for time series. Here's an example in python: from pandas import DataFrame data=[[1495573445.162, 0, 0.021973, 0.012283, -0.995468, 1], [1495573445.172, 0, 0.021072, 0.013779, -0.994308, 1], [1495573445.182, 0, 0.020157, 0.015717, -0.995575, 1], [1495573445.192, 0, 0.017883, 0.012756, -0.993927, 1]] df = DataFrame(data, columns=['timestamp', 'foo', 'x', 'y', 'z', 'bar']).set_index('timestamp') df.assign(dx=df.x.diff(), dy=df.y.diff(), dz=df.z.diff()) The result: foo x y z bar dx dy dz timestamp 1.495573e+09 0 0.021973 0.012283 -0.995468 1 NaN NaN NaN 1.495573e+09 0 0.021072 0.013779 -0.994308 1 -0.000901 0.001496 0.001160 1.495573e+09 0 0.020157 0.015717 -0.995575 1 -0.000915 0.001938 -0.001267 1.495573e+09 0 0.017883 0.012756 -0.993927 1 -0.002274 -0.002961 0.001648
H: Merge two models - Keras I was reading through many blogs and understood the relevance of, and scenarios for, merging two models. But how do you decide the merge mode for two models, i.e. concat, sum, dot, etc.? For example, I am working on the task of automatic image captioning. So captions and images are two kinds of input that I need to handle and merge at a certain point, so that the model knows which caption belongs to which image. I am learning the text representation and the image representation with two different network designs. Now, after learning representations for both inputs, how do I decide which way (concat, add, etc.) to join/merge the two representations? AI: Generally, I'd say it's the same as finding your "best" feature list, preprocessing method, and algorithm: most answers start with "give it a try and compare" or "check which combination gives you a better cross-validation score". More specifically, when there is no further documentation (and it's an open source project), I usually take a look at the code for more intuition. In this case concat, dot, average and the like are implemented straightforwardly, just as they sound (concatenating data, averaging, etc.), but the choice does have implications for the amount of processing needed: some of the merging functions add to the input that needs to be processed (concat, add), and some reduce it, like an aggregation function (average, min, max). Beyond that, the resulting tensor will behave a bit differently for different datasets. Please give it a try and let us know which merging combination worked best for you, and the dataset type :)
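For the image-captioning setup described in the question, a rough Keras functional-API sketch of two branches and two candidate merge modes is below; the feature dimensions, vocabulary size and caption length are placeholder assumptions, and in practice you would build one model per merge mode and compare them on validation data.

from keras.models import Model
from keras.layers import Input, Dense, LSTM, Embedding, concatenate, add

# image branch: a pre-extracted CNN feature vector (dimension is an assumption)
image_input = Input(shape=(2048,))
image_repr = Dense(256, activation='relu')(image_input)

# caption branch: token ids -> embedding -> LSTM summary
caption_input = Input(shape=(30,))
caption_repr = LSTM(256)(Embedding(input_dim=10000, output_dim=128)(caption_input))

# two candidate merges: concat keeps both representations side by side,
# add requires equal dimensions and collapses them into one vector
merged_concat = concatenate([image_repr, caption_repr])   # shape (512,)
merged_add = add([image_repr, caption_repr])              # shape (256,)

output = Dense(10000, activation='softmax')(merged_concat)  # e.g. next-word prediction
model = Model([image_input, caption_input], output)
model.compile(optimizer='adam', loss='categorical_crossentropy')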
H: Difference between rand index and adjusted rand index? I am unable to understand what the adjustment term in the ARI is. The expected index term in the ARI comes from a prior. Kindly explain. AI: The adjustment is simply $$\text{ARI} = \frac{\text{RI} - E[\text{RI}]}{\max(\text{RI}) - E[\text{RI}]},$$ i.e. (Rand index - expected value) / (optimal value - expected value), where the expected value is the Rand index you would get on average from a random labelling with the same cluster sizes. The purpose is to scale it in an interpretable way: 0 is "as good as random", less than 0 is worse, and close to 1 is good. The problem with the non-adjusted Rand index is that a random result on certain data sets can otherwise achieve a high score.
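For reference, scikit-learn implements the adjusted score directly, so you can see the chance correction in action (toy labels below are made up):

from sklearn.metrics import adjusted_rand_score

labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]

print(adjusted_rand_score(labels_true, labels_pred))  # ~0.24, adjusted for chance
# A random permutation of the predicted labels gives an ARI close to 0 on average,
# whereas the plain Rand index would still be well above 0.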
H: Is pre-processing always necessary? I'm working on classification of two classes of Raman spectra. While I was working on finding the optimal steps for pre-processing, I started to wonder whether it is really necessary. I have a lot of Raman spectra data and most of them differ between both classes even to the naked eye (characteristic peaks or the lack thereof). While I do have many datasets I know were not recorded under exactly the same conditions, I also have a few sets for which exactly the same conditions apply to both classes. They still look obviously different from each other. Since my primary goal is to develop a robust algorithm with an unsupervised classifier, I now start to wonder: assuming the setup, and thus the noise and any sort of distortion, is the same in both classes, even if dominant, do I need to bother with pre-processing when various classification models already return satisfying results on the raw data? Most (if not all) papers I read about Raman spectra pre-processing (there's a lot on that topic) gave me the impression that the authors are always interested in separating the Raman spectrum of interest from all the unwanted other signals (fluorescent background, white noise, cosmic spikes etc.). While this certainly would be interesting, I actually only want to sort the spectra. I'm asking this question from a rather hypothetical standpoint. I know that in reality you can hardly guarantee exactly the same conditions (especially with random occurrences like cosmic spikes). AI: [Not specific to Raman spectroscopy] There is a trend in some areas of machine learning to aim at end-to-end learning, that is, to devise machine learning algorithms that take raw input data and give the desired output (e.g. classification label, regression values). This way, end-to-end learning avoids introducing expert knowledge or preprocessing, usually relying only on huge amounts of data. This has been especially true in recent years in the field of deep learning, where images are taken with no preprocessing, audio is not filter-banked, and games are played by receiving only raw pixels. So it is certainly possible to skip data preprocessing completely. We should nevertheless take into account that most of the successful end-to-end machine learning approaches impose constraints that enable the learning to exploit the very aspects of the input data that were previously defined by experts. An example of this is Convolutional Networks, which impose a feature-locality constraint that enables the learning to identify features which could instead have been pre-defined as Gabor filters. BUT, in more traditional data science, feature engineering is a key part of the process of devising ML systems, and data exploration and expert knowledge are normally used to identify the kind of preprocessing to be applied to the data.
H: Sequence Batching in RNNs I'm wondering why, when batching sequences for RNNs, the target value loops back (I'm not sure what you call it), but let's take an example: We want to learn a sequence of numbers (our input) from 1 to 16: $$ \begin{bmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 \end{bmatrix} $$ Batches: 2, Sequence Length: 4 First, we can divide the data into 2 batches: $$ \begin{bmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\\ 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 \end{bmatrix} $$ Then we can divide this into mini batches: $$ \begin{bmatrix} 1 & 2 & 3 & 4\\ 9 & 10 & 11 & 12 \end{bmatrix} $$ $$ \begin{bmatrix} 5 & 6 & 7 & 8\\ 13 & 14 & 15 & 16 \end{bmatrix} $$ Then we need to create targets for the inputs, and intuitively we want the targets to be the next value of the input, so: $$ \begin{bmatrix} 2 & 3 & 4 & 5\\ 10 & 11 & 12 & 13 \end{bmatrix} $$ However, this is not what I usually see; instead, I see the last value in a mini batch swapped with the first value: $$ \begin{bmatrix} 2 & 3 & 4 & 1\\ 10 & 11 & 12 & 9 \end{bmatrix} $$ So what is the intuition for doing so? If we want to learn the sequence 1, 2, 3, 4, 5, then 1 is given as the target for the input 4, so the model learns to predict 1 after 4 rather than 5. AI: It makes no sense to re-order inputs in the general case because the order might matter. In your example it does not; you can shuffle the columns as long as the corresponding outputs remain the same. I've seen the input reversed, which is a less arbitrary transformation than the one you cite, to improve prediction in sequence-to-sequence models, though that's not set in stone, either.
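For what it's worth, the conventional way to build next-step targets (without the wrap-around) is simply to shift by one; a minimal NumPy sketch:

import numpy as np

data = np.arange(1, 17)          # 1 .. 16
seq_len = 4

# inputs are windows of length seq_len, targets are the same windows shifted by one
inputs = np.array([data[i:i + seq_len] for i in range(0, len(data) - seq_len, seq_len)])
targets = np.array([data[i + 1:i + 1 + seq_len] for i in range(0, len(data) - seq_len, seq_len)])

print(inputs[0])   # [1 2 3 4]
print(targets[0])  # [2 3 4 5]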
H: Why my training and validation loss is not changing? I used MSE loss function, SGD optimization: xtrain = data.reshape(21168, 21, 21, 21,1) inp = Input(shape=(21, 21, 21,1)) x = Conv3D(filters=512, kernel_size=(3, 3, 3), activation='relu',padding='same')(inp) x = MaxPool3D(pool_size=(3, 3, 3),padding='same')(x) x = Conv3D(filters=512, kernel_size=(3, 3, 3), activation='relu',padding='same')(x) x = Conv3D(filters=256, kernel_size=(3, 3, 3), activation='relu',padding='same')(x) encoded = Conv3D(filters=128, kernel_size=(3, 3, 3), activation='relu',padding='same')(x) print ("shape of decoded", K.int_shape(encoded)) x = Conv3D(filters=512, kernel_size=(3, 3, 3), activation='relu',padding='same')(encoded) x = Conv3D(filters=256, kernel_size=(3, 3, 3), activation='relu',padding='same')(x) x = Conv3D(filters=512, kernel_size=(3, 3, 3), activation='relu',padding='same')(x) x = Conv3D(filters=512, kernel_size=(3, 3, 3), activation='relu',padding='same')(x) x = UpSampling3D((3, 3, 3))(x) decoded = Conv3D(filters=1, kernel_size=(3, 3, 3), activation='relu', padding='same')(x) print ("shape of decoded", K.int_shape(decoded)) autoencoder = Model(inp, decoded) autoencoder.compile(optimizer='sgd', loss='mse') autoencoder.fit(xtrain, xtrain, epochs=30, batch_size=32, shuffle=True, validation_split=0.2 ) Epoch 1/30 16934/16934 [==============================] - 446s - loss: 34552663732314849715 15904.0000 - val_loss: 1893.9425 Epoch 2/30 16934/16934 [==============================] - 444s - loss: 1896.7580 - val_loss : 1893.9425 Epoch 3/30 16934/16934 [==============================] - 444s - loss: 1896.7580 - val_loss : 1893.9425 Epoch 4/30 16934/16934 [==============================] - 444s - loss: 1896.7580 - val_loss : 1893.9425 Epoch 5/30 16934/16934 [==============================] - 444s - loss: 1896.7580 - val_loss : 1893.9425 Epoch 6/30 16934/16934 [==============================] - 444s - loss: 1896.7580 - val_loss : 1893.9425 Epoch 7/30 16934/16934 [==============================] - 444s - loss: 1896.7580 - val_loss : 1893.9425 Epoch 8/30 16934/16934 [==============================] - 444s - loss: 1896.7580 - val_loss : 1893.9425 Epoch 9/30 16934/16934 [==============================] - 444s - loss: 1896.7580 - val_loss : 1893.9425 Epoch 10/30 16934/16934 [==============================] - 444s - loss: 1896.7580 - val_loss : 1893.9425 Epoch 11/30 16934/16934 [==============================] - 445s - loss: 1896.7580 - val_loss : 1893.9425 Epoch 12/30 16934/16934 [==============================] - 445s - loss: 1896.7580 - val_loss : 1893.9425 Epoch 13/30 16934/16934 [==============================] - 445s - loss: 1896.7580 - val_loss : 1893.9425 Epoch 14/30 16934/16934 [==============================] - 445s - loss: 1896.7580 - val_loss : 1893.9425 Epoch 15/30 16934/16934 [==============================] - 445s - loss: 1896.7580 - val_loss : 1893.9425 AI: Your weights have diverged during training, and the network as a result is essentially broken. As it consists of ReLUs, I expect the huge loss in the first epoch caused an update which has zeroed out most of the ReLU activations. This is known as the dying ReLU problem, although the issue here is not necessarily the choice of ReLU, you will probably get similar problems with other activations in the hidden layers. You need to tone down some of the numbers that might be causing such a large initial loss, and maybe also make the weight updates smaller: Normalise your input data. 
The autoencoder is trying to match the input, and if the numbers are large here, this multiplies up to a large loss. If the input can have negative values (either naturally or due to the normalisation) then you should not have ReLU activation in the output layer otherwise it is not possible for the autoencoder to match the input and output values - in that case just have a linear output layer. Reduce the learning rate - in Keras SGD has default lr=0.01, try lower e.g. lr=0.0001. Also consider a more sophisticated optimiser than plain SGD, maybe Adam, Adagrad or RMSProp. Add some conservative weight initialisations. In Keras you can set the weight initialiser - see https://keras.io/initializers/ - however, the default glorot_uniform should already be OK in your case, so maybe you will not need to do this.
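A minimal sketch of those suggestions applied to the snippet above, reusing the variables from the question (the normalisation constants and learning rate are illustrative; adjust them to your data):

from keras.optimizers import Adam

# Normalise the input so the reconstruction targets are in a small range
xtrain = xtrain.astype('float32')
xtrain = (xtrain - xtrain.mean()) / (xtrain.std() + 1e-8)

# Because normalised data can be negative, use a linear output layer instead of ReLU
decoded = Conv3D(filters=1, kernel_size=(3, 3, 3), activation='linear', padding='same')(x)

autoencoder = Model(inp, decoded)
autoencoder.compile(optimizer=Adam(lr=1e-4), loss='mse')  # smaller steps than default SGD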
H: Linear Regression and k-fold cross validation I am totally new to the topic of Data Science. With the help of the following sources, I think I have managed to do a very simple and basic Linear regression on a train dataset: SkLearn documentation - Linear regression Some Kernel, that I percieved as intuitive the test dataset My Python code (written as an iPython notebook) that actually does the computation looks like this: ### Stage 0: "Import some stuff" %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import numpy as np from sklearn import datasets, linear_model from sklearn.linear_model import LinearRegression ### Stage 1: "Prepare train dataset" my_train_dataset = pd.read_csv("../train.csv") ### remove categorical cols only_numerical_train_dataset = my_train_dataset.loc[:, my_train_dataset.dtypes!=object] ### remove 'Id' and 'SalePrice' columns my_train_dataset_X = only_numerical_train_dataset.drop(['Id','SalePrice'], axis = 1) ### insert median into cells with missing values print("Before: Number of cells with missing values in train data: " + str(np.sum(np.sum(my_train_dataset_X.isnull())))) null_values_per_col = np.sum(my_train_dataset_X.isnull(), axis=0) cols_to_impute = [] for key in null_values_per_col.keys(): if null_values_per_col.get(key) != 0: cols_to_impute.append(key) print("Before: Need to replace values in the columns in train data: " + str(cols_to_impute) + "\n") imputation_val_for_na_cols = dict() for col in cols_to_impute: if (my_train_dataset_X[col].dtype == 'float64' ) or (my_train_dataset_X[col].dtype == 'int64'): #numerical col imputation_val_for_na_cols[col] = np.nanmedian(my_train_dataset_X[col]) #with median for key, val in imputation_val_for_na_cols.items(): my_train_dataset_X[key].fillna(value= val, inplace = True) print("After: Number of cells with missing values in train data: " + str(np.sum(np.sum(my_train_dataset_X.isnull())))) null_values_per_col = np.sum(my_train_dataset_X.isnull(), axis=0) cols_to_impute = [] for key in null_values_per_col.keys(): if null_values_per_col.get(key) != 0: cols_to_impute.append(key) print("After: Need to replace values in the columns in train data: " + str(cols_to_impute) + "\n") ### Stage 2: "Sanity Check - the better the quality, the higher the price?" 
plt.scatter(my_train_dataset.OverallQual, my_train_dataset.SalePrice) plt.xlabel("Overall Quality of the house") plt.ylabel("Price of the house") plt.title("Relationship between Price and Quality") plt.show() ### Stage 3: "Prepare the test dataset" my_test_dataset = pd.read_csv("../test.csv") ### remove categorical cols only_numerical_test_dataset = my_test_dataset.loc[:, my_test_dataset.dtypes!=object] ### remove 'Id' column my_test_dataset_X = only_numerical_test_dataset.drop(['Id'], axis = 1) ### insert median into cells with missing values print("Before: Number of cells with missing values in test data: " + str(np.sum(np.sum(my_test_dataset_X.isnull())))) null_values_per_col = np.sum(my_test_dataset_X.isnull(), axis=0) cols_to_impute = [] for key in null_values_per_col.keys(): if null_values_per_col.get(key) != 0: cols_to_impute.append(key) print("Before: Need to replace values in the columns in test data: " + str(cols_to_impute) + "\n") imputation_val_for_na_cols = dict() for col in cols_to_impute: if (my_test_dataset_X[col].dtype == 'float64' ) or (my_test_dataset_X[col].dtype == 'int64'): #numerical col imputation_val_for_na_cols[col] = np.nanmedian(my_test_dataset_X[col]) #with median for key, val in imputation_val_for_na_cols.items(): my_test_dataset_X[key].fillna(value= val, inplace = True) print("After: Number of cells with missing values in test data: " + str(np.sum(np.sum(my_test_dataset_X.isnull())))) null_values_per_col = np.sum(my_test_dataset_X.isnull(), axis=0) cols_to_impute = [] for key in null_values_per_col.keys(): if null_values_per_col.get(key) != 0: cols_to_impute.append(key) print("After: Need to replace values in the columns in test data: " + str(cols_to_impute) + "\n") ### Stage 4: "Apply the model" lm = LinearRegression() lm.fit(my_train_dataset_X, my_train_dataset.SalePrice) ### Stage 5: "Sanity Check - the better the quality, the higher the predicted SalesPrice?" plt.scatter(my_test_dataset.OverallQual, lm.predict(my_test_dataset_X)) plt.xlabel("Overall Quality of the house in test data") plt.ylabel("Price of the house in test data") plt.title("Relationship between Price and Quality in test data") plt.show() ### Stage 6: "Check the performance of the Prediction" from sklearn.model_selection import cross_val_score scores = cross_val_score(lm, my_train_dataset_X, lm.predict(my_test_dataset_X), cv=10) print("scores = " + str(scores)) My questions are: 1. Why am I getting an error in Stage 6 and how to fix it? ValueError Traceback (most recent call last) <ipython-input-2-700c31f0d410> in <module>() 85 ### test the performance of the model 86 from sklearn.model_selection import cross_val_score ---> 87 scores = cross_val_score(lm, my_train_dataset_X, lm.predict(my_test_dataset_X), cv=10) 88 print("scores = " + str(scores)) 89 ValueError: Found input variables with inconsistent numbers of samples: [1460, 1459] 2. Is there something fundamentally wrong with my approach to a simple and basic Linear Regression? Edits for comments: @CalZ - First comment: my_test_dataset_X.shape = (1459, 36) my_train_dataset_X.shape = (1460, 36) @CalZ - Second comment: I will consider refactoring the code as soon as I am sure that my approach is not fundamentally wrong. AI: As the error message states, the invocation to cross_val_score fails because the shape arguments differ in their first dimension (1460 vs. 1459). This is consistent with the number of lines in the CSV files. However, the underlying problem is that you are mixing the test and the training sets. 
My initial suggestion was to invoke it only with the test set, as cross_val_score(lm, my_test_dataset_X, lm.predict(my_test_dataset_X), cv=10) - but that is NOT correct: you cannot use your own predictions to validate! Cross-validation needs true labels, so you should run it on the labeled (training) data, or hold out a labeled subset on which to compute the validation score. Also, yours is not only a linear regression. The bulk of your code is in charge of data manipulation (feature selection, data imputation), not linear regression. Actually, you are reusing scikit-learn's implementation of linear regression, not coding your own. If you want a code review of your snippet, maybe you should try http://codereview.stackexchange.com (I don't know if this fits there either, you'd better check their help center). UPDATE: As to whether your code is sound from a data science point of view, it seems to me (after only a quick review) that you are doing reasonable things. There are some things that could be improved, like only handling float64 and int64 (while you can do as described here), only imputing NaNs and Nones (while there can be other values that should be imputed in certain cases, like outliers), or imputing blindly with the median (which is a safe decision but should be assessed taking into account the nature of each variable). But generally speaking it seems OK.
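Coming back to the cross-validation call, a sketch of how it should look, using the training features and the true labels (variable names taken from the question):

from sklearn.model_selection import cross_val_score

# Cross-validate on labeled data: the true SalePrice values, not model predictions
scores = cross_val_score(lm, my_train_dataset_X, my_train_dataset.SalePrice,
                         cv=10, scoring='neg_mean_squared_error')
print("mean CV score =", scores.mean())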
H: Tool for analyzing a Python matrix and generating a report on the contents (column types, NaN counts, means, etc.) I'm looking for a tool/library that will take a numpy or pandas matrix and generate a list of statistics for the matrix and columns. Specifically, for each column, I'd want info like the following: Assumed data type (numerical vs string) Assuming it's determined to be a numerical column: Mean and Std Deviation Max and min values Number of NaN's % of NaN's Assuming it's determined to be a string column: Number of distinct string values So the tool takes the matrix and outputs some kind of report along these lines. The goal is to do some sanity checking on my input data set as well as checking data transformations along the way. For example, if I'm doing some kind of transformation and values in column A go from 1% NaN to 60% NaN, then I did something really bad. Does such a tool exist? AI: You should take a look at pandas_profiling, I don't think it works with numpy arrays but it does exactly what you want for Pandas dataframes. It can output to PDF or to a nice looking HTML format within your Jupyter Notebooks. https://github.com/JosPolfliet/pandas-profiling
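A minimal usage sketch (the input file name is hypothetical and the exact API has changed across pandas_profiling versions, so check the docs for yours):

import pandas as pd
import pandas_profiling

df = pd.read_csv("my_data.csv")          # hypothetical input file
profile = pandas_profiling.ProfileReport(df)
profile.to_file("report.html")           # per-column types, means, std, NaN counts, distinct values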
H: Determining Statistically Significant Differences in Views per Day of Week? I have some views per day of week data. It's something like: Mondays: 100k views Tuesday: 110k views Wednesday: 140k views ... Sunday: 80k views So naively, it seems like Wednesday is a better day for website traffic. However, I also understand that it's possible these differences are attributable to variance (maybe each day has an equal probability of getting a given viewer). How would I determine that Wednesday (for instance) is a better day for website traffic in a statistically significant fashion? Happy to clarify if this question is unclear. Thanks! AI: First, you need a sample of the views data per day, that is: views in Mondays: 120, 111, 90, 150, .. views in Tuesdays: 50, 50, 57, 21, .. ... views in Sundays: 201, 184, 126, 191, .. These are random variables. Let's refer to them as $V_d$, where $d$ is the week day, e.g. $V_{monday}$. With those data, one option is to perform a hypothesis test. Depending on the distribution of $V_d$, you may do different things. If they are normal (which can be also tested by means of a hypothesis test) (so that $V_d \sim \mathcal{N}(\mu_d, \sigma_d)$), you can compare the means of two days, having a null hypothesis $H_0: \mu_{wednesday} = \mu_{sunday}$ and the alternative hypothesis $H_1: \mu_{wednesday} > \mu_{sunday}$ and choose an appropriate two sample test (depending on the variances of the distributions, sample size, etc). If your distributions are not gaussian, you may use a Kolmogorov-Smirnov test or a Mann-Whitney U test. To know how to choose between them, you can check this.
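For example, with SciPy (the daily view samples below are made up; which test is appropriate depends on the checks described above):

import numpy as np
from scipy import stats

wednesdays = np.array([140, 138, 151, 129, 144, 147])
sundays = np.array([80, 85, 78, 90, 82, 79])

# If both samples look roughly normal: Welch's t-test (unequal variances allowed)
t, p = stats.ttest_ind(wednesdays, sundays, equal_var=False)

# Non-parametric alternative that makes no normality assumption
u, p_mw = stats.mannwhitneyu(wednesdays, sundays, alternative='greater')
print(p, p_mw)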
H: Depth of the first pooling layer outcome in tensorflow documentation Let's say that we have a CNN with two convolutional layers (https://www.tensorflow.org/tutorials/layers). My question regards the dimension of the tensor that is the output of pooling layer 1. In the first convolutional layer, we apply $32$ filters to the input image (let's say that the output will be $28\times 28 \times 32$), so as far as I can understand we will get $32$ separate feature maps because of the number of filters. In the next step, we can apply an activation function, which does not change the dimensionality. The $\max$ pooling layer takes as input a tensor of $28\times 28\times 32$ and the output is going to be a $14 \times 14 \times \color{red}{1}$ tensor (according to the link above). I cannot understand the $1$ as the depth, since we apply $32$ filters and we apply the $\max$ pooling layer to every feature map. So, why is the output tensor $14 \times 14 \times 1$? According to my understanding, the input to the second convolutional layer should be a $14 \times 14 \times 32$ tensor. Probably I am missing something here. AI: I kept turning this question over because I arrive at the same conclusion as you do. It appears to be a mistake in the documentation: max pooling operates on each of the $32$ feature maps independently, so the output of the first pooling layer should indeed be $14 \times 14 \times 32$, and that is what the second convolutional layer receives. See https://stackoverflow.com/questions/43453712/what-is-output-tensor-of-max-pooling-2d-layer-in-tensorflow
H: Find average sequence from a set of sequences I have a set of user sessions. Session consists of an ordered list of types of actions that user made (for example, bought a gun, played a mission, etc). I want to create/calculate session that have most possible similarity to all provided sessions (most common types of actions users make in order they make them) Unfortunately, I know nothing about data science but I tried to google a way to do that. I've found this document: https://cran.r-project.org/web/packages/TraMineR/vignettes/TraMineR-state-sequence.pdf And it looks like 9.1 and 9.2 describe things similar to what I want. But I dont know this for certain and even if it's true I stil don't know how to use it for my scenario. AI: One way would be not to approach this as a calculation per session. Most data science solutions like to end up with a number, probability or classification. I suggest you structure your data differently so that you try to answer the question - what next action is likely given the last action. In order to do this you would have to restructure your session data and use information from across all your sessions. For example, if you compare in how many sessions a player 'buys a gun', and if so record over all those sessions what their next action is, e.g. in 60% they 'play a mission' next. You will then have a probability of their next action based on the number of choices players made in all those sessions. Once you have those probabilities, you will be able to answer the question, 'What comes next?'. This will in turn enable you to build that most average session that you are after by stepping through a session and building it by the most probable next step.
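A sketch of that restructuring with pandas, assuming sessions are stored as ordered lists of action names (the session data below is made up):

import pandas as pd

sessions = [
    ['buy_gun', 'play_mission', 'buy_gun', 'upgrade'],
    ['play_mission', 'buy_gun', 'play_mission'],
    ['buy_gun', 'play_mission', 'upgrade'],
]

# One row per (current action, next action) transition across all sessions
pairs = pd.DataFrame([(cur, nxt) for s in sessions for cur, nxt in zip(s[:-1], s[1:])],
                     columns=['current', 'next'])

counts = pd.crosstab(pairs['current'], pairs['next'])
probs = counts.div(counts.sum(axis=1), axis=0)   # P(next action | current action)
print(probs)

# A "most typical" session can then be built greedily by repeatedly following
# the highest-probability next action from a chosen starting action.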
H: How to visualize multi-instance multiclass classification? Let's say I have 3 classes and 1 score for each data point score z class 1 class 2 class 3 E.g. the input looks like: 0.529 5 7 4 0.310 3 4 2 0.774 10 7 6 0.774 10 8 5 0.172 3 0 2 In code: >>> import pandas as pd >>> df = pd.DataFrame([[0.529, 5.0, 7.0, 4.0], [0.31, 3.0, 4.0, 2.0], [0.774, 10.0, 7.0, 6.0], [0.774, 10.0, 8.0, 5.0], [0.172, 3.0, 0.0, 2.0]]) >>> df 0 1 2 3 0 0.529 5.0 7.0 4.0 1 0.310 3.0 4.0 2.0 2 0.774 10.0 7.0 6.0 3 0.774 10.0 8.0 5.0 4 0.172 3.0 0.0 2.0 Each row represents a data point and column 0 is the score and column 1,2,3 are some sort of coefficients for the classes. The score is some sort of computed entity independent of the coefficients and the goal is to visualize the interaction between the coefficients and the score. How to visualize the interaction between the 3 classes (column 1,2,3) and the score (column 0)? AI: You could use a 3d-scatter plot, where each class would correspond to one axis and the color-intensity of the point would indicate the score value(e.g. for a grayscale colormap, the whiter the closer to 1 the value of the score). Using the above format: from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt import pandas as pd df = pd.DataFrame([[0.529, 5.0, 7.0, 4.0], [0, 3.0, 4.0, 2.0], [0.774, 10.0, 7.0, 6.0], [0.774, 10.0, 8.0, 5.0], [1, 3.0, 0.0, 2.0]]) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.scatter(df[1].values, df[2].values, df[3].values, c=df[0].values, cmap='gray', s=50, vmin=0.,vmax=1) ax.set_xlabel('X Label') ax.set_ylabel('Y Label') ax.set_zlabel('Z Label') plt.show() You can change the scatter plot attributes for marker-size, colormap etc. according to the documentation to match your tastes.
H: What is PCA and MICE I am doing an experiment in Azure ML. While pre-processing my data, there is an option to clean missing data using either PCA or MICE. Please provide me an example of how I can decide which option to choose. AI: I don't know about Azure ML. But: PCA is principal component analysis. It takes a dataset and "rotates" it, taking the original axes defined by the original variables and creating new axes that are linear combinations of the old variables. The precise linear combinations are chosen such that each successive component maximizes the variance along that new dimension. A quick Google search turns up lots of tutorials. Here is a snippet of Hastie & Tibshirani's lecture on PCA: https://www.youtube.com/watch?v=ipyxSYXgzjQ MICE is "multiple imputation by chained equations". Basically, missing data is predicted from observed data, using a sequential algorithm that is allowed to proceed to convergence. (1) Start by filling in the missing data with plausible guesses at what the values might be. (2) For each variable, predict the missing values by modeling the observed values as a function of the other variables. At each step, update the predictions of the missing values. There are many tricky details, and many online tutorials. Here is an article aimed at biostat practitioners: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3074241/
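Outside Azure ML, a MICE-style imputation can be sketched with scikit-learn's IterativeImputer (available in newer scikit-learn versions behind an experimental flag); this is shown purely to illustrate the chained-equations idea on a toy matrix:

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [5.0, 6.0, 9.0],
              [np.nan, 8.0, 12.0]])

# Each column with missing values is modelled as a function of the others,
# and the imputations are refined over several rounds.
imputer = IterativeImputer(max_iter=10, random_state=0)
X_filled = imputer.fit_transform(X)
print(X_filled)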
H: Clustering a very large number of very small clusters with most data unrelated I'm trying to detect duplicates in a data set of about 34k distinct items. When I say "duplicate," I don't mean identical items, just very similar. I have an algorithm that will Cartesian join the items and return a sparse matrix of similar items with a similarity from 0.0 to 1.0. This matrix has about 5k non-zero entries in it, so it's very sparse. I already know what the similar items are by pairs. What I need is a good way to cluster them together. I expect there to be many clusters with only 2 items in them, and a couple thousand with more than 2 items, with the vast majority of items being unclustered. Is there a clustering algorithm that fits this scenario well? I've tried several, but they either give poor results, or are designed with few large clusters in mind instead of many small clusters, and so tend to crash. Is clustering the wrong approach? I'm using Spark, but the sparse matrix is small enough that I don't mind exporting to something else if necessary. AI: At this data size, you can still use hierarchical clustering. You can stop the clustering early when the similarity is too low. But all of these approaches are pretty inefficient. The proper approach is to not use clustering at all. Instead, use similarity search or, e.g., minhash.
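If you do try hierarchical clustering, a toy sketch with SciPy looks like this (distances are 1 - similarity on a small dense matrix; at 34k items the dense matrix itself becomes the bottleneck, which is why the similarity-search route scales better):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# toy symmetric similarity matrix for 4 items
sim = np.array([[1.0, 0.9, 0.1, 0.0],
                [0.9, 1.0, 0.2, 0.0],
                [0.1, 0.2, 1.0, 0.8],
                [0.0, 0.0, 0.8, 1.0]])
dist = 1.0 - sim

Z = linkage(squareform(dist, checks=False), method='average')
# "Stop early when similarity is too low": cut the tree at distance 0.5 (similarity 0.5)
labels = fcluster(Z, t=0.5, criterion='distance')
print(labels)   # items 0,1 form one cluster and 2,3 another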
H: How to more simply see overlapping data for dozens of variables? I'm trying to think of the best way to see how multiple variables (about 40) related to a very large userbase can be seen to interact with one another. As an analogy, imagine I had survey data of superheros liked by thousands of school children, such that Adam Agner only likes Batman, Brian Agosto likes Superman and Batman, Cian Ailiff likes Wonderwoman, Superman and the Flash, etc. The index would be the list of children's names (tens of thousands), and the variables would be each superhero (from a list of 50) each with a True or False value. Rather than just the amount of children who like each superhero, if I can see the overlap information in some easy way, I might find that an unusually large number only like Batman, or that most of those who like Superman are likely to also like the Flash. The easiest would be to do it visually, but Venn diagrams wouldn't be practical with a large number (50 here) of variables. So it would seem like a correlation matrix with a heat-map base would be the way to go, with something like this: I can imagine a heat map with superheros plotted on both axes could work for seeing interesting matches of one variable against all other ones, but it would still require some less ideal extra steps to see when three or more superheros matched, like seeing the colour overlap of superhero1 with superhero2 & superhero3 both be high, and that's kind of guesswork, because the actual children who like all 3 could still be low. At the moment, the best solution I can think of is to try to reduce my variables down into categories, so the example could be Marvel, DC, male, female, but that loses some potential data that could be helpful from overlaps within those categories. Maybe if I did something like the image above, the circle size would be the number for that overlap, and the colour could be number matches with other variables (without listing them, but I could further investigate). That kind of coding is a little out of my comfort zone, but I could try. Ideas appreciated! I'd ideally do this in matplotlib or some other means within python, but if I have to use Matlab or another tool then I will consider that. Ideas or suggestions appreciated! Hopefully this was clear, thank you! AI: For this example specifically, I would suggest visualizing the data using a Chord Diagram. Chord diagram from Delimited.io: A chord diagram would allow you to see interactions and co-occurrences of likes between each of the heroes directly, including relative magnitude of the effects intuitively and immediately. You could also include other properties of the character (universe of origin, gender, time period of introduction etc) by use of color and/or positioning of the hero in question on the circumference of the diagram. As chord diagrams are a graph-based approach, you'd want to transform the individual observations you have into what would be a (this has to be a first for this term) hero-like-cofrequency matrix formatted as follows: +-----------+--------+------------+----------+ | | Batman | Spiderrman | Superman | +-----------+--------+------------+----------+ | Batman | 0 | 2 | 3 | +-----------+--------+------------+----------+ | Spiderman | 2 | 0 | 1 | +-----------+--------+------------+----------+ | Superman | 3 | 1 | 0 | +-----------+--------+------------+----------+ Note that this represents an undirected graph, and that it is symmetric about the identity column. 
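To build that co-frequency matrix from the raw survey table (children as rows, heroes as 0/1 columns), a simple pandas sketch (the column names and values are made up):

import numpy as np
import pandas as pd

likes = pd.DataFrame({'Batman':    [1, 1, 0, 1],
                      'Spiderman': [0, 1, 1, 0],
                      'Superman':  [1, 1, 1, 0]})

# X^T X counts, for every pair of heroes, how many children like both
co_freq = likes.T.dot(likes)
co_freq = co_freq - np.diag(np.diag(co_freq))   # zero the diagonal as in the table above
print(co_freq)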
As you're using Python, you may want to look into using Bokeh to create the chord diagram. One tutorial for doing so is available here. Best of luck, true believer!
H: How to choose validation set for production environment? I am using XGBoost for a time-series regression problem. During development, I chose the last 10% of the data as my validation set. Using time-series split cross-validation and grid search, I found my best model and the corresponding XGBoost hyperparameters. My question is: how should I choose the validation set (for early stopping) in my production environment? 1) I have chosen the last 10% of my data as the validation set, but this set is also included in the training data, so it overfits and is very sensitive to noisy data. 2) My target (let's say Y) changes over time; when I choose random rows within the last year (10% of the data) and don't include them in the training set, it gives me worse results in production than the first option. 3) When I choose last week's data as the validation data, not included in the training set, it gives better results than option 2, but then I am not including last week's data in the training procedure. 4) Or do I even need a validation set in my production environment? Should I set the iteration count from the experiments done during development? (E.g. I got the best result at the 10k-th iteration, so I limit my production setup to 10k iterations without using a validation set at all?) So, how can I choose the validation set for my production environment? Are there best practices or any tips/tricks for this? AI: If you're simply re-training the XGBoost model periodically in order to account for the changing nature of your data, the best option is to hold out a recent set of data for testing (some variation of your option 3). As you mention, option 1 - that is, training on the entire available set and validating on the most recent 10 percent of the data - is very likely to overfit (and therefore overestimate its performance on future data). Option 2 has the drawback that you're attempting to predict information that you have post-hoc (after the fact) information about - that is, you know what subsequent observation values were, which is very relevant to a time-series prediction. For example, if you're trying to predict what the value is tomorrow, and you already know what the value is for the day after tomorrow, you can do a much better job of predicting tomorrow. Unfortunately, you can never know this before the day after tomorrow begins. Therefore, option 3 is the most valuable practice to determine how likely your model is to accurately predict future observations - you're using only data that would be available at the point of prediction (in the past relative to the predicted period) and holding that data out of the model's training set. The best practice in these circumstances is to hold yourself accountable for "not cheating" - that is, try your best to run a fair and honest experiment on how the model predicts data it hasn't seen and that would be available in a future use situation.
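A sketch of option 3 with the XGBoost scikit-learn wrapper (the data, split point and parameters below are stand-ins, not taken from the question):

import numpy as np
import xgboost as xgb

# Stand-ins for your time-ordered features and target (replace with your real data)
X = np.random.rand(1000, 7)
y = np.random.rand(1000)

split = int(len(X) * 0.9)                 # hold out the most recent 10% for early stopping
X_train, y_train = X[:split], y[:split]
X_val, y_val = X[split:], y[split:]

model = xgb.XGBRegressor(n_estimators=10000, learning_rate=0.05)
model.fit(X_train, y_train,
          eval_set=[(X_val, y_val)],
          eval_metric='rmse',
          early_stopping_rounds=50,
          verbose=False)

# model.best_iteration can then be used as a fixed iteration count in production
# if you prefer to retrain on all the data without a validation split (your option 4).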
H: Range to define emotions We are capturing emotions as survey responses. We need to assign values for the responses(emotions) for analysis purposes. Is there an optimum range that can be assigned to achieve this? (like from -100 to 100). An example of a question and a set of answers are as follow. Question: "How are you feeling today?" Answers: Terrible, Sad, Ok, Good, Great A suitable approach we could think of is to assign values from 1 to 100 with equal distance. Is this statistically valid? What are the things that we should consider when achieving this? In this case, only positive integers are assigned because we need to calculate statistics like weighted average. Can't we assign negative numbers as well? AI: The final range of emotion is completely arbitrary. No matter the interval [a, b], you can adjust the emotions to fit inside. [-100, 100] is perfectly reasonable and is common. An example of use is from GDELT, which provides this interval for average tone of news documents. Asking if equally distancing the emotions is statistically correct does not make sense. This entirely depends on your use case and opinions. Also, there is absolutely no reason why you can't use negative numbers in a weighted average. If you mentioned what you are doing and how you are evaluating emotion, there might be more to say.
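For instance, a symmetric mapping with negative values works fine in a weighted average (the scale choice itself is arbitrary, and the responses and weights below are made up):

import numpy as np

scale = {'Terrible': -100, 'Sad': -50, 'Ok': 0, 'Good': 50, 'Great': 100}
responses = ['Good', 'Great', 'Ok', 'Sad', 'Good']
weights = [1, 2, 1, 1, 3]                      # e.g. respondent weights

values = [scale[r] for r in responses]
print(np.average(values, weights=weights))     # weighted mean on the [-100, 100] scale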
H: How predictions of level 1 models become training set of a new model in stacked generalization. In stacked generalization, if I understood well, we divide the training set into train/test set. We use train set to train M models, and make predictions on test set. Then we use the predictions as input of a new model. Thus, the new training set will have M features corresponding to the M models predictions. Finally, we use the last model to make the final predictions. First, is my understanding correct ? If so, how is it possible to use the last model to make predictions as it has different features. AI: You have created a model pipeline and must run all trained models ("lower level" ones first) in order to make a prediction on new data using the stack. With test data set, it is slightly easier, since you can store the predictions from the "level 1" models when testing them, and only run the final model across this stored data. In addition to your brief description, usually to avoid bias from re-using training data, you would use k-fold cross-validation or similar mechanism, and your training data for the final model should be the cv predictions from each model. You do not want to use the training predictions from those models, because they are likely to be overfit whilst "level 1" test and production predictions will not be, and this would introduce population differences between train and test data in your "level 2" model. It is also quite a common variation to use the M new features from your "level 1" models alongside some or all of the original features. This gives the meta model more data to base its decision on when deciding the relative weights between the first stage models (assuming this top-level model is non-linear).
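A compact sketch of the out-of-fold stacking scheme described above, using scikit-learn's cross_val_predict to generate the level-1 features (the dataset and models are chosen arbitrarily for illustration):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

level1_models = [RandomForestRegressor(n_estimators=100, random_state=0),
                 GradientBoostingRegressor(random_state=0)]

# Out-of-fold predictions become the M new features for the meta-model
meta_features = np.column_stack([cross_val_predict(m, X, y, cv=5)
                                 for m in level1_models])

meta_model = Ridge().fit(meta_features, y)

# At prediction time, run every level-1 model (now refit on all the data) first,
# stack their outputs the same way, and feed the result to the meta-model.
for m in level1_models:
    m.fit(X, y)
stacked = np.column_stack([m.predict(X) for m in level1_models])
predictions = meta_model.predict(stacked)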
H: How to change a cell in Pandas dataframe with respective frequency of the cell in respective column I have a pandas dataframe with binary value columns. I would like to replace values in each cell with its frequency in rspective column in place. My question is how to keep track of the current column while using apply on the subset of columns like here: (to be applied from 8th columns to the end) : train_data.ix[:,8:] = train_data.ix[:,8:].apply(x: what should come here?) I know that train_data.ix[:,col_number].value_counts()[0] will return number of zeros in col_number but how can I use it inside apply function? AI: import pandas as pd import numpy as np df = pd.DataFrame(np.random.randint(2,size=(10, 4)), columns=list('ABCD')) df A B C D 0 0 1 1 1 1 1 0 1 0 2 1 0 0 1 3 0 1 1 1 4 0 0 1 1 5 0 1 1 1 6 0 0 0 0 7 1 1 1 0 8 1 1 1 0 9 1 0 1 0 values = df.apply(pd.value_counts) values A B C D 0 5 5 2 5 1 5 5 8 5 new_df = pd.DataFrame() for x in df.columns: new_df[x] = df[x].apply(lambda row: values[x][row]) new_df A B C D 0 5 5 8 5 1 5 5 8 5 2 5 5 2 5 3 5 5 8 5 4 5 5 8 5 5 5 5 8 5 6 5 5 2 5 7 5 5 8 5 8 5 5 8 5 9 5 5 8 5 I created a df with random integers 0 and 1. Then count them by column in values df. Then loop through each column and replace each cell with their respective count. Being random it is close to 5/5 split, but you can see the C column has 8/2 split.
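For reference, the same replacement on the df above can be written more concisely by mapping each column onto its own value counts; this is just an alternative sketch, not a correction:

new_df = df.apply(lambda col: col.map(col.value_counts()))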
H: How to score arbitrary sentences based on how likely they are to occur in the real world? According to article about LSTM here, I know that: it allows us to score arbitrary sentences based on how likely they are to occur in the real world. This gives us a measure of grammatical and semantic correctness. Such models are typically used as part of Machine Translation systems. But, it seems that this article doesn't point out how to compute the score with LSTM. Any way to compute the score? AI: Typically you would use a perplexity value. For example, if your LSTM model is word-based and you have a sentence $[x_1, x_2 . . . x_N]$, and your model predicts the words that appear in that sentence with probability $p(x_i|x_0..x_{i-1})$ (where $x_0$ is a "start token" or whatever you use to start your RNN prediction sequence). Then you might quote a per-word perplexity for that sentence under your model as $- \frac{1}{N}\sum_{i=1}^N log_2(p(x_i|x_0..x_{i-1}))$ Using an LSTM to predict consecutive words, it is practical to construct an array of probabilities $[p_1, p_2 . . . p_N]$ by running the network on the sentence and noting the probabilities for the correct matching word - i.e. $p_i = $ the predicted probability of the correct class $x_i$ at each step, which simplifies the expression: $- \frac{1}{N}\sum_{i=1}^N log_2(p_i)$
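As a small numeric sketch (the probabilities below are made up; in practice they come from the softmax output of the LSTM at each step):

import numpy as np

# p[i] = model's predicted probability of the word that actually occurs at position i
p = np.array([0.2, 0.05, 0.4, 0.1])

per_word_log2 = -np.mean(np.log2(p))      # average negative log-likelihood in bits
perplexity = 2 ** per_word_log2           # lower perplexity = more "likely" sentence
print(per_word_log2, perplexity)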
H: Time series forecast using SVM? I have a pandas data frame like this: (index) 0 sie 0 1997-01-01 11.2 1 1997-01-03 12.3 2 1997-01-04 11.5 ... 12454 2017-02-01 13.2 I would like to use SVM to predict the future values of sie. How can I write Python code to predict these values? I am doing something like this: model = svm.SVR().fit(df[0],df['sie']) But it is giving me this error: ValueError: Found input variables with inconsistent numbers of samples: [1, 12455] although both df[0] and df['sie'] have the same shape of (12455,). Note: I don't have continuous data (some dates in between are missing), and the values in column 0 are datetime.date() objects. AI: Here is a very good article: http://machinelearningmastery.com/time-series-forecasting-supervised-learning/ In a few words: define a window of size n, and that is the size of your feature vector. Reshape the dataset accordingly and experiment.
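A minimal sketch of that sliding-window idea with scikit-learn (the window size and hold-out size are arbitrary; it assumes df['sie'] as in the question):

import numpy as np
from sklearn import svm

values = df['sie'].values      # the series from the question, a 1-D array of floats
window = 7

# each row of X holds the previous `window` values, y is the value that follows them
X = np.array([values[i:i + window] for i in range(len(values) - window)])
y = values[window:]

model = svm.SVR()
model.fit(X[:-100], y[:-100])              # keep the last 100 points for checking
print(model.score(X[-100:], y[-100:]))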
H: How can I have an "undefined" category in multi-class classification I'm trying to classify several websites by category (finance, health-care, IT, etc.). I have at my disposal the content of the pages of the websites, and I use the words to classify. For now, I have manually classified some websites to train a naive Bayes model. As I'm mostly interested in high precision (i.e. a classified website must be in the right category), I would like to add an "undefined" category in which a website would end up if it is not close enough to the other categories. To be clear, it's not a problem for me if a website is not classified; it's a problem if it is misclassified. Are there algorithms that would allow this, or a way to train an "undefined" category? My best guess for now is to train something like a random forest and define a "minimum score" below which a website is "undefined". AI: Absolutely, you can create an approach that forces a high-precision class-tagging algorithm (at the natural cost of recall). What's more, you can do this with (at least) any method that provides a probability value for its predictions, which is the vast majority of classifiers. The key is, as you mention, to find the minimum acceptable value of precision and cut the predictions at that value. If a minimum precision is your only constraint and your solution is not sensitive to the recall (getting all or the highest possible proportion of the websites correctly classified), this is a very simple matter. Some lower percentage of your observations will be classified, but those that are will be more likely to be correctly classified. For example, if your precision floor is 70%, you cut the distribution of prediction scores at a threshold: predictions falling above the cut could be classified as positive examples, and predictions falling below it would be left unclassified. This approach would be sufficient for naive Bayes. Some other approaches (SVM, gradient boosting machines, etc.) may benefit from a custom loss function, in which you disproportionately punish false positive predictions. Something like: $$L(p) = \frac{1}{n}\sum_i \frac{(p_i-y_i)^2}{d_i}, \qquad d_i = \begin{cases} s & \text{if } y_i = 0 \\ 1 & \text{if } y_i = 1 \end{cases}$$ where $L(p)$ is the loss function, $p_i$ the predicted class likelihood, $y_i$ the actual class (0 for a negative example, 1 for a positive one), $n$ the number of observations, and $s$ the penalty parameter for false positives, which punishes false positives more heavily than false negatives and can be adjusted to your needs. For $s = 0.5$, for instance, errors on negative examples are weighted twice as heavily as errors on positive examples. Please note that this function does not necessarily define the approach you should take, but only one possible approach. To create a custom loss function perfectly tailored to your use case, you'd want to know the relative cost of a false positive and a false negative, and customize the function accordingly.
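Coming back to the simple thresholding route: in scikit-learn, the cut-off for a given precision floor can be read off the precision-recall curve (sketch with made-up labels and scores from a held-out validation set):

import numpy as np
from sklearn.metrics import precision_recall_curve

# y_true: 1 for websites of the target category, 0 otherwise; scores: classifier probabilities
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.7])

precision, recall, thresholds = precision_recall_curve(y_true, scores)

floor = 0.70
ok = np.where(precision[:-1] >= floor)[0]      # precision has one more element than thresholds
threshold = thresholds[ok[0]] if len(ok) else None

# At prediction time: assign the category only when the classifier's score exceeds
# `threshold`; otherwise put the website in the "undefined" bucket.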
H: Explain output of logistic classifier [Note: there is a serious problem with the logic used to select the best banner; I realised it late. Read the answer directly for general information, or you can also try to find the mistake.] Problem: given a set of user features, select the ad with the highest probability of being clicked. Dataset - https://www.kaggle.com/c/avazu-ctr-prediction/data (first 100,000 tuples from the training set, split 80:20 into training:testing). Tutorial followed - https://turi.com/learn/gallery/notebooks/click_through_rate_prediction_intro.html C14 is my ad id. Problem: given ('device_type', 'C1', 'C15'), return an ad id. Training: I have taken 'click' as my target and ('device_type', 'C1', 'C15', 'C14') as my input features. I used the logistic regression classifier in the GraphLab library to train the model. I am doing ad selection in the following way: given a set of features ('device_type', 'C1', 'C15', X), iterate X over all possible values of C14, pass the features to the predictor to get the probability that ad X will be clicked, and return the ad with the maximum click probability. MY PROBLEM IS THAT THE LOGISTIC CLASSIFIER IS ONLY RETURNING ONE AD for every test tuple, though with different click probabilities; i.e. only one ad ever gets the highest probability of being clicked. Can anyone explain this observation? When using a boosted tree classifier instead of the logistic classifier for the above prediction, I get different ads as my prediction and hence better results. AI: It means your logistic classifier is biased towards one class; this could be because of the reasons below. Class imbalance: this article explains how to identify and overcome the class imbalance problem. Overfitting: this article explains how to tackle overfitting. A logistic classifier works better if the data is linearly related; if you find non-linear relationships in the data I would suggest using algorithms like GBM/SVM/random forest, which will give you better and more accurate results.
H: Is Gini coefficient a good metric for measuring predictive model performance on highly imbalanced data I am evaluating a Credit Risk model that predicts the estimated likelihood of customers defaulting on their mortgage accounts. The model is a Logistic Regression estimator and was built by another team. They use the Gini metric to measure the performance of the model. They achieved 87%. Upon evaluation, I found that the recall was 51% whilst the error rate of the non rare event class (do not default) was 0.9%. Am I correct in thinking that the Gini is actually a misleading metric in this case because it doesn't really show the extremely poor predictive performance of the rare event class? I have questioned them about this and tried to recommend them to use precision/recall metrics as well as confusion matrices and a precision-recall trade-off graph but they quickly dismissed me. Any advice would be much appreciated. AI: The Gini Coefficient can also be expressed in terms of the area under the ROC curve (AUC): G = 2*AUC -1 link. The ROC curve, on the other hand, is influenced by class imbalance through the false positive rate FP/(FP+TN). If the number of negatives is a lot larger, this could be a potential issue. In short, the Gini Coefficient has similar pros and cons as the AUC ROC metric.
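The relationship is easy to check in code (toy labels and scores below):

from sklearn.metrics import roc_auc_score

y_true = [0, 0, 0, 0, 1, 1, 0, 1, 0, 0]    # imbalanced toy data
y_score = [0.1, 0.2, 0.3, 0.35, 0.4, 0.8, 0.5, 0.9, 0.2, 0.1]

auc = roc_auc_score(y_true, y_score)
gini = 2 * auc - 1
print(auc, gini)
# Precision/recall and a confusion matrix at the chosen cutoff tell a very different
# story on imbalanced data, which is why both views are worth reporting.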
H: How to implement Python's MLPClassifier with gridsearchCV? I am trying to implement Python's MLPClassifier with 10-fold cross-validation using the gridsearchCV function. Here is a chunk of my code: parameters={ 'learning_rate': ["constant", "invscaling", "adaptive"], 'hidden_layer_sizes': [(100,1), (100,2), (100,3)], 'alpha': [10.0 ** -np.arange(1, 7)], 'activation': ["logistic", "relu", "Tanh"] } clf = gridSearchCV(estimator=MLPClassifier,param_grid=parameters,n_jobs=-1,verbose=2,cv=10) Though, I am not sure if hidden_layer_sizes: [(100,1), (100,2), (100,3)] is correct. Here, I am trying to tune the number of hidden layers and the number of neurons. I would like the hidden_layer_sizes tuples to cover 1, 2, or 3 layers and 10, 20, 30, ..., 100 neurons, but I do not know if this is the correct way to do it; therefore, I am choosing the default of 100 neurons in each layer. Can anyone advise please? AI: A tuple of the form $(i_1, i_2, i_3, ... , i_n)$ gives you a network with $n$ hidden layers, where $i_k$ gives you the number of neurons in the $k$th hidden layer. If you want three hidden layers with $10,30$ and $20$ neurons, your tuple would need to look like $(10,30,20)$.
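Building on that, a sketch of how the grid could be written to sweep 1-3 layers and 10-100 neurons (note a few likely fixes relative to the snippet in the question: GridSearchCV capitalisation, an instantiated estimator, lowercase 'tanh', and an unwrapped alpha array; the resulting grid is large, so a randomized search may be more practical):

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# e.g. (30,), (30, 30), (30, 30, 30) for every neuron count from 10 to 100
layer_options = [(n,) * depth for depth in (1, 2, 3) for n in range(10, 101, 10)]

parameters = {
    'learning_rate': ["constant", "invscaling", "adaptive"],
    'hidden_layer_sizes': layer_options,
    'alpha': 10.0 ** -np.arange(1, 7),
    'activation': ["logistic", "relu", "tanh"],
}

clf = GridSearchCV(estimator=MLPClassifier(max_iter=500), param_grid=parameters,
                   n_jobs=-1, verbose=2, cv=10)
# clf.fit(X, y)  # X, y: your training data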
H: Why does gradient descent give me a much better relative squared error than the least squares approach? I am doing a regression task with 7 dependent variables and 10000 data points. SGD gives me a mean absolute percentage error of 22% on the test and train datasets, while the least squares method using scipy.optimize.least_squares gives me only 58% (I have tried different settings). I thought that least squares should give the same or better performance with a dataset of this size. What can be the reasons? AI: The reason is that the two metrics, mean absolute percentage error (MAPE) and mean square error (MSE), are optimising different targets. Improving one can be done at the expense of the other. As a simple example, consider this data: x = [ 0, 1, 2, 3, 4, 5] y = [ 3, 5, 10, 10, 11, 15] The best fit mean squared error (MSE) for a line on this data is $\hat{y} = 2.23x + 3.43$, which has MSE of $1.18$, and a mean absolute percentage error (MAPE) of $11.0$%. The best fit mean absolute percentage loss for a line on this data is $\hat{y} = 2.35x + 2.99$, which has a MSE of $1.24$, and a MAPE of $8.34$%. You can see that optimising for MAPE gives a worse MSE, and vice-versa. The difference can get extreme when there is a large range for y values (in terms of orders of magnitude covered), because optimising for MAPE will favour being more accurate on small values at the expense of larger ones. So if we change y to be: y = [ 1, 2, 10, 10, 11, 20] Then optimising mean abs percentage gives the line $\hat{y} = 3.78x + 1.05$ with MSE $7.09$ and MAPE $21.9$%. But optimising for mean square error gives $\hat{y} = 3.49x + 0.286$ with MSE $4.56$ and MAPE $62.9$% - this is a larger difference, and I suspect that your data has a large range of target values causing a similar effect. You can potentially get closer results using the least squares regressor by using a transformed target variable $z = log(y)$ and transforming back at the end. This still won't be quite the same, but it does reduce the difference significantly - in my last example if I try this, I get MAPE $24.2$% - compared to $21.9$% for optimising MAPE directly.
H: Categorical Variables - Classification I have a categorical variable, country, which takes on values like India, US, Pakistan etc. I am currently using a linear SVM for a classification task, and my country variable can take one of 20 values. How should this be represented as a feature in the classification task? Should I have a one-hot vector like (1, 0, 0, ...) for US and assign this vector 20 weights, or should I have an integer from 1-20 and assign a single weight? I am using scikit-learn. Does the answer depend on the classifier? AI: The answer depends less on the classifier and more on the nature of the variable. In your case one-hot encoding might be the best answer. Label encoding (replacing categorical values with integers) is useful when the variable is ordinal, i.e. it has a sense of order, for example the days of the week or the months of the year. Since they follow a fixed order, you can encode January as 1, February as 2, and so on. The classifier would interpret Feb as being greater than Jan in some way (which is okay for a task like weather prediction and so on). Can your countries be considered ordinal? If not, one-hot encode them.
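In pandas the one-hot version is a one-liner (sketch with a made-up column):

import pandas as pd

df = pd.DataFrame({'country': ['India', 'US', 'Pakistan', 'US']})
one_hot = pd.get_dummies(df['country'], prefix='country')
# columns country_India, country_Pakistan, country_US with 0/1 values,
# each of which gets its own weight in a linear model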
H: Is there any way to plot ROC curves from Weka? I am using some algorithms from Weka and would like to plot their ROC curves for comparison. Is it possible, and how? AI: In the Weka Explorer, go to the Classify tab and train/test your algorithm. The result buffer appears in the bottom-left box under the section labeled "Result list". Right-click the result buffer, click "Visualize threshold curve", then select the class you want to analyze. To save the ROC curve as an image, hold Shift + Alt and left-click on the graph.
H: Why do some people add results from PCA and other dimensionality reduction techniques as features? On a well-known data science competition platform, I often see a lot of people apply dimensionality reduction techniques, but instead of using them to reduce the number of features (the complexity) of their models, they append the resulting features to their datasets. Isn't that adding complexity to their model rather than simplifying it? AI: This is feature engineering. You just give the algorithm another look at the data, from another point of view. It often helps to understand data better when you have different points of view. For example, let's assume you want to teach a simple ML model the directions of US roads. You give it examples for roads 1 to 100 mapping to 0 (if the road is east-west) or 1 (if the road is north-south), except for road number 47. Then you ask the model to predict 47 and it will answer 0, because it is between 46 and 48, which are labelled 0. Now if you give the model another point of view (here, the parity of the road number), then it will learn the road directions efficiently. You can view the results of the PCA like the parity: just another view of the data that helps understand it better.
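A sketch of appending principal components as extra features with scikit-learn (the feature matrix and number of components are stand-ins):

import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(100, 20)            # stand-in for your original feature matrix

pca = PCA(n_components=5)
pca_features = pca.fit_transform(X)

X_augmented = np.hstack([X, pca_features])   # original features plus 5 new "views"
# In practice, fit the PCA on the training split only and reuse pca.transform on
# validation/test data to avoid leakage.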
H: Is it good to remove outliers from the dataset? Suppose you have some training dataset that you want to use to train some ML models, where targets are comprised between let's say 1 and 100. However, from the 4000 samples, there are few of them (less than 10) which have values out of the previous range, much higher than 100, let's say 300. Is it reasonable to ignore these samples and remove them from the data set, or should they be kept ? I saw people react differently, some of them say that they may hurt the model, while others say no as these samples give additional information to the model. AI: It mostly depends on what you are trying to achieve with your model. Sometimes the information carried by outliers is indeed negligible if not of interest (say, for example, that those high values are caused by data collection/input errors), and may affect your model performance. In some other cases though, outliers carry a lot of meaning and you might want your model to be aware of their existence/possibility. In other scenarios, the outliers are what you actually care about (see Anomaly/Novelty detection e.g.). Long story short, if these outliers are really such (i.e. they appear with a very low frequency and very likely are bad/random/corrupted measurements) and they do not correspond to potential events/failures that your model should be aware of, you can safely remove them. In all other cases you should evaluate case by case what those outliers represent.
H: Evaluating Logistic Regression Model in Tensorflow Following this tutorial, I have a doubt about the evaluation part in: # test the model n_batches = int(mnist.test.num_examples/batch_size) total_correct_preds = 0 for i in range(n_batches): X_batch, Y_batch = mnist.test.next_batch(batch_size) _, loss_batch, logits_batch = sess.run([optimizer, loss, logits], feed_dict={X: X_batch, Y:Y_batch}) preds = tf.nn.softmax(logits_batch) correct_preds = tf.equal(tf.argmax(preds, 1), tf.argmax(Y_batch, 1)) accuracy = tf.reduce_sum(tf.cast(correct_preds, tf.float32)) # need numpy.count_nonzero(boolarr) :( total_correct_preds += sess.run(accuracy) print 'Accuracy {0}'.format(total_correct_preds/mnist.test.num_examples) Note that this is done on the test set, so the goal is purely to obtain the accuracy using a previously trained model. However isn't calling the line: _, loss_batch, logits_batch = sess.run([optimizer, loss, logits], feed_dict={X: X_batch, Y:Y_batch}) equivalent to re-optimizing the model using the test data (and labels)? Shouldn't we avoid re-running optimizer and loss and just compute the predictions? AI: I think you are correct. The line should be loss_batch, logits_batch = sess.run([loss, logits], feed_dict={X: X_batch, Y:Y_batch})
H: How to transform raw data to fixed-frequency time series? How to transform raw data to fixed-frequency time series? For example I have the following raw data in DataFrame A B 2017-01-01 00:01:01 0 100 2017-01-01 00:01:10 1 200 2017-01-01 00:01:16 2 300 2017-01-01 00:02:35 3 100 2017-01-01 00:02:40 4 100 I'd like to transform it into a time series: 1 minute frequency column A should have sum of values in time interval column B should have mean of values in time interval possibly other functions over other columns Note: Raw data is not periodic. Transformed data should be: A B 2017-01-01 00:01:00 3 200 2017-01-01 00:02:00 7 100 AI: This sort of effect can be achieved with pandas.DataFrame.resample() combined with Resampler.aggregate() like: Code: df.resample("1Min").agg({'A': sum, 'B': np.mean}) Test code: df = pd.read_fwf(StringIO(u""" A B 2017-01-01T00:01:01 0 100 2017-01-01T00:01:10 1 200 2017-01-01T00:01:16 2 300 2017-01-01T00:02:35 3 100 2017-01-01T00:02:40 4 100"""), header=1, parse_dates=[0], index_col=0) print(df) print(df.resample("1Min").agg({'A': sum, 'B': np.mean})) Results: A B 2017-01-01 00:01:01 0 100 2017-01-01 00:01:10 1 200 2017-01-01 00:01:16 2 300 2017-01-01 00:02:35 3 100 2017-01-01 00:02:40 4 100 A B 2017-01-01 00:01:00 3 200 2017-01-01 00:02:00 7 100
H: Best Programming Language for Data Science I'm learning JS, HTML and CSS, but I doubt JS is very good at Data Analysis. So, what would you guys recommend me learning to start my "career" in Data Science? What's the best programming language for processing data? P.S. I love statistics and programming so I think this will be fun. AI: This is no doubt a duplicate, but here's how I'd weigh in on the major languages: R: Fantastic support for packages and specialised stats analysis community, you can find a package to do just about anything you need and it will be relatively easy to use. Is good for throwing together prototypes and performing exploratory analysis. Is Free and Open Source. Slower than Python. Basically don't loop over anything. It's an odd language for a programmer to use (coming from a software dev background). Clearly designed by mathematicians. Relatively little choice of good IDEs Python: Fast. Also very good as a general purpose language so has 'broader' package support. Free and Open Source. Easy to use for Big Data applications. Not as streamlined for analysis as R. Syntax can be difficult to read (no surrounding braces to make it obvious where functions/ if statements end). Can be particularly tedious working with Dataframes compared to R. MATLAB: Generally slower. Has very impressive packages for signal processing/image recognition and all the cool stuff. Is very readable and easy to comprehend generally. Is NOT free. Student licenses are available. Was quite complicated for me to get my hands on one though... Has very good support for mathematical analysis similar to R, but much better matrix functions. Personal recommendation: Python. Kill two birds with one stone, learn good general to advanced programming concepts and data science at the same time. Good article: https://www.linkedin.com/pulse/r-vs-python-matlab-octave-julia-who-winner-siva-prasad-katru
H: Why is the model prediction output in the keras LSTM IMDB example a vector? If you run the LSTM sentiment classification example in keras https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py and add p = model.predict(x_test[0]) print(p.shape) you get: (80,1) .. why? I thought the point of the model was to classify each sample as 0 or 1 as per each of the y_train elements (binary classification) AI: x_test is of shape (25000, 80), which means 25000 samples, each corresponding to a sentence with a maximum of 80 tokens (see maxlen). Accordingly, x_test[0] is a vector of size 80. model.predict works on batches of samples, so if you call model.predict(x_test) you will get a vector of n predictions, where n is the number of test samples, i.e. 25000 in that case. You could also predict the outcomes for just a part of the test set with slicing, e.g. model.predict(x_test[:10]) for the first 10 samples. What happens when you call this function on just one sample x_test[0] is that the input is interpreted as a batch of 80 samples, each with just one token. (To the best of my knowledge, this "interpretation" happens somewhere down the function tree in keras.engine.training._standardize_input_data.) The model "doesn't know" the length of the input samples beforehand and thus doesn't complain about these very short samples. So you get 80 predictions for all these samples, as if they were all sentences with just one token. To get a prediction for the first sample, you could use model.predict(x_test[:1]). You then predict a batch containing only one sample and get the corresponding batch of one prediction.
H: Feature Normalization/Scaling: Prediction Step I'm just doing a simple linear regression with gradient descent in the multivariate case. Feature normalization/scaling is a standard pre-processing step in this situation, so I take my original feature matrix $X$, organized with features in columns and samples in rows, and transform to $\tilde{X}$, where, on a column-by-column basis, $$\tilde{X}=\frac{X-\bar{X}}{s_{X}}.$$ Here, $\bar{X}$ is the mean of a column, and $s_{X}$ is the sample standard deviation of a column. Once I've done this, I prepend a column of $1$'s to allow for a constant offset in the $\theta$ vector. So far, so good. If I did not do feature normalization, then my prediction, once I found my $\theta$ vector, would simply be $x\cdot\theta$, where $x$ is the location at which I want to predict the outcome. But now, if I am doing feature normalization, what does the prediction look like? I suppose I could take my location $x$ and transform it according to the above equation on an element-by-element basis. But then what? The outcome of $\tilde{x}\cdot\theta$ would not be in my desired engineering units. Moreover, how do I know that the $\theta$ vector I've generated via gradient descent is correct for the un-transformed locations? I realize all of this is a moot point if I'm using the normal equation, since feature scaling is unnecessary in that case. However, as gradient descent typically works better for very large feature sets ($> 10k$ features), this would seem to be an important step. Thank you for your time! AI: I have learned what the correct answer is: you have to transform your prediction location in precisely the same way you do for the columns of matrix $X$: first, subtract the means element-wise from each component of the prediction location $x$. Second, divide element-wise by the standard deviation. Third, prepend a $1$ to allow for the bias. Finally, perform the dot product with your $\theta$ vector. Since the $\theta$ vector was calculated on transformed data, it's meant to operate on transformed data. It will contain units and scaling appropriate to produce an answer in engineering units. Reference: see Week 2, FAQ, Question 8 in Andrew Ng's Machine Learning course on Coursera. (Login is probably necessary.)
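As a minimal numpy sketch of this procedure (variable names are hypothetical; X is the raw training feature matrix without the column of ones, and theta was learned on the scaled, ones-prepended data):
import numpy as np

# statistics computed on the TRAINING features only
mu = X.mean(axis=0)             # column means
sigma = X.std(axis=0, ddof=1)   # sample standard deviations

def predict(x_new, theta, mu, sigma):
    # scale the new point with the training statistics
    x_scaled = (x_new - mu) / sigma
    # prepend the 1 that allows for the bias/intercept term
    x_aug = np.concatenate(([1.0], x_scaled))
    # dot product with the parameters learned on scaled data
    return x_aug.dot(theta)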
H: How to further Interpret Variable Importance? Forgive me if this is a duplicate question, I haven't found anything that answers my question specifically after searching for a while. I have a dataset which I'm using to predict mobile app user retention, using the RandomForestClassifier in the SciKit Learn package. I'm pleased with the accuracy I'm getting and I'm planning on including a number of other metrics including precision, recall, Matthew's Corr Coefficient etc. I'm pretty sure the model is good. The key thing that I'm interested in here are the features themselves. I want to know what is contributing to my user churn. I have extracted the feature importances and plotted a nice looking graph, but now I'm stuck. I'd ideally like to know how each variable influences the churned/not churned outcome. The problem with GINI feature importances is that I can see which ones are most influential, but for example with continuous variables I want to know at which value the RF found best to split on. I don't need to see this for every feature as I have 70+ only the most 'important' ones. I saw a very nice decision tree plot here, but cannot find any way of reproducing something similar using scikit learn. I'm open to other suggestions. Thanks in advance :) AI: The documentation offers a couple options. To plot the individual trees in your forest, one can access them like model.estimators_[n].tree_ and then plot them with export_graphviz as explained in the documentation, or you can follow this example that directly prints the structure in text format. However, I would say this is not the best idea, because a feature can occur in different trees and nodes with different split points. You probably get a better intuition about your features from partial dependence plots that try to isolate the effect of one variable on your response variable. As a bonus, here is a good article about more alternatives to gain insight into your model (not all applicable to random forests).
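As a minimal sketch of the first option (exporting a single tree of the forest so you can inspect its split thresholds), assuming a fitted RandomForestClassifier called model and a list feature_names, and that graphviz is installed to render the .dot file:
from sklearn.tree import export_graphviz

# export the first tree of the forest; max_depth limits the plot to the
# top splits, which usually carry the most informative thresholds
export_graphviz(model.estimators_[0],
                out_file='tree_0.dot',
                feature_names=feature_names,
                class_names=['churned', 'not churned'],
                filled=True,
                max_depth=3)
# render with: dot -Tpng tree_0.dot -o tree_0.png
Keep in mind that different trees (and different nodes within a tree) will use different split points for the same feature, which is why partial dependence plots are usually the more reliable summary.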
H: Does increasing the n_estimators parameter in decision trees always increase accuracy I'm using some ML algorithms (from the sklearn lib) and most of them have a parameter n_estimators which is (if I understood well) the number of trees used. Intuitively I would say that the more trees you use, the more accurate results you get. It turned out not to be exactly true: sometimes a very small number of trees gives much better results, and I can't figure out why. Edit Some precisions: this is a regression problem, with a dataset containing about 4000 samples and 500 features. I'm using GradientBoostingRegressor, ExtraTreeRegressor, AdaBoost, RandomForest. Edit 2 Additional info: I use cross-validation with KFold=10 to evaluate the accuracy of the algorithms. The best n_estimators value seems to be 50, which gives an R2 score of ~56/57% +- 8% for all the algorithms cited above. When I try to increase it, the score quickly decreases. I tried several values from 100 to 500, and the score keeps decreasing, even reaching 52%. AI: There are a lot of misconceptions about regression random forests. The same misconceptions exist for classification random forests, but they are less visible there. The one I will present here is the idea that regression random forests do not overfit. Well, this is not true. Studying the statistical properties of random forests shows that the bootstrapping procedure decreases the variance and maintains the bias. This property should be understood within the bias-variance tradeoff framework. It is clear that random forests approximate an expectation, which means the mean of the true structure remains the same while the variance is reduced. From this perspective, random forests do not overfit. There is a problem, however, and that problem is the sample itself which is used for training. The expectation is taken conditional on the data, and if the data is not representative of the problem, the results will remain poor even in the limit where the number of trees grows to infinity. In plain English, this means that the regression forest will learn the data too well, and if the data is not representative, the results are bad. In what ways might the data not be representative? In many ways; one example is that you do not have enough data points in all regions of interest. This problem is seen most often in the testing error, so it might not affect you so much, but it is possible to see it in CV as well. Another issue with regression trees is the balance between the number of significant variables and the number of nonsignificant variables in your data set. It is known that when you have few interesting input variables and a large number of noise variables, regression forests do not behave well. Boosting procedures do not have this behavior, and there is a good reason for that. Regression forests produce more uninteresting trees, which have the potential to move the learned structure away from the true underlying structure. For boosting this does not happen, since at each iteration only the regions of interest receive large weights, so the already-learned regions are affected less. The remedy would be to play with the number of variables selected at learning time. There is a drawback, however, even if you increase the number of variables taken into account at learning time. Consider two randomly grown trees. If you have 100 input variables and select 10 of them for learning, there is little chance that the two trees look similar. If instead you select 50 variables for learning, then your trees have a much better chance of looking similar.
This translates into the fact that if you increase the number of candidate variables considered at learning time, then the correlation between the trees increases. If the correlation increases, the trees will not be able to learn a more complex structure, precisely because they are so similar. If the number of variables selected is small, you have the potential to learn more thanks to diversity; but if the significant variables are few compared to the nonsignificant ones, this leads to learning noise, too much noise. Most of the time, this is what affects the CV performance.
H: Xgboost - How to use feature_importances_ with XGBRegressor()? How could we get feature_importances when we are performing regression with XGBRegressor()? There is something like XGBClassifier().feature_importances_? AI: Finally I have solved this issue by: model.booster().get_score(importance_type='weight')
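A small sketch of how this can be used in practice, assuming a fitted XGBRegressor called model (note that in newer xgboost versions the accessor is get_booster() rather than booster(); feature names default to f0, f1, ... unless you trained with named columns):
# map importance scores back to feature names and sort them
scores = model.get_booster().get_score(importance_type='weight')
for feature, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(feature, score)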
H: What kind of algorithms can be used as a stacker in stacked generalization? In stacked generalization, several algorithms (I use some random trees, boosted trees, etc.) are first trained, and their predictions are used as input for another algorithm. However, can I use any kind of algorithm, or is there a preference? P.S.: I often see people using linear models in this case. AI: There is no hard preference in stacked generalization. You can use any algorithm, whether it is a NN, an RF model, an XGB model, etc. The only thing you need to take care of is whether the algorithm you are applying is actually useful for your data. Also, a model that works well at level 1 may not be a very good model for level 2. For example, suppose at level 1 you stacked an XGB model with max_depth=12, eta=0.01, colsample_bytree=0.7; at the second level the same depth might simply be overkill, and you should then discard that configuration at the second level. So, in short, there is no preference among stacked models; the preference comes from the data on which you are going to train your model.
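As an illustration, newer versions of scikit-learn (0.22+) ship a ready-made stacking meta-estimator; here is a minimal sketch with arbitrary example models (the particular base learners and the logistic-regression stacker are just for demonstration, not a recommendation):
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# level-1 models: their out-of-fold predictions become the inputs of the stacker
base_learners = [
    ('rf', RandomForestClassifier(n_estimators=100, random_state=0)),
    ('gbm', GradientBoostingClassifier(random_state=0)),
]
# level-2 model (the "stacker"): here a simple linear model
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression())
stack.fit(X_train, y_train)
print(stack.score(X_test, y_test))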
H: How can I deal with data that is in the format "Image + single number"? Let's say I have a data set where every sample is an image of a landscape and a temperature associated with that landscape. How do I incorporate the temperature into my convolutional neural network for classifying whether the data is, e.g., a winter or summer landscape? Can I simply add the temperature as a feature after the feature-learning part of the network? I cannot find similar questions, but maybe this has a name that I am not aware of? AI: After the convolutional part you will need to add a normal, dense layer. Concatenate the temperature to this layer and add some more layers if necessary, to allow more interactions between the temperature and the image. This extra part wouldn't necessarily need to be very deep, because the convolutional part can hopefully learn to represent the image features in a way that already combines nicely with the temperature.
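A minimal Keras sketch of this idea (the image shape, layer sizes and hypothetical arrays images, temperatures, labels are placeholders for illustration, not recommendations):
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, Concatenate
from keras.models import Model

# image branch: ordinary convolutional feature extractor
img_in = Input(shape=(64, 64, 3))
x = Conv2D(16, (3, 3), activation='relu')(img_in)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(32, (3, 3), activation='relu')(x)
x = MaxPooling2D((2, 2))(x)
x = Flatten()(x)
x = Dense(32, activation='relu')(x)

# scalar branch: the temperature as a single extra input
temp_in = Input(shape=(1,))

# merge and add a couple of dense layers for image/temperature interactions
merged = Concatenate()([x, temp_in])
h = Dense(16, activation='relu')(merged)
out = Dense(1, activation='sigmoid')(h)  # winter vs summer

model = Model(inputs=[img_in, temp_in], outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit([images, temperatures], labels, ...)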
H: Cluster documents based on topic similarity I have set of documents where I have assigned topics per each document. E.g., Topics of document 1 -> 1.0 Science, 1.0 politics, 0.8 History, 0. 8 Information and Technology Now I want to cluster these documents and find what are the documents that share similar like topics. Can you please suggest/recommend suitable clustering technique for me to start with PS: I am interested in something like this -> https://www.quora.com/How-do-I-cluster-documents-using-topic-models However, the topics assigned for my documents varies AI: As @Emre suggested, if you already have the distribution of topics in each document, you can represent each document as a vector $x_d \in \mathbb{R}^N$, where $N$ is the number of the unique topics in your collection. For documents, not exhibiting specific topics just fill the specific cells in each feature vector with zeros. Then, you can use some clustering algorithm such as nearest neigbors, using those feature vectors. Example usage code in python below: import pandas as pd import numpy as np from sklearn.metrics import pairwise_distances # Initialize some documents doc1 = {'Science':0.7, 'History':0.05, 'Politics':0.15, 'Sports':0.1} doc2 = {'News':0.3, 'Art':0.5, 'Politics':0.1, 'Sports':0.1} doc3 = {'Science':0.8, 'History':0.1, 'Politics':0.05, 'News':0.1} doc4 = {'Science':0.2, 'Weather':0.2, 'Art':0.6, 'Sports':0.1} collection = [doc1, doc2, doc3, doc4] df = pd.DataFrame(collection) # Fill missing values with zeros df.fillna(0, inplace=True) # Get Feature Vectors feature_matrix = df.as_matrix() # Get cosine similarity (i.e. 1 - cosine_distance) between pairs sims = 1-pairwise_distances(feature_matrix, metric='cosine') # Get the ranking of the documents given document 0 # from most similar to least similar. Don't take into account # the first document, because it will be the same that the query was about ranking_1 = np.argsort(sims[0,:])[::-1][1:2] print ranking_1 ranking_2 = np.argsort(sims[1,:])[::-1][1:2] print ranking_2 This takes 4 documents with $N=7$ unique topics, fills missing values with zeros and creates a similarity matrix between all documents. Then querying for documents 1(Science=0.7) and 2(Art:0.5) the most similar other document in the collection, we surely get documents 3(Science:0.8) and 4(Art:0.6) correspondingly. You can try more sophisticated approaches regarding clustering and other distance metrics.
H: How to measure the correlation of different algorithms In stacked generalization, several algorithms are trained on the training set (i.e. at layer 1) and their predictions are then stacked using a layer 2 model. In many documentations, it is said that it is better that the layer 1 algorithms should be of low correlation. How can one compute this correlation between algorithms ? AI: For regression tasks correlation will be simply the correlation between the predicted values, for binary classification it will be correlation between predicted probabilities. In multiclass classification you can find correlation between predicted factor variables using the hetcor package in R
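A quick numpy/pandas sketch of what this looks like in practice (pred_a, pred_b, pred_c are hypothetical arrays of out-of-fold predictions, or predicted probabilities, from three level-1 models):
import numpy as np
import pandas as pd

preds = pd.DataFrame({
    'model_a': pred_a,
    'model_b': pred_b,
    'model_c': pred_c,
})
# Pearson correlation matrix between the models' predictions;
# values close to 1 mean the models add little diversity to the stack
print(preds.corr())

# equivalent with plain numpy for two models:
print(np.corrcoef(pred_a, pred_b)[0, 1])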
H: Using simulations to train ML algorithms Possibly similar question: Is it ok to collect data using one algorithm to train another? I have a model that accurately describes an underlying complex physical system. The model is basically a set of ODEs based on the physics of the system, validated against measurements. When a system perturbation occurs, I can run thousands of simulations to assess whether the new system status is secure or not. This is basically a classification procedure (yes/no). This procedure is very time consuming and has to be performed in real-time (thus requiring huge computational resources). There are thousands of possible perturbations and an infinite number of initial points. The same perturbation from one initial point can lead to a stable system, while from another it can lead to an unstable one. My question is: Is it possible to use data generated by a huge number of simulations to train a classification algorithm to perform this detection online? What are the considerations when using simulated data to train an algorithm that will then be used online with real data (except for the obvious one, that the simulation needs to be very accurate)? Any references to such examples? I apologise if this is a basic question. I am new to data science techniques, with a more physics/engineering background. AI: Is it possible to use data generated by a huge number of simulations to train a classification algorithm to perform this detection online? Yes, it is always possible to train a classification algorithm when you have labeled i.i.d. training data, and there is no hard reason why you cannot use a simulator to generate that. Whether or not such a trained model is fit for purpose is hard to say in advance of trying it. Using a simulation as your data source has some benefits: Generating more training and test data is straightforward. You will automatically have high quality ground truth labels (assuming your goal is to match the simulation). If you find a problem with certain parameter values, you can target them when collecting more training data. Just as with data taken from real world measurements, you will need to test your results to get a sense of how accurate your model is. What are the considerations when using simulated data to train an algorithm that will then be used online with real data (except for the obvious one, that the simulation needs to be very accurate)? Your model is a function approximator. At best it will match the output of the simulator. In practice it will usually fall short of it by some amount. You will have to measure this difference by testing the model, and decide whether the cost of occasional false negatives or false positives is outweighed by the performance improvement. Statistical machine learning models perform best when interpolating between data points, and often perform badly at extrapolating. So when you say that inputs can vary infinitely, hopefully that is within some constrained parameter space of real values, as opposed to getting inputs that are completely different from anything you have considered before - the simulation would cope with such inputs, but a statistics-based function approximator most likely would not. If your simulation has areas where the class switches rapidly with small changes in parameter values, then you will need to sample densely in those areas.
If your simulation produces near-chaotic behaviour in any region (the class value varies a lot and is highly sensitive to small changes in the value of one or more parameters), then this is something that is very hard to approximate. If you have some natural scale factor, dimensionless number or other easy-to-compute summary of behaviour in your physical system, it may be worth using it as an engineered feature instead of getting the machine learning code to figure that out statistically. For instance, in fluid dynamics, the Reynolds number can characterise flow, and could be a useful feature for a neural network predicting vortex shedding. Any references to such examples? The examples I have found are in renderings of fluid simulations and other complex physical systems where a full simulation can be approximated; they all use neural networks to achieve a speed improvement over the full simulation. Accelerating Eulerian Fluid Simulation With Convolutional Networks - I saw a video about this on YouTube's Two Minute Papers channel. Using neural networks in weather prediction ensembles to improve performance Fast Neural Network Emulation of Dynamical Systems for Computer Animation However, I don't think any of these are classifiers.
H: Techniques to clean the topics I have a set of topics as follows. "web based", "web-based" -> with surplus symbols "technology", "technologies" -> with singular and plural "learned", "learnt", "learning" -> suffix stripping Can you please recommend an accurate tool to perform the aforementioned tasks? AI: For text processing, try using Python and the NLTK package. For removing surplus symbols, you can use regular expressions: import the built-in 're' module and use re.sub to substitute symbols like '-' with a space or an empty string. For suffix stripping, you can either use regular expressions again or use the built-in word stemming/lemmatization functionality in the NLTK package. This tutorial should help.
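A small sketch combining both steps with NLTK (lemmatization works per token, so multi-word topics are split first; normalising '-' to a space is just one possible convention, not the only sensible choice):
import re
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

def clean_topic(topic):
    # replace surplus symbols such as '-' with a space and lower-case
    topic = re.sub(r'[-_/]', ' ', topic.lower())
    # collapse repeated whitespace
    topic = re.sub(r'\s+', ' ', topic).strip()
    # lemmatize token by token ("technologies" -> "technology")
    return ' '.join(lemmatizer.lemmatize(tok) for tok in topic.split())

print(clean_topic('Web-based technologies'))   # -> 'web based technology'
print(clean_topic('Information systems'))      # -> 'information system'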
H: Encoding Time Values I am using Python/Scikit to do data encoding before I go ahead and train my Neural Network. I have a few columns that look like this 07:05:00 08:41:00 17:25:00 12:58:00 08:56:00 11:59:00 17:25:00 15:24:00 Any suggestions on how to encode this? Or is leaving it like this fine? AI: I have decided to convert the strings into seconds. Since these are all time-of-day values, I will convert them to seconds since midnight, so they become continuous numeric values. https://stackoverflow.com/questions/10663720/converting-a-time-string-to-seconds-in-python
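A minimal sketch of that conversion (plus an optional rescaling step, since neural networks usually prefer inputs in a small range; the [0, 1] scaling over a 24-hour day is one possible choice):
def time_to_seconds(t):
    # '07:05:00' -> 7*3600 + 5*60 + 0 = 25500
    h, m, s = (int(part) for part in t.split(':'))
    return h * 3600 + m * 60 + s

times = ['07:05:00', '08:41:00', '17:25:00']
seconds = [time_to_seconds(t) for t in times]

# optionally rescale to [0, 1] before feeding the network
scaled = [sec / 86400.0 for sec in seconds]
print(seconds, scaled)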
H: Word2Vec vs. Sentence2Vec vs. Doc2Vec I recently came across the terms Word2Vec, Sentence2Vec and Doc2Vec and kind of confused as I am new to vector semantics. Can someone please elaborate the differences in these methods in simple words. What are the most suitable tasks for each method? AI: Well the names are pretty straight-forward and should give you a clear idea of vector representations. The Word2Vec Algorithm builds distributed semantic representation of words. There are two main approaches to training, Continuous Bag of Words and The skip gram model. One involves predicting the context words using a centre word, while the other involves predicting the word using the context words. You can read about it in much detail in Mikolov's paper. The same idea can be extended to sentences and complete documents where instead of learning feature representations for words, you learn it for sentences or documents. However, to get a general idea of a SentenceToVec, think of it as a mathematical average of the word vector representations of all the words in the sentence. You can get a very good approximation just by averaging and without training any SentenceToVec but of course, it has its limitations. Doc2Vec extends the idea of SentenceToVec or rather Word2Vec because sentences can also be considered as documents. The idea of training remains similar. You can read Mikolov's Doc2Vec paper for more details. Coming to the applications, it would depend on the task. A Word2Vec effectively captures semantic relations between words hence can be used to calculate word similarities or fed as features to various NLP tasks such as sentiment analysis etc. However words can only capture so much, there are times when you need relationships between sentences and documents and not just words. For example, if you are trying to figure out, whether two stack overflow questions are duplicates of each other. A simple google search will lead you to a number of applications of these algorithms.
H: How do I load FastText pretrained model with Gensim? I tried to load fastText pretrained model from here Fasttext model. I am using wiki.simple.en from gensim.models.keyedvectors import KeyedVectors word_vectors = KeyedVectors.load_word2vec_format('wiki.simple.bin', binary=True) But, it shows the following errors Traceback (most recent call last): File "nltk_check.py", line 28, in <module> word_vectors = KeyedVectors.load_word2vec_format('wiki.simple.bin', binary=True) File "P:\major_project\venv\lib\sitepackages\gensim\models\keyedvectors.py",line 206, in load_word2vec_format header = utils.to_unicode(fin.readline(), encoding=encoding) File "P:\major_project\venv\lib\site-packages\gensim\utils.py", line 235, in any2unicode return unicode(text, encoding, errors=errors) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xba in position 0: invalid start byte Question 1 How do I load fasttext model with Gensim? Question 2 Also, after loading the model, I want to find the similarity between two words model.find_similarity('teacher', 'teaches') # Something like this Output : 0.99 How do I do this? AI: Here's the link for the methods available for fasttext implementation in gensim fasttext.py from gensim.models.wrappers import FastText model = FastText.load_fasttext_format('wiki.simple') print(model.most_similar('teacher')) # Output = [('headteacher', 0.8075869083404541), ('schoolteacher', 0.7955552339553833), ('teachers', 0.733420729637146), ('teaches', 0.6839243173599243), ('meacher', 0.6825737357139587), ('teach', 0.6285147070884705), ('taught', 0.6244685649871826), ('teaching', 0.6199781894683838), ('schoolmaster', 0.6037642955780029), ('lessons', 0.5812176465988159)] print(model.similarity('teacher', 'teaches')) # Output = 0.683924396754
H: Schema matching using machine learning I'm facing the following problem of integrating data from another company (data base) to an internal one. It's about personal core data, i.e. name address etc. I would like to come up with a mapping of these keys in an automated way. I've read about HMM to use for this. However, I'm still gathering some information of some feasible and standard ways to do it. I'm looking therefore for some reference which describes a possible solution to this. AI: Michael Stonebraker started a company that claims to do just that, schema matching using machine learning: https://www.tamr.com/ Their site no longer has many details on their approach but this article talks about some of the techniques they used, like : Perform fuzzy string comparisons over attribute names using trigram cosine similarity. Treats a column of data as a document and tokenize its values with a standard full text parser. Then, measure TF-IDF cosine similarity between columns. This method is suitable for text fields. Use minimum description length (MDL) to compare values of two attributes. Compute the ratio of the size of the intersection of two columns' data to the size of their union. This method is well suited for categorical fields with small number of values. Compute Welch's t-test for a pair of columns that contain numeric values and get the probability the columns were drawn from the same distribution.
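To make the TF-IDF idea above concrete, here is a small hypothetical sketch: it treats each column as one "document" made of its values and compares columns with TF-IDF cosine similarity (the column names and values are invented for illustration):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# pretend these are text columns from two different schemas
internal_city = ['Berlin', 'Munich', 'Hamburg', 'Berlin']
external_town = ['munich', 'berlin', 'cologne', 'hamburg']
external_zip = ['80331', '10115', '50667', '20095']

# one "document" per column: concatenate the column's values
docs = [' '.join(internal_city), ' '.join(external_town), ' '.join(external_zip)]

vec = TfidfVectorizer(lowercase=True)
tfidf = vec.fit_transform(docs)

# similarity of the internal column against the two candidate external columns
sims = cosine_similarity(tfidf[0], tfidf[1:])
print(sims)  # the 'town' column should score much higher than the 'zip' column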
H: Clustering algorithms I have sparse vectors and found that cosine similarity is a very efficient way to measure their similarity. Now I want to cluster these vectors based on similarity. Hence, can someone please suggest/recommend clustering algorithms that make use of cosine similarity? P.S.: I do not have a predefined number of clusters beforehand and want the clustering algorithm itself to decide it. AI: You can see your affinity matrix as the weighted adjacency matrix of a graph and apply modularity-based community detection algorithms to it. Just note that modularity-based algorithms have a resolution limit, i.e. finding very small communities is difficult in the presence of large ones.
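A minimal sketch of that idea with networkx 2.x (it assumes you already have a dense cosine-similarity matrix sim of shape (n, n); greedy modularity maximisation chooses the number of communities by itself):
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# build a weighted graph from the similarity (affinity) matrix;
# the edge weight between i and j is their cosine similarity
G = nx.from_numpy_array(sim)

# modularity-based community detection; no number of clusters required
communities = greedy_modularity_communities(G, weight='weight')
for label, nodes in enumerate(communities):
    print(label, sorted(nodes))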
H: Is it better to have binary features rather than categorical ones I have a dataset containing several features which have categorical values (e.g. DBF4, JUL23, ...). In a classification problem using decision trees, is it better to convert these values into new binary features, so that DBF4 becomes a feature whose value is either 0 or 1, or is it better to keep them as they are? Note that there are a lot of distinct values (more than the number of rows, actually). In either case, is there a reason why one is better? AI: In general, it’s better NOT to binarize them if the decision tree algorithm that you are using supports categorical features, as variables with fewer levels are less likely to be used in splits. Note however that not all implementations in all software are able to use categorical features without one-hot encoding – for example, most R implementations of decision trees and tree ensembles support categorical features natively, whereas scikit-learn’s and spark’s do not. Here’s a blog post with a comparison of categorical as-is vs. one-hot encoded for random forests: https://roamanalytics.com/2016/10/28/are-categorical-variables-getting-lost-in-your-random-forests/
H: Advantages of pandas dataframes over a regular relational database In Data Science, many seem to be using pandas dataframes as the datastore. What are the features of pandas that make it a superior datastore compared to regular relational databases like MySQL, which are used to store data in many other fields of programming? While pandas does provide some useful functions for data exploration, you can't use SQL and you lose features like query optimization or access restriction. AI: I think the premise of your question has a problem. Pandas is not a "datastore" in the way an RDBMS is. Pandas is a Python library for manipulating data that fits in memory. Disadvantages: Pandas does not persist data by itself; it does have a (relatively slow) method called to_sql that will persist your pandas data frame to an RDBMS table. Pandas will only handle data that fits in memory, and memory is easy to fill. You can either use dask to work around that, or you can work on the data in the RDBMS (which uses all sorts of tricks like temp space) to operate on data that exceeds RAM.
H: Gradients for bias terms in backpropagation I was trying to implement a neural network from scratch to understand the maths behind it. My problem is specifically with backpropagation for the bias terms (when we take the derivative with respect to the bias); I derived all the equations used in backpropagation. Now every equation matches the code for the neural network except for the derivative with respect to the biases. z1=x.dot(theta1)+b1 h1=1/(1+np.exp(-z1)) z2=h1.dot(theta2)+b2 h2=1/(1+np.exp(-z2)) dh2=h2-y #back prop dz2=dh2*(1-dh2) H1=np.transpose(h1) dw2=np.dot(H1,dz2) db2=np.sum(dz2,axis=0,keepdims=True) I looked up the code online, and I want to know why we sum the matrix over the batch, so that db2=np.sum(dz2,axis=0,keepdims=True) is subtracted from the original bias, rather than subtracting the matrix as a whole. Can anyone give me some intuition behind it? If I take the partial derivative of the loss with respect to the bias, it gives me only the upstream gradient, which is dz2, because in z2=h1.dot(theta2)+b2 the h1 and theta terms contribute 0 and the derivative with respect to b2 is 1. So only the upstream term is left. b2+=-alpha*db2 AI: The bias term is very simple, which is why you often don't see it calculated. In fact db2 = dz2 So your update rules for bias on a single item are: b2 += -alpha * dz2 and b1 += -alpha * dz1 In terms of the maths, if your loss is $J$, and you know $\frac{\partial J}{\partial z_i}$ for a given neuron $i$ which has bias term $b_i$ . . . $$\frac{\partial J}{\partial b_i} = \frac{\partial J}{\partial z_i} \frac{\partial z_i}{\partial b_i}$$ and $$\frac{\partial z_i}{\partial b_i} = 1$$ because $z_i = (\text{something unaffected by } b_i) + b_i$ It looks like the code you copied uses the form db2=np.sum(dz2,axis=0,keepdims=True) because the network is designed to process examples in (mini-)batches, and you therefore have gradients calculated for more than one example at a time. The sum is squashing the results down to a single update. This would be easier to confirm if you also showed the update code for the weights.
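For reference, here is a minimal sketch of what the full batched gradient-descent update typically looks like, using the same variable names as the question. It assumes dz2 is the gradient of the loss with respect to z2 (and that the sigmoid derivative is taken correctly as h1*(1-h1), rather than the dh1*(1-dh1) in the question's code):
# gradients for the output layer
dw2 = np.dot(h1.T, dz2)                   # shape: (hidden, out)
db2 = np.sum(dz2, axis=0, keepdims=True)  # sum over the batch dimension

# backpropagate to the hidden layer
dz1 = np.dot(dz2, theta2.T) * h1 * (1 - h1)
dw1 = np.dot(x.T, dz1)
db1 = np.sum(dz1, axis=0, keepdims=True)

# gradient descent step: weights and biases are updated the same way
theta2 -= alpha * dw2
b2 -= alpha * db2
theta1 -= alpha * dw1
b1 -= alpha * db1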
H: MOOCs for Python in Data Science Thanks also to SE, I've recently changed job and now I'm working in Data Science, mainly on Analytics for the IoT (Internet of Things). Analytics are applications on cloud platforms which collect real-time, streaming sensor data from industrial machines and allow to estimate their actual performance, predict the probability of a failure and the time before it happens, detect anomalies, etc.. Until now, I've been using R to build Statistical Learning models on datasets which would fit in the memory of my workstation, so I'm not a novice for what it concerns Statistical modeling and Data Science. However, I'm a novice with Python, and I need to learn it, expecially the part of the Python ecosystem related to Data Science, because that's what my team uses. I don't need to develop the cloud platform: I just need to develop the "core" Analytics. I got the book by Jake Van der Plas: https://www.amazon.com/Python-Data-Science-Handbook-Essential/dp/1491912057 but I would like to also follow a MOOC on using Python for Data Science. Can you suggest me one? DISCLAIMER I already asked this question on CV and it wasn't considered very appropriate there. Since here my other question on MOOCs MOOC or book on Deep Learning in Python for someone with a basic knowledge of neural networks was well-received on this site, I thought of asking again here, after deleting the one on CV (no cross-posting). Hope this is fine. AI: Check out Applied Data Science with Python Specialization from Coursera. It is a series of 5 courses from the University of Michigan.
H: How to check if sentiment analysis is required? I have a CSV file having a bunch of sentences related to science. Before I do sentiment analysis on the sentences I want to programatically decide whether sentiment analysis is required on the sentence or not. Basically some of the sentences are opinions of a particular topic in which case doing sentiment analysis makes sense. However some of the sentences are just definitions and sentiment analysis is not required on such sentences. So is there any way to detect the presence of a sentiment in a sentence? (Note that the length of a sentence varies from 9 to 30 words.) AI: TextBlob, a Python package, does this (and much more). It uses a pre-trained model, thus requires no training. Given a sentence, TextBlob will return the polarity from -1 to 1. -1 is negative, 1 is positive, 0 is neutral. It will also return subjectivity from 0 (very objective) to 1 (very subjective).
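A minimal sketch of that filtering idea (the subjectivity threshold of 0.3 is arbitrary and would need tuning on your own sentences):
from textblob import TextBlob

sentences = [
    "Photosynthesis is the process by which plants convert light into energy.",
    "I think the new physics curriculum is a fantastic improvement.",
]

for sent in sentences:
    blob = TextBlob(sent)
    subjectivity = blob.sentiment.subjectivity  # 0 = objective, 1 = subjective
    if subjectivity < 0.3:
        print("Skip (definition/objective):", sent)
    else:
        print("Run sentiment, polarity = %.2f :" % blob.sentiment.polarity, sent)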
H: Issues with NLTK lemmatizer (WordNet) I want to automatically lemmatize a set of plural keywords such as 'Web based technologies', 'Information systems', etc. I want to transform them to 'Web based technology', 'Information system' respectively. I tried NLTK as follows from nltk.stem import WordNetLemmatizer lemmatizer = WordNetLemmatizer() print(lemmatizer.lemmatize("Web based technologies")) Even though this performs fine for single words such as 'cats' to 'cat', for the keywords I have mentioned I get back the same plural form. Any idea how to solve this? Or are there any other tools/APIs that I can make use of? P.S. Given a keyword, I only want to get its singular form. AI: As far as I know, the NLTK lemmatizer works on single words (tokens), not on multi-word phrases. Your example is a trigram, so an easier way to work through this is to split the phrase and lemmatize each token separately: word="web based technologies" splits=word.split() word=" ".join(lemmatizer.lemmatize(w) for w in splits)
H: neural network for binary classification of xor gate i have written this neural network for XOR function.the output is not correct.it is not classifying the test inputs correctly.can anyone please let me the reason why. import numpy as np import pandas as pd x=np.array([[0,0],[0,1],[1,0],[1,1]]) y=np.array([[0],[1],[1],[0]]) np.random.seed(0) theta1=np.random.rand(2,8) theta2=np.random.rand(8,1) np.random.seed(0) b1=np.random.rand(4,8) b2=np.random.rand(4,1) alpha=0.01 lamda=0.01 for i in range(1,2000): z1=x.dot(theta1)+b1 h1=1/(1+np.exp(-z1)) z2=h1.dot(theta2)+b2 h2=1/(1+np.exp(-z2)) dh2=h2-y #back prop dz2=dh2*(1-dh2) H1=np.transpose(h1) dw2=np.dot(H1,dz2) db2=np.sum(dz2) W2=np.transpose(theta2) dh1=np.dot(dz2,W2) dz1=dh1*(1-dh1) X=np.transpose(x) dw1=np.dot(X,dz1) db1=np.sum(dz1) dw2=dw2-lamda*theta2 dw1=dw1-lamda*theta1 theta1=theta1-alpha*dw1 theta2=theta2-alpha*dw2 b1+=-alpha*db1 b2+=-alpha*db2 #prediction #test inputs input1=np.array([[0,0],[1,1],[0,1],[1,0]]) z1=np.dot(input1,theta1) h1=1/(1+np.exp(-z1)) z2=np.dot(h1,theta2) h2=1/(1+np.exp(-z2)) expected output=[0],[0],[1],[1] actual output=[[ 0.95678049] [ 0.99437206] [ 0.98686979] [ 0.98628204]] all are ones here. AI: There are a few mistakes in the code, so I am going to present a revised version here with comments. Setup import numpy as np import pandas as pd x=np.array([[0,0],[0,1],[1,0],[1,1]]) y=np.array([[0],[1],[1],[0]]) np.random.seed(0) # Optional, but a good idea to have +ve and -ve weights theta1=np.random.rand(2,8)-0.5 theta2=np.random.rand(8,1)-0.5 # Necessary - the bias terms should have same number of dimensions # as the layer. For some reason you had one bias vector per example. # (You could still use np.random.rand(8) and np.random.rand(1)) b1=np.zeros(8) b2=np.zeros(1) alpha=0.01 # Regularisation not necessary for XOR, because you have a complete training set. # You could have lamda=0.0, but I have left a value here just to show it works. lamda=0.001 Training - Forward propagation # More iterations than you might think! This is because we have # so little training data, we need to repeat it a lot. for i in range(1,40000): z1=x.dot(theta1)+b1 h1=1/(1+np.exp(-z1)) z2=h1.dot(theta2)+b2 h2=1/(1+np.exp(-z2)) Training - Back propagation # This dz term assumes binary cross-entropy loss dz2 = h2-y # You could also have stuck with squared error loss, the extra h2 terms # are the derivative of the sigmoid transfer function. # It converges slower though: # dz2 = (h2-y) * h2 * (1-h2) # This is just the same as you had before, but with less temp variables dw2 = np.dot(h1.T, dz2) db2 = np.sum(dz2, axis=0) # The derivative of sigmoid is h1 * (1-h1), NOT dh1*(1-dh1) dz1 = np.dot(dz2, theta2.T) * h1 * (1-h1) dw1 = np.dot(x.T, dz1) db1 = np.sum(dz1, axis=0) # The L2 regularisation terms ADD to the gradients of the weights dw2 += lamda * theta2 dw1 += lamda * theta1 theta1 += -alpha * dw1 theta2 += -alpha * dw2 b1 += -alpha * db1 b2 += -alpha * db2 Prediction This is where you can kick yourself, you forgot to use the biases! input1=np.array([[0,0],[1,1],[0,1],[1,0]]) z1=np.dot(input1,theta1)+b1 h1=1/(1+np.exp(-z1)) z2=np.dot(h1,theta2)+b2 h2=1/(1+np.exp(-z2)) print(h2) When I run the above code I get a correct-looking output [[ 0.01031446] [ 0.0201576 ] [ 0.9824826 ] [ 0.98584079]] In summary your three big errors were the wrong dimension for the bias vectors in setup, incorrect derivatives for sigmoid function (using correct form, but with wrong variable) and forgetting to use bias at all when predicting at the end. 
Other details are still worth noting, but would not have prevented you getting something working.
H: How do you make an NN for image classification invariant to translation **and** rotation? CNN are (I think) invariant to small translations of the input image, i.e., they will classify to the same class an image $X$ and an image $X'$ such that all pixels have been translated along a vector $\mathbf{v}$ by some "small" distance $d$. They're not, however, invariant to rotation. I.e., if $X'$ is obtained by $X$ applying a rotation of arbitrary degree $\theta$ around an axis $z$ orthogonal to the plane of the image, they won't necessarily classify it to the same class as $X$. Practitioners usually solve this by data augmentation, but this is unnecessarily wasteful and anyway not an option in my case. Is there a way to make the NN architecture invariant to both isometries? I don't have to use a CNN, but since CNN already enjoy some sort of translation invariance, I figured it would be easier to modify them to get rotational invariance , than to use a completely different architecture. AI: It is not possible to have general rotationally-invariant neural network architecture for a CNN*. In fact CNNs are not strongly translation invariant, except due to pooling - instead they combine a little bit of translation invariance with translation equivariance. There is no equivalent to pooling layers that would reduce the effect of rotation this way (although for very small rotations the translation invariance will still help). You can however construct features and create a pipeline that reduces incoming rotation differences in your inputs. For example, this answer on Signal Processing Stack Exchange suggests calculating dominant gradient of an image, then rotating the image so that this is always oriented the same way before further processing. If your image has strong straight edges, you could do similar by detecting those (e.g. by using a Hough transform) and rotating the input so that these are always oriented the same way. These approaches work only within certain image tasks, but can save on processing time and potentially increase accuracy if they are possible. Effectively they are a form of input normalisation. A more radical idea might be to perform a map to polar co-ordinates in your image before processing. This would effectively convert rotational (and radial) variance into translation variance in your image. A CNN processing this mapped image would effectively convert its translation invariance into rotational invariance on the original unmapped image. But the cost would be losing all translation invariance, so only worth considering if your inputs have high variance in rotation but low variance in translation. * Never say never. This is caveated by special cases, for example the paper Group Equivariant Convolutional Networks explains an architecture that adds support for multiples of 90 degree rotation (taking advantage of grid structure in computer images and the weight matrices that construct neural network layers). However, if you want to support free rotation of values other than 90, 180, 270 degrees, then as far as I know, there is no way to do that architecturally within the network.
H: What is the advantage of keeping batch size a power of 2? While training models in machine learning, why is it sometimes advantageous to keep the batch size a power of 2? I thought it would be best to use a size that is the largest that fits in your GPU memory / RAM. This answer claims that for some packages, a power of 2 is better as a batch size. Can someone provide a detailed explanation / link to a detailed explanation for this? Is this true for all optimisation algorithms (gradient descent, backpropagation, etc.) or only some of them? AI: This is a problem of alignment of the virtual processors (VP) onto the physical processors (PP) of the GPU. Since the number of PP is often a power of 2, using a number of VP different from a power of 2 leads to poor performance. You can see the mapping of the VP onto the PP as a pile of slices, each of size equal to the number of PP. Say you've got 16 PP. You can map 16 VP onto them: 1 VP is mapped onto 1 PP. You can map 32 VP onto them: 2 slices of 16 VP, so 1 PP will be responsible for 2 VP. Etc. During execution, each PP will execute the job of the 1st VP it is responsible for, then the job of the 2nd VP, etc. If you use 17 VP, each PP will execute the job of its 1st VP, then 1 PP will execute the job of the 17th VP AND the other ones will do nothing (explained in more detail below). This is due to the SIMD paradigm (called vector processing in the 70s) used by GPUs. This is often called Data Parallelism: all the PP do the same thing at the same time but on different data. See here. More precisely, in the example with 17 VP, once the job of the 1st slice is done (by all the PPs doing the job of their 1st VP), all the PP will do the same job (2nd VP), but only one of them has any data to work on. This has nothing to do with learning; it is purely a hardware/programming consideration.
H: train_test_split() error: Found input variables with inconsistent numbers of samples Fairly new to Python but building out my first RF model based on some classification data. I've converted all of the labels into int64 numerical data and loaded into X and Y as a numpy array, but I am hitting an error when I am trying to train the models. Here is what my arrays look like: >>> X = np.array([[df.tran_cityname, df.tran_signupos, df.tran_signupchannel, df.tran_vmake, df.tran_vmodel, df.tran_vyear]]) >>> Y = np.array(df['completed_trip_status'].values.tolist()) >>> X array([[[ 1, 1, 2, 3, 1, 1, 1, 1, 1, 3, 1, 3, 1, 1, 1, 1, 2, 1, 3, 1, 3, 3, 2, 3, 3, 1, 1, 1, 1], [ 0, 5, 5, 1, 1, 1, 2, 2, 0, 2, 2, 3, 1, 2, 5, 5, 2, 1, 2, 2, 2, 2, 2, 4, 3, 5, 1, 0, 1], [ 2, 2, 1, 3, 3, 3, 2, 3, 3, 2, 3, 2, 3, 2, 2, 3, 2, 2, 1, 1, 2, 1, 2, 2, 1, 2, 3, 1, 1], [ 0, 0, 0, 42, 17, 8, 42, 0, 0, 0, 22, 0, 22, 0, 0, 42, 0, 0, 0, 0, 11, 0, 0, 0, 0, 0, 28, 17, 18], [ 0, 0, 0, 70, 291, 88, 234, 0, 0, 0, 222, 0, 222, 0, 0, 234, 0, 0, 0, 0, 89, 0, 0, 0, 0, 0, 40, 291, 131], [ 0, 0, 0, 2016, 2016, 2006, 2014, 0, 0, 0, 2015, 0, 2015, 0, 0, 2015, 0, 0, 0, 0, 2015, 0, 0, 0, 0, 0, 2016, 2016, 2010]]]) >>> Y array(['NO', 'NO', 'NO', 'YES', 'NO', 'NO', 'YES', 'NO', 'NO', 'NO', 'NO', 'NO', 'YES', 'NO', 'NO', 'YES', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO'], dtype='|S3') >>> X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Python/2.7/site-packages/sklearn/cross_validation.py", line 2039, in train_test_split arrays = indexable(*arrays) File "/Library/Python/2.7/site-packages/sklearn/utils/validation.py", line 206, in indexable check_consistent_length(*result) File "/Library/Python/2.7/site-packages/sklearn/utils/validation.py", line 181, in check_consistent_length " samples: %r" % [int(l) for l in lengths]) ValueError: Found input variables with inconsistent numbers of samples: [1, 29] AI: You are running into that error because your X and Y don't have the same length (which is what train_test_split requires), i.e., X.shape[0] != Y.shape[0]. Given your current code: >>> X.shape (1, 6, 29) >>> Y.shape (29,) To fix this error: Remove the extra list from inside of np.array() when defining X or remove the extra dimension afterwards with the following command: X = X.reshape(X.shape[1:]). Now, the shape of X will be (6, 29). Transpose X by running X = X.transpose() to get equal number of samples in X and Y. Now, the shape of X will be (29, 6) and the shape of Y will be (29,).
H: Cluster documents and identify the prominent document in the cluster? I have a set of documents as given in the example below. doc1 = {'Science': 0.7, 'History': 0.05, 'Politics': 0.15, 'Sports': 0.1} doc2 = {'Science': 0.3, 'History': 0.5, 'Politics': 0.1, 'Sports': 0.1} I want to cluster the documents and identify the most prominent document within the cluster. e.g, cluster 1 includes = {doc1, doc4, doc5. doc8} and I want to get the most prominent document that represents this cluster (e.g., doc8). (or to identify the main theme of the cluster) Please let me know a suitable approach to achieve this :) AI: A very simple approach would be to find some kind of centroid for each cluster (e.g. averaging the distributions of the documents belonging to each cluster respectively) and then calculating the cosine distance of each document within the cluster from the corresponding centroid. The document with the shorter distance will be the closest to the centroid, hence the most "representative". Continuing from the previous example: import pandas as pd import numpy as np from sklearn.metrics import pairwise_distances from scipy.spatial.distance import cosine from sklearn.cluster import DBSCAN from sklearn.preprocessing import StandardScaler # Initialize some documents doc1 = {'Science':0.8, 'History':0.05, 'Politics':0.15, 'Sports':0.1} doc2 = {'News':0.2, 'Art':0.8, 'Politics':0.1, 'Sports':0.1} doc3 = {'Science':0.8, 'History':0.1, 'Politics':0.05, 'News':0.1} doc4 = {'Science':0.1, 'Weather':0.2, 'Art':0.7, 'Sports':0.1} collection = [doc1, doc2, doc3, doc4] df = pd.DataFrame(collection) # Fill missing values with zeros df.fillna(0, inplace=True) # Get Feature Vectors feature_matrix = df.as_matrix() # Fit DBSCAN db = DBSCAN(min_samples=1, metric='precomputed').fit(pairwise_distances(feature_matrix, metric='cosine')) labels = db.labels_ n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0) print('Estimated number of clusters: %d' % n_clusters_) # Find the representatives representatives = {} for label in set(labels): # Find indices of documents belonging to the same cluster ind = np.argwhere(labels==label).reshape(-1,) # Select these specific documetns cluster_samples = feature_matrix[ind,:] # Calculate their centroid as an average centroid = np.average(cluster_samples, axis=0) # Find the distance of each document from the centroid distances = [cosine(sample_doc, centroid) for sample_doc in cluster_samples] # Keep the document closest to the centroid as the representative representatives[label] = cluster_samples[np.argsort(distances),:][0] for label, doc in representatives.iteritems(): print("Label : %d -- Representative : %s" % (label, str(doc)))
H: How to determine if my GBM model is overfitting? Below is a simplified example of a h2o gradient boosting machine model using R's iris dataset. The model is trained to predict sepal length. The example yields an r2 value of 0.93, which seems unrealistic. How can I assess if these are indeed realistic results or simply model overfitting? library(datasets) library(h2o) # Get the iris dataset df <- iris # Convert to h2o df.hex <- as.h2o(df) # Initiate h2o h2o.init() # Train GBM model gbm_model <- h2o.gbm(x = 2:5, y = 1, df.hex, ntrees=100, max_depth=4, learn_rate=0.1) # Check Accuracy perf_gbm <- h2o.performance(gbm_model) rsq_gbm <- h2o.r2(perf_gbm) ----------> > rsq_gbm [1] 0.9312635 AI: The term overfitting means the model is learning relationships between attributes that only exist in this specific dataset and do not generalize to new, unseen data. Just by looking at the model accuracy on the data that was used to train the model, you won't be able to detect if your model is or isn't overfitting. To see if you are overfitting, split your dataset into two separate sets: a train set (used to train the model) a test set (used to test the model accuracy) A 90% train, 10% test split is very common. Train your model on the train test and evaluate its performance both on the test and the train set. If the accuracy on the test set is much lower than the models accuracy on the train set, the model is overfitting. You can also use cross-validation (e.g. splitting the data into 10 sets of equal size, for each iteration use one as test and the others as train) to get a result that is less influenced by irregularities in your splits.
H: Should the depth of convolutional layers be set to a figure divisible by 2? I'm reading a book titled Python Deep Learning, and in the section Convolutional layers in deep learning in chapter 5, the following is written: One more important point to make is that convolutional networks should generally have a depth equal to a number which is iteratively divisible by 2, such as 32, 64, 96, 128, and so on. This is important when using pooling layers, such as the max-pool layer, since the pooling layer (if it has size (2,2)) will divide the size of the input layer, similarly to how we should define "stride" and "padding" so that the output image will have integer dimensions. In addition, padding can be added to ensure that the output image size is the same as the input. As far as I know, the width and the height should be figures divisible by 2 in order to use a pooling layer. However, I don't understand why the depth must be a figure divisible by 2. The pooling layer just operates on the 2-dimensional width-height plane, and it operates on each filter (depth slice) separately, right? Why should the depth also be set to a figure divisible by 2? AI: You are right: read that way, the argument only implies that the height and width should be divisible by 2. While the argument the author makes is invalid for the depth, it is good practice, for the sake of memory efficiency, to make some parameters, such as the batch size, divisible by 2 (ideally a power of 2). I'm guessing the intent is to make more efficient use of memory blocks. The same logic could apply to the depth of convolutional layers. This is in no way as important as the author of the book says!
H: How to decide neural network architecture? I was wondering how do we have to decide how many nodes in hidden layers, and how many hidden layers to put when we build a neural network architecture. I understand the input and output layer depends on the training set that we have but how do we decide the hidden layer and the overall architecture in general? AI: Sadly there is no generic way to determine a priori the best number of neurons and number of layers for a neural network, given just a problem description. There isn't even much guidance to be had determining good values to try as a starting point. The most common approach seems to be to start with a rough guess based on prior experience about networks used on similar problems. This could be your own experience, or second/third-hand experience you have picked up from a training course, blog or research paper. Then try some variations, and check the performance carefully before picking a best one. The size and depth of neural networks interact with other hyper-paramaters too, so that changing one thing elsewhere can affect where the best values are. So it is not possible to isolate a "best" size and depth for a network then continue to tune other parameters in isolation. For instance, if you have a very deep network, it may work efficiently with the ReLU activation function, but not so well with sigmoid - if you found the best size/shape of network and then tried an experiment with varying activation functions you may come to the wrong conclusion about what works best. You may sometimes read about "rules of thumb" that researchers use when starting a neural network design from scratch. These things might work for your problems or not, but they at least have the advantage of making a start on the problem. The variations I have seen are: Create a network with hidden layers similar size order to the input, and all the same size, on the grounds that there is no particular reason to vary the size (unless you are creating an autoencoder perhaps). Start simple and build up complexity to see what improves a simple network. Try varying depths of network if you expect the output to be explained well by the input data, but with a complex relationship (as opposed to just inherently noisy). Try adding some dropout, it's the closest thing neural networks have to magic fairy dust that makes everything better (caveat: adding dropout may improve generalisation, but may also increase required layer sizes and training times). If you read these or anything like them in any text, then take them with a pinch of salt. However, at worst they help you get past the blank page effect, and write some kind of network, and get you to start the testing and refinement process. As an aside, try not to get too lost in tuning a neural network when some other approach might be better and save you lots of time. Do consider and use other machine learning and data science approaches. Explore the data, maybe make some plots. Try some simple linear approaches first to get benchmarks to beat, linear regression, logistic regression or softmax regression depending on your problem. Consider using a different ML algorithm to NNs - decision tree based approaches such as XGBoost can be faster and more effective than deep learning on many problems.
H: How to use live data to improve an existing model? I am using logistic regression to train a model to predict 'click/non-click' using ['browser info', 'publisher info', 'location', 'time', 'day']. I want to know the ways in which I can use new live data to improve the already trained model. Does a solution exist which takes into account a change in the feature set? AI: Suppose you have a model that has been trained on $N$ examples over $E$ epochs. This means that the model has seen each of the $N$ examples $E$ times. Now say you get $M$ more training examples. Normally you would want to train on the new ones for $E$ epochs as well. However, if $N$ and $M$ don't come from the same underlying distribution (or don't represent it adequately), this would result in the model "forgetting" the first $N$ examples and "paying more attention" to the latter $M$ ones. You could try training your model for fewer than $E$ epochs, so that it learns the latter but doesn't forget the former, but that is purely empirical and very hard to tune in practice. You can do a few things to avoid this: Retrain your whole model using all $N+M$ examples (which you would shuffle). This requires a new complete training of the model on regular occasions and becomes increasingly difficult (due to the ever increasing size of the training data). This is a very inefficient solution and wouldn't work for any on-line training application. Make use of a model that supports on-line training. Some algorithms support incremental (on-line) training, without you needing to retrain the whole thing. A scikit-learn comparison is available, and a minimal sketch of this option is given below. Customize an algorithm so that it has the desired effect. For example, you could train a linear SVM incrementally, with a large regularization penalty and an SGD classifier. This is discussed in more detail for scikit-learn here.
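A minimal scikit-learn sketch of incremental learning with partial_fit (X_hist/y_hist and the stream live_batches are hypothetical data sources; the full list of classes must be passed on the first call, and note that partial_fit requires the feature set to stay fixed, so it does not by itself handle a changing feature set):
import numpy as np
from sklearn.linear_model import SGDClassifier

classes = np.array([0, 1])  # non-click / click

# logistic regression trained with SGD, so it supports partial_fit
# (in newer scikit-learn versions the loss name is 'log_loss')
clf = SGDClassifier(loss='log')

# initial training on the historical batch
clf.partial_fit(X_hist, y_hist, classes=classes)

# later, as live data arrives batch by batch
for X_new, y_new in live_batches:
    clf.partial_fit(X_new, y_new)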
H: How to improve my self-written Neural Network? I created the following Neural Network in Python. It uses weights and biases which should follow standard procedure.

    # Define size of the layers, as well as the learning rate alpha and the max error
    inputLayerSize = 2
    hiddenLayerSize = 3
    outputLayerSize = 1
    alpha = 0.5
    maxError = 0.001

    # Import dependencies
    import numpy
    from sklearn import preprocessing

    # Make random numbers predictable
    numpy.random.seed(1)

    # Define our activation function
    # In this case, we use the Sigmoid function
    def sigmoid(x):
        output = 1/(1+numpy.exp(-x))
        return output

    def sigmoid_derivative(x):
        return x*(1-x)

    # Define the cost function
    def calculateError(Y, Y_predicted):
        totalError = 0
        for i in range(len(Y)):
            totalError = totalError + numpy.square(Y[i] - Y_predicted[i])
        return totalError

    # Set inputs
    # Each row is (x1, x2)
    X = numpy.array([
        [7, 4.7],
        [6.3, 6],
        [6.9, 4.9],
        [6.4, 5.3],
        [5.8, 5.1],
        [5.5, 4],
        [7.1, 5.9],
        [6.3, 5.6],
        [6.4, 4.5],
        [7.7, 6.7]
    ])

    # Normalize the inputs
    #X = preprocessing.scale(X)

    # Set goals
    # Each row is (y1)
    Y = numpy.array([
        [0],
        [1],
        [0],
        [1],
        [1],
        [0],
        [0],
        [1],
        [0],
        [1]
    ])

    # Randomly initialize our weights with mean 0
    weights_1 = 2*numpy.random.random((inputLayerSize, hiddenLayerSize)) - 1
    weights_2 = 2*numpy.random.random((hiddenLayerSize, outputLayerSize)) - 1

    # Randomly initialize our bias with mean 0
    bias_1 = 2*numpy.random.random((hiddenLayerSize)) - 1
    bias_2 = 2*numpy.random.random((outputLayerSize)) - 1

    # Loop up to 100,000 times
    for i in xrange(100000):

        # Feed forward through layers 0, 1, and 2
        layer_0 = X
        layer_1 = sigmoid(numpy.dot(layer_0, weights_1)+bias_1)
        layer_2 = sigmoid(numpy.dot(layer_1, weights_2)+bias_2)

        # Calculate the cost function
        # How much did we miss the target value?
        layer_2_error = layer_2 - Y

        # In what direction is the target value?
        # Were we really sure? If so, don't change too much.
        layer_2_delta = layer_2_error*sigmoid_derivative(layer_2)

        # How much did each layer_1 value contribute to the layer_2 error (according to the weights)?
        layer_1_error = layer_2_delta.dot(weights_2.T)

        # In what direction is the target layer_1?
        # Were we really sure? If so, don't change too much.
        layer_1_delta = layer_1_error * sigmoid_derivative(layer_1)

        # Update the weights
        weights_2 -= alpha * layer_1.T.dot(layer_2_delta)
        weights_1 -= alpha * layer_0.T.dot(layer_1_delta)

        # Update the bias
        bias_2 -= alpha * numpy.sum(layer_2_delta, axis=0)
        bias_1 -= alpha * numpy.sum(layer_1_delta, axis=0)

        # Print the error to show that we are improving
        if (i % 1000) == 0:
            print "Error after "+str(i)+" iterations: " + str(calculateError(Y, layer_2))

        # Exit if the error is less than maxError
        if(calculateError(Y, layer_2) < maxError):
            print "Goal reached after "+str(i)+" iterations: " + str(calculateError(Y, layer_2)) + " is smaller than the goal of " + str(maxError)
            break

    # Show results
    print ""
    print "Weights between Input Layer -> Hidden Layer"
    print weights_1
    print ""
    print "Bias of Hidden Layer"
    print bias_1
    print ""
    print "Weights between Hidden Layer -> Output Layer"
    print weights_2
    print ""
    print "Bias of Output Layer"
    print bias_2
    print ""
    print "Computed probabilities for SALE (rounded to 3 decimals)"
    print numpy.around(layer_2, decimals=3)
    print ""
    print "Real probabilities for SALE"
    print Y
    print ""
    print "Final Error"
    print str(calculateError(Y, layer_2))

Using 32,000 epochs I manage to get on average a final error of 0.001.
However, compared to the MLPClassifier (scikit-learn package) using the same parameters:

    mlp = MLPClassifier(
        hidden_layer_sizes=(3,),
        max_iter=32000,
        activation='logistic',
        tol=0.00001,
        verbose='true')

my result is pretty bad. The MLPClassifier gets a final error of 0 when I run it on the same data, after about 10,000 epochs. For both networks I use an input layer size of 2, a hidden layer size of 3 and an output layer of 1. Why does my network need that many more epochs to train? Am I missing an important part?

AI: A note on gradient direction

As an aside,

    layer_2_error = Y - layer_2

should be

    layer_2_error = layer_2 - Y

and all your update functions should then be gradient descent, e.g.

    weights_2 += -alpha * layer_1.T.dot(layer_2_delta)

This makes no difference to the performance of your network, but I assume you have done this in the rest of the answer.

Scaling Input to Normalised Values

One clue to performance problems was in your first version, which included the code:

    X = preprocessing.scale(X)

With this included before training, the inputs are scaled nicely for working with neural networks and the network converges quickly. Without it, the network will still operate, but converges much more slowly. Increasing the max iterations to 1,000,000, I get the result:

    Goal reached after 209237 iterations: [ 0.00099997] is smaller than the goal of 0.001

You mention that you don't want to scale the input, but really in general for NNs you should. It is worth looking at other differences though, because the lack of scaling does not prevent the MLPClassifier from converging.

Bias Gradients

This is one you spotted and mentioned in the comments. Your bias update is the same for each bias value. To correct this, you want something like:

    # Update the bias
    bias_2 += -alpha * numpy.sum(layer_2_delta, axis=0)
    bias_1 += -alpha * numpy.sum(layer_1_delta, axis=0)

NB - I have assumed you have fixed the gradient direction here.

Classification Loss Function

MLPClassifier is running a classifier using log loss, which is a more efficient loss function than the mean squared error you are using, if your targets are class probabilities. You can use this too, simply by changing:

    layer_2_delta = layer_2_error*sigmoid_derivative(layer_2)

to

    layer_2_delta = layer_2_error

This is the correct delta value to match the loss function

    def calculateError(Y, Y_predicted):
        return -numpy.mean(
            (Y * numpy.log(Y_predicted) + (1-Y) * numpy.log(1 - Y_predicted)))

but only when your output layer is sigmoid (the sigmoid derivative cancels out the derivatives of this log function). If you use this, you will want to reduce the learning rate for numerical stability - I suggest e.g.

    alpha = 0.01

Note that you can still report your other loss function, for comparison with previous results. Just be aware that you are optimising the log loss.

Difference in Optimisers

You are running batched Stochastic Gradient Descent, which is the most basic optimiser type. The MLPClassifier is using Adam, which is faster and more robust. If you want to compete with it on even terms, you need to implement a better optimiser. The simplest improvement is probably to add some momentum.
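To make that last point concrete, here is a hedged sketch of how classical momentum could be bolted onto the updates of the network in the question. It assumes the variables from the question's code (weights_*, bias_*, layer_*, alpha) are in scope; the velocity_* names are new, and 0.9 is just a common default momentum coefficient, not a tuned value.

    # Sketch: classical momentum for the weight/bias updates (patch for the loop above).
    momentum = 0.9
    velocity_w1 = numpy.zeros_like(weights_1)
    velocity_w2 = numpy.zeros_like(weights_2)
    velocity_b1 = numpy.zeros_like(bias_1)
    velocity_b2 = numpy.zeros_like(bias_2)

    # Inside the training loop, replace the plain updates with:
    velocity_w2 = momentum * velocity_w2 - alpha * layer_1.T.dot(layer_2_delta)
    velocity_w1 = momentum * velocity_w1 - alpha * layer_0.T.dot(layer_1_delta)
    velocity_b2 = momentum * velocity_b2 - alpha * numpy.sum(layer_2_delta, axis=0)
    velocity_b1 = momentum * velocity_b1 - alpha * numpy.sum(layer_1_delta, axis=0)

    weights_2 += velocity_w2
    weights_1 += velocity_w1
    bias_2 += velocity_b2
    bias_1 += velocity_b1

The velocity terms accumulate a running direction of descent, which usually speeds up convergence on problems like this compared with plain gradient steps.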
H: Data Type required in Weka I want to run several association rule mining techniques such as Apriori, Eclat and FP-Growth. I want to know the format of the data required to run these algorithms, as they are disabled (marked in grey) for me. Also, I don't see the Eclat algorithm in the 'Association' tab of Weka. Please recommend a suitable tool/approach to perform this.

AI: I want to run several association rule mining techniques such as Apriori, Eclat and FP-Growth. [...] Please recommend a suitable tool/approach to perform this

Christian Borgelt's website offers several tools for frequent pattern mining: FPgrowth, Eclat, Apriori, and some others. Check out the full descriptions to see how to use each one. Here are some requirements for Weka's FPgrowth algorithm.
H: I trained my data and obtained a training score of 0.957. Why can't I get the data to provide a prediction even against the same training data? I have tried to debug this, but have not made any headway. Any ideas on how to proceed? I believe I am invoking everything correctly. Here is a snippet of the code:

    if _trainWithModel:
        print("Training using model")
        svc = svm.SVC(C=0.001, verbose=10, kernel='rbf', gamma=0.00000001)
        fittedSvc = svc.fit(trainingData.data, trainingData.target)
        print("Scoring training results = ", fittedSvc.score(trainingData.data, trainingData.target))

        print("Sanity check. Predicting using training data model")
        temp_predictedTarget = fittedSvc.predict(trainingData.data)
        realTarget = np.array([trainingData.target])
        predictedTarget = np.array([temp_predictedTarget])

        print "Prediction results shapes: trainingData.data=", trainingData.data.shape, \
            ", temp_predictedTarget=", temp_predictedTarget.shape, \
            ", realTarget=", realTarget.shape, ", predictedTarget=", predictedTarget.shape
        print "Prediction sanity checks predictData.target sum=", sum(trainingData.target), \
            ", temp_predictedTarget sum=", sum(temp_predictedTarget), \
            ", predictedTarget sum=", np.sum(predictedTarget), \
            ", realTarget sum=", np.sum(realTarget)

        confusion = confusion_matrix(trainingData.target, temp_predictedTarget)
        print "Confusion matrix: \n", confusion

Here is the key output from this:

    Training using model
    [LibSVM].. Warning: using -h 0 may be faster
    *
    optimization finished, #iter = 2430
    obj = -4.860000, rho = -1.000054
    nSV = 4860, nBSV = 4860
    Total nSV = 4860
    ('Scoring training results = ', 0.95715721363211625)
    Sanity check. Predicting using training data model
    Prediction results shapes: trainingData.data= (56719L, 108L) , temp_predictedTarget= (56719L,) , realTarget= (1L, 56719L) , predictedTarget= (1L, 56719L)
    Prediction sanity checks predictData.target sum= 2430.0 , temp_predictedTarget sum= 0.0 , predictedTarget sum= 0.0 , realTarget sum= 2430.0
    Confusion matrix:
    [[54289     0]
     [ 2430     0]]

AI: Assuming the training data has some noise in it, you don't want a fit of 1.0, because that would mean you've overfit the model to the noise. Instead, you want your model to be capable of generalizing so it can make the best possible predictions on new data.
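As a concrete way to check generalisation rather than training fit, here is a minimal hedged sketch using cross-validation on the same estimator. The generated dataset is only a placeholder for your trainingData object (the real data should be used instead), and the C/gamma values are simply the ones from the question, not a recommendation.

    # Sketch: estimate out-of-sample performance instead of scoring on the training set.
    from sklearn import svm
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score

    # Placeholder for trainingData.data / trainingData.target
    X, y = make_classification(n_samples=2000, n_features=108,
                               weights=[0.95, 0.05], random_state=0)

    svc = svm.SVC(C=0.001, kernel='rbf', gamma=1e-8)
    scores = cross_val_score(svc, X, y, cv=5)
    print("Cross-validated accuracy:", scores.mean())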
H: Which algorithm does Doc2Vec use? Word2vec is not a single algorithm but a combination of two, namely the CBOW and Skip-Gram models; is Doc2Vec also a combination of such algorithms, or is it an algorithm in itself?

AI: Word2Vec is not a combination of two models; rather, both are variants of word2vec. Similarly, doc2vec has a Distributed Memory (DM) model and a Distributed Bag of Words (DBOW) model. These variants arose from how the context words and the target word are used. Note: the names of the models may be confusing.

- Distributed Bag of Words is similar to the Skip-gram model
- Distributed Memory is similar to the Continuous Bag of Words model
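If you are using gensim, the choice between the two variants is, to the best of my knowledge, controlled by the dm flag of the Doc2Vec class. The toy corpus below is purely illustrative, and parameter names can differ between gensim versions (older releases use size instead of vector_size).

    # Illustrative sketch: selecting the Doc2Vec variant in gensim via the `dm` flag.
    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    corpus = [
        TaggedDocument(words=["machine", "learning", "is", "fun"], tags=[0]),
        TaggedDocument(words=["deep", "learning", "uses", "neural", "networks"], tags=[1]),
    ]

    dm_model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=20, dm=1)    # Distributed Memory
    dbow_model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=20, dm=0)  # Distributed Bag of Words

    print(dm_model.infer_vector(["machine", "learning"])[:5])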
H: Cross-entropy loss explanation Suppose I build a neural network for classification. The last layer is a dense layer with Softmax activation. I have five different classes to classify. Suppose for a single training example, the true label is [1 0 0 0 0] while the predictions are [0.1 0.5 0.1 0.1 0.2]. How would I calculate the cross entropy loss for this example?

AI: The cross entropy formula takes in two distributions, $p(x)$, the true distribution, and $q(x)$, the estimated distribution, defined over the discrete variable $x$, and is given by

$$H(p,q) = -\sum_{\forall x} p(x) \log(q(x))$$

For a neural network, the calculation is independent of the following:

- What kind of layer was used.
- What kind of activation was used - although many activations will not be compatible with the calculation because their outputs are not interpretable as probabilities (i.e., their outputs are negative, greater than 1, or do not sum to 1). Softmax is often used for multiclass classification because it guarantees a well-behaved probability distribution function.

For a neural network, you will usually see the equation written in a form where $\mathbf{y}$ is the ground truth vector and $\mathbf{\hat{y}}$ (or some other value taken directly from the last layer output) is the estimate. For a single example, it would look like this:

$$L = - \mathbf{y} \cdot \log(\mathbf{\hat{y}})$$

where $\cdot$ is the inner product.

Your example ground truth $\mathbf{y}$ gives all probability to the first value, and the other values are zero, so we can ignore them and just use the matching term from your estimates $\mathbf{\hat{y}}$:

$$L = -(1\times \log(0.1) + 0 \times \log(0.5) + \ldots) = -\log(0.1) \approx 2.303$$

An important point from comments:

That means, the loss would be the same no matter if the predictions are $[0.1, 0.5, 0.1, 0.1, 0.2]$ or $[0.1, 0.6, 0.1, 0.1, 0.1]$?

Yes, this is a key feature of multiclass log loss: it rewards/penalises probabilities of correct classes only. The value is independent of how the remaining probability is split between incorrect classes.

You will often see this equation averaged over all examples as a cost function. It is not always strictly adhered to in descriptions, but usually a loss function is lower level and describes how a single instance or component determines an error value, whilst a cost function is higher level and describes how a complete system is evaluated for optimisation. A cost function based on multiclass log loss for a data set of size $N$ might look like this:

$$J = - \frac{1}{N}\left(\sum_{i=1}^{N} \mathbf{y_i} \cdot \log(\mathbf{\hat{y}_i})\right)$$

Many implementations will require your ground truth values to be one-hot encoded (with a single true class), because that allows for some extra optimisation. However, in principle the cross entropy loss can be calculated - and optimised - when this is not the case.
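For a quick numerical check of the worked example above, here is a small sketch that evaluates the single-example formula directly (nothing here is specific to any framework):

    # Verify the single-example cross entropy from the answer numerically.
    import numpy as np

    y_true = np.array([1, 0, 0, 0, 0])
    y_pred = np.array([0.1, 0.5, 0.1, 0.1, 0.2])

    loss = -np.sum(y_true * np.log(y_pred))
    print(loss)  # ~2.3026, i.e. -log(0.1)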
H: Python - Check if text is sentences? So I have a scraper that gets articles. However, it doesn't always work properly. I want to get better at checking when it doesn't work. For example, the following is something like what I want it to scrape:

Hello. This is a sequence of sentences that are put together. They don't have to follow this exact format, but something very close to this would be nice! Just basically stuff like this put together with the occasional weird formatting, which depends on what is scraped.

But I might also get something that is obviously not text:

REGISTER | LOGIN | LOGOUT | Sign in to your account Forgot your password? {* #signInForm *}....

Is there any Python library that checks the general format of strings? Basically, I am scraping articles and want to see if the text scraped is article-y. If there isn't a Python library, would the best way to go be some sort of regex matching? Is this possible to do reasonably well? Any help would be greatly appreciated, thanks!

AI: I would try a semi-supervised learning technique where it passes you scraps and asks you to label them. What you're looking for will likely be somewhat domain specific, depending on the type of site. In the end you'll probably have a bunch of heuristics like:

- If length < 50 and the text contains "LOGOUT", "REGISTER", "SIGN IN" or "LOGIN"
- If the count of "|" > 1
- If the count of all-upper-case words > 1
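As a starting point, those heuristics might look something like the sketch below; the thresholds and keyword list are illustrative guesses that you would tune against your own labelled scraps.

    # Rough heuristic filter for "not article-y" scraped text. Thresholds are guesses.
    import re

    NAV_WORDS = {"LOGOUT", "REGISTER", "SIGN IN", "LOGIN"}

    def looks_like_article(text, min_length=50, max_pipes=1, max_upper_words=1):
        upper_text = text.upper()
        if len(text) < min_length and any(word in upper_text for word in NAV_WORDS):
            return False
        if text.count("|") > max_pipes:
            return False
        upper_words = [w for w in re.findall(r"\b\w+\b", text) if w.isupper() and len(w) > 1]
        if len(upper_words) > max_upper_words:
            return False
        return True

    print(looks_like_article("Hello. This is a sequence of sentences that are put together."))  # True
    print(looks_like_article("REGISTER | LOGIN | LOGOUT | Sign in to your account"))            # False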
H: Why do we pick random features in random forest I understand that random forest is a stylized version of bagging of trees. We choose data points randomly as well as features randomly when constructing a random forest. But if we use the plain version of bagging, choosing only the data points randomly, then we have trees which are trained on a larger number of features than in the stylized version. Since it learns with more features, every individual tree has more information about the data points and so is more 'intelligent' in some sense than the individual trees in the random forest. So why does the random forest using the stylized version of bagging perform better than the random forest using the plain implementation of bagging? I understand that the random forest using the stylized version gives a model with lower variance, but since each of its trees is trained on only some of the features, shouldn't that make the model somewhat more biased?

AI: The idea of random forests is basically to build many decision trees (or other weak learners) that are decorrelated, so that their average is less prone to overfitting (reducing the variance). One way is subsampling of the training set. The reason why subsampling features can further decorrelate trees is that, if there are a few dominating features, these features will be selected in many trees even for different subsamples, making the trees in the forest similar (correlated) again. The lower the number of sampled features, the higher the decorrelation effect.

On the other hand, the bias of a random forest is the same as the bias of any of the sampled trees (see for example Elements of Statistical Learning), but the randomization of random forests restricts the model, so that the bias is usually higher than that of a fully-grown (unpruned) tree. You are correct in that you can expect a higher bias if you sample fewer features. So, "feature bagging" really gives you a classical trade-off between bias and variance.
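If you want to see the effect empirically, one hedged way is to compare plain bagging of full-feature trees against a forest with feature subsampling in scikit-learn. The dataset and settings below are only placeholders (and the BaggingClassifier argument is named base_estimator rather than estimator in older scikit-learn versions).

    # Sketch: plain bagging (all features per split) vs. random forest (feature subsampling).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, n_features=40, n_informative=5,
                               random_state=0)

    bagging = BaggingClassifier(estimator=DecisionTreeClassifier(), n_estimators=200,
                                random_state=0)
    forest = RandomForestClassifier(n_estimators=200, max_features="sqrt", random_state=0)

    print("bagging:", cross_val_score(bagging, X, y, cv=5).mean())
    print("forest: ", cross_val_score(forest, X, y, cv=5).mean())

The interesting case is data with a few dominating features, where the decorrelation from max_features tends to help the averaged prediction despite each individual tree being weaker.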
H: Excel file merge with different headers but same data I need to merge data from thousands of Excel files provided by different operations managers on productivity and other reports. The Excel files contain similar data, but the headers are all custom since they come from different managers and different clients. For example, manager A will have a.xlx and manager B will have a.xlx, but the headers for each will be different even though the data inside will usually be the same. Each day about 100 different Excel files are updated by team members via new files, e.g. a_todays_date.xlx, with manager B using /a.xlx. Is this something that can be handled via Python ML libraries? What is the best way to merge all of this data and save it to the DB on a per-day basis? The average data volume per day would be around 15 GB. The end goal is to create a dashboard.

AI: I suggest SSIS (SQL Server Integration Services). It is designed for collecting data from different sources and exporting it to a database. You can design a simple data flow and run it from SQL Server on a schedule (daily in your case).
H: Feature engineering while using neural networks When doing data science, we concentrate on feature engineering first - checking correlations, imputations, transformations etc. Do we have to follow the same steps when feeding the inputs to a neural net? Or what steps should be taken?

AI: If you are talking about tabular data (not images, video, sound etc.) then yes, all of the preparations you mentioned are applicable to artificial neural networks. For instance, data normalization is very important, since a neuron's activation is calculated by applying some activation function to a weighted sum (linear combination) of its inputs. If the weights are on the same scale, an input of 10,000,000 will dominate the weighted sum compared to an input of 0.01. Theoretically the network could learn how to deal with such discrepancies in the input values, but in practice normalization makes learning faster. There is a classical paper on efficient backprop, with many tips and tricks, by Yann LeCun.
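As a small illustration of the normalization point (the tiny array below is just made-up data with two wildly different scales):

    # Standardize columns so no single feature dominates the weighted sums.
    import numpy as np
    from sklearn.preprocessing import StandardScaler

    X = np.array([[0.01, 1e7],
                  [0.03, 2e7],
                  [0.02, 3e7]])

    print(StandardScaler().fit_transform(X))  # both columns now have mean 0 and unit variance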
H: Determine when entry/series of entries are outliers A common question I face is this: I have a stream of incoming data. Let's call it a vector of entries, where each entry represents a value. As this stream of entries gets added to the vector, I want to figure out if one of them is an outlier. So my question is essentially twofold:

- How do you establish a good baseline as you consistently keep getting data?
- How do you determine if a value that is appended to the vector is significantly different from the baseline?

AI: Look into six-sigma techniques, e.g.: http://www.whatissixsigma.net/imr/

In short: create a moving range (moving average +/- 3 times the moving standard deviation). Points outside that range can be considered outliers and need inspection (not just discarding).
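A minimal sketch of that moving-range idea, assuming a fixed window size (50 here, an arbitrary choice) and pandas for the rolling statistics; the generated series is a stand-in for your stream:

    # Flag points that fall outside moving average +/- 3 * moving standard deviation.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    values = pd.Series(rng.normal(size=500))
    values.iloc[250] = 8.0  # inject an obvious outlier

    window = 50  # arbitrary window size; tune for your stream
    rolling_mean = values.rolling(window).mean().shift(1)  # shift so only past data forms the baseline
    rolling_std = values.rolling(window).std().shift(1)

    is_outlier = (values - rolling_mean).abs() > 3 * rolling_std
    print(values[is_outlier])

In a true streaming setting you would update the window incrementally as each new entry arrives, but the flagging rule is the same.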