H: Variable Importance for Random Forest in R I am currently using a random forest model for classification, however I am unsure how the feature selection technique "varImp" works in R. I understand the context of variable importance, however when I implement it in R it doesn't seem to produce the results I expect. When removing the most important variable (of 31 features), the model's accuracy does not decrease. I would expect it to, as it should be contributing the most to the model's ability to classify. Could someone please explain what this function is doing? AI: What the function does, for each variable: (1) Record the Out-Of-Bag (OOB) accuracy for each tree. (2) "Shuffle" or permute the values of that variable. This means you take all the values of that variable in the data, and assign those values randomly back out to the observations, which is a way of introducing noise and getting rid of the signal that that variable provided. (3) Now find the OOB accuracy again, but this time the values for that variable are incorrect since we permuted it. By introducing noise where your model expects signal, you should see a decrease in performance. (4) Compare the original accuracy in (1) to the accuracy in (3) for each variable. If the model performance decreases a lot for a variable in step (3) compared to (1), then it is deemed to have greater importance. Why does removing the most important variable not have a negative effect on accuracy? (my guess) Probably because that important variable is correlated with some other variable(s) you have. Your model can capture the information contained in that missing important variable by using a few other variables to make up for it. When you drop the important variable, which other variables see a notable gain?
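To see the same permutation procedure concretely, here is a minimal hand-rolled sketch in Python with scikit-learn. The question is about R's varImp, so this is only an illustration of the algorithm, not the caret implementation, and it uses a held-out set instead of the per-tree out-of-bag samples; the synthetic data stands in for the 31-feature problem.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 31-feature classification problem
X, y = make_classification(n_samples=1000, n_features=31, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)                   # step (1): accuracy with intact features

rng = np.random.RandomState(0)
for col in range(X.shape[1]):
    X_perm = X_test.copy()
    X_perm[:, col] = rng.permutation(X_perm[:, col])     # step (2): permute one variable
    permuted = model.score(X_perm, y_test)               # step (3): accuracy with that variable scrambled
    print(f"feature {col}: importance ~ {baseline - permuted:.4f}")   # step (4): the drop in accuracy

A variable can score high here and still be droppable from the model, exactly as in the question: permutation importance is measured with all other features fixed, so correlated features can cover for a removed one when the model is retrained.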
H: The Why Behind Sum of Squared Errors in a Linear Regression I'm just starting to learn about linear regressions and was wondering why it is that we opt to minimize the sum of squared errors. I understand the squaring helps us balance positive and negative individual errors (so say e1 = -2 and e2 = 4, we'd consider them as both regular distances of 2 and 4 respectively before squaring them), however, I wonder why we don't deal with minimizing the absolute value rather than the squares. If you square it, e2 has a relatively higher individual contribution to minimize than e1 compared to just the absolute values (and do we want that?). I also wonder about decimal values. For instance, say we have e1 = 0.5 and e2 = 1.05, e1 will be weighted less when squared because 0.25 is less than 0.5 and e2 will be weighted more. Lastly, there is the case of e1 = 0.5 and e2 = 0.2. e1 is further away to start, but when you square them 0.25 is compared with 0.04. Anyway, just wondering why we do sum of squared errors minimization rather than absolute value. AI: A simple Google search on "stats why regression not absolute difference" would give you good answers. Try it yourself! I can quickly summarise: Your regression parameters are solutions to the maximum likelihood optimisation. That involves a derivative, but the absolute difference doesn't have a derivative at zero. There's no unique solution for least absolute regression. Least absolute regression is an alternative to the regular sum of squares regression, commonly classified as one of the robust statistical methods. You'd prefer least absolute regression if you care about outliers, otherwise the regular regression is generally better. You might want to read about L1 vs L2: https://stats.stackexchange.com/questions/45643/why-l1-norm-for-sparse-models
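One quick way to feel the difference between the two losses: for a single set of numbers, the mean minimizes the sum of squared errors while the median minimizes the sum of absolute errors. The toy numbers below are made up purely to show how one outlier drags the squared-error solution much further than the absolute-error one.

import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0, 100.0])    # one large outlier

candidates = np.linspace(0, 100, 100001)
sse = [(np.sum((data - c) ** 2), c) for c in candidates]   # sum of squared errors
sae = [(np.sum(np.abs(data - c)), c) for c in candidates]  # sum of absolute errors

print("argmin of squared errors :", min(sse)[1])   # 22.0, the mean, pulled towards the outlier
print("argmin of absolute errors:", min(sae)[1])   # 3.0, the median, barely affected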
H: Why do so many functions used in data science have derivatives of the form f(x)*(1-f(x))? The sigmoid function's derivative is of that form, and so is the softmax function's. Is this by design, or some strange coincidence that seems to work for ML models/neural networks? AI: The sigmoid function is a special case of softmax, when the number of classes $K=2$. That's why the similarity of their derivatives shouldn't surprise you. Why do so many functions used in data science have derivatives of the form f(x)*(1-f(x))? If you consider the following differential equation $y' = y \cdot (1-y)$ you will find the general solution in the form $y(x) = \frac{e^x}{c + e^x} = \frac{1}{1 + c e^{-x}}$ So, in some sense, there are not that many functions with this property: for $c > 0$ they are exactly horizontally shifted copies of the sigmoid function.
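A quick numerical check of the property for the sigmoid, using nothing beyond the formula above:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-5, 5, 11)
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)    # central-difference derivative
analytic = sigmoid(x) * (1 - sigmoid(x))                 # f(x) * (1 - f(x))

print(np.max(np.abs(numeric - analytic)))                # tiny difference, the two agree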
H: Structure of Convolutional Neural Network to analyze a sequence of frames I want to apply a CNN to a series of image sequences to classify those sequences of frames/images into two groups/categories. We are in a binary classification problem. My dataset is composed of a lot of 'batches' of frames. For example, each batch could be composed of 20 frames of 64x64 pixels. One important thing is that the order of those 20 frames matters: if you shuffle the order of the 20 frames, the output could change. Given all this, I want to create a CNN to solve this binary classification problem. I'm using Keras and TensorFlow. What is my question? Well, I'm not sure if I have to use a TimeDistributed layer or not. The input shape of the neural network is the following one: (20, 64, 64, 1), whose meaning is: 20 frames with a 64x64 size (1 channel - grayscale). Should I use a TimeDistributed layer? AI: I'm not sure if I have to use a TimeDistributed layer or not You definitely don't have to use TimeDistributed. You have other choices, that may be equally valid, depending on your data: Flatten your example data into 81920 features per example and use a simple Dense layer. Use a Conv3D layer. Use some form of RNN, such as LSTM. From your data description, I would expect either a TimeDistributed or 3D CNN based approach to be a good first bet. Intuition suggests that the CNN would work better if there was minor change between frames (because it has the capability to directly find subtle frame differences), while the TimeDistributed approach would work better processing larger changes (because it will ignore frame differences until the fully-connected layer). Should I use a TimeDistributed layer? Only you can answer this, by trying it and measuring the performance of your classifier. However, it should function correctly, and intuition suggests it would be a good choice if your frames are in sequence but visually disjoint.
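To make the TimeDistributed option concrete, here is a minimal tf.keras sketch for the (20, 64, 64, 1) input described above. The layer sizes are arbitrary placeholders rather than a recommendation, and the LSTM at the end is just one way to combine the per-frame features in order.

from tensorflow.keras import Sequential
from tensorflow.keras.layers import (TimeDistributed, Conv2D, MaxPooling2D,
                                     Flatten, LSTM, Dense)

model = Sequential([
    # The same 2D CNN is applied independently to each of the 20 frames
    TimeDistributed(Conv2D(16, (3, 3), activation='relu'),
                    input_shape=(20, 64, 64, 1)),
    TimeDistributed(MaxPooling2D((2, 2))),
    TimeDistributed(Flatten()),
    # Something order-aware then combines the 20 per-frame feature vectors
    LSTM(32),
    Dense(1, activation='sigmoid')     # binary classification
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()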
H: Can clustering my data first help me learn better classifiers? I was thinking about this lately. Let's say that we have a very complex space, which makes it hard to learn a classifier that can efficiently split it. But what if this very complex space is actually made up of a bunch of "simple" subspaces. By simple, I mean that it would be easier to learn a classifier for that subspace. In this situation, would clustering my data first, in other words finding these subspaces, help me learn a better classifier? This classifier would essentially be an ensemble of each subspace's classifier. To clarify, I don't want to use the clusters as additional features and feed it to a big classifier, I want to train on each cluster individually. Is this something that's already been done/proven to work/proven to not work? Are there any papers on it? I've been trying to search for things like this but couldn't find anything relevant so I thought I'd ask here. AI: It is absolutely a way to improve your classifier's accuracy. Actually a "strong" enough classifier such as a neural network could be able to learn these clusters by itself. However, you would need a substantially deeper network. The "smartest" way to do this, if you know there are many groups/clusters in your data, is to actually perform a two-step process: Cluster your data. Train X models, one for each of your clusters. A nice way to visualise this is the following problem: you want to build a recommendation engine for a Netflix-like application, and you don't want to build one model per person, so how would you do this? First find clusters of similar users (geeks, SF fans, teenagers, etc.), then fit one model for each of these clusters.
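A bare-bones scikit-learn sketch of the two-step idea, with synthetic data and untuned placeholder models:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# Step 1: find the groups / subspaces
n_clusters = 3
clusterer = KMeans(n_clusters=n_clusters, random_state=0).fit(X)
cluster_ids = clusterer.labels_

# Step 2: train one simple classifier per cluster
models = {}
for c in range(n_clusters):
    mask = cluster_ids == c
    models[c] = DecisionTreeClassifier(max_depth=3).fit(X[mask], y[mask])

# Prediction: route a new point to its cluster's model
def predict(x_new):
    x_new = x_new.reshape(1, -1)
    c = clusterer.predict(x_new)[0]
    return models[c].predict(x_new)[0]

print(predict(X[0]), y[0])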
H: Dimension of weight matrix in neural network Why would the dimension of $w^{[2]}$ be $(n^{[2]}, n^{[1]})$ ? This is a simple linear equation, $z^{[n]}= W^{[n]}a^{[n-1]} + b^{[n]}$ There seems to be an error in the screenshot. the weight, $W$ should be transposed, please correct me if I am wrong. $W^{[2]}$ are the weights assigned to the neurons in the layer 2 $n^{[1]}$ is the number of neurons in layer 1 Screenshot from Andrew Ng deeplearning coursera course video: AI: There seems to be an error in the screenshot. The weight, $W$ should be transposed, please correct me if I am wrong. You are wrong. Matrix multiplication works so that if you multiply two matrices together, $C = AB$, where $A$ is an $i \times j$ matrix and $B$ is a $j \times k$ matrix, then C will be a $i \times k$ matrix. Note that $A$'s column count must equal $B$'s row count ($j$). In the neural network, $a^{[1]}$ is a $n^{[1]} \times 1$ matrix (column vector), and $z^{[2]}$ needs to be a $n^{[2]} \times 1$ matrix, to match number of neurons. Therefore $W^{[2]}$ has to have dimensions $n^{[2]} \times n^{[1]}$ in order to generate an $n^{[2]} \times 1$ matrix from $W^{[2]}a^{[1]}$
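A short NumPy check of those shapes, with arbitrary layer sizes ($n^{[1]} = 4$, $n^{[2]} = 3$):

import numpy as np

n1, n2 = 4, 3                     # neurons in layer 1 and layer 2
a1 = np.random.randn(n1, 1)       # activations of layer 1: (n1, 1)
W2 = np.random.randn(n2, n1)      # weights of layer 2:     (n2, n1)
b2 = np.random.randn(n2, 1)

z2 = W2 @ a1 + b2                 # (n2, n1) @ (n1, 1) -> (n2, 1)
print(z2.shape)                   # (3, 1): one pre-activation per layer-2 neuron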
H: Time Resolution Changes in Time Series Forecasting I am doing time series forecasting for estimating monthly sales of a certain consumer goods SKU. I have the last 3 years of actual sales data: for the first 2.5 years only monthly sales data is available, but for the last 6 months daily sales data is available. I can certainly add up the daily data to get monthly data, but considering that the total number of data points is then only 36 (for 3 years), it may be slightly challenging to fit models like ARIMA or exponential smoothing. Is there a way to use the last 6 months of daily data to improve the accuracy of the model? AI: I would do two separate analyses: By day for six months By month for the last 36 months (with the most recent 6 aggregated) The 36 month analysis will be important for catching annual trends you wouldn't be able to see with the daily data, but you'll have a lot more granularity with the daily analysis to catch quarterly, monthly, and weekly cycles.
H: Do Non-Negativity Constraints Increase the Estimation Error? I have been working with tensor and matrix non-negativity-constrained algorithms. I have never seen a non-negativity-constrained algorithm (e.g. Non-Negative Tucker Decomposition, NTD) with error that is comparable to the unconstrained algorithm (e.g. Higher-Order SVD, HOSVD). Publications normally don't compare the estimation error of unconstrained algorithms with constrained algorithms; instead the comparison is always between different algorithms of the same type. By testing, I found that the error of constrained algorithms with noise is around $10^{-7}$, compared to around $10^{-15}$ for unconstrained ones. Any reference that I can start from or any explanation would be appreciated. AI: Are you talking about train error or test error? Non-negativity constraints make your search space of possible solutions much smaller. This means that the train error will always be higher than or equal to that of the unconstrained problem when you add a non-negativity constraint. But a lower train error can simply mean that the unconstrained model is overfitting. If the assumption of non-negativity makes sense for the solution of your problem, then it is likely that this assumption will decrease the test error, as you are limiting your function space and decreasing the chance of overfitting.
H: Confused by kmeans results I am using kmeans to cluster some data with 2 features. Not sure I understand why kmeans is producing the clusters I see: Why would kmeans not cluster these points in a way that matches what we would expect visually looking at the data? Why are seemingly random points in the middle of visual clusters being put into a second cluster like that? The code I am running: cols = ['col1', 'col2'] features = map(lambda x: df[x], cols) input = np.matrix(list(zip(*features))) scaler = StandardScaler() scaler.fit(input) input_scaled = scaler.transform(input) algo = KMeans(n_clusters=2) algo.fit(input_scaled) df['cluster'] = pd.Series(algo.labels_) sns.lmplot(x=cols[0],y=cols[1],data=df, fit_reg=False, hue='cluster') AI: Got it. Thanks to everyone who helped. The issue had nothing to do with kmeans. I did not realize that when you do this: dataframe[col] = Series, the series gets merged based on the index, as opposed to a simple appending of the column. My dataframe had been filtered before I generated the lists of features, so the index was not 0,1,2 but rather 0,2,5, etc. I needed to perform reset_index() on the original dataframe before assigning a new column in the dataframe to the labels from the algorithm.
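The same pitfall in a tiny self-contained example with made-up values, showing why assigning a Series to a filtered DataFrame aligns on the index (producing NaNs) and how reset_index avoids it:

import pandas as pd

df = pd.DataFrame({'col1': [1, 2, 3, 4]})
filtered = df[df['col1'] > 2].copy()      # index is now [2, 3], not [0, 1]

labels = pd.Series([0, 1])                # e.g. cluster labels, indexed [0, 1]
filtered['cluster'] = labels              # aligned on index -> NaN for rows 2 and 3
print(filtered)

fixed = df[df['col1'] > 2].reset_index(drop=True)
fixed['cluster'] = pd.Series([0, 1])      # indexes now match -> labels land on the right rows
print(fixed)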
H: Non Scaled New Actual Data I am new to Machine Learning and I have a conceptual question. I have a scaled dataset (scikit-learn and pandas). After training/testing my algo, I will make new predictions using new actual data which will not be scaled or normalized. Will this discrepancy be a problem, if so, how should I resolve it? Best, AI: You should save the scaler params used to fit the training set and use the same ones to transform all other data used with the model from then on - whether CV, test or new unseen data. After training/testing my algo, I will make new predictions using new actual data which will not be scaled or normalized. No that won't work. Once you add scaling/normalisation to the training pipeline, the exact same scaling (as in same scaling params, not re-calculated) should be applied to all input features. The scikit-learn scalers like e.g. StandardScaler have two key methods: fit should be applied to your training data transform should be applied after fit, and should be used on every data set to normalise model inputs. fit_transform can be used on the training data only to do both in a single step. If you need to do the training and predictions in different processes (maybe live predictions are on different devices for instance), then you need to save and restore the scaling params. One basic, simple way to do this is using pickle e.g. pickle.dump( min_max_scaler, open( "scaler.p", "wb" ) ) to save to a file and min_max_scaler = pickle.load( open( "scaler.p", "rb" ) ) to load it back.
H: Is Java or Python a better choice for an application involving data intensive algorithms employing natural language processing? I am a professional java developer with 9 years of experience. Never worked with Python but, I do not have any restraints learning it. Still I need clarity on a couple of aspects. I hope if anyone can help, it will clear the cobwebs in my mind. I am looking to develop a machine learning project : With a web interface to present interactive analytical graphs and views. Employing a web crawler collecting data, both structured and unstructured. A backend crunching the data collected using machine learning and artificial intelligence and natural language processing algorithms. While exploring options available, I found both Python and Java have extensive support and frameworks/libraries/modules available to leverage machine learning implementations. However, working with Java gives me a sense of strongly typed programming and it can leverage the underlying CPU cores via its robust multi-threading APIs. Python on the other hand is scripted language and on the face value looks to lesser efficient when working on complex logic crunching a huge amount of data pertaining to Natural Language Processing. I might be utterly wrong as I am not very acquainted with Python ecosystem. Can anyone help me understand on the choice between Java and Python if I keep in mind execution speed, maintainability and community support targeted toward development of a large, scalable and robust machine learning/artificially intelligent application ? AI: I work with python and java in big-data settings every day. python is definitely my language of choice when researching, experimenting and fitting models. python has a ton of very useful libraries such as spacy, nltk and sklearn that makes exploration very easy, especially in within a jupyter notebook. When running the code in production, If performance is a consideration, we implement only the prediction phase in java. java is more suited for production, has great support for packaging, tests, deployment.
H: How to prevent a neural network from choosing the 'easiest' solution I have a neural network that takes in roughly twelve values, and outputs a single probability. The issue is that the network appears to be smart enough to realize there is a very significant correlation between three of the values and the output, and doesn't seem to be recognizing any correlation with the remainder of the inputs. In other words, the model has figured out that if the AND of three specific inputs is non-zero, there is an ~80% chance the output is 1. This seems like expected behavior, but I am looking to have the network enhance its 80% prediction with data from the other nine inputs. Is this possible? My proposed solution requires a bit more information regarding the nature of the data. The twelve inputs can be categorized into four groups of three variables each. For instance, we have three metrics: purchase cost, warranty length and average maintenance costs. These three metrics are then applied to four different appliances in a household: the boiler, washing machine, dishwasher and refrigerator. The network is then supposed to output the probability of the owner of the household moving out in a few months. In this example, there happens to be an 80% correlation between refrigerators and the homeowner moving: if they own a refrigerator, there is an 80% chance they will move out within the next few months. As mentioned before, the network is smart enough to recognize the correlation between the refrigerator and the moving out, but I am hoping to get it to augment this prediction with the inclusion of the other inputs. My first strategy for addressing this would be to split the single network into four separate networks, each handling one appliance. These will output four probabilities total, which I can then either weight and compute by hand, or feed into a fifth network for final processing. My worry is that this would result in the same issue occurring - the final network would determine that ignoring the first three inputs and only returning the fridge-based probability is the best way to minimize error. My second possible solution is to simply up the number of iterations over the dataset, in the hope that the network happens to locate a slightly better solution by chance, then works to optimize it. This seems unlikely, and overly dependent on the problem. My third and final possible solution is to initially only train the network on input where the homeowner doesn't own a fridge (all fridge values are zero) such that the network optimizes its solution without having to worry about the fridges, and then gradually introduce inputs from homeowners with fridges. Will any of these work? Is there a better solution? I am using the Tensorflow Estimator API. Thanks! AI: This is fairly common with scenarios like predicting fraud when overall cases of fraud are uncommon. The model will frequently just say "no fraud!" and be right a large percent of the time. You need to assign a penalty, cost, or weight so that the easy answer isn't so cheap. This article describes that further in a lot of detail: https://blog.fineighbor.com/tensorflow-dealing-with-imbalanced-data-eb0108b10701
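One concrete way to apply such a penalty in Keras is the class_weight argument of fit, which makes mistakes on the chosen class cost more in the loss. A hedged sketch follows; the data, architecture and the weight of 4.0 are placeholders, not tuned values.

import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# Toy stand-in for the 12-input problem described above
X = np.random.rand(1000, 12)
y = (np.random.rand(1000) < 0.2).astype(int)     # class 1 is the minority

model = Sequential([
    Dense(32, activation='relu', input_shape=(12,)),
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Errors on class 1 now cost 4x as much as errors on class 0, so the network
# can no longer minimize the loss by leaning on the "easy" answer alone.
model.fit(X, y, epochs=5, class_weight={0: 1.0, 1: 4.0}, verbose=0)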
H: Why ReLU is better than the other activation functions Here the answer refers to vanishing and exploding gradients that has been in sigmoid-like activation functions but, I guess, Relu has a disadvantage and it is its expected value. there is no limitation for the output of the Relu and so its expected value is not zero. I remember the time before the popularity of Relu that tanh was the most popular amongst machine learning experts rather than sigmoid. The reason was that the expected value of the tanh was equal to zero and and it helped learning in deeper layers to be more rapid in a neural net. Relu does not have this characteristic, but why it is working so good if we put its derivative advantage aside. Moreover, I guess the derivative also may be affected. Because the activations (output of Relu) are involved for calculating the update rules. AI: The biggest advantage of ReLu is indeed non-saturation of its gradient, which greatly accelerates the convergence of stochastic gradient descent compared to the sigmoid / tanh functions (paper by Krizhevsky et al). But it's not the only advantage. Here is a discussion of sparsity effects of ReLu activations and induced regularization. Another nice property is that compared to tanh / sigmoid neurons that involve expensive operations (exponentials, etc.), the ReLU can be implemented by simply thresholding a matrix of activations at zero. But I'm not convinced that great success of modern neural networks is due to ReLu alone. New initialization techniques, such as Xavier initialization, dropout and (later) batchnorm also played very important role. For example, famous AlexNet used ReLu and dropout. So to answer your question: ReLu has very nice properties, though not ideal. But it truly proves itself when combined with other great techniques, which by the way solve non-zero-center problem that you've mentioned. UPD: ReLu output is not zero-centered indeed and it does hurt the NN performance. But this particular issue can be tackled by other regularization techniques, e.g. batchnorm, which normalizes the signal before activation: We add the BN transform immediately before the nonlinearity, by normalizing $x = Wu+ b$. ... normalizing it is likely to produce activations with a stable distribution.
H: Classifying job titles I have a dataset of about 10000 unlabeled job titles (mostly very short titles) such as head of mobile or lead iOS developer. I would like to perform classification of those job titles in two different ways : The 1st classification is sorting job titles according to the type of job it represents (i.e. marketing, IT, healthcare, legal...) The second classification would be about the level of seniority of a specific job (i.e. executive, manager, associate, trainee...) I have tried an approach based on Word2Vec (vectors were taken from the Google Word2Vec set) : I cleaned and stemmed job titles I created a vector representing the job title as the average vector of the words that compose it I created a list of vectors representing the target values (executive could be the average vector of +executive, +chief, -assistant for example) Find the least cosine distance between the job title vector and the target vectors. While this approach gets decent result (about 70-80% accuracy) it is not enough for the task I'm planning to do. So I was wondering if a better approach could be used (except labeling the data by hand and using that to train some algorithm) AI: A reasonable assumption to make, is that titles that share indicative words, are in the same sector. Examples: Positive example: "senior IOS developer" and "principal web developer" seem to be both in the IT sector, and the word "developer" is our giveaway. Negative example: "chief operations officer" and "chief data officer" share 2 words, but those words (chief,officer) are not indicative So, I would maintain a list of non-indicative words, such as "senior","chief",... Filter them out, and then apply an hierarchical clustering algorithm on the word-distance between titles.
H: Need help understanding the structure of this convolutional neural network I'm trying to grasp the structure of this convolutional neural network. (Source) I understand the first layer is a 6x6 conv with stride 2 followed by a 3x3 max pool, and then 64 5x5 convs and another 3x3 max pool. After this, however, outputs from a fully connected layer of 64 neurons are "tiled over the spatial dimensions of the response map of pool2". I don't understand what this means. The output of pool2 should be 64 (because of 64 filters) 18x18 arrays. In the first 18x18 array do I add output1 to each of the 18*18=324 values, and in the second array add output2 to each of the 324 values, etc.? TLDR What do I do with the 64 outputs (each is an 18x18 array) and the 64 outputs from the fully connected layer? AI: The so-called motor command $v_t$ (I don't know what it means, but it looks to be some scalar feature, although it would work the same if it's a vector) is fed into a layer that builds 64 representations of this value, one for each feature map in the convolutional layer that we are going to add it to. The conv layers have a spatial resolution however, and this representation is only one number for the corresponding feature map. What they do is tile this number so that this whole motor command representation has the same dimensions as the max pooling layer after the convolutional layer (pool2). Now that the dimensions match, we can use an element-wise addition operation to inject this information into the convolutional network.
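In NumPy terms, the tiling and element-wise addition described above looks roughly like this (shapes taken from the question, random placeholder values):

import numpy as np

pool2 = np.random.randn(18, 18, 64)     # response map of pool2: 18x18 spatial, 64 channels
fc = np.random.randn(64)                # the 64 outputs of the fully connected layer

# Tile each of the 64 scalars over an 18x18 grid so the shapes match pool2 ...
fc_tiled = np.tile(fc.reshape(1, 1, 64), (18, 18, 1))

# ... then inject it with an element-wise add: output_k is added to every one of
# the 324 positions of feature map k, exactly as the question guesses.
combined = pool2 + fc_tiled             # same result as broadcasting: pool2 + fc

print(combined.shape)                   # (18, 18, 64)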
H: Do kernels always map data points to higher dimensions? When we use kernels in an SVM to linearly separate non-linear data points by mapping them to 'another dimension', does this suitable 'other dimension' always have to be higher than the original dimension of the data points? And is it true that we can always find a higher dimension that can linearly separate the data points in a training set? AI: 'Another dimension' can also be a lower-dimensional space. You are free to choose your kernel. An example is: $k\big((x_1, y_1), (x_2, y_2)\big) = x_1x_2$ Is it true that we can always find a higher dimension that can linearly separate data points? Yes. If you have a finite dataset, you can always find a higher dimension that can linearly separate the data points. The obvious one is mapping n points to an n-dimensional space. If you do any non-trivial mapping (no d points lie on a (d-1)-dimensional hyperplane), then you can always linearly separate any two class assignments.
H: Using dates for predicting loans My goal is: Calculate the probability of a client taking a loan. My problem is: How to take into account the date of the loan in this process. Certainly the date is important here since, let's assume, my clients usually take loans closer to Christmas, for example. My initial thought was to use the day of the year, since this would absorb those "special dates": between days 340 and 365, for example, clients are more likely to take the loan since this is close to Christmas, and this way I don't have to "manually consider" those special dates and can even discover new special periods. Question: Is this a sound approach? AI: You are going in the right direction. Dates can contain a lot of information depending on the task you want to learn. A problem with your suggestion of using the day of the year straight up is that the last day of the year and the first day of the following year are very close to each other, while in your representation they are the furthest away. An alternative is to use the fact that this is cyclical: map the day_of_year feature onto a circle and use the coordinates on this circle as features to represent it. A lot of the important features regarding dates are cyclical, like the day of the week, the day of the month, the hour of the day etcetera. An alternative which allows for easier non-linear relationships would be to one-hot encode them as classes. This has the advantage of a more free representation, but the downside is that it will cost more features and it cannot generalize based on the fact that two options might be next to each other. With regards to additional domain knowledge, like knowing that being close to Christmas, or close to salary day, might be relevant, it's almost always beneficial to add these features to your data. This could be done by calculating the absolute distance to Christmas, or days before pay day etcetera. Some of these features can be work intensive to create, for example localized holiday features, but they can significantly influence people's decisions, and if that is what you are trying to predict, adding these will benefit your performance.
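A short sketch of the circle idea with pandas/NumPy; the column names and dates are hypothetical:

import numpy as np
import pandas as pd

df = pd.DataFrame({'date': pd.to_datetime(['2017-01-01', '2017-06-15', '2017-12-31'])})

day = df['date'].dt.dayofyear
# Map day-of-year onto a circle so that Dec 31 and Jan 1 end up next to each other
df['day_sin'] = np.sin(2 * np.pi * day / 365.25)
df['day_cos'] = np.cos(2 * np.pi * day / 365.25)

# Domain-knowledge feature: distance in days to Christmas (simple same-year version)
christmas = pd.to_datetime('2017-12-25')
df['days_to_christmas'] = (christmas - df['date']).dt.days.abs()

print(df)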
H: How to decide which images to label next? We have a custom dataset of 20 thousand images with two pixel-wise labeled classes. However we have 1 million more raw images, which we would like to label. We want to label the most important new images first. Importance is defined as: Images with the more new information Images helping our deep learning bounding box classifier to improve So instead of labeling images, where we already have about thousand similar ones, we first want to label the images which are quite different from the already-labeled ones and help more to improve our classifier. How can we assign priorities and decide which images to label first? AI: This type of problem is considered to be part of 'active learning'. There is a lot of research being done on this topic at the moment, but some first approaches are relatively easy, depending on the type of model that you are using. Since you mentioned that you are using deep learning bounding box detectors, I will showcase a few examples of how to approach this problem using Convolutional neural networks. The core idea is that we want some measure of potential gain of an unlabeled sample. That way we can train our model on our labeled training set, predict the labels for our unlabeled set and measure which examples will be most useful to label. In case of classification you could use the sigmoid/softmax output and get some kind of uncertainty from there, however deep learning models are usually fairly certain about their predictions and a high probability doesn't automatically mean that it predicts it well. Another approach is to use dropout in your model during training, and then apply dropout to your predictions on your unlabeled set as well. By sampling multiple dropout masks and comparing all the different predictions, you could measure how different the outputs are. If the outputs are very similar, it's unlikely that your model will learn much more if you label this, but if the outputs vary wildly, maybe this example lives in a part of your feature space that your model doesn't know or understand very well yet. There are a lot of ways to approach this, what I have written here is just an introduction to the concept of 'active learning'. There are a lot of papers available about this topic! EDIT: I haven't actually read a lot of this research, but here are a few: https://arxiv.org/pdf/1703.02910.pdf https://arxiv.org/pdf/1707.05928.pdf https://arxiv.org/pdf/1701.03551.pdf
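A rough tf.keras sketch of the dropout-at-prediction-time idea (often called Monte Carlo dropout). The model below is an untrained placeholder classifier rather than a bounding box detector, and in practice you would fit it on your labeled set first; the uncertainty measurement itself works the same way.

import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout

model = Sequential([
    Dense(64, activation='relu', input_shape=(20,)),
    Dropout(0.5),
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy')

x_unlabeled = np.random.rand(100, 20).astype('float32')

# Keep dropout active at inference time by calling the model with training=True,
# then measure how much the repeated predictions disagree per example.
preds = np.stack([model(x_unlabeled, training=True).numpy() for _ in range(20)])
uncertainty = preds.std(axis=0).ravel()

# Send the examples the model is least sure about to the labelers first
most_informative = np.argsort(-uncertainty)[:10]
print(most_informative)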
H: Proper derivation of dz[1] expression for backpropagation algorithm For the backpropagation algorithm, is it correct to have the weight matrix $w$ transposed in the expression $dz^{[1]} = w^{[2]T}dz^{[2]} * g^{[1]'}(z^{[1]})$? Could anyone show me why it is $w^{[2]T}$ instead of just $w$? I have read other articles, but the transposed dimension of $w$ is ignored when the chain rule is applied. Screenshot from Andrew Ng deeplearning coursera course video: AI: For the backpropagation algorithm, is it correct to have the weight matrix $w$ transposed in the expression $dz^{[1]} = w^{[2]T}dz^{[2]} * g^{[1]'}(z^{[1]})$? Yes it is correct. You can show it is with some different arguments: Checking correct dimensions Although this is not a robust theoretical argument, checking dimensions is actually what I do in practice when I am confused when implementing NNs. Here's the rough logic of a dimension check on the equation in your question: $W^{[2]}$ is an $n^{[2]} \times n^{[1]}$ matrix for the forward propagation to work. $dz^{[2]}$ is an $n^{[2]} \times 1$ column vector (because it is the gradient of the cost function with respect to $z^{[2]}$, so it has to have the same dimensions). $dz^{[1]}$ is an $n^{[1]} \times 1$ column vector. It is $dz^{[2]}$ that is being multiplied - so the multiplying matrix must be $\text{something} \times n^{[2]}$. The output needs to have the same dimensions as $dz^{[1]}$ - so the matrix must also be $n^{[1]} \times \text{something}$. That means the multiplying matrix must have dimensions $n^{[1]} \times n^{[2]}$ - which $W^{[2]T}$ has (and no other component of the network could match). Theory from item-by-item calculation, converted to matrix math But why is it $W^{[2]T}$ and not some other matrix with the desired dimensions? For that we need to go back to the chain rule. Note that the expression you have written is actually two steps of the chain rule combined - first we derive $da^{[1]}$ from $dz^{[2]}$, then we derive $dz^{[1]}$ from $da^{[1]}$. It is that first step that introduces terms from $W^{[2]}$, so I will just show that. Also, deriving this directly using matrix notation is more complex, so we'll just drop into using the index values, and use the "official" partial derivative notation. To calculate $da^{[1]}$ for a specific $i^{th}$ neuron, you have to sum over the gradients $dz^{[2]}$ for all the neurons it links to: $$ \frac{\partial J}{\partial a_i^{[1]}} = \sum_j \frac{\partial J}{\partial z_j^{[2]}} \frac{\partial z_j^{[2]}}{\partial a_i^{[1]}} = \sum_j \frac{\partial J}{\partial z_j^{[2]}} W_{ji}^{[2]} $$ Doing this for each $i$ value in turn gets you the full $da^{[1]}$ vector. Compare this to how the forward propagation works in the same network using the same notation and indexing: $$z^{[2]}_j = b^{[2]}_j + \sum_i W_{ji}^{[2]} a_i^{[1]}$$ And you can see that the index used to drive the sum is different between the forward and backward calculations: In $W^{[2]}_{ji}$, $j$ is the index of the matrix row and $i$ is the index of the matrix column of $W^{[2]}$. When multiplying forward, we sum over $i$, i.e. each row of $W^{[2]}$ is multiplied element-by-element with $a^{[1]}$ and summed up. When you look at the gradient, the equivalent sum is over $j$, i.e. each column of $W^{[2]}$ is multiplied element-by-element with $dz^{[2]}$ and summed up. To turn that gradient calculation into a normal matrix multiply (just so the matrix notation works, and you don't have to write out the sum) you have to swap rows and columns - which is what a transpose is. This is partly a definitions and notation issue.
We could define a different kind of matrix multiply that worked with sums column-by-column. But this is not usually done, because multiplying by the transpose does the same thing.
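A small numeric check of that conclusion, comparing the explicit sum over $j$ with the matrix form $W^{[2]T} dz^{[2]}$ (random values, shapes as in the answer):

import numpy as np

n1, n2 = 4, 3
W2 = np.random.randn(n2, n1)            # (n2, n1), as required by forward propagation
dz2 = np.random.randn(n2, 1)            # gradient with respect to z[2]

# Element-by-element version: da1_i = sum_j dz2_j * W2[j, i]
da1_loop = np.array([[sum(dz2[j, 0] * W2[j, i] for j in range(n2))] for i in range(n1)])

# Matrix version from the question
da1_matrix = W2.T @ dz2                 # (n1, n2) @ (n2, 1) -> (n1, 1)

print(np.allclose(da1_loop, da1_matrix))   # True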
H: Decision Tree Classifier with Majority vote (Is 50% a majority vote) I have a homework question that requires me to create a 3-level tree (root, intermediate and leaf) with majority vote and expand the tree if a 3-level tree is not possible. I have to use the ID3 algorithm. It's a manual question rather than a coding assignment, and the dataset contains 16 instances, 4 features and a class label with T, F as their values. I started calculating entropies and information gain for different nodes. When I complete the 3-level tree, I can assign the classes at the leaves with 50%, 100%, 100% and 75% majority vote or confidence. My questions are: is 50% really a majority vote? Do I need to expand the part of the tree that has 50% confidence, or can I just assign the class arbitrarily? Regards AI: The goal of ID3 is to get the purest nodes possible (ironically, that is what contributes to its problem of overfitting), so 50% is not pure at all: the data under that node is equally likely to be in either class, which makes prediction tricky. It would be better to grow the tree further and find nodes which are purer than 50%.
H: Difference between RFE and SelectFromModel in Scikit-Learn What is the difference between the Recursive Feature Elimination (RFE) function and SelectFromModel in Scikit-Learn? Both seem very similar. AI: They effectively try to achieve the same result but the methodology used by each technique varies a little. RFE removes the least significant features over iterations. So basically it first removes a few features which are not important, then fits the model again and removes a few more. It repeats this iteration until it reaches a suitable number of features. SelectFromModel is a little less robust as it just removes less important features based on a threshold given as a parameter. There is no iteration involved: the estimator is fitted once and the features below the threshold are dropped.
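A side-by-side usage sketch in scikit-learn, which also makes the difference visible: RFE takes a target number of features and eliminates iteratively, while SelectFromModel takes an importance threshold from a single fit. The estimator and the numbers are arbitrary choices for illustration.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectFromModel

X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)
est = RandomForestClassifier(n_estimators=100, random_state=0)

# RFE: repeatedly fit, drop the weakest features (2 per step here), refit, until 5 remain
rfe = RFE(estimator=est, n_features_to_select=5, step=2).fit(X, y)
print("RFE keeps:            ", rfe.get_support(indices=True))

# SelectFromModel: one fit, then keep everything whose importance is above the threshold
sfm = SelectFromModel(estimator=est, threshold="median").fit(X, y)
print("SelectFromModel keeps:", sfm.get_support(indices=True))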
H: Should I relabel this data or remove the potentially leaky feature? Putting together a Keras MLP to predict whether a value will exceed a static percent threshold in the next 15 minutes. The incoming data is a rolling percentage which moves smoothly for the most part, because the data comes in every few milliseconds and is windowed over some fixed number of minutes. So, when the data is already above the threshold, it tends to stay there for a while. When putting the data through the NN, it gets high accuracy, but this seems to be due to it correctly predicting that when it is currently over the threshold, it will also be over the threshold at some point (the next point) in the next x timesteps. The model would only be useful if it could accurately predict before the value crosses the threshold. Features: 1) current point is over threshold - 1/0 2) current point is AM - 1/0 3) current day is weekday - 1/0 4) current percentage - 0.0-1.0 5-9) average of percentages in past 1/5/10/20/30 minutes - 0.0-1.0 Label: 1 if a point is over the threshold at any point in time after now and before now+15 minutes. Features 5-9 are intended to capture the inertia of the current percentage. From feature importance, it looks like the current value is heavily used, followed by whether or not it's over the threshold, followed by the rolling means in order of time. I am currently changing the NN architecture and number of epochs in order to increase the F1 score. Should I remove features 1 and 4, or rework the label, in order to increase the accuracy of the predictive ability before it's actually over the threshold? AI: Consider your problem as a binary classification. We have two kinds of prediction: raise an alarm (state is going to change), do not raise an alarm (state is going to stay the same).

Actual \ Predicted | Alarm goes off | Alarm did not go off
State changed      | True positive  | False negative
State stayed       | False positive | True negative

Note the true negatives are useless - we don't care if the state stays in the same spot (below/above threshold). Therefore, the accuracy is NOT important for this system. Do not try to optimize the wrong metric. This extends to "do not use ROC/AUC" (in your problem). You can use a PR curve, but be careful with those (no interpolation, no AUC PR, these are wrong/useless). Basic metrics that you could use are F-score, precision and recall, or a re-weighting of those (if a false alarm is less important than a missed alarm, for example). We also would like to deal with a huge "lag", trying to predict many time periods ahead (predicting just before the change is less useful). For this reason, I'd suggest investigating models other than an MLP, notably those based on recurrent neural networks, such as LSTM. Also consider using time series prediction instead of classification - the literature on the subject is extensive and matches your problem really well.
H: In R, can I integrate different classifying algorithms in one bagging model? I use R to do data analysis. I have a dataset. When I use different classifying algorithms, such as random forest, SVM, etc, I have the different accuracy. So, I want to integrate all the algorithms into one framework, let's say adaboost. We know that adaboost framework use multiple "weak" classifying algorithms to combine a strong classifier. So, can I customize the "weak" classifying algorithms as I want? Here is just my current idea: In this framework, I use SVM first. Then give the data that are classified incorrectly more weights. Then, I use random forest. ... In the end, all the classifiers in this framework will work together. This is just what I think about this issue. If there is other method working such as voting, please let me know too. Any help is appreciated. AI: What you're looking for is called an ensemble model which means it is a compilation of several models to improve the results. This is a very common technique for winners in Kaggle competitions. Since you're using R and caret is a popular way to do ML in R, here's a package just for that purpose on caret: https://cran.r-project.org/web/packages/caretEnsemble/vignettes/caretEnsemble-intro.html
H: How to add time as a feature into clustering algorithm? I've written here about my problem. And now I have a new one. How do I use time as a feature? I have a pandas DataFrame with a date-time column ('date') and I don't know how to use it as a feature. For now I'm using TfidfVectorizer. from sklearn.feature_extraction.text import TfidfVectorizer vectorizer = TfidfVectorizer(min_df=5, max_features=10000, ngram_range=(1, 2)) vz = vectorizer.fit_transform(df['word']) Where the 'word' column contains the tokens of my text. I'm using MiniBatchKMeans. Here is a code example: from sklearn.cluster import MiniBatchKMeans num_clusters = 32 kmeans_model = MiniBatchKMeans(n_clusters=num_clusters, init='k-means++', n_init=1, init_size=1000, batch_size=1000, verbose=False, max_iter=1000) kmeans = kmeans_model.fit(vz) kmeans_clusters = kmeans.predict(vz) kmeans_distances = kmeans.transform(vz) I'm interested in how to add my time feature (my time column) to the clusterer, and whether there is any possibility to apply some kind of "weight" factor to the time feature and to the different tokens. AI: I've decided to try LDA: cvectorizer = CountVectorizer(min_df=4, max_features=10000, ngram_range=(1, 2)) cvz = cvectorizer.fit_transform(df['word']) n_topics = 32 n_iter = 2000 lda_model = lda.LDA(n_topics=n_topics, n_iter=n_iter) X_topics = lda_model.fit_transform(cvz) Then I concatenated the time feature to the LDA vectors: X_all = np.hstack((X_topics, X_time)) Where X_time was my time feature column.
H: Charts with or without grids? I am writing a thesis and don't know if the charts below should have grid lines or not. Do they need the grid lines? An example without the grid lines: AI: I would say that it is really up to you here. I don't think that the grid lines are distracting in any way, but they can be helpful if you are trying to show more exact results. For example, if it is important that the scatter plot shows exact points, then keep them. However, if you are demonstrating more of a general trend, then you could go without them. I'm not sure if these are your finished graphs or not, but make sure to label the color differences in the lines and provide a title. Also, is there a reason why your x-axis is on the top of the graph? If you want to keep it there and use the grid lines, try to have the numbers be above the graph instead of being slightly blocked by them.
H: Where can I find trained neural network data to play with? This is the trained neural network for the XOR operator: Can I find something like a trained network for recognizing handwritten digits somewhere on the internet? Is there an "official format" for trained neural networks? AI: What about the super popular and classical MNIST? TensorFlow has a nice tutorial: https://www.tensorflow.org/get_started/mnist/beginners Just follow the instructions!
H: What are the advantages or disadvantages of Owl? Owl is the numerical library for OCaml: https://github.com/ryanrhymes/owl It is supposed to be an equivalent of NumPy and also have capabilities of TensorFlow. Any insights on why it should be used or why it shouldn't? AI: In a nutshell, it is promising but still lacking in several areas. The research below explains where it is better or worse than NumPy and TensorFlow: http://pligor.tumblr.com/post/166198475026/owl-an-ocaml-numerical-library-research-by
H: Sklearn Aggregating Multiple Fitted Models Into A Single Model? (binary classification) My problem context: dataset too big to fit into memory. binary classification [0,1] 30 csv files in a directory with exactly 30,000 samples (rows) each file contains 15,000 0 class and 15,000 1 class (no unbalance) model is xgboost.XGBClassifier() If I iterate through these 30 files, fit a model, and pickle the model to disk; is it possible to aggregate these 30 fitted models into a single model? I am aware of sklearn.ensemble.VotingClassifier but that is used for: sklearn.ensemble.VotingClassifier Soft Voting/Majority Rule classifier for unfitted estimators. AI: Despite the downvote, the question is clear, and a common one I'm sure most stumble across after doing machine learning work for some time. The goal was to make a stronger predictive model from multiple trained models. Quote from my question: is it possible to aggregate these 30 fitted models into a single model? Answer: yes but there's no good functionality that allows you to do this in sklearn. Verbose answer: Imagine you have 30 CSV files that contain 15,000 0 class and 15,000 1 class samples. In other words, an equally balanced number of binary responses (no class imbalance). I generated these files myself because 1) the size of the data I'm working with is too big to fit into memory (point #1 of my original question) and 2) contains 97% class 0 and 3% class 1 (large class imbalance). My goal is to see if it's even possible to distinguish between a 0 or a 1 if the class imbalance issue was removed from the equation. If there were distinguishing features found, I'd want to know what those features were. To generate each batch, I grabbed 15,000 1 responses, and randomly sampled from the 97% 15,000 more samples, joined into one dataset (30,000 samples total), then shuffled them at random. I then went through each batch and trained an XGBClassifier() using optimal parameters found from GridSearchCV() for each. I then saved the model to disk (using Python's pickle capabilities). At this point, you have 30 saved models. /models/ directory looks like this: ['model_0.pkl', 'model_1.pkl', 'model_10.pkl', 'model_11.pkl', 'model_12.pkl', 'model_13.pkl', 'model_14.pkl', 'model_15.pkl', 'model_16.pkl', 'model_17.pkl', 'model_18.pkl', 'model_19.pkl', 'model_2.pkl', 'model_20.pkl', 'model_21.pkl', 'model_22.pkl', 'model_23.pkl', 'model_24.pkl', 'model_25.pkl', 'model_26.pkl', 'model_27.pkl', 'model_28.pkl', 'model_29.pkl', 'model_3.pkl', 'model_4.pkl', 'model_5.pkl', 'model_6.pkl', 'model_7.pkl', 'model_8.pkl', 'model_9.pkl'] Again, going back to the original question - is it possible to aggregate these into a single model? I wrote a pretty basic python function for this. def xgb_predictions(X): ''' returns predictions from 30 saved models ''' predictions = {} for pkl_file in os.listdir('./models/'): file_num = int(re.search(r'\d+', pkl_file).group()) xgb = pickle.load(open(os.path.join('models', pkl_file), mode='rb')) y_pred = xgb.predict(X) predictions[file_num] = y_pred new_df = pd.DataFrame(predictions) new_df = new_df[sorted(new_df.columns)] return new_df It iterates through each file in a directory of saved models and loads them one at a time casting its own "vote" so-to-speak. The end result is a new dataframe of predictions. Since this new dataset is smaller (X.shape x 30), it will fit into memory. 
I iterate through each batch file calling xgb_predictions(X), getting a dataframe of predictions, add the y column to the dataframe, and append it to a new file. I then read the full dataset of predictions and create a "level 2" model instance where X is the prediction data and y is still y. So to recap, the concept is, for binary classification, create equally balanced class datasets, train a model on each, run through each dataset and let each trained model cast a prediction. This collection of predictions is, in a way, a transformed version of your original dataset. You build another model to try and predict y given the predictions dataset. Train it all in memory so you end up with a single model. Any time you want to predict, you take your dataset, transform it into predictions, then use your level 2 trained model to cast the final prediction. This technique performed much better than any other single model did or even stacking with UNTRAINED prior models.
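For completeness, a stripped-down sketch of the 'level 2' step described above, assuming the prediction dataframe has already been written out by repeatedly calling xgb_predictions; the file name predictions.csv and the train/test split are hypothetical details of this sketch.

import pandas as pd
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# predictions.csv: 30 columns of 0/1 votes from the saved models, plus the true label y
stacked = pd.read_csv('predictions.csv')
X_level2 = stacked.drop(columns=['y'])
y_level2 = stacked['y']

X_tr, X_te, y_tr, y_te = train_test_split(X_level2, y_level2, random_state=0)

# The level-2 model learns how to weigh the 30 votes
level2 = XGBClassifier().fit(X_tr, y_tr)
print("level-2 accuracy:", accuracy_score(y_te, level2.predict(X_te)))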
H: What is the best way to classify data not belonging to a set of classes? I am building a multi-class support vector machine (8 classes to be precise) on an image dataset of pre-defined classes. And then I thought of a question: What if I have an image that doesn't belong to the set of predefined classes, what would be the outcome? So I decided to experiment with it and the result was very bad. I got high confidence for images that don't belong to any of the classes. Some images were assigned to a particular class with 98% confidence, even though my expectation was that they should have very low confidence. I also tried using a OneClass SVM to first predict if an image is part of the known classes or not. If yes, then what's the label? (Meaning I have 2 models.) But this doesn't seem to work, as the OneClass SVM couldn't classify the "other" images well. Now I am running out of ideas of how to go about it. How can I approach this problem? AI: Just a proposal of a method to try out. Stage $1$: Use a one-class SVM to assign those images that do not belong to the set of predefined classes to the $9$-th class. Stage $2$: For those images that pass through your filter, let the multi-class SVM assign them to one of the $8$ classes.
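A bare-bones scikit-learn sketch of the two-stage proposal. The digits dataset is only a placeholder for the real image features (classes 0-7 pretend to be the 8 known classes, while 8s and 9s play the "none of the above" role), and nu/gamma are untuned.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import OneClassSVM, SVC

digits = load_digits()
known = digits.target < 8
X_known, y_known = digits.data[known], digits.target[known]
X_unknown = digits.data[~known]

# Stage 1: novelty filter trained only on images from the known classes
novelty = OneClassSVM(nu=0.1, gamma='scale').fit(X_known)

# Stage 2: ordinary multi-class SVM for the 8 known classes
clf = SVC(gamma='scale').fit(X_known, y_known)

def predict(x):
    x = x.reshape(1, -1)
    if novelty.predict(x)[0] == -1:      # -1 means "does not look like the training data"
        return 8                         # the extra "unknown" class
    return clf.predict(x)[0]

print(predict(X_known[0]), predict(X_unknown[0]))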
H: What is PAC learning? I have seen the definition here but I really cannot understand it. In this framework, the learner receives samples and must select a generalization function (called the hypothesis) from a certain class of possible functions. The goal is that, with high probability (the "probably" part), the selected function will have low generalization error. Actually we do that in every machine learning situation, and we do the latter part to avoid over-fitting. Why do we call it PAC learning? I also do not get the meaning of the math. Is there anyone who can help? AI: PAC stands for Probably Approximately Correct. It was a very common research area in computer science, looking for proofs of learnability of certain hypothesis sets. The usual hypothesis sets were not distributions (like in statistics) but more logical formulas like DNF, CNF, DFA, etc. In order to prove learnability, you need to show that for every sample distribution, for every concept $h$ in the hypothesis class $H$, for every $\lambda$ (the probability part) and for every $\epsilon$ (the approximately part) you can find a hypothesis $h^*$ such that with probability at least $1 - \lambda$ the disagreement between $h$ and $h^*$ will be smaller than $\epsilon$. This bar for learning is very high, and some of the research proved negative results about many simple hypothesis sets. When you try to learn to differentiate between cats and dogs, you don't have a mathematical definition of the hypothesis set. Most publications run an algorithm on a dataset and show their results. Almost nobody proves that the algorithms are correct (even given some assumptions). I find the lack of proofs quite sad. The math behind PAC is actually probability. Let's say that you have two dice (classifier, concept) and you want to know how correlated they are. So, you toss them together $m$ times and you check the level of disagreement, $\epsilon$. The higher the $m$ and the lower the $\epsilon$, the more confident you are that this is due to correlation and not due to chance, $\lambda$. Given this intuition, PAC learning usually works in the other direction. Once you state the desired $\lambda$ and $\epsilon$, the algorithm provides the number of samples $m$ needed to reach them.
H: What are real world applications of Doc2Vec? I am new to Doc2Vec. As I understand it, Doc2Vec groups similar documents based on the context of their words. I have a set of newspaper documents and I want to identify the main topics of the newspapers (group 'Politics' news documents into one group, 'Sports' news documents into another group etc.) based on their content. Thus, I am interested in knowing: What variant of Doc2Vec is more suitable for this (dbow, dm)? What are the real world applications of the Doc2Vec algorithm? AI: I will answer your second question first: doc2vec and word2vec are both primarily good representations of text data that capture the semantics of words and documents. So whenever you are working with text data, you need a representation for it, and that is what word2vec and doc2vec provide. Now think of any real world task on text data, like document similarity: using doc2vec you can find the cosine similarity between two documents easily. Real world applications of this include finding duplicate questions on a site like Stack Overflow, ranking candidate answers for a question answering model, and features for text classification like sentiment analysis (word2vec doesn't work well here, since the contexts of "good" and "bad" are quite similar, so it struggles to differentiate between positive and negative reviews). So these are just representations; you can apply them to any NLP task and a lot of IR tasks. To answer your first question, a model is not just task dependent but also data dependent. So you can read Mikolov's paper to find out how each model works for the baseline tasks, but a good idea is to try both models on your data and extrinsically evaluate which algorithm performs better.
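To try both variants on your own corpus, a minimal gensim sketch (gensim 4.x API assumed; the three toy documents and all parameter values are placeholders). dm=0 gives the DBOW variant and dm=1 the distributed-memory (DM) variant.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [
    TaggedDocument(words="the match ended in a late goal".split(), tags=[0]),
    TaggedDocument(words="parliament passed the new budget bill".split(), tags=[1]),
    TaggedDocument(words="the striker scored twice in the final".split(), tags=[2]),
]

# dm=0 -> DBOW, dm=1 -> distributed memory; try both and evaluate on your own task
model = Doc2Vec(documents=docs, vector_size=50, window=3, min_count=1, dm=0, epochs=40)

# Most similar stored documents to document 0, by cosine similarity of the doc vectors
print(model.dv.most_similar(0))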
H: Interpretation of probabilities from logistic regression in Credit Scorecard modelling I was trying to understand the probabilities output by a logistic regression in a credit scorecard. Let us say that I have performed vintage analysis and identified the performance period as 6 months, and the bad rate is defined as 90 DPD. When I apply the logistic regression to incoming new customers, do the probabilities represent the probability that a customer will go bad in the next 6 months (which comes from the performance period)? AI: This would hold if you have the same priors in the train and test populations. However this is not the case in most credit scoring settings. In the train data, you have customers that were accepted by a model, so a higher good-to-bad ratio. In the test data (when you apply your scorecard to new customers), this ratio is typically very different. Therefore the probabilities predicted by the logistic model cannot be used directly. There is no easy solution for this (look at the reject inference problem). A simple improvement would be reweighting/downsampling the train data to match the good/bad ratio of the test data (if you have an approximation of it).
H: Reporting test result for cross-validation with Neural Network I have a small dataset, so I have to use cross validation to report the test result to get a better estimate of the classification result. For some reason, I have to use neural networks to do this. Because neural networks have their unique quirks e.g finding hyper-parameters, I am using a nested cross-validation. I am dividing up my dataset into 10 folds for cross-validation. Then I am dividing the 9 folds that are for training, again in 10 folds. From those 10 folds, I am using 9 folds to train with different hyper-parameters(in my case it is number of hidden units and dropout rate), and using the other fold to to get the accuracy with different hyper-parameters (kind of like a validation set in the deep learning literature). Then I am training my model again on all of the 9 folds of the first division of data with the best hyper-parameters I found. Because I am missing out on some data from the initial 9 folds for using as a validation set. Now when I am reporting the test result, I set the epoch number for training on my training data a fixed number of times, and when my network is doing the best on the test set, I stopped the training, saved that model for future use, and report that result. My question is about this last part. Am I doing something wrong on reporting this result? Just to make it clear, I am not tuning any hyper-parameters at this stage. I am just setting the network to stop training on the training data when it reaches the best test result. I think this is a really subtle problem, if it is a problem at all. That is why I am confused. I am doing this whole thing 5-6 times with different seeds for different divisions of data, and I am only reporting the mean of all of these runs. AI: Now when I am reporting the test result, I set the epoch number for training on my training data a fixed number of times, and when my network is doing the best on the test set, I stopped the training, saved that model for future use, and report that result. My question is about this last part. Am I doing something wrong on reporting this result? Technically yes this is incorrect process for reporting an unbiased test metric. This could be a bad over-estimation of performance if cv results are noisy and vary randomly epoch-to-epoch. You should in theory treat the early stopping epoch number same as your other hyper-parameters, discover a good value from the lower-level cross-validation and stick with it. There is a problem with my suggestion though - the early stopping epoch number is sensitive to training data set size, and you just increased the size when you changed from lower level cross-validation to the higher level one. So you might get an unbiased measure at the expense of significantly worse results. First, take a look at your learning curves. Just how sensitive are the cv results to epoch number? If they are not sensitive - no obvious over-fit over a reasonable range of epoch numbers - then just pick a mid-range fixed number of epochs that applies to all folds. That way you will have your unbiased estimate, and probably not compromised on model quality. Alternatively, you may just have to be happy knowing that your test estimate is biased, but you have reduced the variance significantly by using 10-fold CV, and have only searched one hyper-parameter at the top level (not all three). It may only be slightly biased, and still a good estimate. 
The smoother and less jittery the learning curves are, the more you can get away with this - but sadly you won't get a measure of the bias, so you'll never be 100% sure.
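To make the first suggestion concrete, here is a minimal sketch with a Keras-style API; names such as build_model, inner_cv, X_outer_train and max_epochs are placeholders for your own objects. The inner folds are used only to discover a reasonable epoch count, which is then fixed for the outer fold:

import numpy as np

best_epochs = []
for train_idx, val_idx in inner_cv.split(X_outer_train):
    net = build_model()  # hypothetical factory returning a fresh, compiled network per fold
    history = net.fit(X_outer_train[train_idx], y_outer_train[train_idx],
                      validation_data=(X_outer_train[val_idx], y_outer_train[val_idx]),
                      epochs=max_epochs, verbose=0)
    # epoch (1-based) with the lowest validation loss in this fold
    best_epochs.append(int(np.argmin(history.history['val_loss'])) + 1)

fixed_epochs = int(np.median(best_epochs))
final_net = build_model()
final_net.fit(X_outer_train, y_outer_train, epochs=fixed_epochs, verbose=0)
# only now touch the outer test fold, once, to report the metric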
H: What machine learning technique should I use in a medical problem What are the best machine learning techniques to classify responders to a medicine if I have: Clinical data with ~200 features (age, education, marital status etc.) Gene data with around 250K features (genome data (SNPs) taken from the patient (DNA analysis)) Number of obs. ~ 4K (the data is taken from a study on 4000 patients). Please advise. AI: Typically you need a much more exact phrasing (in more mathematical terms) of the question you want to ask. Will a patient respond to medicine X? What is the likelihood? To what extent will a patient respond to medicine X? Is the patient in the group that is expected to respond to medicine X? These are slightly different questions that may impact the choice of technique. Furthermore, your data plays an important role. Are you missing data? Have you normalized already? Do you expect 'Marital status' or 'Education level' to have an impact on medicine efficacy? (It might if certain groups take the medicine at home, but it is less likely to matter when it is taken under doctor supervision.) A priori you determine how to measure or quantify the success of a model (typically prediction accuracy). Then you typically try a few machine learning techniques and build a model from the most successful one.
H: Converting Json file to Dataframe Python I have a json file which has multiple events, each event starts with EventVersion Key. The data looks similar to the following synthesized data. {"Records":[{"eventVersion":"1.04","userIdentity":{"type":"R","principalId":"P:i","arn":"arn:aws:sts::5","accountId":"50","accessKeyId":"AW","sessionContext":{"attributes":{"mfaAuthenticated":"f","creationDate":"2013-09"},"sessionIssuer":{"type":"R","principalId":"WA","arn":"arn:aws:iam::6","accountId":"70","userName":"user1"}}},"eventTime":"2027-6","eventSource":"a.com","eventName":"DS","awsRegion":"UZ","sourceIPAddress":"2.1.3","userAgent":"li","requestParameters":null,"responseElements":null,"requestID":"OO","eventID":"09","eventType":"ABC","apiVersion":"2010-4","recipientAccountId":"78"},{"eventVersion":"1.04","userIdentity":{"type":"R","principalId":"P:i","arn":"arn:aws:sts::5","accountId":"50","accessKeyId":"AW","sessionContext":{"attributes":{"mfaAuthenticated":"f","creationDate":"2013-09"},"sessionIssuer":{"type":"R","principalId":"WA","arn":"arn:aws:iam::6","accountId":"70","userName":"user1"}}},"eventTime":"2027-6","eventSource":"a.com","eventName":"DS","awsRegion":"UZ","sourceIPAddress":"2.1.3","userAgent":"li","requestParameters":null,"responseElements":null,"requestID":"OO","eventID":"09","eventType":"ABC","apiVersion":"2010-4","recipientAccountId":"78"}]} I'm using the following code in Python to convert this to Pandas Dataframe such that Keys are columns and values of each event is a row. with open('/Users/snehahonnappa/Documents/NLP_AWSlogs/Model/Data/505728423372_CloudTrail_ap-northeast-1_20160913T1700Z_yKA3wB5Nx6juR6Kg.json') as json_data: sample_object = json.load(json_data) df = pd.io.json.json_normalize(sample_object) df.columns = df.columns.map(lambda x: x.split(".")[-1]) print df.shape When I print shape of the dataframe its 1X1. I'm expecting (Number of unique keys X Number of records) Snippet of how I'm expecting the dataframe to be eventVersion userIdentity eventTime type principalId P arn accountID userName 1.04 R P i arn:aws 50 user1 2027-6 1.06 Q O i arn:aws 67 u2 2027-7 Appreciate any help. Update : I'm writing the json file into a csv and then trying to convert this to dataframe on which my models can be applied on. Following is my code. import json import csv import sys data_parsed = json.loads(open('/tmp/A.json').read()) log_data = data_parsed['Records'] # open a CSV file for writing data = open('/tmp/log.csv', 'w') # create the csv writer object csvwriter = csv.writer(data) count = 0 for i in log_data: if count == 0: header = i.keys() csvwriter.writerow(header) count += 1 csvwriter.writerow(i.values()) data.close() This is writing the keys as headers and values of each record as a separate row which is as expected. However the nested json objects are being written as one value. Following is a snippet of my csv file which was obtained by executing the above code. eventVersion eventID eventTime requestParameters eventType 1.04 0 2016-20 AwsApiCall 1.04 8 2016-20 {u'tagKeys': [u'User Name']} AwsApiCall 1.05 4 2016-30 {u'filterSet': {u'items': [{u'name': u'resource-type', u'valueSet': {u'items': [{u'value': u'*'}]}}, {u'name': u'tag:User Name', u'valueSet': {u'items': [{u'value': u'*'}]}}]}} AwsApiCall Any suggestions to tackle this? AI: I once ran into a situation like this where i wanted a complex dataframe due to the original source having a complex data structure. 
I solved my issue by simplifying the structure into multiple separate dataframes instead of one big, complex, multi-structure dataframe. With simple separate dataframes I was better positioned to apply complex algorithmic operations. Looking at your specific data, you could drop userIdentity, which leaves you with a simple 2-D dataframe and should let you run any operation you need. I understand this doesn't answer your specific dataframe-structure requirement, but I hope it answers the spirit of your objective.
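As a complementary sketch: the 1x1 shape usually comes from normalising the top-level object instead of the list under the Records key. Pointing json_normalize at that list flattens each event (including the nested userIdentity fields) into its own row. This assumes your file looks like the synthesized example above:

import json
import pandas as pd

with open('/tmp/A.json') as f:   # same example path as in your update
    data = json.load(f)

# normalise the list of events, not the wrapping {"Records": [...]} object
df = pd.io.json.json_normalize(data['Records'])   # newer pandas: pd.json_normalize
print(df.shape)        # (number of events, number of flattened keys)
print(df.columns[:5])  # nested keys become dotted names, e.g. userIdentity.type

Deeply nested lists such as requestParameters.filterSet will still come through as Python objects in a single column; those either need their own json_normalize pass or can be left as-is if you do not need them as separate columns.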
H: How many features to sample using Random Forests The Wikipedia page which quotes "The Elements of Statistical Learning" says: Typically, for a classification problem with $p$ features, $\lfloor \sqrt{p}\rfloor$ features are used in each split. I understand that this is a fairly good educated guess and it was probably confirmed by empirical evidence, but are there other reasons why one would pick the square root? Is there a statistical phenomenon happening there? Does this somehow help decrease the variance of the errors? Is this the same for regression and classification? AI: I think in the original paper they suggest using $\log_2(N+1)$, but either way the idea is the following: The number of randomly selected features can influence the generalization error in two ways: selecting many features increases the strength of the individual trees, whereas reducing the number of features leads to a lower correlation among the trees, increasing the strength of the forest as a whole. What's interesting is that the authors of Random Forests (pdf) find an empirical difference between classification and regression: An interesting difference between regression and classification is that the correlation increases quite slowly as the number of features used increases. Therefore, for regression often $N/3$ is recommended, which gives larger values than $\sqrt N$. In general, there is no clear justification for $\sqrt N$ or $\log N$ for classification problems other than that it has been shown that lower correlation among trees can decrease the generalization error enough to more than offset the decrease in strength of individual trees. In particular, the authors note that the range where this trade-off can decrease the generalization error is quite large: The in-between range is usually large. In this range, as the number of features goes up, the correlation increases, but PE*(tree) compensates by decreasing. (PE* being the generalization error) As they say in Elements of Statistical Learning: In practice the best values for these parameters will depend on the problem, and they should be treated as tuning parameters. One thing your problem can depend on is the number of categorical variables. If you have many categorical variables that are encoded as dummy variables, it usually makes sense to increase the parameter. Again, from the Random Forests paper: When many of the variables are categorical, using a low [number of features] results in low correlation, but also low strength. [The number of features] must be increased to about two-three times $\operatorname{int}(\log_2 M+1)$ to get enough strength to provide good test set accuracy.
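In scikit-learn these heuristics map onto the max_features argument, which is exactly the kind of tuning parameter the quote describes. A sketch of the common defaults and of treating it as a hyper-parameter, assuming X and y are already loaded:

from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import GridSearchCV

clf = RandomForestClassifier(n_estimators=500, max_features='sqrt')  # floor(sqrt(p)) per split
reg = RandomForestRegressor(n_estimators=500, max_features=1/3)      # roughly p/3 per split

# or simply tune it like any other hyper-parameter
search = GridSearchCV(RandomForestClassifier(n_estimators=500),
                      param_grid={'max_features': ['sqrt', 'log2', 0.3, 0.5]},
                      cv=5)
search.fit(X, y)
print(search.best_params_)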
H: What should the discriminator of a GAN do? A Generative Adversarial Network (GAN) consists of two sub-networks: (1) a generator and (2) a discriminator. What should a discriminator be able to do? More specifically, should it be able to distinguish (classify) a real object (for example a vector) from a generated one, or should it be able to distinguish a set of generated vectors from a set of real vectors? I tend to think that the second option is correct. However, if that is the case, how do we build a neural network that classifies a set of vectors instead of a single vector? AI: The discriminator must classify individual elements as being fake (i.e. created by the generator) or real (i.e. taken from the training dataset). The discriminator generates a label (real/fake) for each element in the batch, and the loss functions are computed based on those labels. Elements are fed to the discriminator in batches of the same type (i.e. all elements in the batch are real or all elements in the batch are fake). This is because the fake data batch is directly generated by the generator as part of the same computational graph (i.e. the output of the generator is directly connected to the input of the discriminator), which is what allows gradients with respect to the generator parameters to be propagated back through the discriminator.
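A minimal Keras-style sketch of the first point - the discriminator scores each element of a batch individually, and batches are all-real or all-fake. The random arrays here are only stand-ins for real samples and generator output:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

disc = Sequential()
disc.add(Dense(64, activation='relu', input_dim=100))
disc.add(Dense(1, activation='sigmoid'))        # one real/fake probability per element
disc.compile(loss='binary_crossentropy', optimizer='adam')

real_batch = np.random.rand(32, 100)            # stand-in for samples from the dataset
fake_batch = np.random.rand(32, 100)            # stand-in for generator output
disc.train_on_batch(real_batch, np.ones((32, 1)))   # all-real batch, labels 1
disc.train_on_batch(fake_batch, np.zeros((32, 1)))  # all-fake batch, labels 0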
H: Fractions or probabilities as training labels This is a problem that has come my way a few times now and I don't have a satisfying solution yet. The goal is to predict probabilities or fractions based on some $x$, where our training $y$ contains these probabilities or fractions and thus is in the domain $[0,1]$ as opposed to $\{0,1\}$. My question is with regard to my loss function. In the case of fractions, if the error between 0.4 and 0.5 and the error between 0.89 and 0.99 is the same, I can just use MSE if I want to predict the expected value. In the case of probabilities, where we want to approach it as we would a classification problem and the difference between 0.89 and 0.99 is much bigger than that between 0.4 and 0.5, we want to put this in our loss function. Does cross entropy still work properly if I feed it fractions in $y$? $\mathcal{L}(y,\hat{y})=-y\log(\hat{y}) - (1-y)\log(1-\hat{y})$ Let's say our $y=0.5$ and our current prediction is $\hat{y}=0.6$; we would get: $\mathcal{L}(0.5,0.6)=-0.5\log(0.6) - 0.5\log(0.4)$ I don't really see why this would go wrong. The function is still convex. Everywhere it says that the target should be in $\{0, 1\}$, however. Maybe my math is lacking or I'm missing something obvious - why is this a bad idea? AI: Cross-entropy loss still works with probabilities in $[0,1]$ as well as $\{0,1\}$. Most importantly, $\hat{y} = y$ is still a stationary point (although the loss will not equal $0$ there). It is also the case that the possible improvement in loss (and the immediate gradient) is larger for $\hat{y} = 0.99, y = 0.89$ than for $\hat{y} = 0.4, y = 0.5$. If you use a sigmoid output, then the gradient at the logit, $\hat{y} - y$, still applies - the larger gradient of the loss function scales inversely with the lower gradient of the sigmoid at that point. So, in short, yes, use binary cross-entropy loss for single-class probabilities, even when they are not strictly in $\{0,1\}$.
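A quick numeric check of this, evaluating the loss formula above directly (plain NumPy, nothing project-specific):

import numpy as np

def bce(y, y_hat):
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

print(bce(0.5, 0.5))    # ~0.693: the minimum for y = 0.5 sits at y_hat = 0.5, but is not 0
print(bce(0.5, 0.6))    # ~0.714: a small penalty for the 0.5 -> 0.6 miss
print(bce(0.89, 0.89))  # ~0.347: the minimum for y = 0.89
print(bce(0.89, 0.99))  # ~0.516: a much larger penalty for the same-sized 0.1 miss

The same 0.1 error costs roughly 0.17 extra loss near 0.89 versus roughly 0.02 near 0.5, which matches the intuition in the question.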
H: Can we use machine learning to generate a text output based on the input strings Problem: generate a text output based on input strings which will be combined using a number of rules. Example (Feature1, Feature2 -> O/P): Rule 1: Enum_Domain, Priority -> /Enum_Domain/Priority; Rule 2: Enum_Domain.EnumData, Name -> /Enum_Domain/EnumData/Name; Rule 1: Trunkgroup, Gateway -> /Trunkgroup/Gateway; Rule 2: GatewayGrp.Gateway, IP -> /GatewayGrp/Gateway/IP. This is a simple programming problem, but is there any machine learning algorithm that can learn these rules and generate the output based on the two inputs? AI: Yes, sequence-to-sequence (seq2seq) models attempt to do this. They are used in a number of domains, from typo fixing to machine translation. They are encoder -> decoder based, which means you have a part that encodes your input and then a decoder that generates a new sequence based on this encoding (and usually some attention). In this case your encoder would likely be two recurrent neural networks whose outputs would be concatenated, and then a decoder that takes this concatenated output and turns it into a new sequence. If you want to use attention you need to adapt the standard attention a bit because you have two textual inputs, but if you understand how it works this would not be too difficult to adapt.
H: Forecasting one time series with missing data with the help of other time series I have a time series $R$, which shows how something changes at the regional level. I have several time series $U_i$, which show how something changes at the level of an individual unit $i$. There are many units in the region. $R$ has no missing data. Different $U_i$ have their own missing periods. I want to forecast $U_i$ after a missing period using the information in $R$ and the information in $U_i$ when it was available. My thoughts so far: Suppose $R$ is known on the interval $[0, 365]$. Suppose $U_i$ is known on the interval $[0,300]$. Let's take $R$ and $U_i$ both on the interval $[0,300]$, take the difference between them and try to predict that difference with linear regression. Then for the interval $[301,365]$ I will have predicted differences, and to restore $U_i$ I just have to subtract those differences from $R$. I don't like my solution because: We need a model for each $U_i$. Sometimes the data is sparser and I don't even have a known $[0,300]$ interval, so I am not able to train the regression properly. AI: Not a fully complete answer, but some inputs. Your time series are correlated. I assume that the measure you want to forecast for the region is an aggregation of the unit forecasts. To address the first point, I usually use a Vector Autoregressive model (VAR) that forecasts all time series at once (each one being expressed as a regression on the others). The second point involves the concept of hierarchical forecasting and reconciliation. You can exploit the fact that the regional forecast should/must equal the aggregate of the unit forecasts; there can be a process to adjust the forecasts to take that into account. There are R packages for both VAR and hierarchical reconciliation, but as far as I know no direct code to handle both at the same time... You may find this paper useful, as it provides some details on the proposed approach: https://mpra.ub.uni-muenchen.de/76556/1/MPRA_paper_76556.pdf
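For the VAR part, a rough Python sketch with statsmodels (the R packages mentioned above work similarly). It assumes a dataframe called series with one column per unit plus the regional series, restricted to the period where all of them are observed:

from statsmodels.tsa.api import VAR

model = VAR(series)                        # one column per time series
results = model.fit(maxlags=15, ic='aic')  # lag order picked by AIC
# forecast the missing period, seeded with the last observed lags
forecast = results.forecast(series.values[-results.k_ar:], steps=65)

The reconciliation step (forcing the regional series to equal the aggregate of the units) would then be applied to these raw forecasts as a separate adjustment.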
H: Is the perceptron algorithm's convergence dependent on the linearity of the data? Does the fact that I have linearly separable data or not impact the convergence of the perceptron algorithm? Will it always converge if the data is linearly separable, and never if it is not? Is there a general rule? AI: Yes, the perceptron learning algorithm is a linear classifier. If your data is separable by a hyperplane, then the perceptron will always converge. It will never converge if the data is not linearly separable. In practice, the perceptron learning algorithm can be used on data that is not linearly separable, but some extra parameter must be defined in order to determine under what conditions the algorithm should stop 'trying' to fit the data. For example, you could set a maximum number of iterations (or epochs) for the algorithm to run, or you could set a threshold for the maximum number of allowed misclassifications.
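For instance, with scikit-learn's implementation the cap on iterations is just a constructor argument (a minimal sketch, with X and y standing in for your data):

from sklearn.linear_model import Perceptron

clf = Perceptron(max_iter=1000, tol=1e-3)  # stops after max_iter passes even if the data never separates
clf.fit(X, y)
print(clf.n_iter_)  # how many passes it actually ran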
H: Keras autoencoder not converging Could someone please explain to me why the autoencoder is not converging? To me the results of the two networks below should be the same. However, the autoencoder below is not converging, whereas, the network beneath it is. autoencoder implementation, does not converge autoencoder = Sequential() encoder = containers.Sequential([Dense(32,16,activation='tanh')]) decoder = containers.Sequential([Dense(16,32)]) autoencoder.add(AutoEncoder(encoder=encoder, decoder=decoder, output_reconstruction=True)) rms = RMSprop() autoencoder.compile(loss='mean_squared_error', optimizer=rms) autoencoder.fit(trainData,trainData, nb_epoch=20, batch_size=64, validation_data=(testData, testData), show_accuracy=False) non-autoencoder implementation, converges model = Sequential() model.add(Dense(32,16,activation='tanh')) model.add(Dense(16,32)) model.compile(loss='mean_squared_error', optimizer=rms) model.fit(trainData,trainData, nb_epoch=numEpochs, batch_size=batch_size, validation_data=(testData, testData), show_accuracy=False) AI: The new version (0.3.0) of Keras no longer has tied weights in AutoEncoder, and it still shows different convergence. This is because weights are initialized differently. In the non-AE example, Dense(32,16) weights are initialized first, followed by Dense(16,32). In the AE example, Dense(32,16) weights are initialized first, followed by Dense(16,32), and then when you create the AutoEncoder instance, Dense(32,16) weights are initialized again (self.encoder.set_previous(node) will call build() to initialize weights). Now the following two NNs converge exactly the same: autoencoder = Sequential() encoder = containers.Sequential([Dense(32,16,activation='tanh')]) decoder = containers.Sequential([Dense(16,32)]) autoencoder.add(AutoEncoder(encoder=encoder, decoder=decoder, output_reconstruction=True)) rms = RMSprop() autoencoder.compile(loss='mean_squared_error', optimizer=rms) np.random.seed(0) autoencoder.fit(trainData,trainData, nb_epoch=20, batch_size=64, validation_data=(testData, testData), show_accuracy=False)
H: MSE loss always 0 when using Keras for topic prediction My input is a 200-dimensional vector, generated as the mean of the word2vec vectors of all the words of an article; my output is a 50-dimensional vector, generated from the LDA results for the article. I want to use MSE as the loss function, but the value of the loss is always 0. My code is as follows: model = Sequential() model.add(Dense(cols*footsize, 400,init = "glorot_uniform")) model.add(Activation('relu')) model.add(Dropout(0.2)) model.add(Dense(400, 400,init = "glorot_uniform")) model.add(Activation('relu')) model.add(Dropout(0.2)) model.add(Dense(400, 50,init = "glorot_uniform")) model.add(Activation('softmax')) model.compile(loss='mse', optimizer='rmsprop') And regardless of the number of epochs, the loss is always 0. AI: First, is your output a one-hot vector of predicted classes? I.e. class one is [1, 0, 0, ...] and class two is [0, 1, 0, 0, ...]. If so, then using a softmax activation at the output layer is acceptable and you are doing a classification problem. If you are doing a classification problem (one-hot output) you should not use MSE as the loss; use categorical cross-entropy. Softmax scales the output so that each number given is the predicted probability of a certain class. Wikipedia: https://en.wikipedia.org/wiki/Softmax_function If you are expecting the output vector to be real numbers, then you need to use a linear activation on your output neurons.
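Following the last suggestion, a sketch of the question's network with a linear output and MSE, written against the current Keras layer API (layer sizes copied from the question; the 200-dimensional input is taken from the description):

from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(400, input_dim=200, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(400, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(50, activation='linear'))   # real-valued 50-dim output, no softmax
model.compile(loss='mse', optimizer='rmsprop')

If instead you treat the 50-dimensional LDA vector as a probability distribution, the other consistent option is to keep the softmax output and switch the loss to categorical cross-entropy.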
H: Why are autoencoders for dimension reduction symmetrical? I'm not an expert in autoencoders or neural networks by any means, so forgive me if this is a silly question. For the purpose of dimension reduction or visualizing clusters in high dimensional data, we can use an autoencoder to create a (lossy) 2 dimensional representation by inspecting the output of the network layer with 2 nodes. For example, with the following architecture, we would inspect the output of the third layer $[X] \rightarrow N_1=100 \rightarrow N_2=25 \rightarrow (N_3=2) \rightarrow N_4=25 \rightarrow N_5=100 \rightarrow [X]$ where $X$ is the input data and $N_l$ is the number of nodes in the $l$th layer. Now, my question is, why do we want a symmetrical architecture? Doesn't a mirror of the deep 'compression' phase mean we might have a similarly complex 'decompression' phase, resulting in a 2 node output which is not forced to be very intuitive? In other words, wouldn't having a simpler decoding phase result in the output of the layer with 2 nodes necessarily being simpler too? My thinking here is that the less complex the decompression phase, the simpler (more linear?) the 2D representation has to be. A more complex decompression phase would allow a more complex 2D representation. AI: There is no specific constraint on the symmetry of an autoencoder. At the beginning, people tended to enforce such symmetry to the maximum: not only were the layers symmetrical, but the weights of the layers in the encoder and decoder were also shared. This is not a requirement, but it allows you to use certain loss functions (i.e. RBM score matching) and can act as regularization, as you effectively halve the number of parameters to optimize. Nowadays, however, I think no one imposes encoder-decoder weight sharing. About architectural symmetry, it is common to find the same number of layers, the same types of layers and the same layer sizes in the encoder and decoder, but there is no need for that. For instance, in convolutional autoencoders, in the past it was very common to find convolutional layers in the encoder and deconvolutional layers in the decoder, but now you normally see upsampling layers in the decoder because they have fewer artefact problems.
H: Breaking captcha with a neural network - Learning deep learning I would like to implement a neural network that can recognize captchas. I'm new to deep learning; this is the first neural network I'm building. I have seen a similar project on GitHub: https://deepmlblog.wordpress.com/2016/01/03/how-to-break-a-captcha-system/ However, I'm not able to understand it. I don't know what a VGG-based neural network model is, etc. I was wondering if there is a course or courses that I could follow in order to be able to implement such a neural network. I don't know where to start. I don't aim to become an expert for the moment, I just want to concretely discover this science that seems exciting. Thank you for your help. AI: My recommendation is a course from Stanford, Convolutional Neural Networks for Visual Recognition. But: This will not provide you with easy-to-understand code to break captchas. You will learn the concept of Convolutional Neural Networks (CNNs), which are popular in the classification of images. As a reminder: Captchas are built to prevent computers from automatically filling them in. Even if today's techniques might be able to solve them, your project might be a bad idea (especially from the legal perspective).
H: What are differentiable modules used in deep learning I am reading this paper. Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations. Spatial Transformers are used in CNNs to provide spatially invariant transformations; consequently, the learning process becomes easier and the networks perform better on data with different kinds of distributions (e.g. noisy data). In this paper I don't understand the meaning of 'differentiable module'. These so-called differentiable modules are used in neural networks, but what is the meaning of 'differentiable'? AI: "Differentiable" means that you can compute the derivative of the operations in the module, and therefore you can compute the gradients of the loss function with respect to the module parameters (i.e. use backpropagation). This is normally a requirement for operations involved in neural network computations. Note: you can use non-differentiable operations as part of a computational graph, but you won't be able to backpropagate gradients through them, and therefore any learnable parameters involved in operations prior to the non-differentiable one would not be able to learn.
H: Median versus Average, how to choose? I want to test how long it takes to run an algorithm. So here is what I am doing: close all the other un-needed applications, run my algorithms alone considering some unstable computer system factors, run multiple times As you can tell, I can get a set of running time t1, t2, t3, t4, ... , tn, so I'm asking: What to choose from t1, t2, t3, t4, ... , tn as my final result, median or average? Is this the right way to do this? btw: my algorithm is actually a rendering algorithm, so it would be more complicated. AI: You haven't asked a proper statistical question, so the choice of mean or median as "best" as a measure of your runtime is unanswerable. Have you looked at the distribution of run times? Is the algorithm intrinsically variable in its run-time, or is it fixed in its run time but the run times differ because of noise caused by the OS doing other things? Do you want to remove that noise? What if the OS suddenly decides to swap to disk for a bit, or a big network data packet arrives, and the OS goes and does something for a few ms. You could get a long run time for one of your times, and that could pull the mean value way off. The median is a robust estimator which means a single "bad" value can't throw it off. The mean can be thrown off by a single "rogue" value. Is that what you want? Maybe you do.
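As a tiny numeric illustration of that robustness point (plain NumPy, made-up timings):

import numpy as np

times = np.array([10.2, 10.3, 10.1, 10.2, 250.0])  # one run hit by an OS hiccup
print(np.mean(times))    # ~58.2 - dragged far away by the single rogue value
print(np.median(times))  # 10.2  - unaffected by it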
H: Should I gray scale the image? I'm categorizing 30 types of clothes from the image using R-CNN Object Detection Library from tensorflow : https://github.com/tensorflow/models/tree/master/research/object_detection Does color matter when we collect images for training and testing? If I put only purple and blue shirts, I guess it won't recognize red shirts? Should I gray scale all images to detect the types of clothes? :) AI: Your hypothesis about missing colours in your samples affecting results in production could be correct. However, it is trivial to convert images to greyscale as you load them from storage. So keep them in colour, and convert them as you load them if you need black and white. That way you can try both with and without colour as input and you will have your answer. This is very little effort to do in practice, and allows you to do the "science" part of data science by comparing two approaches and measuring the difference. This is standard practice, even if you are reasonably certain one approach or another is "the best", it is normal to explore a few variations. When you are not sure, then it is even more important to try it and see. To test your hypothesis, you could put all the t-shirts of a particular colour in your test set. If that reduces the accuracy of your results with the colour model, it would back up your concern. One fix for that might be to remove colour information from the model, if it is not relevant to the task. An alternative is to collect more data so you have enough samples of different colour shirts. However, you might find if you are fine-tuning a neural network trained on many more images (e.g. Inception v5) that the impact of colour is less even though your samples do not cover all possible T-shirt colours.
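Converting at load time is a one-liner either way, so comparing the two variants costs almost nothing. For example (the file path is just a placeholder):

import cv2
from PIL import Image

colour = cv2.imread('shirt_001.jpg')                         # keep the stored image in colour
grey_cv = cv2.imread('shirt_001.jpg', cv2.IMREAD_GRAYSCALE)  # greyscale variant via OpenCV
grey_pil = Image.open('shirt_001.jpg').convert('L')          # or via Pillow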
H: Using machine learning to evaluate a random number generator Let's say that I want to create a pseudorandom number generator, and I'd like to make sure that it's truly close to random. For simplicity, let's assume that I want to output a 0 or a 1 randomly. I have heard of the monobit, runs, poker test etc. but are there "machine learning" ways to evaluate a pseudorandom number generator? As in, one could try to predict the number that will be outputted given the first k numbers that were previously outputted, and the performance of that model would give you how well the pseudorandom generator is performing. It might be way over my head, but could a generative adversarial network learn a truly pseudorandom generator that way? AI: are there "machine learning" ways to evaluate a pseudorandom number generator? No it is the wrong tool for the job. Statistical tests, like those you mention monobit, runs, poker test etc, are a way to evaluate a PRNG and search for bias or unwanted patterns which would establish that it had flaws. If you try to use ML on random data - e.g. map dependent x,y data to a label z where all are determined randomly - then one sign the data is random will be that the ML does poorly. However, this will not give any measure of quality of the randomness. Using a high variance model it will still be possible to fit the data, although cross validation and test sets will score badly. It might be way over my head, but could a generative adversarial network learn a truly pseudorandom generator that way? A GAN learns how to map a low dimensional space into a manifold (sub-shape) of a larger dimensional space, allowing you to sample from a population in the higher dimension. It cannot inject "noise" into that space other than using something provided already by a PRNG. The GAN itself cannot be the PRNG. The GAN sampling routine is typically driven by an external PRNG (i.e. not driven by the weights or processing of the GAN) Probably you could construct a PRNG from a RNN, although I suspect it would be hard to create one of any quality, and even if you could get something close to statistical randomness from it, the performance is likely to be poor compared to standards such as Mersenne Twister.
H: Tensorflow MLP worse than Keras(TF backend) I'm kinda new to this field, so I started tinkering with some models in Keras (using Tensorflow backend). But recently I started to migrate to a pure Tensorflow approach, and I'm not getting good results, what is strange, since I'm using the TF backend in Keras, so I was expecting similar results. So, most certainly, I'm getting something in the implementation wrong, but I can't figure out what it is. I'm trying to implement a 7 layer MLP, with one linear output using ADAM. To make it easy, I removed all regularization from the model, so I was expecting the model to overfit, what happened to the Keras model, but not to the TF model. If someone could point what is wrong in the Tensorflow implementation, I would be very grateful. Here is the github repository: https://github.com/makalaia/Tensorflow-Benchmark Keras code: import keras import numpy as np import time import matplotlib.pyplot as plt from keras.layers import Dense from keras.models import Sequential from pandas import read_csv def calculate_rmse(real, predict): m = len(real) return np.sqrt(np.sum(np.power((real - predict), 2)) / m) test_size = 150 df = read_csv('data/mastigadin.csv', header=None) df.set_index(list(df)[0], inplace=True) y_total = df.iloc[:, -1:].values x_total = df.iloc[:, :-1].values y_train = y_total[:-test_size, :] x_train = x_total[:-test_size, :] y_test = y_total[-test_size:, :] x_test = x_total[-test_size:, :] tempo = time.time() # Neural net epochs = 200 batch_size = 64 optmizer = keras.optimizers.Adam() model = Sequential() model.add(Dense(256, input_shape=(x_train.shape[1],))) model.add(Dense(256, activation='relu')) model.add(Dense(256, activation='relu')) model.add(Dense(256, activation='relu')) model.add(Dense(256, activation='relu')) model.add(Dense(256, activation='relu')) model.add(Dense(256, activation='relu')) model.add(Dense(1)) # fit model.compile(loss='mean_squared_error', optimizer=optmizer) model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(x_test, y_test), verbose=2) print('TIME: ' + str(time.time() - tempo)) # predict y_trained = model.predict(x_train) y_tested = model.predict(x_test) # errors error_train = calculate_rmse(y_train, y_trained) print('TRAIN: RMSE - ' + str(error_train)) error_test = calculate_rmse(y_test, y_tested) print('\nVAL: RMSE - ' + str(error_test)) # plot plt.plot(y_total, label='REAL DATA') plt.plot(y_trained, label='TRAINED DATA') plt.plot(range(len(y_train), len(y_total)), y_tested, label='TEST DATA') plt.legend() plt.title('KERAS') plt.show() Tensorflow code: import numpy as np import tensorflow as tf import time import matplotlib.pyplot as plt from pandas import read_csv def calculate_rmse(real, predict): m = len(real) return np.sqrt(np.sum(np.power((real - predict), 2)) / m) test_size = 150 df = read_csv('data/mastigadin.csv', header=None) df.set_index(list(df)[0], inplace=True) y_total = df.iloc[:, -1:].values x_total = df.iloc[:, :-1].values y_train = y_total[:-test_size, :] x_train = x_total[:-test_size, :] y_test = y_total[-test_size:, :] x_test = x_total[-test_size:, :] n_samples = x_train.shape[0] tempo = time.time() epochs = 200 batch_size = 64 n_input = 36 n_output = 1 n_hidden = 256 # tf Graph input X = tf.placeholder("float", [None, n_input]) Y = tf.placeholder("float", [None, n_output]) # Store layers weight & bias weights = { 'h1': tf.get_variable('h1', shape=[n_input, n_hidden]), 'h2': tf.get_variable('h2', shape=[n_hidden, n_hidden]), 'h3': tf.get_variable('h3', 
shape=[n_hidden, n_hidden]), 'h4': tf.get_variable('h4', shape=[n_hidden, n_hidden]), 'h5': tf.get_variable('h5', shape=[n_hidden, n_hidden]), 'h6': tf.get_variable('h6', shape=[n_hidden, n_hidden]), 'h7': tf.get_variable('h7', shape=[n_hidden, n_hidden]), 'out': tf.Variable(tf.random_normal([n_hidden, n_output])) } biases = { 'b1': tf.Variable(tf.random_normal([n_hidden])), 'b2': tf.Variable(tf.random_normal([n_hidden])), 'b3': tf.Variable(tf.random_normal([n_hidden])), 'b4': tf.Variable(tf.random_normal([n_hidden])), 'b5': tf.Variable(tf.random_normal([n_hidden])), 'b6': tf.Variable(tf.random_normal([n_hidden])), 'b7': tf.Variable(tf.random_normal([n_hidden])), 'out': tf.Variable(tf.random_normal([n_output])) } # Create model def multilayer_perceptron(x): layer_1 = tf.nn.relu(tf.add(tf.matmul(x, weights['h1']), biases['b1'])) layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])) layer_3 = tf.nn.relu(tf.add(tf.matmul(layer_2, weights['h3']), biases['b3'])) layer_4 = tf.nn.relu(tf.add(tf.matmul(layer_3, weights['h4']), biases['b4'])) layer_5 = tf.nn.relu(tf.add(tf.matmul(layer_4, weights['h5']), biases['b5'])) layer_6 = tf.nn.relu(tf.add(tf.matmul(layer_5, weights['h6']), biases['b6'])) layer_7 = tf.nn.relu(tf.add(tf.matmul(layer_6, weights['h7']), biases['b7'])) out_layer = tf.matmul(layer_7, weights['out']) + biases['out'] return out_layer # Construct model pred = multilayer_perceptron(X) # Define loss and optimizer cost = tf.reduce_mean(tf.squared_difference(pred, Y)) optimizer = tf.train.AdamOptimizer() train_op = optimizer.minimize(cost) # Initializing the variables init = tf.global_variables_initializer() display_step = 1 with tf.Session() as sess: sess.run(init) # Training cycle for epoch in range(epochs): avg_cost = 0. total_batch = int(n_samples / batch_size) # Loop over all batches tp = time.time() for i in range(total_batch): batch_x = x_train[i * batch_size:(i + 1) * batch_size] batch_y = y_train[i * batch_size:(i + 1) * batch_size] # Run optimization op (backprop) and cost op (to get loss value) _, c = sess.run([train_op, cost], feed_dict={X: batch_x, Y: batch_y}) # Compute average loss avg_cost += c / total_batch # Display logs per epoch step if epoch % display_step == 0: print("Epoch:", '%04d' % (epoch + 1), "cost={:.9f}".format(avg_cost), "TIME: %.2f" % (time.time() - tp)) print('TIME: ' + str(time.time() - tempo)) # Test model y_trained = sess.run(pred, feed_dict={X: x_train}) y_tested = sess.run(pred, feed_dict={X: x_test}) error_train = calculate_rmse(y_train, y_trained) print('TRAIN: RMSE - ' + str(error_train)) error_test = calculate_rmse(y_test, y_tested) print('\nVAL: RMSE - ' + str(error_test)) plt.plot(y_total, label='REAL DATA') plt.plot(y_trained, label='TRAINED DATA') plt.plot(range(len(y_train), len(y_total)), y_tested, label='TEST DATA') plt.legend() plt.title('TENSORFLOW') plt.show() AI: This part is badly enough wrong that you will get poor results: # Store layers weight & bias weights = { 'h1': tf.get_variable('h1', shape=[n_input, n_hidden]), 'h2': tf.get_variable('h2', shape=[n_hidden, n_hidden]), 'h3': tf.get_variable('h3', shape=[n_hidden, n_hidden]), 'h4': tf.get_variable('h4', shape=[n_hidden, n_hidden]), 'h5': tf.get_variable('h5', shape=[n_hidden, n_hidden]), 'h6': tf.get_variable('h6', shape=[n_hidden, n_hidden]), 'h7': tf.get_variable('h7', shape=[n_hidden, n_hidden]), 'out': tf.Variable(tf.random_normal([n_hidden, n_output])) } The problem is the initialization. Your hidden layers have no initialization at all. 
The output layer is initialized with what is likely the wrong scale. To match Keras, your initialiser should be something like tf.random_normal([n_in, n_out]) * math.sqrt(2.0 / (n_in + n_out)), or you can use the built-in Xavier initialiser: tf.contrib.layers.xavier_initializer(). In addition, you can probably drop the random-normal initializer for the bias values and start them at zero.
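A sketch of what the corrected variable definitions could look like, staying with the TF1-style API used in the question (n_input, n_hidden and n_output come from the question's code; only the first hidden layer is shown, h2 through h7 follow the same pattern):

import tensorflow as tf

init = tf.contrib.layers.xavier_initializer()
weights = {
    'h1':  tf.get_variable('h1',  shape=[n_input, n_hidden],  initializer=init),
    # ... h2 to h7 defined the same way ...
    'out': tf.get_variable('out', shape=[n_hidden, n_output], initializer=init),
}
biases = {
    'b1':   tf.get_variable('b1',   shape=[n_hidden], initializer=tf.zeros_initializer()),
    # ... b2 to b7 defined the same way ...
    'bout': tf.get_variable('bout', shape=[n_output], initializer=tf.zeros_initializer()),
}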
H: K-modes or K-prototypes I'm studying k-modes and k-prototypes, but I cannot find a proper, very basic example of how they work, in contrast to k-means where there are quite a lot (like this one: description-k-means). Does anyone know a book or website that has a similar example for k-modes and/or k-prototypes? Thanks a lot. AI: As these methods are designed for categorical data, you won't find visual examples like those for k-means: the data is not that visual. A realistic example run will just be a boring list of updated modes / prototypes.
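If a runnable toy example is all that is needed, the third-party Python kmodes package (an assumption here - it has to be installed separately) is enough to see the algorithm in action on a small categorical array:

import numpy as np
from kmodes.kmodes import KModes

# toy categorical data: colour, size, shape
X = np.array([['red',  'S', 'round'],
              ['red',  'M', 'round'],
              ['blue', 'L', 'square'],
              ['blue', 'L', 'round']])

km = KModes(n_clusters=2, init='Huang', n_init=5)
labels = km.fit_predict(X)
print(labels)                 # cluster assignment per row
print(km.cluster_centroids_)  # one mode (most frequent category per column) per cluster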
H: How to implement gradient descent for a tanh() activation function for a single layer perceptron? I am required to implement a simple perceptron-based neural network for an image classification task, with a binary output and a single layer, however I am having difficulties. I have a few problems: I am required to use a tanh() activation function, which has the range [-1, 1], however, my training labels are 1 and 0. Should I scale the activation function, or simply change 0 labels to -1? So the gradient descent rule dictates that I shift the weights according to the rule: $$ \Delta \omega = -\eta \frac{\delta E}{\delta \omega} $$ I am using the mean square error for my error: $$ E = (output - label)^2 $$ Considering my output is $o = \tanh(\omega \cdot x)$, x is my input vector and $y_i$ is the corresponding label here. $$ \frac{\delta E}{\delta \omega} = \frac{\delta (y_i - \tanh(wx))^2}{\delta \omega} \\= -2(y_i - \tanh(wx))(1 -\tanh^2(wx)) \frac{\delta wx}{\delta w} \\= -2(y_i - \tanh(wx))(1 -\tanh^2(wx)) x \\= -2(y_i - o)(1 - o^2)x $$ I implemented this in Python; the dot product of the input vector with the weights turns out to be too large, which makes $\tanh(x)=1$ and $1-o^2 = 0$, so I can't learn. How can I circumvent this problem? Thanks for the replies. The implementation: def perc_nnet(X, y, iter = 10000, eta=0.0001): a, b, c = X.shape W_aug = np.random.normal(0, 0.01, a*b+1) errors = [] for i in range(iter): selector = rd.randint(0,c-1) x_n = X[:,:,selector].ravel() #.append(1) #has the bias as well x_n = np.append(x_n, 1) v = x_n.dot(W_aug) o = np.tanh(v) y_i = y[:,selector] if y[:,selector]==1 else -1 MSE = 0.5*(o - y_i)**2 errors.append(MSE) delta = - eta * (o - y_i) * (1 - o**2) * x_n W_aug = W_aug + delta return W_aug, errors AI: How can I circumvent this problem? TL;DR: Normalize your input data. Why? Notice how tanh actually works on the input data: >>> np.tanh(np.asarray([-1000, -100, -10, -1, 0, 1, 10, 100, 1000])) array([-1., -1., -1., -0.76159416, 0., 0.76159416, 1., 1., 1.]) If your data input to tanh has a magnitude of 10 or more, tanh produces an output of 1. That means it treats 10, 50, 100, 1000 the same. We don't want that. This would explain "the dot product of the input vector with the weights turns out to be too large, which makes $\tanh(x)=1$ and $1-o^2=0$, so I can't learn." Instead, we normalize the data (by dividing the input data by 1000; there are other ways of doing this): >>> np.tanh(np.asarray([-1000, -100, -10, -1, 0, 1, 10, 100, 1000]) / 1000) array([-0.76159416, -0.09966799, -0.00999967, -0.001, 0., 0.001, 0.00999967, 0.09966799, 0.76159416]) Now, tanh treats all of them differently. Note: You are doing a classification task. Using mean squared error as a cost function won't yield the best results here. You should instead use cross-entropy loss with one-hot vectors and softmax. As to why, that would be a whole different answer. Here is a link for the same.
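A minimal way to apply that advice before calling perc_nnet - rescale (or standardise) the images so the weighted sum stays in tanh's sensitive range. This assumes X holds 8-bit pixel intensities; otherwise use the mean/std version:

import numpy as np

X = X.astype(np.float32)
X = X / 255.0                   # simple rescaling for 8-bit images
# or, more generally:
# X = (X - X.mean()) / X.std()  # zero mean, unit variance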
H: Why do we need XGBoost and Random Forest? I wasn't clear on couple of concepts: XGBoost converts weak learners to strong learners. What's the advantage of doing this ? Combining many weak learners instead of just using a single tree ? Random Forest uses various sample from tree to create a tree. What's the advantage of this method instead of just using a singular tree? AI: It's easier to start with your second question and then go to the first. Bagging Random Forest is a bagging algorithm. It reduces variance. Say that you have very unreliable models, such as Decision Trees. (Why unreliable? Because if you change your data a little bit, the decision tree created can be very different.) In such a case, you can build a robust model (reduce variance) through bagging -- bagging is when you create different models by resampling your data to make the resulting model more robust. Random forest is what we call to bagging applied to decision trees, but it's no different than other bagging algorithm. Why would you want to do this? It depends on the problem. But usually, it is highly desirable for the model to be stable. Boosting Boosting reduces variance, and also reduces bias. It reduces variance because you are using multiple models (bagging). It reduces bias by training the subsequent model by telling him what errors the previous models made (the boosting part). There are two main algorithms: Adaboost: this is the original algorithm; you tell subsequent models to punish more heavily observations mistaken by the previous models Gradient boosting: you train each subsequent model using the residuals (the difference between the predicted and true values) In these ensembles, your base learner must be weak. If it overfits the data, there won't be any residuals or errors for the subsequent models to build upon. Why are these good models? Well, most competitions in websites like Kaggle have been won using gradient boosting trees. Data science is an empirical science, "because it works" is good enough. Anyhow, do notice that boosting models can overfit (albeit empirically it's not very common). Another reason why gradient boosting, in particular, is also pretty cool: because it makes it very easy to use different loss functions, even when the derivative is not convex. For instance, when using probabilistic forecast, you can use stuff such as the pinball function as your loss function; something which is much harder with neural networks (because the derivative is always constant). [Interesting historical note: Boosting was originally a theoretical invention motivated by the question "can we build a stronger model using weaker models"] Notice: People sometimes confuse random forest and gradient boosting trees, just because both use decision trees, but they are two very different families of ensembles.
H: Deep Learning to estimate what is beyond the edge I have an image of some data which is approximately 4,000 x 8,000 pixels. I am interested in finding out if anyone has used a deep learning algorithm to predict what would be on the image if it extended 100 more pixels in each direction. I would imagine that the data could be trained on smaller rectangles, and then the rules developed would be used to extend beyond the image given. Has anyone seen a problem like this (and is there a reference)? Even if not, what deep learning scheme would be best for this? AI: I think the closest problem that has been addressed with deep learning is image inpainting, that is, filling a blacked-out region of an image. For instance, this paper: Semantic Image Inpainting with Perceptual and Contextual Losses. So it is certainly possible to fill in missing information in an image with deep learning.
H: How can I deal with circular features like hours? Assume I want to predict if I'm fit in the morning. One feature is the last time I was online. Now this feature is tricky: If I take the hour, then a classifier might have a difficult time with it because 23 is numerically closer to 20 than to 0, but actually the time 23 o'clock is closer to 0 o'clock. Is there a transformation to make this more linear? Probably into multiple features? (Well, hopefully not 60 features if I do the same for minutes) AI: The question was already posted, you can find the answer there : What is a good way to transform Cyclic Ordinal attributes? The idea is to transform your time feature into two feature : it's like if you represent the hour as the angle of the hand on the clock, and use the sin/cos of the angle as your features
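Concretely, the two-feature encoding looks like this (pandas sketch, assuming an integer hour column in a dataframe df):

import numpy as np

df['hour_sin'] = np.sin(2 * np.pi * df['hour'] / 24)
df['hour_cos'] = np.cos(2 * np.pi * df['hour'] / 24)
# 23:00 and 00:00 now map to nearby points on the unit circle,
# so the "distance" between them is small, as it should be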
H: LDA vs Word2Vec vs Others for predicting recipients of a message I'm investigating various NLP algorithms and tools to solve the following problem; NLP newbie here, so pardon my question if it's too basic. Let's say, I have a messaging app where users can send text messages to one or more people. When the user types a message, I want the app to suggest to the user who the potential recipients of the message are? If user "A" sends a lot of text messages regarding "cats" to user "B" and some messages to user "C" and sends a lot of messages regarding "politics" to user "D", then next time user types the message about "cats" then the app should suggest "B" and "C" instead of "D". So I'm doing some research on topic modeling and word embeddings and see that LDA and Word2Vec are the 2 probable algorithms I can use. Wanted to pick your brain on which one you think is more suitable for this scenario. One idea I have is, extract topics using LDA from the previous messages and rank the recipients of the messages based on the # of times a topic has been discussed (ie, the message sent) in the past. If I have this mapping of the topic and a sorted list of users who you talk about it (ranked based on frequency), then when the user types a message, I can again run topic extraction on the message, predict what the message is about and then lookup the mapping to see who can be the possible recipients and show to user. Is this a good approach? Or else, Word2Vec (or doc2vec or lda2vec) is better suited for this problem where we can predict similar messages using vector representation of words aka word embeddings? Do we really need to extract topics from the messages to predict the recipients or is that not necessary here? Any other algorithms or techniques you think will work the best? Should I just use supervised learning instead? What are your thoughts and suggestions? Thanks for the help. AI: Why the detour with the topic? You can just learn a linear SVM to directly predict the recipient and avoid all the difficulties from topic modeling. What if a mail isn't about any of the previous topics? how many topics are there?
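A minimal version of that direct approach with scikit-learn (the training data names are placeholders: one label per past message, e.g. its recipient):

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(past_messages, past_recipients)          # texts and who they were sent to
print(clf.predict(["look at this cat video"]))   # suggested recipient for a new draft

For suggesting several recipients you would look at the per-class scores from decision_function and take the top few instead of a single prediction.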
H: Is there a model-agnostic way to determine feature importance? Sklearn has a feature_importances_ attribute, but this is highly model-specific and I'm not sure how to interpret it, as removing the most important feature does not necessarily decrease the model's quality the most. Is there a model-agnostic way to tell which features are important for a prediction problem? The only way I could see is the following: use an ensemble of different models, and start with a big set of features, removing one at a time. To find a feature's "uplift", compare the ensemble's quality with the full feature set against the ensemble's quality with the reduced feature set. (What this can't do is find connected features: some features might not be exactly the same, but have a common underlying cause which is important for the prediction. Hence removing either of them doesn't change much, but removing both might change a lot. I will ask another question for that.) AI: Ways to "determine feature importance" are normally called feature selection algorithms. There are 3 types of feature selection algorithms: Filter approaches: they choose variables without using a model at all, just looking at the feature values. One example is scikit-learn's variance threshold selector. Wrapper approaches: they use an arbitrary prediction algorithm to score different subsets of features and choose the best subset based on that. These use a model but are model-agnostic, as they don't care about which model you use. One example is recursive feature elimination. Embedded approaches: in these approaches, the variable selection is part of a model, hence the feature selection and the model are coupled together. This is the case for feature_importances_ in the random forest algorithm. From the question, I understand that both filter and wrapper approaches are suitable for the OP's needs. A classic article that covers both very well is this one by Kohavi and John. Here you can see an overview of scikit-learn feature selection capabilities, which includes examples of the three aforementioned types of variable selection algorithms.
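Quick scikit-learn sketches of a filter and a wrapper approach (X and y are placeholders; the wrapped estimator can be any model with a fit method):

from sklearn.feature_selection import VarianceThreshold, RFE
from sklearn.linear_model import LogisticRegression

# filter: drop features whose variance is below a threshold, no model involved
X_filtered = VarianceThreshold(threshold=0.01).fit_transform(X)

# wrapper: recursive feature elimination around any estimator
rfe = RFE(estimator=LogisticRegression(), n_features_to_select=10)
rfe.fit(X, y)
print(rfe.support_)   # boolean mask of the selected features
print(rfe.ranking_)   # 1 marks selected features, higher numbers were dropped earlier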
H: What are some good papers that discuss Tufte's 'data density index' and 'data-to-ink ratio' in data visualisation? I've found a number of resources that mention Tufte's 'data density index' and 'data-to-ink ratio' when considering the analysis of particular visualisations and visualisation techniques, but I have yet to come across any papers that actually critique the measures themselves. Are there critiques of these principles available? AI: There are criticisms of Tufte's principles. Some examples: Sometimes We Must Raise Our Voices, Stephen Few, Perceptual Edge Even after many years of working in the field of data visualization, which has involved a great deal of experience and study that has expanded my expertise into many areas that Tufte hasn't specifically addressed, I have only on rare occasions discovered reasons to disagree with any of his principles. The topic that I'm addressing in this article, however, deals with one of those rare disagreements. Minimalism in information visualization: attitudes towards maximizing the data-ink ratio From the abstract: People did not like Tufte's minimalist design of bar-graphs; they seem to prefer "chartjunk" instead.
H: Understand clearly the figure: Illustration of a Convolutional Neural Network (CNN) architecture for sentence classification I am studying the blog: Understanding Convolutional Neural Networks for NLP. It is a very good blog, but there is one thing I can't understand clearly about it, regarding the figure 'Illustration of a Convolutional Neural Network (CNN) architecture for sentence classification' shown there. I want to ask: 1) I know the region sizes (2, 3, 4) are like 2-gram, 3-gram and 4-gram words, but what is the meaning of the number of filters? Here there are 2 filters for each region size. Why, in the author's code for sentence classification, is the number of filters set to 128? Could you give examples to explain the meaning of the number of filters? For example, using the sentence 'I like this movie very much' would be great. 2) I understand that the height of region size 4 is 4, but in the figure the heights of regions 2 and 3 are 5 and 6 respectively, and I don't know why; I think the heights should be 2 and 3. AI: Answering this in terms of NLP examples is quite hard; remember "All models are wrong, some models are useful." First think of this in an image classification context: you want to use a large number of filters to collect a large number of features out of the image - one could detect edges, another could detect densely coloured areas, one might turn a region to b&w. Extend a similar logic to text: by using a lot of filters, in this case 128, you are trying to capture a lot of features. For an example like "I like movies very much", a certain filter might detect that 'like' is a positive word and not a similarity comparison, and a certain filter of size 2 might detect 'very much' and recognise that it is an expression of degree. You can go on like that; it will be hard to come up with 128 features, but the idea is to get enough features. If you think the number is unreasonable and might lead to overfitting, you can reduce the number and compare your results. No, 1-max pooling means that you take the maximum value of the output vector after applying a filter to the input. So it has nothing to do with the longest word, but rather chooses the element of the output that expresses the extracted feature most strongly.
H: Sentence similarity prediction I'm looking to solve the following problem: I have a set of sentences as my dataset, and I want to be able to type a new sentence, and find the sentence that the new one is the most similar to in the dataset. An example would look like: New sentence: "I opened a new mailbox" Prediction based on dataset: Sentence | Similarity A dog ate poop 0% A mailbox is good 50% A mailbox was opened by me 80% I've read that cosine similarity can be used to solve these kinds of issues paired with tf-idf (and RNNs should not bring significant improvements to the basic methods), or also word2vec is used for similar problems. Are those actually viable for use in this specific case, too? Are there any other techniques/algorithms to solve this (preferably with Python and SKLearn, but I'm open to learn about TensorFlow, too)? AI: Your problem can be solved with Word2vec as well as Doc2vec. Doc2vec would give better results because it takes sentences into account while training the model. Doc2vec solution You can train your doc2vec model following this link. You may want to perform some pre-processing steps like removing all stop words (words like "the", "an", etc. that don't add much meaning to the sentence). Once you trained your model, you can find the similar sentences using following code. import gensim model = gensim.models.Doc2Vec.load('saved_doc2vec_model') new_sentence = "I opened a new mailbox".split(" ") model.docvecs.most_similar(positive=[model.infer_vector(new_sentence)],topn=5) Results: [('TRAIN_29670', 0.6352514028549194), ('TRAIN_678', 0.6344441771507263), ('TRAIN_12792', 0.6202734708786011), ('TRAIN_12062', 0.6163255572319031), ('TRAIN_9710', 0.6056315898895264)] The above results are list of tuples for (label,cosine_similarity_score). You can map outputs to sentences by doing train[29670]. Please note that the above approach will only give good results if your doc2vec model contains embeddings for words found in the new sentence. If you try to get similarity for some gibberish sentence like sdsf sdf f sdf sdfsdffg, it will give you few results, but those might not be the actual similar sentences as your trained model may haven't seen these gibberish words while training the model. So try to train your model on as many sentences as possible to incorporate as many words for better results. Word2vec Solution If you are using word2vec, you need to calculate the average vector for all words in every sentence and use cosine similarity between vectors. def avg_sentence_vector(words, model, num_features, index2word_set): #function to average all words vectors in a given paragraph featureVec = np.zeros((num_features,), dtype="float32") nwords = 0 for word in words: if word in index2word_set: nwords = nwords+1 featureVec = np.add(featureVec, model[word]) if nwords>0: featureVec = np.divide(featureVec, nwords) return featureVec Calculate Similarity from sklearn.metrics.pairwise import cosine_similarity #get average vector for sentence 1 sentence_1 = "this is sentence number one" sentence_1_avg_vector = avg_sentence_vector(sentence_1.split(), model=word2vec_model, num_features=100) #get average vector for sentence 2 sentence_2 = "this is sentence number two" sentence_2_avg_vector = avg_sentence_vector(sentence_2.split(), model=word2vec_model, num_features=100) sen1_sen2_similarity = cosine_similarity(sentence_1_avg_vector,sentence_2_avg_vector)
H: How do I represent SURF Features into Bag of Words to determine Nearest Neighbors? I'm trying to use Speeded Up Robust Features (SURF) to get the $k$ most similar images from a set of images in my directory. I'm planning to use $k$-Nearest Neighbours ($k$-NN) for this. As far as I know, the size of SURF descriptors are $n \times 64$ or $n \times 128$ depending on how many descriptors I want. There are suggestions and I've read about the Bag of Visual Words method, where these patches are converted to bags of words, similar to the common Natural Language Processing technique. I've also read that Bag of Visual Words are generated by clustering, whereas the feature patches are clustered together. What I don't understand is, how can I use these Bag of Visual Words to train my $k$-NN such that I can get similar images? I really can't grasp how those clusters can generate BoWs. Say I have 1000 images, if I convert them, what will they look like? Will they be 1000 BoWs that still represent the same images? AI: The Bags of Visual Words (BoWs) approach to image retrieval, described in works such as Sivic et al. "Video Google: a text retrieval approach to object matching in videos" (2003) and Csurka et al. "Visual categorization with bags of keypoints" (2004), is composed of multiple phases: First, a visual vocabulary, often called a codebook, is generated. This is usually done by applying k-means clustering over the keypoint descriptors of a data set, or a sufficiently descriptive fraction of it. The vector $\mathcal{V}$ of size $k$ containing the centroids $\mathcal{V_i}$ of each cluster is your visual vocabulary. In this case, each image $x$ should yield a variable number $x_n$ of SURF keypoint descriptors, usually of size 64 each. One can aggregate all keypoint descriptors from multiple images and perform k-means clustering over all of them. The choice of the $k$ hyperparameter in clustering depends on the image domain. One may try multiple values of $k$ (10, 100, 1000) to understand which is more suitable for the intended task. Afterwards, each image is "tested" against the codebook, by determining the closest visual vocabulary points and incrementing the corresponding positions in the BoW for each keypoint descriptor in the image. In other words, considering an image's BoW $B = \{ o_i \}$: for each image keypoint descriptor $d_j$, $o_i$ is incremented when the smallest distance (often the Euclidean distance) from $d_j$ to all other visual vocabulary points in $\mathcal{V}$ is the distance to $\mathcal{V}_i$. The result is a histogram of visual descriptor occurrences of size $k$, which can be used for representing the visual content. As similar images will yield similar bags of words, one can compare images through their BoWs. The Euclidean distance between them is a commonly used metric here. Therefore, the bag of words of each image makes a global representation of that image. When performing content-based image retrieval, the $n$ most similar images are retrieved by fetching the $n$ closest (or approximately closest) bags of words to the query image's. No training process is required at this point (we can picture visual vocabulary generation as an offline training phase).
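A compact sketch of steps 1-2 with scikit-learn, assuming you have already extracted SURF descriptors with OpenCV (the array names are placeholders):

import numpy as np
from sklearn.cluster import KMeans

k = 500  # vocabulary size, a tuning parameter
# all_descriptors: (total_keypoints, 64) array stacking descriptors from many images
codebook = KMeans(n_clusters=k).fit(all_descriptors)

def bag_of_words(descriptors):
    # descriptors: (n, 64) SURF descriptors of one image -> k-dimensional histogram
    words = codebook.predict(descriptors)             # nearest visual word per keypoint
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()                          # normalise for differing keypoint counts

# each of your 1000 images becomes one k-dimensional BoW vector;
# k-NN (e.g. with Euclidean distance) is then run on those vectors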
H: Forgetting curve using Duolingo data I'm trying to replicate the forgetting curve using the open sourced Duolingo data for fun. The problem is that my finding doesn't make any sense, namely that the longer you wait the better recall value. Anyone have any pointers? # make our plot outputs appear and be stored within the notebook. %matplotlib inline import matplotlib.pyplot as plt # import matplotlib for scatterplot, use the alias plt import numpy as np # import the numpy package with alias np import pandas as pd from scipy.optimize import curve_fit df = pd.read_csv('learning_traces.13m.csv') # Load Duolingo data df = df[(df['history_seen'] == 1) & (df['session_seen'] == 1)] # Seen only once before df = df.sort_values('delta') # sort by: time (in seconds) since the last lesson/practice that included this word/lexeme df['delta'] = df['delta'] / 60.0 # Seconds to minutes minInMonth = 44640 def func(x, a): return np.exp2(-x / a) xdata = df['delta'] ydata = df['p_recall'] plt.scatter(xdata, ydata) popt, pcov = curve_fit(func, xdata, ydata) print(popt) # Show the result plt.plot(xdata, func(xdata, *popt), 'r-', label='fit') Expectation: Reality Rolling Mean AI: I think the most important problem you are facing is that you are trying to fit a function that somewhat erratically flips between 0 and 1 as if it were a smooth function. My Mathematical intuition says that won't be numerically stable. To create a more smooth function you could calculate a running average of your y data and plot it against the corresponding time data. It should descend as you forget more over a longer period of time and should be easy to plot to demonstrate your intuition. It has hysteresis/history since initial successes will be remembered. Your fit function would be the integral: $$\int_0^X e^{-x/a} dx = a - a\cdot e^{-X/a}$$ where $a$ will be the estimator of the forgetting rate. Other problems could be in the time domain (just leave the unit of time at seconds), or you are mixing time scale/sample scale. Both your expectation and reality plot should cover the complete time range on the x-axis to make them better comparable. Maybe you should throw away data with exceptionally large time delta values anyway, as they are likely to disturb the calculations and the plots. The programs from your references remove data > 9 months and < 45 minutes if I recall correctly. Another domain problem that could interfere is that the data mixed words that are easy to remember and words that are difficult to remember, e.g. 'cat' is in many languages easier to remember than 'butterfly' (though that could also average out over much measurement data).
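As a sketch of the rolling-average idea (continuing from the dataframe df above; the window size is an arbitrary choice, and you could equally point curve_fit at the integral form given above):

window = 1000  # number of rows to average over, to be tuned
smooth = df[['delta', 'p_recall']].rolling(window).mean().dropna()

popt, pcov = curve_fit(func, smooth['delta'], smooth['p_recall'], p0=[minInMonth])
plt.plot(smooth['delta'], smooth['p_recall'], label='rolling mean')
plt.plot(smooth['delta'], func(smooth['delta'], *popt), 'r-', label='fit')
plt.legend()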
H: Match an image from a set of images : Combine traditional Computer vision + Deep Learning/CNN In the application I am developing, I have about 5000 product label images.(One label per product). One functionality of my application is that user can take a picture using his camera and get a possible match(es) against the product labels registered the system. Since initially, my system only has one sample per product, I decided to go with traditional Computer Vision techniques. I managed to implement this using Feature extraction and Descriptor matching.(using OpenCV SIFT and FLANN techniques referring this: https://github.com/kipr/opencv/blob/master/samples/cpp/matching_to_many_images.cpp) Now I am thinking how to improve the accuracy by combining with CNN or Deep Learning techniques since when users approve matches, it gradually add more label samples for a product. Is it possible to build a hybrid image matching system combining Computer Vision techniques and CNN/Deep Learning techniques? Are there any similar services already available as services? AI: In my opinion, if you want a hybrid of Convolutional Neural Networks and the classic feature extraction techniques that would be redundant. Mainly because the architecture of a Convolutional Neural Network is composed of convolutions. I won't go much into detail of the whole architecture but these convolutions actually extract the good features for you and then those convolutions are connected to a classic Neural Network that does the classification task. Hence, extracting features using SIFT and using Convolutional Neural Networks would be redundant. In addition, the features that CNN will extract are better as compared to simple SIFT features. Though if you want to push through with combining SIFT and Deep Learning, you can instead substitute convolutions and use SIFT for feature extraction and then feed it into a Neural Network. That would work as well.
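If you want to try the deep learning side without training a network from scratch (which a single sample per product will not allow), a common sketch is to use a pretrained CNN as a fixed feature extractor and match products on those vectors, alongside or instead of SIFT. The snippet below assumes the Keras API bundled with TensorFlow, VGG16 ImageNet weights, and a hypothetical list label_paths of your label images:

import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing import image

# global average pooling gives one fixed-length vector per image
extractor = VGG16(weights='imagenet', include_top=False, pooling='avg')

def cnn_features(path):
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return extractor.predict(x)[0]

# label_paths is an assumed list of the registered product label image files
label_vectors = np.array([cnn_features(p) for p in label_paths])
query = cnn_features('user_photo.jpg')

# rank products by cosine similarity between the query photo and each label
scores = label_vectors @ query / (np.linalg.norm(label_vectors, axis=1) * np.linalg.norm(query))
best_matches = np.argsort(-scores)[:5]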
H: Random Forest Classifier gives very high accuracy on test set - overfitting? I have a financial dataset, where I'm trying to predict company types, based on the amount dollars, what time of day, and whether they buy or sell (currency pairs). It looks like this: The features I use to predict: X.head(): Dollars | Hours | Buy | Sell -0.761916 0.364838 1 0 -0.924413 0.377558 1 0 -0.573336 0.397836 0 1 -0.561639 0.399144 0 1 -1.164036 0.423715 1 0 The features I want to predict could look like this: y.head() Bank Tech Fund Holding Defence Financial Services Pharma 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 Agriculture Commodities Energi Pension 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 In this snippet, the first five companies are banks. Using a training/test ratio of 0.25, I get an accuracy of 0.99, which seems too good to be true: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25) rand_forest = RandomForestClassifier(max_depth = None, random_state = 0) rand_forest.fit(X,y) predictions = rand_forest.predict(X_test) The result of the classification_report: precision recall f1-score support 0 0.98 0.95 0.97 5074 1 0.98 0.91 0.94 2292 2 0.98 0.82 0.89 572 3 0.99 0.83 0.90 235 4 0.98 0.91 0.94 261 5 0.99 0.81 0.89 411 6 0.98 0.83 0.90 239 7 1.00 0.70 0.82 144 8 1.00 0.81 0.89 384 9 0.99 0.81 0.89 200 10 1.00 0.81 0.90 232 avg / total 0.98 0.90 0.94 10044 Adjusting the max_depth parameter of the classifier changes this number significantly though, but I'm still reading up on what the consequences of that parameter actually are. It is worth mentioning that there is only 50,000 entries in this dataset, across 11 different companytypes, which might be too little? Using a simpler DecisionTreeClassifier yiels an accuracy of about 50%. UPDATE: I used the entire dataset for training, not the actual training set. Switching these two outs gives an accuracy of 54%, which sounds much better (or more realistic anyways). AI: rand_forest.fit(X,y) Why are you using the whole data set for training? You are using the test set for training then evaluate the performance on it again? In your code, I didn't see you actually used the training set you created.
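A minimal corrected sketch, keeping your variable names (the fit and the evaluation now use disjoint data):

from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

rand_forest = RandomForestClassifier(max_depth=None, random_state=0)
rand_forest.fit(X_train, y_train)          # fit on the training split only
predictions = rand_forest.predict(X_test)  # evaluate on data the model has never seen
print(classification_report(y_test, predictions))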
H: Amount of multiplications in a neural network model I'm currently reading this book and want someone to tell me if what currently I'm assuming about neural network is right or not. https://www.pyimagesearch.com/deep-learning-computer-vision-python-book/ If a layer has 30 neurons: In a feedforward meural network each neuron of the first layer multiplied with all the neurons of the second layer. That would be 30 neurons of the first layer multiplied by the 30 neurons of the second layer that would give a total of 900 (30*30 = 900) multiplications (is this is correct?) But those 900 multiplications is only for the first layer and the second layer of the neural network(nn). So if I have a feedforward nn that has 3 layers I would have to do 900 more multiplications because they are the multiplications of the output of the first layer (that they are the input of the second layer) with the weight of the third layer. So to recap what I said so far I have a feedforward nn with 3 layer with 30 neurons in the first two layers multiply each neuron with all the neurons of the second layer it would be 30 * 30 = 900 multiplications for each additional layer that I add, it adds 900 additional multiplications. Then for a model with three (3) fully connected layer would be 900 multiplications in the first two layer because for what I said earlier and 900 multiplications in the second and third layer for a total of 1,800 (900 + 900) multiplications excluding an activation function and this is only for a forward pass, is my understanding correct? And in addition to a forward pass in a typical Neural network they also have a backward pass that because of my calculations doing earlier they are 1,800 derivatives (gradient) for the entire backward pass. Am I correct assuming this for a Neural network?. That's why a CPU computer takes so long to train a model because it has to do about 3,600 (1,800 + 1,800 ) mathematical operations. AI: Essentially you are correct, there are a lot of calculations necessary to process inputs and train neural networks. You have some terminology a bit wrong or vague. E.g. In a feedforward meural network each neuron of the first layer multiplied with all the neurons of the second layer. The neurons do not multiply together directly. A common way to write the equation for a neural network layer, calling input layer values $x_i$ and first hidden layer values $a_j$, where there are N inputs might be $$a_j = f( b_j + \sum_{i=1}^{N} W_{ij}x_{i})$$ where $f()$ is the activation function $b_j$ is the bias term, $W_{ij}$ is the weight connecting $a_j$ to $x_i$. So if you have $M$ neurons in the hidden layer, you have $N\times M$ multiplications and $M$ separate sums/additions over $N+1$ terms, and $M$ applications of the transfer function $f()$ And in addition to a forward pass in a typical Neural network they also have a backward pass that because of my calculations doing earlier they are 1,800 derivatives (gradient) for the entire backward pass. It doesn't work quite so directly, and there as a small factor of more calculations involved (you do not calculate each derivative with a single multiplication, often there are a few, some results are re-used, and other operations may be involved). However yes you do need to calculate a derivative for each weight and bias term, and there are roughly that number of weights in your network that require the calculations done. Your suggested numbers are actually quite small compared to typical neural networks used for image problems. 
These typically perform millions of computations for a forward pass. That's why a CPU computer takes so long to train a model because it has to do about 3,600 (1,800 + 1,800 ) mathematical operations. Actually that is a trivial number of calculations for a modern CPU, and would be done in less than a millisecond. But multiply this out by a few factors: You must do this for each and every example in the training data Your example network is small, think bigger This does not include the activation function calculations - typically slower than a multiply Your rough estimate ignores some of the necessary operations, so as a guesstimate, multiply number of CPU-level operations by 3 or 4 from your analysis. . . . and the number of operations does start to get to values where CPUs can take hours or days to perform training tasks in practice.
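If you want to convince yourself of the counts, here is a small numpy sketch of a forward pass through the 30-30-30 example from the question, using tanh as a placeholder activation:

import numpy as np

def dense_layer(x, W, b):
    # W has shape (n_in, n_out): n_in * n_out multiplications, plus additions and the activation
    return np.tanh(x @ W + b)

sizes = [30, 30, 30]            # input layer and two further layers of 30 neurons
x = np.random.randn(sizes[0])
mults = 0
for n_in, n_out in zip(sizes[:-1], sizes[1:]):
    W, b = np.random.randn(n_in, n_out), np.random.randn(n_out)
    x = dense_layer(x, W, b)
    mults += n_in * n_out

print(mults)                    # 1800 multiplications for this forward pass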
H: What are graph embeddings? I recently came across graph embeddings such as DeepWalk and LINE. However, I still do not have a clear idea of what is meant by graph embeddings and when to use them (applications)? Any suggestions are welcome! AI: Graph embedding learns a mapping from a network to a vector space, while preserving relevant network properties. Vector spaces are more amenable to data science than graphs. Graphs contain edges and nodes; those network relationships can only use a specific subset of mathematics, statistics, and machine learning. Vector spaces have a richer toolset from those domains. Additionally, vector operations are often simpler and faster than the equivalent graph operations. One example is finding nearest neighbors. You can perform "hops" from one node to another in a graph. In many real-world graphs, after a couple of hops there is little meaningful information (e.g., recommendations from friends of friends of friends). However, in vector spaces you can use distance metrics to get quantitative results (e.g., Euclidean distance or cosine similarity). If you have quantitative distance metrics in a meaningful vector space, finding nearest neighbors is straightforward. "Graph Embedding Techniques, Applications, and Performance: A Survey" is an overview article that goes into greater detail.
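As a toy illustration of the DeepWalk idea (random walks on the graph fed to word2vec), here is a sketch using networkx and gensim; the graph, walk length, number of walks and embedding size are placeholder choices, and gensim >= 4 is assumed (older versions use size instead of vector_size):

import random
import networkx as nx
from gensim.models import Word2Vec

G = nx.karate_club_graph()                      # toy graph

def random_walk(G, start, length=10):
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(G.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return [str(node) for node in walk]          # word2vec expects sequences of "tokens"

walks = [random_walk(G, node) for node in G.nodes() for _ in range(20)]
model = Word2Vec(walks, vector_size=32, window=5, min_count=0, sg=1)  # skip-gram

# nodes now live in a vector space: nearest neighbours by cosine similarity
print(model.wv.most_similar('0', topn=5))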
H: Input for Sales Forecasting I want to perform demand forecasting for a particular item based on its attributes. Do I need to train the model with unsold items as well, by recording their sales quantity as zero, or should I train only on items that were sold during the training period? AI: In short, if there is supply, it should be in the model to determine demand: If you have XXXL pink jeans in your store and don't sell them, I think it needs to be in the model. If you don't have the pink jeans in your store for sale, it doesn't. It gets interesting when you'd sell some XS pink jeans and blue XXXL ones, but you'd at least have to offer them in the store to be able to sell them and get measurement data to train the model.
H: Is it possible to plan and assign Data Science tasks by complexity levels based on team members' experience? How would you assign Data Science primary tasks to Data Scientists in a team according to a colleague' seniority? I mean, as always you can expect different things and also level and pace of self-learning. For example just some picks which come to my mind: Junior/Beginner: mining technology and data cleansing? Intermediate: training models, data visualization? Senior: plan and design work for other team (not just from managerial point of view), decide on strategy and estimate project risks (with a custom data science solution, hehe)? Note. I do not think this is a very opinion based question: it would be however if my premise is wrong in the way if we would argument "there are junior geniuses" but I try to focus rather to some mainstream rule of thumb than exceptions. AI: I would divide tasks by complexity, smaller tasks for the juniors, bigger tasks for intermediates, etc. But the complete task from mining to interpreting the results of a trained model. How would juniors otherwise be able to 'grow' from data cleansing to model training? Data cleansing and model training are probably too closely related to divide between team members. Where would you normalize data, and how do you assure that both team members use the same assumptions? And the Junior has to learn how to clean data properly for models. I do think that Data Science projects, more than other projects in Engineering, have uncertain outcomes. Setting milestones, S.M.A.R.T. targets, monitoring, supervision, reviews, interpretation should be done on a daily basis by Seniors.
H: How to clone Python working environment on another machine? I developed a machine learning model with Python (Anaconda + Flask) on my workstation and all goes well. Later, I tried to ship this program onto another machine where of course I tried to set up the same environment, but the program fails to run. I copied the program to other machines where it also runs smoothly. I cannot figure out what the problem is in the failed case (both the program code and the error message are copious so I am not able to present them here) but I'm almost certain that it is something with the different versions of the dependencies. So, my question is that given an environment where a certain program runs well, how can I clone it to another where it should run well also? Of course, without the cloning of the full system ;) AI: First of all this is a Python/Anaconda question and should probably be asked in a different stack exchange subsite. As for the question itself - you can export your Anaconda environment using: conda env export > environment.yml And recreate it using: conda env create -f environment.yml Please note that as others suggested - you should use virtual environments which allows you to create a certain environment that is separated from that of your machine and manage it more easily. To create a virtual environment in Anaconda you can use: conda create -n yourenvname python=x.x anaconda which you activate using: source activate yourenvname
H: Can we implement random forest using fitctree in matlab? There is a function called TreeBagger that can implement random forest. However, if we use this function, we have no control over each individual tree. Can we use the MATLAB function fitctree, which builds a decision tree, to implement random forest? Thanks a lot. AI: I would highly recommend doing some research into the architecture of random forests. There are many sites that provide in-depth tutorials on RFs (Implementation in Python). Quick explanation: take your dataset, bootstrap the samples and apply a decision tree. Within your trees, you want to randomly sample the features at each split. You should not have to build your own RF using fitctree however. You don't want to control each individual tree in the forest. This introduces bias, and the point of the RF is that by bagging many trees, you remove the risk of overfitting. Define your hyperparameters and let the algorithm do its thing. Carefully cross-validate to ensure you are not under-fitting.
H: Converting a nominal attributes to numerical ones in data set I'm using the NSL-KDD data set which contains nominal and numerical values, and I want to convert all the nominal values to numerical ones. I tried the get_dummies method in python and the NominalToBinary method in WEKA, but the problem is that some nominal features contain 64 values so the conversion increases the dimensionality of the data a lot, and this can create problems for the classifier. My question is if I can convert the nominal attributes by establishing a correspondence between each category of a nominal feature and a sequence of integer values, for example protocol_type {tcp=0, udp=1, icmp=2...etc}? Would this alter the credibility of the resulted data set? AI: By converting a nominal attribute to a single numeric attribute as you described, you are implicitly introducing an ordering over the nominal labels which is a bad representation of the data, and can lead to unwanted effects from a classifier. Does it make sense to say that UDP should be inbetween TCP and ICMP? (no!) Imagine you are training a $k$-NN model on this data. It doesn't make sense to say that ICMP should be "further away" from TCP than UDP, but if you adopted the mapping that you suggested, the representation of the data has this assumption built-in. Alternatively, what if you are training a decision tree-based model? Usually, in decision trees, binary split points are chosen for numeric attributes. There could be some randomness in your training data where splits at certain values of the numeric attribute results in overfitting to noise. Typically when converting a nominal attribute to numeric, one numeric attribute per nominal label is created. Each attribute is set to one if the corresponding nominal label is set, and zero otherwise. For example, if a nominal attribute called protocol has labels {tcp, udp, icmp}, then this dataset: $$ \begin{array}{ccl} \text{inst.} & \text{protocol} & \text{other attributes} \\ \hline 1 & \text{tcp} & \dots \\ 2 & \text{icmp}& \dots \\ 3 & \text{icmp}& \dots \\ \vdots & \vdots & \ddots \end{array} $$ could be converted as follows: $$ \begin{array}{ccccl} \text{inst.} & \text{tcp} & \text{udp} & \text{icmp} & \text{other attributes} \\ \hline 1 & 1 & 0 & 0 & \dots \\ 2 & 0 & 0 & 1 & \dots \\ 3 & 0 & 0 & 1 & \dots \\ \vdots & \vdots & \vdots & \vdots & \ddots \\ \end{array} $$ This is what the NominalToBinary filter does in WEKA. As you mention, the downside of this is that a large number of additional attributes can be introduced if the number of distinct nominal values is high. If the dimensionality is too high after the conversion, you may want to consider using a dimensionality reduction technique such as random projection, PCA, t-SNE, etc. Note that this will reduce the interpretability of your model. You could also use feature selection techniques to remove some of the less useful attributes. It is possible that some of the nominal labels are not useful for your model, and you will improve performance by removing them. Another thing you could try is to use your domain knowledge to reduce the number of categories. For example, TCP and UDP are both transport protocols, maybe for your application the distinction between TCP and UDP is not that important and you can put instances with protocol $\in$ {tcp, udp} into a new category, removing the old ones.
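If you go the one-hot route but some attributes with many values blow up the dimensionality, a practical variant of the last suggestion is to keep only the most frequent labels and fold the rest into an 'other' category before encoding. A pandas sketch, where df is your loaded dataset and the column names are assumptions:

import pandas as pd

# keep the 10 most frequent labels of a high-cardinality column, fold everything else into 'other'
top_labels = df['service'].value_counts().nlargest(10).index
df['service_reduced'] = df['service'].where(df['service'].isin(top_labels), 'other')

encoded = pd.get_dummies(df, columns=['protocol_type', 'service_reduced'])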
H: How to preprocess data? What is the generic way to preprocess data for machine learning and predictive models? What is the sequence of steps to be taken? AI: Assuming you have a dataset with both continuous and categorical data:
1. Centering: centre the numerical variables, usually by subtracting the mean of the column and sometimes by subtracting the minimum value of the column.
2. Scaling: scaling means converting the range of the data to, for example, [0, 1]. It is done by different methods: some divide by the range, others divide by the standard deviation (unit variance).
3. Skewness: check the skewness of each variable in the data. If the skewness factor is not around zero, try data transformations such as exponential, Box-Cox or logarithmic transformations.
4. One-of-n encoding: also called one-hot encoding, it is used to encode categorical (nominal) variables; alternatively you can use equilateral encoding, etc. Ordinal variables can be encoded as increasing or decreasing integers.
5. Feature selection and importance: if you want to remove unwanted variables from your data, use a feature selection algorithm such as recursive feature elimination or feature importance from tree models.
6. Dimensionality reduction: to reduce the number of dimensions in your data, use algorithms like Principal Component Analysis (unsupervised) or Partial Least Squares (supervised) and select the number of dimensions that explains most of the variance of your data.
7. Removal of outliers: outliers are the part of your data that does not follow the bulk of the distribution; in some situations you can handle them with techniques like the spatial sign transformation, among others.
8. Missing values: missing values are one of the most common problems in data science. To overcome them you can impute values using approaches like kNN imputation, or build a model that predicts the missing data from the other variables.
9. Binning data: this means converting continuous data into categorical or interval data; it sounds interesting but often leads to loss of valuable information.
These are some of the important and basic steps of data preprocessing for the majority of algorithms, but some algorithms do not need all of them, e.g. random forests accept factor values (so no need for one-of-n encoding), XGBoost accepts missing values, etc.
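In recent versions of scikit-learn, several of these steps can be wired together in one pipeline; the column names below are placeholders for your own numerical and categorical variables:

import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.impute import SimpleImputer

numeric_cols = ['age', 'income']
categorical_cols = ['city', 'gender']

numeric_pipe = Pipeline([('impute', SimpleImputer(strategy='median')),
                         ('scale', StandardScaler())])          # centre + unit variance
categorical_pipe = Pipeline([('impute', SimpleImputer(strategy='most_frequent')),
                             ('onehot', OneHotEncoder(handle_unknown='ignore'))])

preprocess = ColumnTransformer([('num', numeric_pipe, numeric_cols),
                                ('cat', categorical_pipe, categorical_cols)])

X_ready = preprocess.fit_transform(X_raw)   # X_raw is your assumed input dataframe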
H: what are the top level subsets/domains of ML? I'm not really happy with the mind maps I've been able to find on Google, most of them are algorithm based. I want to make a good one that is problem/solution domain based. Do I have this right for my top level nodes? Here is the general direction I am headed: https://i.stack.imgur.com/fNuwv.jpg My questions/doubts about what I have so far are:   Is my starting point below generally correct? e.g. no high level subclass is missing, and everything presented as a subclass deserves to be here? is Hybrid learning always just a combination of supervised and unsupervised? Or, are there real examples of other hybrid models (e.g. 'reinforcement' and 'supervised', etc.). I know theoretically we can combine any methods...I'm looking for what's real/applied/demonstrable today. does Reinforcement learning belong at this high level, or is it actually a subset of one of the others (or one I've omitted)?   Machine Learning 1.1 Supervised (uses labelled data to train and validate) 1.2 Unsupervised (uses unlabeled data, or ignores labels if they are present) 1.3 Semi-supervised (uses partially labelled (mostly unlabeled) data) 1.4 Hybrid (combines a supervised method and an unsupervised method) 1.5 Reinforcement Learning (uses data from the environment)   Thank you! AI: Google (in images!) on 'machine learning cheat sheet', to find an example like this: https://docs.microsoft.com/en-us/azure/machine-learning/studio/algorithm-cheat-sheet So you are well on your way in identifying all kinds of techniques, but you may want to turn your plot 'sideways', e.g. two-way / multi-class classification are problem (sub)domains and result in different recommended machine learning techniques.
H: Multiple time-series predictions with Random Forests (in Python) I am interested in time-series forecasting with RandomForest. The basic approach is to use a rolling window and use the data points within the window as features for the RandomForest regression, where we regress the next values after the window on the values within the window. Just plain autoregressive model (with lags), but with Random Forest instead of linear regression. Problem: If I have more than one time-series (multiple time-series), how to pass them in RF regression? For example: given two time series $y_1(t)$ and $y_2(t)$, the outcome time series is $z(t)$ and I am interested in predicting the values of $z(t)$ based on the combination of $y_1$ and $y_2$. What I need, is to use rolling window for each $y_1$ and $y_2$, and then feed these values within the window from both time-series into RF regression, to predict the value of $z(t)$. Question: How do I incorporate the data from both rolling windows into the input for RF regression? AI: Random forest (as well as most of supervised learning models) accepts a vector $x=(x_1,...x_k)$ for each observation and tries to correctly predict output $y$. So you need to convert your training data to this format. The following pandas-based function will help: import pandas as pd def table2lags(table, max_lag, min_lag=0, separator='_'): """ Given a dataframe, return a dataframe with different lags of all its columns """ values=[] for i in range(min_lag, max_lag + 1): values.append(table.shift(i).copy()) values[-1].columns = [c + separator + str(i) for c in table.columns] return pd.concat(values, axis=1) For example, the following code: df = pd.DataFrame({'y1':[1,2,3,4,5], 'y2':[10,20,40,50,30], 'z': [1,4,9,16,25]}) x = table2lags(df[['y1', 'y2']], 2) print(x) will produce output y1_0 y1_1 y1_2 y2_0 y2_1 y2_2 0 1.0 NaN NaN 10.0 NaN NaN 1 2.0 1.0 NaN 20.0 10.0 NaN 2 3.0 2.0 1.0 40.0 20.0 10.0 3 4.0 3.0 2.0 50.0 40.0 20.0 4 5.0 4.0 3.0 30.0 50.0 40.0 The first two rows have missing values, because lags 1 and 2 are undefined on them. You can fill them with what you find appropriate, or simply omit them. When you have matrix of $x$ values, you can feed it, for example, to a scikit-learn regressor: from sklearn.ensemble import RandomForestRegressor rf = RandomForestRegressor().fit(x[2:], df['z'][2:]) Finally, a piece of advice. Your model could much improve if you used not only raw lagged values as features, but also their different aggregations: mean, other linear combinations (e.g. ewm), quantiles, etc. Including additional linear combinations into a linear model is useless, but for tree-based models it can be of much help.
H: Which regularizer to use to get a sparse set of regression parameters? I am doing a regression and I want to use the regularizer that will be the most useful to get a sparse set of parameters. Which regularizer should I use? Cardinality? Maximum value? Sum of absolute values? Euclidean norm? AI: The most common sparse regularizer is the sum of absolute values (so-called Lasso regression). With a carefully chosen penalty coefficient, it makes some of the less useful parameters exactly zero. The cardinality penalty imposes sparsity exactly, but it cannot be combined with gradient descent and usually requires combinatorial optimization. Simply put, it is slow to apply. Maximum value and Euclidean norm do not affect sparsity at all. You can find more details in the Wikipedia article.
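A quick way to see the effect with scikit-learn on synthetic data (the penalty strengths are arbitrary and would normally be tuned by cross-validation):

import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=50, n_informative=5, noise=10, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)    # sum of absolute values (L1) penalty
ridge = Ridge(alpha=1.0).fit(X, y)    # Euclidean norm (L2) penalty, for comparison

print(np.sum(lasso.coef_ == 0))       # many coefficients driven exactly to zero
print(np.sum(ridge.coef_ == 0))       # typically none are exactly zero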
H: Cropping Images for Dataset Problem I want to train HyperGAN with a set of 400+ images of people, but they aren't the specified size (32x32 pixels) for training. Question Is there any way/program to help cropping/resizing them so as to not do it 100% manually? AI: Cropping and/or resizing is very trivial using OpenCV. You could write a 10 line script to iterate over all your images and apply the transformation you require.
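Here is a sketch of such a script; the directory names are placeholders and 32x32 is the target size you mentioned (a centre crop is taken first so the images are not distorted):

import os
import cv2

src_dir, dst_dir = 'raw_faces', 'resized_faces'
os.makedirs(dst_dir, exist_ok=True)

for name in os.listdir(src_dir):
    img = cv2.imread(os.path.join(src_dir, name))
    if img is None:                      # skip files that are not readable images
        continue
    h, w = img.shape[:2]
    side = min(h, w)                     # centre-crop to a square first
    top, left = (h - side) // 2, (w - side) // 2
    crop = img[top:top + side, left:left + side]
    small = cv2.resize(crop, (32, 32), interpolation=cv2.INTER_AREA)
    cv2.imwrite(os.path.join(dst_dir, name), small)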
H: How to treat sparse categorical features in a Neural Network for multiclass classification with Tensorflow? I am building a Neural Network for multiclass classification. My dataset has 3 million observations. My features are 7 unordered categorical variables. My problem is sparse, as 4 of the 7 features can take more than 1500 different values. The label is a categorical variable that takes 2100 different values. My model outputs a vector of 2100 probabilities through the softmax function. I am using Tensorflow with Python and I wanted to use one-hot encoding to encode each of the features. Does this method suffer from the sparsity of my data? For instance, does one-hot encoding transform my inputs into a vector of more than 2000*4 = 8000 features? In that case, if I use mini-batch gradient descent with a batch size of 50, is the fact that I have many more features than samples (8000 >> 50) a problem ("curse of dimensionality")? Would you have a solution, and a way to implement it in Tensorflow with Python? AI: Having more features than the batch size is not a problem; indeed, it is the rule in many fields like computer vision. Having a very sparse dataset is also the norm in NLP. Both communities rely on dropout to prevent overfitting. You may see an example of using dropout with Tensorflow here.
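As a minimal sketch of that idea with the Keras API bundled with TensorFlow (it assumes you have already built the one-hot encoded input matrix; the layer sizes and dropout rates are placeholders to tune):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

n_features = 8000    # total width after one-hot encoding the 7 categorical inputs
n_classes = 2100

model = Sequential([
    Dense(512, activation='relu', input_shape=(n_features,)),
    Dropout(0.5),                       # dropout to fight overfitting on sparse inputs
    Dense(256, activation='relu'),
    Dropout(0.5),
    Dense(n_classes, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(X_onehot, y_onehot, batch_size=50, epochs=10)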
H: Text annotating process, quality vs quantity? I have a question regarding annotating text data for classification. Assume we have ten volunteers who are about to annotate a large number of texts into label A or B. They probably won't have time to go through all the text samples, but at least a significant portion of them. Should we (1) focus on generating new samples for each annotator, so that they never see the same text samples as any other annotator (quantity approach)? Or should (2) all annotators see the same samples, with annotator agreement taken into account (quality approach)? Thoughts: option 1 will generate more unique samples than option 2 (more training samples for a classifier), hoping that in the feature extraction part the useful features will appear by themselves; option 2 will generate fewer unique samples, but with annotator agreement taken into account (fewer training samples for a classifier, but with higher quality). AI: It depends on the context you consider. For example, suppose all possible states can be covered by 10K different texts. Trivially, if these texts are all annotated, then out of 1000 test samples we can classify at least 500 correctly (as we have two classes, and the probability of a wrong annotation for each text is at most 0.5). Now suppose only 1K of the 10K texts are annotated. Then, even though the annotations are exact, we can only classify about 1/10 of the 1000 test samples correctly (because we have no idea about the other 9K possible states). Therefore, in this situation quantity is more important than quality. We can also consider the case where there are only 1K possible states. It is straightforward to show that in this case (if the annotator capacity is the same as in the former case) quality can be more important than quantity. However, in most cases this number is not realistic. In sum, as the variety of texts in most cases exceeds the capacity of the annotators, we prefer quantity over quality, as we can cover more of the text space and the machine can learn more. The accuracy of individual labels can be lower, but for two-class classification this loss is usually negligible.
H: Forecasting: How Decision Tree work? For example I have the following data structure: user: Chris age: 32 income: 60.000 basket value: 45 I want predict the basket value, and my features are the age and income. With a linear regression I get a regression function as the result of the fitting for example: $$y = 0.5x + 0.785$$ Now I can use the function for prediction. What is the form of the result of the fitting by regression decision tree? Is it also a function? AI: Yes. It is also a function, but not an affine transformation of the input but a relatively complex sum of products of indicator functions of the input. Usually, this function is represented by the fitted tree and not as a formula. So e.g. if you learn a tree of depth one and the split is at age 40 with mean response of 80 if age < 40 and mean response of 100 if age $\ge$ 40, then the function could look like $$ \hat f(\text{age}, \text{income}) = 80 \cdot {\mathbf 1}\{\text{age} < 40\} + 100 \cdot {\mathbf 1}\{\text{age} \ge 40\} $$ You can maybe imagine how long the formula is if the depth is 7...
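You can see this fitted function directly by printing a small tree, for example with scikit-learn; the toy data below is made up to mirror your example:

import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text

df = pd.DataFrame({'age':    [32, 25, 47, 51, 38, 29],
                   'income': [60000, 35000, 80000, 90000, 55000, 40000],
                   'basket_value': [45, 30, 95, 110, 60, 35]})

tree = DecisionTreeRegressor(max_depth=1).fit(df[['age', 'income']], df['basket_value'])
print(export_text(tree, feature_names=['age', 'income']))
# the printed rules are exactly the sum-of-indicator-functions form above:
# one constant prediction (a leaf mean) per region of the input space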
H: Why should I normalize also the output data? I'm new to data science and Neural Networks in general. Looking around, many people say it is better to normalize the data before doing anything with the NN. I understand how normalizing the input data can be useful. However, I really don't see how normalizing the output data can help. I've also tried both cases with an easy dataset, and I achieved the same results. The only difference is that in some weird problems, it is really hard to then re-convert the output back. Can you give me some intuition on why we should also normalize the output? Or maybe why it is indifferent? AI: That paper gives a nice answer, where i quoted from. Search for Should I standardize the target variables (column vectors)? in that page. Standardizing target variables is typically more a convenience for getting good initial weights than a necessity. However, if you have two or more target variables and your error function is scale-sensitive like the usual least (mean) squares error function, then the variability of each target relative to the others can effect how well the net learns that target. If one target has a range of 0 to 1, while another target has a range of 0 to 1,000,000, the net will expend most of its effort learning the second target to the possible exclusion of the first. So it is essential to rescale the targets so that their variability reflects their importance, or at least is not in inverse relation to their importance. If the targets are of equal importance, they should typically be standardized to the same range or the same standard deviation.
H: The automatic construction of new features from raw data Some observations are far too voluminous in their raw state to be modeled by predictive modeling algorithms directly. Common examples include image, audio, and textual data, but could just as easily include tabular data with millions of attributes. Feature extraction is a process of automatically reducing the dimensionality of these types of observations into a much smaller set that can be modelled. For tabular data, this might include projection methods like Principal Component Analysis and unsupervised clustering methods. For image data, this might include line or edge detection. Depending on the domain, image, video and audio observations lend themselves to many of the same types of DSP methods. What about generating new features with higher predictive value from the raw data and concatenating them to the raw data? For example, I have data about student wealth, health and family status, and I want to somehow generate a new feature I can call Social status, which is derived from the raw data and has high predictive value. Is this possible? Could linear regression be a good way to discover it? AI: 'High predictive value' is only defined if you have a target which you are trying to predict. It seems that you don't, and your goal is to cluster data points according to some scale defined by a variety of factors. These can undoubtedly be used to cluster data points, and I'd advise you to look into the various methods available: some that may be interesting for you are Agglomerative and Hierarchical clustering. Now to answer the question, you can surely generate new features from the ones present in your dataset that may or may not help you achieve your goal. You can: Bin your data: define some categories such as 'rich', 'average', 'poor' with specified ranges and create a new feature that maps a numerical value (wealth) to a bin. One-hot-encode categorical variables. After these pre-processing steps are done, you could go ahead and apply the clustering methods I mentioned to group data together into their respective 'Social Status'. Of course, lots of tweaking and experimentation will be needed. As far as I've encountered, there are not really automagic ways of generating new features, and mostly the available methods will depend greatly on the type of data and problem you are working on.
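A small sketch of the binning and clustering ideas with pandas and scikit-learn; the bin edges, labels, column names and number of clusters are assumptions you would adapt to your data (df is your assumed student dataframe):

import pandas as pd
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import StandardScaler

# wealth bins as a hand-crafted feature
df['wealth_bin'] = pd.cut(df['wealth'],
                          bins=[0, 20000, 60000, float('inf')],
                          labels=['poor', 'average', 'rich'])

# one-hot encode categoricals, scale, then cluster into candidate "social status" groups
X = pd.get_dummies(df[['wealth', 'health', 'family_status']])
X_scaled = StandardScaler().fit_transform(X)
df['social_status'] = AgglomerativeClustering(n_clusters=4).fit_predict(X_scaled)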
H: Regression: Should "known outputs" be also activated? Okay so, I understand that inputs are sent directly to the network (basically being multiplied to weights of the nodes, receive a bias, and gets activated) and then the network produces an output through this feed forward method. Now, in order for the network to "learn," we determine the error by comparing the predicted output versus the "known output / answer," and then back-propagate. Right? What I don't get it is if the output goes through some activation function, say Sigmoid (results to a value between 0.0 - 1.0), how can it ever learn if the known output is a continuous value such as "Salary", "Number of Something", or in the use-case I have, "Volume of Orders" which ranges from anywhere between 300 to 1500. // Example I have 7 inputs // They go through a hidden layer that consists 16 hidden neurons // and my output is a single neuron that's supposed to be "Volume of Orders" // // MyInputLayers[] --> HiddenLayers[] --> Output=0.123 // // Since I learn from tutorials that Output layers go through an activation, // I think it's impossible for the network to learn because a // prediction of 0.123 compared to the "known output" that is, say 615 // is almost ridiculous Should I apply Sigmoid to the "known output" as well? If so, how can I remap it when the network has already learned and is ready to do some predictive magic? I must be missing / not understanding something here. Thank you in advance. p.s. My dataset is composed of 6 months of records; an Excel file that has 200,000+ rows and 8 columns (7 leading, 1 lagging). AI: The sigmoid function is usually used for classification tasks. Basically, it converts a continuous value into a binary variable (0 or 1) based on a certain threshold. An example is predicting breast tumor as malignant (1) or benign (0) based on whether or not the output is greater than the threshold value (typically, 0.5). If you want to use ANN for regression, first you'll have to change your cost function into something more appropriate, like mean squared error (MSE). Then adapt the network to provide continuous output, by using a linear activation function, for example. Source: https://www.quora.com/Can-Deep-Learning-and-Neural-Networks-be-useful-for-regression-problems-where-the-output-variable-has-an-unknown-or-varying-in-real-time-upper-bound
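For concreteness, here is a sketch of that regression setup using the Keras API bundled with TensorFlow; the 7-16-1 shape follows your description, and everything else is a placeholder:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(16, activation='sigmoid', input_shape=(7,)),  # hidden layer can keep sigmoid (or relu)
    Dense(1, activation='linear'),                      # linear output: no squashing to (0, 1)
])
model.compile(optimizer='adam', loss='mse')             # mean squared error for regression
# model.fit(X_train, y_train, epochs=50, batch_size=32)

Alternatively, you can keep a sigmoid output and scale the targets into [0, 1] (e.g. with scikit-learn's MinMaxScaler), then map the predictions back with the scaler's inverse_transform; the linear output plus MSE above simply avoids that extra step.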
H: How would you optimize this code? I have the following code written using the pandas library. I would like to know if there are ways to optimize the code. for column in df: for index, row in df[column].iteritems(): if type(row) == str: if 'R$' in row: n = row.replace('R$', '') n = n.replace(' ', '') n = n.replace('.', '') df[column].iloc[index] = float(n) Just want to remove unwanted string parts. AI: You can replace the symbol in dataframe without iterating yourself. df = df.replace({'R\$': ''}, regex=True) Then change the type of columns that can be numeric. If you don't know which are those columns, use this that will automatically change the type to numeric and ignore those that cannot be changed. df = df.apply(pd.to_numeric, errors='ignore') When you use replace and in general many other pandas features, it doesn't update your dataframe. It creates a new, temporary dataframe. So, you either need to assign it back to your original dataframe or use inplace=True wherever it is available, like: df.replace({'R\$': ''}, regex=True, inplace = True) $ is a special character in regex, so you need to escape it. That's why the backslash before it. import pandas as pd dic = {'feature1': 'R$ aaa bb', 'feature2': 1} df = pd.DataFrame(dic, index=[0,1]) print(df) >> feature1 feature2 0 R$ aaa bb 1 1 R$ aaa bb 1 df = df.replace({'R\$': ''}, regex=True) print(df) >> feature1 feature2 0 aaa bb 1 1 aaa bb 1
H: Word2vec continuous bag of words and skip grams model Recently, I have been trying to understand word2vec. I know there are two algorithms behind word2vec: one is CBOW, the other is the skip-gram model. Here is my question: does CBOW also have a window size like the skip-gram model, and does it also iterate over the corpus? For example, take "I am eating pizza now" and let's say the window size is 2. In CBOW, the features will be "I, am, pizza, now" and the label will be "eating". Will it also use "I" as the label with "am, eating" as features, and so on, as it iterates? AI: In CBOW you are predicting a target word from source context words. In Skip-gram it is the inverse: given a target word, it predicts source context words. CBOW also creates vector representations for all the words apart from just "eating" in your example, by constructing tuples for training based on the window size. So considering the window size of 2, the data would look like this: ([I, eating], am), ([am, pizza], eating), ([eating, now], pizza) and so on and so forth.
H: Does the choice of normalization dramatically change the result of KMeans? I'm using KMeans to get the profile of several users according to several columns (I'm working with RStudio). To analyze my clusters, I decided to make a radar chart, so I chose the feature scaling (x - min(x)) / diff(range(x)) to get my values into [0, 1] (to get a reasonably good idea of my data per cluster). However, since there are multiple choices for normalization, I was wondering if doing my analysis with another choice, for instance (x - mean(x)) / sd(x), would give me the same results (at least in a general way). Or am I completely wrong to consider my scaled data, and should I use my unscaled data in my radar chart? AI: Yes. Feature scaling can completely change the clustering result. People usually scale data to [0, 1] or to have a standard deviation of 1. However, that is nothing but a heuristic. In many cases, the need for scaling is nothing but a symptom, caused by data that is inappropriate for the method. You can't truly fix this by some naive scaling; it's just a hack that often works. For statistically meaningful results, all axes should be scaled to reflect attribute relevance, such that a difference of 1 unit is of the same importance in each attribute.
H: Logistic Regression : Solving the cross-entropy cost function analytically The logistic regression cost function is the cross-entropy. It is defined as below: $$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log h_\theta(x^{(i)}) + \left(1-y^{(i)}\right)\log\left(1-h_\theta(x^{(i)})\right)\right]$$ This is a convex function. To reach the minimum, scikit-learn provides multiple types of solvers such as: the 'liblinear' library, 'newton-cg', 'sag' and 'lbfgs'. Is it possible to analytically find the minimum? If yes, what can we say about the computational complexity? AI: There is no closed-form solution, so you cannot find the minimum analytically. If you use just one neuron (linear -> sigmoid), you can get arbitrarily close to the minimum using approaches like gradient descent or the other optimization algorithms listed above. The key point is that, while cross entropy is non-convex for general nonlinear models, with the sigmoid as the activation function of logistic regression the cross-entropy cost function is convex, so there is a single global minimum and iterative solvers converge to it. Regarding complexity, each iteration of such solvers costs roughly O(n*d) for n samples and d features; the number of iterations needed depends on the solver and on the required precision.
H: Multiple Object recognition in image using deep Learning I am working on recognizing object classes in images using neural nets, so I can build classifiers for cats, dogs, etc. using ImageNet and some famous ConvNet architectures. But my problem is: if I have an image that contains a human, a cat and a car, how can I detect and recognize all three of them? AI: This falls under the multi-label object detection problem. There have been lots of advancements in this field and you should definitely look at implementations like: YOLO: https://pjreddie.com/darknet/yolo/ SqueezeNet: http://songhan.github.io/SqueezeNet-Deep-Compression/
H: Product Recommendation based on purchase history I am dealing with a problem where I have to increase sales through product recommendation. I only have customer data and the products they have purchased; no ratings, reviews or feedback are present. What approach fits my problem best? AI: Even if you don't have ratings or reviews, you can use the customer purchases to help create your model and select the most appropriate one. If the customer has bought the product, you can assume they liked it. If they haven't, you can assume they disliked it. This is an approach which is mainly used by e-shops. You can find more on recommender system selection and validation below. It discusses how to choose the best recommender with both offline and online approaches. https://medium.com/recombee-blog/evaluating-recommender-systems-choosing-the-best-one-for-your-business-c688ab781a35
H: Single vs Multiple deep learning networks for multi-label classification? Given a machine reaches a broken state, there are potentially fixes that can to be applied to get the machine to run again. We'd like to know if, for the problem further defined below, with millions of datapoints, hundreds of features, and tens of labels, should we be looking at using a single deep neural network with multiple outputs, or create an ensemble of binary networks with human-selected features presumed better for their relevance? Is there a standard approach? The state of the machine is captured in potentially hundreds of features, a good mix of continuous values and categorical data. We have millions of documented cases of machine states, and can identify when a machine had broke and what fixes were applied to get it running again. There are less than 40 fixes we are interested in, and we are considering a bucket for "other" fixes, and a bucket for "unfixable". We are treating this as a multi-label classification, because it may take multiple discrete fixes (Fix-1, Fix-2...Fix-N) to get the machine up and running again. Not all features would be relevant to each fix, so the question we're wondering is whether each fix should have its own binary classification (each outputing a single value representing Fix-i or not Fix-i) network with what we think are the relevant features, or should we create one giant deep neural network with multiple labels, sigmoid over each fix (Fix-1, Fix-2...Fix-N). With the latter approach, should combinations of fixes be represented as their own label (Fix-1, Fix-2, Fix-1&2, Fix-3, Fix-1&3, Fix-2&3, Fix-1&2&3 ... etc). AI: This kind of setup is normally addressed with a single neural network that has an output sigmoid layer of $N$ units (where $N$ is the number of possible fixes). Having combinations of fixes represented with their own label does not make much sense; instead, let the different labels take the expected value: if 2 fixes are fine, let both expected outputs be 1. Having $N$ networks each one generating a single fix-nofix is equivalent but does not benefit from the potential gains of the multitask learning obtained when training everything together and obtaining a shared internal representation.
H: Are there free cloud services to train machine learning models? I want to train a deep model with a large amount of training data, but my desktop does not have that power to train such a deep model with these abundant data. I'd like to know whether there are any free cloud services that can be used for training machine learning and deep learning models? I also would like to know if there is a cloud service, where I would be able to track the training results, and the training would continue even if I am not connected to the cloud. AI: There are no unlimited free services*, but some have starting credit or free offers on initial signup. Here are some suggested to date: AWS: If specifically deep learning on a large data set, then probably AWS is out - their free offer does not cover machines with enough processing power to tackle deep learning projects. Google Cloud might do, the starting credit offer is good enough to do a little deep learning (for maybe a couple of weeks), although they have signup and tax restrictions. Azure have a free tier with limited processing and storage options. Most free offerings appear to follow the "Freemium" model - give you limited service that you can learn to use and maybe like. However not enough to use heavily (for e.g. training an image recogniser or NLP model from scratch) unless you are willing to pay. This best advice is to shop around for a best starting offer and best price. A review of services is not suitable here, as it will get out of date quickly and not a good use of Stack Exchange. But you can find similar questions on Quora and other sites - your best bet is to do a web search for "cloud compute services for deep learning" or similar and expect to spend some time comparing notes. A few specialist deep learning services have popped up recently such as Nimbix or FloydHub, and there are also the big players such as Azure, AWS, Google Cloud. You won't find anything completely free and unencumbered, and if you want to do this routinely and have time to build and maintain hardware then it is cheaper to buy your own equipment in the long run - at least at a personal level. To decide whether to pay for cloud or build your own, then consider a typical price for a cloud machine suitable for performing deep learning at around \$1 per hour (prices do vary a lot though, and it is worth shopping around, if only to find a spec that matches your problem). There may be additional fees for storage and data transfer. Compare that to pre-built deep learning machines costing from \$2000, or building your own for \$1000 - such machines might not be 100% comparable, but if you are working by yourself then the payback point is going to be after only a few months use. Although don't forget the electricity costs - a powerful machine can draw 0.5kW whilst being heavily used, so this adds up to more than you might expect. The advantages of cloud computing are that someone else does the maintenance work and takes on the risk of hardware failure. These are valuable services, and priced accordingly. * But see Jay Speidall's answer about Google's colab service, which appears to be free to use, but may have some T&C limitations which may affect you (for instance I doubt they will be happy for you to run content production of Deep Dream or Style Transfer on it)
H: Why is an activation function notated as "g"? In many cases an activation function is notated as g (e.g. Andrew Ng's Course courses), especially if it doesn't refer to any specific activation function such as sigmoid. However, where does this convention come from? And for what reason did g start to be used? AI: The addition of the activation layer creates a composition of two functions. "A general function, to be defined for a particular context, is usually denoted by a single letter, most often the lower-case letters f, g, h." So it comes down to the reason that he uses the hypothesis representation h(x)=wX+b which is a function, and that is wrapped by an activation function denoted as g. The choice of g seems to be purely alphabetical.
H: How to use k-means outputs (extracted features) as SVM inputs? l have a dataset of images with their labels. l put them into a k-means algorithm (as a feature extractor). Now, l would like to use this new representation of images (features extracted from k-means algorithm) as SVM classifier inputs. How can l do that ? Number of cluster k=400 and numbers of images=1000. However, l just have the vectors of centroids (400 centroids) l need to get the representation for each image with respect to the centroids. EDIT1 package update from sklearn import mixture gmm = mixture.GMM(n_components=6).fit(X) Now l would like run k-means with different k=range(50,500), how can l get the distances for each k ? Is is correct to do the following : K=range(50,500) KM=[KMeans(n_clusters=k).fit(X) for k in K] distances = [np.column_stack([np.sum((X - center)**2, axis=1)**0.5 for center in C.cluster_centers_]) for C in KM] AI: 'Prediction' of k-means algorithm for each observation is just the corresponding centroid. So you can take vector of predicted centroids and use it as a categorical feature (maybe one-hot encoded). But it is just one feature. With little coding you can do better. For example, you can find for each sample its distance to each of $k$ cluster center, and so create $k$ new features. A Python example: from sklearn.datasets import load_iris from sklearn.cluster import KMeans from sklearn.svm import SVC import numpy as np iris = load_iris() X = iris['data'] y = iris['target'] kmeans = KMeans(n_clusters=6).fit(X) distances = np.column_stack([np.sum((X - center)**2, axis=1)**0.5 for center in kmeans.cluster_centers_]) svm = SVC().fit(distances, y) Another (and maybe simpler way) is to fit a gaussian mixture model (e.g. by scikit-learn). It is similar to k-means, but for each observation produces a probability distribution over clusters, instead of a single cluster label. These vectors of predicted cluster probabilities may be used as features as well. from sklearn.mixture import GaussianMixture gmm = GaussianMixture(n_components=6).fit(X) proba = gmm.predict_proba(X) svm2 = SVC().fit(proba, y)
H: SVDD vs once Class SVM Can some one please explain me what is the difference between one class SVM and SVDD(support vector data description) AI: Support vector data description (SVDD) finds the smallest hypersphere that contains all samples, except for some outliers. One-class SVM (OC-SVM) separates the inliers from the outliers by finding a hyperplane of maximal distance from the origin. If the kernel function has the property that $k(\mathbf{x}, \mathbf{x}) = 1 \quad \forall \mathbf{x} \in \mathbb{R}^d$, SVDD and OC-SVM learn identical decision functions. Many common kernels have this property, such as RBF, Laplacian and $\chi^2$. SVDD and OC-SVM are also equivalent in the case that all samples lie on a hypersphere centered at the origin, and are are linearly separable from it. See Lampert, C. H. (2009). Kernel methods in computer vision (Chapter 5) for more detailed descriptions of these models.
H: How is a splitting point chosen for continuous variables in decision trees? I have two questions related to decision trees: If we have a continuous attribute, how do we choose the splitting value? Example: Age=(20,29,50,40....) Imagine that we have a continuous attribute $f$ that have values in $R$. How can I write an algorithm that finds the split point $v$, in order that when we split $f$ by $v$, we have a minimum gain for $f>v$? AI: In order to come up with a split point, the values are sorted, and the mid-points between adjacent values are evaluated in terms of some metric, usually information gain or gini impurity. For your example, lets say we have four examples and the values of the age variable are $(20, 29, 40, 50)$. The midpoints between the values $(24.5, 34.5, 45)$ are evaluated, and whichever split gives the best information gain (or whatever metric you're using) on the training data is used. You can save some computation time by only checking split points that lie between examples of different classes, because only these splits can be optimal for information gain.
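A bare-bones version of that search in Python, using Gini impurity as the metric (for illustration only; real implementations handle ties and efficiency more carefully):

import numpy as np

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(values, labels):
    order = np.argsort(values)
    values, labels = np.asarray(values)[order], np.asarray(labels)[order]
    best = (None, np.inf)
    for i in range(1, len(values)):
        if values[i] == values[i - 1]:
            continue
        threshold = (values[i] + values[i - 1]) / 2.0       # midpoint between adjacent values
        left, right = labels[:i], labels[i:]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:                                 # keep the lowest weighted impurity
            best = (threshold, score)
    return best

print(best_split([20, 29, 40, 50], ['no', 'no', 'yes', 'yes']))   # -> (34.5, 0.0)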
H: How to implement a convolutional autoencoder? I would like to implement a convolutional autoencoder in Tensorflow, but it is not clear how the decoder part should work. Each layer of the encoder is a convolutional layer with an activation function followed by a pooling layer. But how will the decoding work? I know that I have to add padding in each layer, but what is the reverse of the convolution? How will it reproduce the original data from far fewer variables and the padding? AI: Transposed convolutions are what you are looking for; for more details take a look here: https://medium.com/towards-data-science/types-of-convolutions-in-deep-learning-717013397f4d
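For a concrete starting point, here is a small convolutional autoencoder sketch using the Keras API bundled with TensorFlow; 28x28 single-channel inputs are assumed, and the layer sizes are placeholders:

from tensorflow.keras import layers, models

inputs = layers.Input(shape=(28, 28, 1))

# encoder: convolution + downsampling
x = layers.Conv2D(16, 3, activation='relu', padding='same')(inputs)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(8, 3, activation='relu', padding='same')(x)
encoded = layers.MaxPooling2D(2)(x)                      # 7 x 7 x 8 bottleneck

# decoder: transposed convolutions undo the downsampling
x = layers.Conv2DTranspose(8, 3, strides=2, activation='relu', padding='same')(encoded)
x = layers.Conv2DTranspose(16, 3, strides=2, activation='relu', padding='same')(x)
decoded = layers.Conv2D(1, 3, activation='sigmoid', padding='same')(x)

autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=128)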
H: Working with Data which is not Normal/Gaussian What happens if my data/feature is not normal? Can I still use machine learning algorithms on such data for predictions? I noticed that in many data science courses there is a strong assumption of normal/Gaussian data. I have always wondered why this is so, and most people would say that, due to the central limit theorem, the data is always assumed to be normal. However, what if the data that I am dealing with is not normally distributed? Should I perform a log/exponential transformation on the data for the sake of getting normally distributed data? Why is Gaussian data always best suited? AI: There are models that do not make the assumption that the underlying data distribution is normal. For example, a support vector machine just cares about the boundaries of the separating hyperplane and does not assume the exact shape of the distributions. Decision tree models also do not make such an assumption. The Gaussian distribution is popular and has been analyzed frequently because of its simplicity, but there are other models as well. If you know the distributions that you believe your data follows, you can also build your own model, for example by maximizing the likelihood or the posterior distribution. If you are able to convert your data to a normal distribution and use a model that you are familiar with, you can try it and see how it performs.
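If you do want to try transforming a skewed feature towards a Gaussian shape, a minimal sketch (an addition, not part of the original answer) using scikit-learn's PowerTransformer, which applies a Box-Cox or Yeo-Johnson transform; the data here is synthetic:

    import numpy as np
    from sklearn.preprocessing import PowerTransformer

    rng = np.random.default_rng(0)
    x = rng.lognormal(mean=0.0, sigma=1.0, size=(1000, 1))   # heavily right-skewed feature

    # Yeo-Johnson also handles zero/negative values; Box-Cox requires strictly positive data
    pt = PowerTransformer(method="yeo-johnson")
    x_t = pt.fit_transform(x)

    # Sample skewness before and after (close to 0 means roughly symmetric)
    skew = lambda a: float(np.mean(((a - a.mean()) / a.std()) ** 3))
    print("skewness before:", skew(x))
    print("skewness after: ", skew(x_t))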
H: How to calculate growth function for a threshold function I'm working on a homework problem but don't fully understand it. The problem and solution: I don't understand the definition of the threshold function. Does it mean to pick one feature and classify the point based on that one feature? It's the only way I can think of to explain the solution: $N$ ways to pick a feature, and for each feature there are $m+1$ ways to select the threshold. AI: Yes, your interpretation is correct. Each member of $H$ is one such function; they are parametrized by $i$, the feature selected, and $\theta$, the chosen threshold. Different values of $\theta$ might correspond to the same function, but effectively there are only $m+1$ such functions for each $i$. Hence, the set $H$, restricted to the $m$ sample points, consists of at most $(m+1)N$ elements.
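A quick empirical check of that bound (an addition, not part of the original answer): for a small random sample, enumerate every labeling a single-feature threshold can produce and confirm the count never exceeds $N(m+1)$. The sample size and feature count below are arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)
    m, N = 5, 3                       # m sample points, N features
    X = rng.normal(size=(m, N))

    labelings = set()
    for i in range(N):
        vals = np.sort(X[:, i])
        # candidate thresholds: one below all points, one between each adjacent pair, one above all
        thresholds = np.concatenate(([vals[0] - 1], (vals[:-1] + vals[1:]) / 2, [vals[-1] + 1]))
        for theta in thresholds:
            labelings.add(tuple(X[:, i] > theta))

    print(len(labelings), "distinct labelings; the bound is", N * (m + 1))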
H: ROC curve shows strange results for imbalanced dataset I have a classifier with a heavily imbalanced dataset (1,000 negative labels for each positive). I'm running a GradientBoostingClassifier with moderate success (AUC 0.75), but the curve has this strange look: Any good ideas on what would cause the curve to have this behaviour? AI: Davis and Goadrich have explained the relationship between ROC and PR curves in their paper. It is generally recommended to use the PR curve rather than the ROC curve in the presence of highly imbalanced data. Back to the behavior of your ROC curve: it seems that you don't have many threshold points. I would also agree with Dan and do K-fold CV. Davis, J. and Goadrich, M., 2006, June. The relationship between Precision-Recall and ROC curves. In Proceedings of the 23rd international conference on Machine learning (pp. 233-240). ACM.
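A minimal sketch of plotting a precision-recall curve for a highly imbalanced problem (an addition, not from the original answer); the data here is synthetic, at roughly a 1:100 positive-to-negative ratio rather than the 1:1000 in the question:

    import matplotlib.pyplot as plt
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import precision_recall_curve, average_precision_score
    from sklearn.model_selection import train_test_split

    # Synthetic imbalanced data: roughly 1 positive per 100 negatives
    X, y = make_classification(n_samples=20000, weights=[0.99], flip_y=0.01, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    clf = GradientBoostingClassifier().fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]     # predicted probability of the positive class

    precision, recall, _ = precision_recall_curve(y_te, scores)
    plt.plot(recall, precision)
    plt.xlabel("Recall")
    plt.ylabel("Precision")
    plt.title(f"AP = {average_precision_score(y_te, scores):.3f}")
    plt.show()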
H: Cosine similarity between query and document confusion I am going through the Manning book for Information Retrieval. Currently I am at the part about cosine similarity. One thing is not clear to me. Let's say that I have the tf-idf vectors for the query and a document. I want to compute the cosine similarity between both vectors. When I compute the magnitude of the document vector, do I sum the squares of all the terms in the vector or just the terms in the query? Here is an example: we have the user query "cat food beef". Let's say its vector is (0,1,0,1,1) (assume there are only 5 dimensions in the vector, one for each unique word in the query and the document). We have a document "Beef is delicious". Its vector is (1,1,1,0,0). We want to find the cosine similarity between the query and the document vectors. AI: You want to use all of the terms in the vector. In your example, where your query vector $\mathbf{q} = [0,1,0,1,1]$ and your document vector $\mathbf{d} = [1,1,1,0,0]$, the cosine similarity is computed as similarity $= \frac{\mathbf{q} \cdot \mathbf{d}}{\|\mathbf{q}\|_2 \|\mathbf{d}\|_2} = \frac{0\times1+1\times1+0\times1+1\times0+1\times0}{\sqrt{0^2+1^2+0^2+1^2+1^2} \times \sqrt{1^2+1^2+1^2+0^2+0^2}} = \frac{0+1+0+0+0}{\sqrt{3}\sqrt{3}} = \frac{1}{3}$
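The same computation in a few lines of NumPy (an addition, not part of the original answer):

    import numpy as np

    q = np.array([0, 1, 0, 1, 1])   # query: "cat food beef"
    d = np.array([1, 1, 1, 0, 0])   # document: "Beef is delicious"

    # dot product divided by the product of the full vector norms
    cosine = q @ d / (np.linalg.norm(q) * np.linalg.norm(d))
    print(cosine)                   # 0.333... = 1/3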
H: Identifying which known groups are the most similar or most dissimilar I have a data set of 5 groups and their associations to different interests. The data is structured as follows, with 2000+ interests and 5 known groups:

    Interest        Group1  Group2  Group3  Group4  Group5
    01. Sports         10%     40%     30%     80%     65%
    02. Music          35%     60%     55%     10%     15%
    03. Fashion        80%     10%     75%      5%     25%
    ...
    1998. Movies       70%     20%     45%     50%     65%
    1999. Books        50%     70%     35%     65%     95%
    2000. Art          90%     50%     25%     15%     30%

I would like to understand the best way to determine which groups are closest to each other and which ones are the most different from each other in terms of interests. I am leaning towards cluster analysis; however, the structure of my data is throwing me off. Any suggestions would be greatly appreciated. AI: For tiny sample sizes (5), hierarchical clustering and dendrograms work best. As a similarity measure, I'd go with Manhattan distance.
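A minimal sketch of that approach with SciPy (an addition, not part of the original answer); the interest-by-group matrix is randomly generated here as a stand-in for the real data, and each group is treated as one observation described by its 2000 interest percentages:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import linkage, dendrogram

    # Hypothetical data: rows = 2000 interests, columns = 5 groups (percentages)
    rng = np.random.default_rng(0)
    interests_by_group = rng.uniform(0, 100, size=(2000, 5))

    # Transpose so each group becomes one observation with 2000 features
    groups = interests_by_group.T

    # Hierarchical clustering with Manhattan (cityblock) distance
    Z = linkage(groups, method="average", metric="cityblock")

    dendrogram(Z, labels=[f"Group{i+1}" for i in range(5)])
    plt.show()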