H: Does an image's background matter for detector training (CNN)? Does an image's background matter for detection/localisation in the training part (using a CNN)? For example, if I want to make a face detector, which one is better as a training dataset: a cropped-faces dataset, or a faces-in-a-global-scene dataset? Does it matter? AI: Of course, it matters. Which one is best depends entirely on your problem. The golden rule for machine learning problems is that you want the data you train on to be as representative as possible of the data the model will encounter once live. Typically, a cropped-faces dataset makes for an easier problem because the CNN can focus directly on the face, but if your use case will involve images of faces that are not cropped, your model might not perform well. So use whatever data you expect to encounter later on. Alternatively, you could add a face-cropping algorithm before your network, and then you could train on the cropped-faces dataset right away.
H: How to implement a Restricted Boltzmann Machine manually? I am learning about Restricted Boltzmann Machines and I'm so excited by the ability it gives us for unsupervised learning. The problem is that I do not know how to implement it using one of the programming languages I know without using libraries. I want to implement it manually, which means that I want to use native functionalities of a language as much as possible. The programming languages I know are Java, C, PHP (my preferred language), JavaScript, R and Python. I am not familiar with TensorFlow or Scikit-Learn or similar stuff. Thanks in advance AI: ML From Scratch has an implementation in Python
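To give a feel for what a manual implementation involves, here is a minimal NumPy sketch of a binary RBM trained with one step of contrastive divergence (CD-1). It is illustrative only (toy data, no momentum, weight decay or persistent chains), not the implementation from the linked repository:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1, seed=0):
        self.rng = np.random.RandomState(seed)
        self.W = self.rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible bias
        self.b_h = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def train_step(self, v0):
        # positive phase: hidden probabilities and samples given the data
        p_h0 = sigmoid(v0 @ self.W + self.b_h)
        h0 = (self.rng.rand(*p_h0.shape) < p_h0).astype(float)
        # negative phase (CD-1): reconstruct visibles, then hiddens again
        p_v1 = sigmoid(h0 @ self.W.T + self.b_v)
        p_h1 = sigmoid(p_v1 @ self.W + self.b_h)
        # contrastive divergence parameter updates
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / batch
        self.b_v += self.lr * (v0 - p_v1).mean(axis=0)
        self.b_h += self.lr * (p_h0 - p_h1).mean(axis=0)
        return np.mean((v0 - p_v1) ** 2)   # reconstruction error

# toy usage on random binary data
data = (np.random.rand(100, 6) > 0.5).astype(float)
rbm = RBM(n_visible=6, n_hidden=3)
for epoch in range(10):
    err = rbm.train_step(data)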
H: GAN for inpainting an image Is it possible to train a GAN model to inpaint an image taken from a specific setting(e.g. an office, woods, beach ...) after we have cropped out people of it? For example, I used this repo's pretrained GAN model on Places2 I used a MaskRCNN model to segment people, and then cropped them out : But when I fed the image to the mentioned GAN model, results weren't so great : If it's possible, what is the best GAN architecture to use? AI: I would doubt there's a single correct answer for best available architecture, but the current best results come from this paper by NVIDIA and this technical report by Adobe. The latter paper is the second iteration of the repo you linked. Both papers focus on non-rectangular inpainting, which seems to be your targeted interest. Neither has publicly available code, however, so if you're interested in seeing what those models can accomplish you'll have to re-implement them yourself! Edit: NVIDIA released a PyTorch implementation since the initial answer, here's the repo.
H: Extract datatable column by indexing a character vector I am asking a similar question to this one except I am trying to index a vector of column names. If I have a data.table like a: a <- data.table("NAME" = c("A", "B", "A"), "PASS_FAIL" = c("F", "P", "P"), "TEST_COUNT" = c(NA, NA, 1)) And a vector of column names: col_names <- c("NAME", "PASS_FAIL", "TEST_COUNT") Then try to index with this syntax: a[col_names[1]] I get this error: Error in `[.data.table`(a, col_names[1]) : When i is a data.table (or character vector), the columns to join by must be specified either using 'on=' argument (see ?data.table) or by keying x (i.e. sorted, and, marked as sorted, see ?setkey). Keyed joins might have further speed benefits on very large data due to x being sorted in RAM. I also tried this a$col_names[1] which results in NULL (but since it is a data.table this is the wrong syntax. Calling just col_names[1] gives a character "NAME" which is what I expected. Any ideas on what I am doing wrong here? Thanks! AI: You need [[]] as otherwise you are attempting to index the row. a[[col_names[1]]]
H: Why is patsy used to prepare data for logistic regression? I'm pretty new to both ML & scikit-learn. I've noticed that some example tutorials & codes online use patsy's dmatrices to prepare data for logistic regression. I don't understand why this is done. Example In the case above for instance, isn't it sufficient to use the data in the dataset directly to train the logistic regression model? What exactly is the point of using patsy? Thanks! AI: Mostly convenience. In this particular case, it takes care of one-hot encoding categorical variables e.g. C(occupation). patsy takes care of other things under the hood as well, like dropping rows with missing values and adding a constant intercept variable.
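As an illustrative sketch (the column and category names here are made up, not from the linked example), this is roughly what dmatrices handles for you:

import pandas as pd
from patsy import dmatrices

# hypothetical data with one numeric and one categorical predictor
df = pd.DataFrame({
    'admitted':   [1, 0, 1, 0],
    'hours':      [10.0, 2.5, 8.0, 1.0],
    'occupation': ['engineer', 'teacher', 'teacher', 'engineer'],
})

# patsy one-hot encodes C(occupation), drops rows with missing values
# and adds an 'Intercept' column automatically
y, X = dmatrices('admitted ~ hours + C(occupation)', data=df,
                 return_type='dataframe')
# X now contains columns such as 'Intercept', 'C(occupation)[T.teacher]' and 'hours',
# ready to be passed to a scikit-learn or statsmodels logistic regression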
H: Poor performance of SVM after training for rare events I found out that Weighted SVM is a classification approach to handle class imbalance problem. My data set is highly imbalanced with rare event (minority class, labeled as 1) and the majority class (label 0). So I implemented the supervised classification weighted svm technique with stratified cross-validation as these are able to handle class imbalance. I added an additional tuning for the C parameter (boxconstraint). The training is done using 5 folds cross-validation approach. The method works well on the training set. I get good performance after the training. This I can say because by looking at the confusion matrix after the training. cmMatrix = 1443 27 0 30 It is generally recommended to re-train using the optimized hyperparameters. So, I ran the trained model on the entire dataset again (re-trained it) an predicted on the same dataset. PROBLEM: If I give a highly imbalanced unseen new data set (this set is never used by the model and is the Test set) to the trained SVM model, the prediction on this data is totally biased towards the majority class as shown below cmMatrix_TestData = 98 2 5 0 Where did I go wrong? Please help, I practically have no method working for class imbalance problem whereas several articles and suggestion suggest these two approaches which I am not able to make it work for me. AI: This I can say because I ran the trained model on the entire dataset again (re-trained it) an predicted on the same dataset. You seem to be making a fundamental mistake here. If you train and test on the same data, your performance will not be representative of how the model can perform on unseen data points. Make sure that you train and test on different datasets. If the difference in performance persists, make sure that the validation dataset is representative of your training set. If it's not, this might explain why the model is not performing well on the validation set.
H: What is the role of the bias? When talking about artificial neurons, inputs, weights and biases, I understand the role of each but the latter. In short if we have a neuron such as sigmoid(sum(w*x) + b) I get that the weights basically say which of the inputs is more "important", but what about the bias? I've read it in this other question "is as means how far off our predictions are from real values." But how can this be true, if we start them "at random"? Also this isn`t supposed be be the job of the loss/cost function? AI: Bias simply takes care of variables which are unaccounted for i.e you did not include it in X. All Machine Learning and Neural Nets do is approximate a function. All functions can be represented as a series of exponentials using Fourier Transform which basically means a function can be expressed as a sum of many other special functions. In this series there is a constant term to adjust the elevation of the curve above the x-axis. This constant can be thought of as the bias. Thus it basically takes care of the fact that when all variables are 0, a function need not be 0.
H: Low number of inputs compared to outputs (per row) in neural network I have data that one row has input of size 1x2 (two values) and output is matrix of size 15x3 (fifteen rows with three values) like that |-----------------| |y01,2 y01,2 y01,3| |y02,1 y02,2 y02,3| |. | |. | |. | |. | |. | ------- |. | |x1,x2| |. | ------- |. | |. | |. | |. | |. | |y15,1 y15,2 y15,3| |-----------------| This is one row and I have around 3000 inputs like that. Most values are real numbers except one output column that is binary (So output y$_{n,3}$ values are 0 or 1). How should I preprocess this kind of data? Is it of any sense to train two models, one for real numbers only and the second one for binary output? What neural network architecture would work best for this task? AI: 1.) Perform standard preprocessing for you input data, i.e. remove mean and scale to unit variance. 2.) No, you should train everything together. Although it is at the moment not clear, there can be relevant relationships between your outputs. 3.) You can actually use any architecture. Simply start with Dense-NN. More important is how you arange your outputs. You should have different activation and loss functions for your different outputs. For your binary outputs use (binary)-Crossentropy and for your real values use Mean Squared Error (or sth. similar). Your final output consists of course of the sum of your single losses.
H: What is meant by 'training patch size'? Currently I read a paper about symmetric skip connections for autoencoder (link). One experiment of them changes the the 'training patch size'. In my understanding patches are sub-boxes of an image that is used at one time of an convolutional layer. So if you have a 3x3 filter the patch is a part of the image with the size 3x3. Do they mean by 'training patch size' the size of the input image? (The network is a symmetric autoencoder, so the input size is arbitrary) Thanks in advance AI: If you follow the linked literature (down the rabbit hole for a few levels), you end up at a paper from 2005 from Kervann and Boulanger - at least that's as deep as I got. In that linked webpage, they define the patch-based image noise reduction methods as such: The main idea is to associate with each pixel the weighted sum of data points within an adaptive neighborhood. So a patch is an area of a single image, like a convolutional kernel, but it doesn't convolve. They talk about adaptive patches, meaning that you need to (perhaps randomly) select a pixel, then adapt the patch size used in order to include enough surrounding information to reproduce a homogenous patch as the output. It seems as though training with clean images, to which noise is added (additive Gaussian white noise), are used to train. This will help with robustness of final models by reducing the variance, but doing so also must introduce a bias to re-create areas where the noise is somehow uniform. The first link above, if you scroll down, shows many examples of typical images to be de-noised. The noise is not always so uniform. Here is a picture taken from that 2005 paper, where they show patch-regions (marked in yellow). Page 5 gives a nice short description of the general idea. Patch sizes in their work were typically $7 x 7$ or $9 x 9$ in pixel size.
H: What Introductory Statistics book would you recommend? Is there a particularly good book or good publisher for teaching Statistics from beginner level for someone interested in studying Data Science? AI: I would recommend Elements of Statistical Learning, by Trevor Hastie and Rob Tibshirani. That link gets the book directly from their Stanford website. In addition, there are also accompanying videos (they follow the structure of a similar book), which are really helpful! They offer a simpler set of books and online courses, which you can find listed here.
H: train(): argument “x” is missing, with no default when i am trying the the following code I am getting an error: argument "x" is missing, with no default. > library(caret) Loading required package: ggplot2 Attaching package: ‘caret’ The following object is masked from ‘package:survival’: cluster Warning messages: 1: package ‘caret’ was built under R version 3.4.4 2: package ‘ggplot2’ was built under R version 3.4.4 > library(kernlab) Attaching package: ‘kernlab’ The following object is masked from ‘package:ggplot2’: alpha Warning message: package ‘kernlab’ was built under R version 3.4.4 > library(ROCR) Loading required package: gplots Attaching package: ‘gplots’ The following object is masked from ‘package:stats’: lowess Warning messages: 1: package ‘ROCR’ was built under R version 3.4.4 2: package ‘gplots’ was built under R version 3.4.4 > svm.data <- read.csv(file.choose(), header=TRUE) > str(svm.data) 'data.frame': 2538 obs. of 5 variables: $ X : int 1 2 3 4 5 6 7 8 9 10 ... $ SessionID: int 13307 21076 27813 8398 23118 12256 28799 11457 7542 19261 ... $ Timestamp: Factor w/ 2532 levels "2014-04-01T03:02:33.088Z",..: 2064 905 1086 1027 2419 1327 2035 1206 481 1354 ... $ ItemID : int 214684513 214718203 214716928 214826900 214838180 214717318 214821307 214537967 214835775 214706432 ... $ Price : int 0 0 0 0 0 0 0 0 0 0 ... > dim(svm.data) [1] 2538 5 > x <- subset(svm.data, select=-Price) > y <- svm.data$Price > table(svm.data$Price) 0 1 1289 1249 > table(svm.data$Price) / length(svm.data$Price) 0 1 0.5078802 0.4921198 > set.seed(123) > ctrl1 <- trainControl(method="cv", number = 2, summaryFunction=twoClassSummary, classProbs=TRUE) > grid <- expand.grid(sigma = c(.01, .015, 0.2), C = c(0.75, 0.9, 1, 1.1, 1.25)) > svm.tune <- train(train.x=x, train.y=y, method = "svmRadial", metric="ROC", tuneGrid = grid, trControl=ctrl1) Error in is.data.frame(x) : argument "x" is missing, with no default. how to solve this error? AI: You are giving wrong argument in last line. Function train does not have any arguments like train.x and train.y. Instead do this- svm.tune <- train(x, y, method = "svmRadial", metric="ROC", tuneGrid = grid, trControl=ctrl1) It would solve this problem.
H: What to do if my target variable is a column of lists? How can I transform my target variable (Y)? As it is a list, I can't use it for fitting a model, because I must use integers for fitting. AI: You can have multiple Y columns. Make each possible item a binary variable that indicates whether that item exists in the list or not (a multi-label encoding).
H: How to aggregate data where instances occur over different time intervals I am working on a problem in which I have several instances whose predictors have activity over various different time periods (i.e. <3 months to well over 20 months.) Originally I attempted to use knowledge I have about this problem (it is an opportunity-to-sale conversion model) and learned that the average time for a deal to close is about 9 months, so I broke my predictors up into three-month intervals. However, I took another look at the lengths of these deals and see that there are a variety of instances with durations that are not even close to 9 months, so this idea does not make sense. The only idea I have gotten is just creating a duration column where I subtract the start and the stop date and then just do the summation for each predictor. However, I feel that the instances might get incorrectly labeled because some might have an overwhelmingly higher amount of activity than another due to the duration of the deal. Has anyone else encountered such a problem? Not sure if this is a common problem, but a quick glance at google/reddit did not come up with anything (I could be asking the problem wrong.) AI: I had the same problem. You can use aggregation functions: for example Max, Min, Avg, count, std, or some derived quantity like the slope of a fitted line. Where instances cover periods of different lengths, you can divide each value by the number of days in its period, as in the sketch below.
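A small pandas sketch of that idea; all column names and numbers here are hypothetical placeholders for your own data:

import pandas as pd

# hypothetical activity log: one row per recorded activity of a deal
log = pd.DataFrame({
    'deal_id':  [1, 1, 1, 2, 2],
    'activity': [3, 5, 2, 10, 7],
})
# hypothetical deal durations in days (stop date minus start date)
durations = pd.Series({1: 270, 2: 60}, name='duration_days')

agg = log.groupby('deal_id')['activity'].agg(['sum', 'mean', 'max', 'count', 'std'])
agg = agg.join(durations)
# normalise totals by deal length so long deals are not unfairly inflated
agg['activity_per_day'] = agg['sum'] / agg['duration_days']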
H: Why fully calculate softmax? Why is the softmax activation function used in the output layer for CNNs? Why not just take the highest value of the units in the output layer? AI: If you are only interested in the most likely class, during inference you can skip the softmax. This is fairly common even, and the reason why TensorFlow has specific logit functionality. However, for optimizing your network you need a loss function that makes sense. You have normal labels, what would your loss function be if you only compute the linear combinations?
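To make this concrete, a small NumPy sketch: the argmax of the raw logits already gives the predicted class at inference time (softmax is monotonic, so it preserves the ordering), but the training loss needs the full softmax to turn logits into probabilities:

import numpy as np

logits = np.array([2.0, 1.0, 0.1])   # raw outputs of the last linear layer

# inference: argmax over the logits is enough to pick the most likely class
pred = np.argmax(logits)

# training: cross-entropy needs proper probabilities, hence softmax
probs = np.exp(logits - logits.max())   # subtract max for numerical stability
probs /= probs.sum()
true_class = 0
cross_entropy = -np.log(probs[true_class])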
H: What is the intuition behind Ridge Regression and Adapting Gradient Descent algorithms? So I was going through Adaptive Gradient Descent, and learning the intuition behind it: optimizing the learning algorithm, and getting the model to converge faster. The way AdaGrad does this, is by dividing the weight vector by the $L_2$ norm. The equation (copied from here), for it, is: $\theta_{t+1} = \theta_{t} - \dfrac{\eta}{\sqrt{G_t}} \odot g_{t}$ where, if i understand correctly, $\sqrt{G_t}$, is the root of the sum of the square of the gradients, which is the $L_2$ norm. Now if I understand correctly, $g_t$ in this case would be considered the "loss function" Then I got introduced to Ridge Regression, which basically adds the $L_2$ norm (scaled by a lambda) to the loss function. The intuition behind this is smoothing the loss function so that it doesn't overfit the data. The equation, (reworked from here) is: $Loss_{L_2} = Loss + \lambda L_2$ So my question, assuming all my understanding so far is correct, is: What is the intuition behind adding as opposed to dividing by the $L_2$ norm? I understand that you can regularize the loss function in AdaGrad and that they are two separate things, but I still am having trouble wrapping my head around how the math works the way it does. Like why does dividing by the $L_2$ norm make it converge faster? Why doesn't this just smooth the weight vector as well? What is going on here intuitively? AI: Adagrad is completely unrelated to ridge regression. There is no reason to expect that there should be any similarity in their methods. Adagrad divides by the L2 norm of the gradient. Ridge regression modifies the objective function by adding the L2 norm of the parameters. So even though it looks like the same thing, it's really not; those are two unrelated values. There's no principle in mathematics that the L2 norm always has to be added or divided. It's just a number. It's like seeing one equation where you multiply by 2 and another equation where you add 2 and asking "why does the second one add 2 rather than multiplying by 2?" -- the answer is "why not? there's no reason to expect them to have to use the number 2 in the same way".
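A small NumPy sketch of the two unrelated uses of an L2 quantity (values here are arbitrary): AdaGrad divides the gradient by the root of the accumulated squared gradients, whereas ridge adds the parameters themselves (scaled by lambda) to the gradient of the plain loss:

import numpy as np

theta = np.array([1.0, -2.0])
grad = np.array([0.5, 0.1])      # gradient of the plain loss at theta
eta, lam, eps = 0.1, 0.01, 1e-8

# AdaGrad: per-coordinate step size shrinks with the history of squared gradients
G = np.zeros_like(theta)
G += grad ** 2
theta_adagrad = theta - eta / (np.sqrt(G) + eps) * grad

# Ridge: the L2 penalty on the parameters just adds lam * theta to the gradient
# (absorbing the factor of 2 into lam) before an ordinary gradient step
theta_ridge = theta - eta * (grad + lam * theta)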
H: Experience replay in Reinforcement learning - Batch Size I am starting to understand Reinforcement Learning playing with the GYM Cart-pole environment. I would like to ask if the experience replay can slow down the training and if there is a maximum of the batch size? Maybe I misunderstood, and when the program starts the fit function on the replay I guess it is not sending the action to the agent, but the environment is still running. Not sure if I missed a point or if the fit function run so quickly that the agent will have plenty of time to catch up with the environment for the next action during training. AI: Definitely using experience replay can slow down the agent processing each time step, because typically on each time step, a result is stored (possibly requiring another result to be deleted), a mini-batch number of results are chosen randomly and fetched, then the function approximator (usually a NN) has to be run forward for the mini-batch to discover max Q values (potentially multiple times per item if the NN uses action as input instead of multiple action values as output). Then it has to run a forward pass to calculate error values and back propagation and weight updates over the mini-batch. All that can take considerable amount of CPU and memory bandwidth compared to the same system running without experience replay. However, there are a lot of factors to consider whether this is important. Here are a few that I can think of: Typically the environments using DQN agents are not training in real time. Often they run multiple times the speed of real time in order to collect data quickly and solve the environment faster. Obviously something without the overhead of experience replay could collect more data in the same real time if the environment is simulated, but actually the stabilising effect of experience replay is more important and the effective "wall clock" speed of learning is faster and more reliable, using less game play experience. Due to the way mini-batch processing is parallelised for neural networks on GPUs, there is not such a high overhead for increasing mini-batch size as you may think. At least up to the limits of GPU. There's still a cost for memory transfers etc per item, so it's not completely free. when the program starts the fit function on the replay I guess it is not sending the action to the agent, That is correct, the agent is updating its internal estimates of values in order to learn the optimal control actions for later actions. The mini-batch actions are not taken, just used to refine the estimates. but the environment is still running. That is only correct in real-time systems. The Gym environments are not real-time as far as I know. They all run at the speed that the agent wants them to - they are effectively a turn-based game for the agent to play, although many of them are based on systems that run at a specific real-time rate. However, real-time reinforcement learning and control systems are a thing in other domains. If learning must be done online in such a system, then typically you have some choices on how to design the system: Have fixed time steps, and allow for maximum computation time between each one (with perhaps a margin of error). Asynchronous interactions with the environment - with fixed time steps for sampling observations and taking actions - and learning continuously sampling from recent history and updating value functions and/or policy. Allow for variable length time steps. 
I don't know much about this approach, but it is an option. Which to use depends on the problem, and a bit of guesswork.
H: What is the difference between fit() and fit_generator() in Keras? What is the difference between fit() and fit_generator() in Keras? When should I use fit() vs fit_generator()? AI: In keras, fit() is much similar to sklearn's fit method, where you pass array of features as x values and target as y values. You pass your whole dataset at once in fit method. Also, use it if you can load whole data into your memory (small dataset). In fit_generator(), you don't pass the x and y directly, instead they come from a generator. As it is written in keras documentation, generator is used when you want to avoid duplicate data when using multiprocessing. This is for practical purpose, when you have large dataset. Here is a link to understand more about this- A thing you should know about Keras if you plan to train a deep learning model on a large dataset For reference you can check this book- https://github.com/hktxt/bookshelf/blob/master/Computer%20Science/Deep%20Learning%20with%20Python%2C%20Fran%C3%A7ois%20Chollet.pdf
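A minimal sketch of both calls with a dummy model and random data (note that in recent Keras/TensorFlow versions fit() itself also accepts generators and fit_generator() is deprecated):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(10, activation='relu', input_shape=(20,)),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy')

# fit(): the whole dataset is already in memory as arrays
x = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=(1000, 1))
model.fit(x, y, batch_size=32, epochs=2)

# fit_generator(): batches are produced on the fly, e.g. loaded from disk
def batch_generator(batch_size=32):
    while True:
        xb = np.random.rand(batch_size, 20)   # stand-in for loading a chunk
        yb = np.random.randint(0, 2, size=(batch_size, 1))
        yield xb, yb

model.fit_generator(batch_generator(), steps_per_epoch=1000 // 32, epochs=2)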
H: Why would you use word embeddings to find similar words? One of the applications of word embeddings (such as GloVe) is finding words of similar meaning. I just had a look at some embeddings produced by glove on large datasets and I found that the nearest neighbors of a given word are often fairly irrelevant. Eg. ‘dad’ is the closest neighbor of ‘mom’, ‘dealership’ is the seventh closest neighbor of ‘car’. In light of this, if you wanted to find words of similar semantics why would you prefer using embeddings instead of just downloading a database of synonyms from an online dictionary that is compiled by humans? AI: It depends on how similarity is defined. If similarity is defined as human-defined semantics, then a synset (i.e., synonym set) is most appropriate. If similarity is defined as frequent co-occurrence, then word embeddings are most appropriate. Even within semantic similarity, there are many approaches beyond synsets. One advantage of word embeddings over synsets is the ability to automatically find similarity with multi-word term vocabulary. For example, the common word analogy - Man is to king as woman is to queen.
H: Inference of root mean square value in terms of house prediction The objective of the task is to predict the housing prices. A model is created based on California housing dataset to predict housing prices and is subjected to evaluation using the below code. from sklearn.metrics import mean_squared_error housing_predictions = lin_reg.predict(housing_prepared) error = mean_squared_error(housing_labels,housing_predictions) rmse = np.sqrt(error) --> prediction error // value I got for rmse was - 68628.1981 The average housing prices range from 120,000 - 265,000. But the rmse value is 68628.1981. Questions By looking at rmse value, What should I infer?. I know the model is underfitting. I wanted to know the intuition behind the rmse value. AI: The RMSE is a measure for how "wrong" a regression model's predictions are on average, and is mostly useful as a relative metric for determining which of a set of models is best. If you want an absolute baseline against which to compare any given model you can always compute the standard deviation of your target, which is the RMSE of a model which predicts the average value of $y$ for every observation. I would also argue that for something like housing prices a more meaningful error measure is root mean square logarithmic error, which is the RMSE after taking the log of $y$ (or $y + 1$ if $y$ contains zeros) and the predicted values. The reason for this is that we're interested in errors as proportions of our target (an error of \$50,000 is much more serious for a \$100,000 house compared to a \$1,000,000 house), and taking logs is the way to adjust for this. Also be sure to focus on test or cross validation error, else you will always choose your most overfit model.
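For illustration, both metrics are available in scikit-learn (the numbers below are made up):

import numpy as np
from sklearn.metrics import mean_squared_error, mean_squared_log_error

y_true = np.array([120000, 180000, 265000])
y_pred = np.array([150000, 160000, 240000])

rmse = np.sqrt(mean_squared_error(y_true, y_pred))
rmsle = np.sqrt(mean_squared_log_error(y_true, y_pred))

# baseline: the RMSE of always predicting the mean equals the standard deviation
baseline_rmse = np.std(y_true)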
H: How to extract particular key from the dict()? I have a dict() with 1000 keys. The first 4 entries of the dict look like: { 'aaa': [1,0,6,8,0,5,9,1,1,0], 'abc': [1,1,1,2,4,0,0,0,9,8], 'cfg': [0,0,0,4,3,1,0,0,0,1], 'cghjj': [7,8,9,2,3,0,0,0,0,0] } I want to create a dataset using each key one by one. I want to pick key1, then create a dataset using function1 on the values of key1, and then pick key2. Create another dataset using the values of key2 and append the result of key2 to key1 row-wise. Then with key3... and then append the result to the result of key1 and key2 and so on, up to 1000 keys. dataset=dict() create_dataset: select values of key 1. b1=function1() b1=np.asarray(b1) then select key 2. b2=function1() b2=np.asarray(b2) np.append(b1,b2,axis=0) ...up to 1000 keys. AI: You could iterate over all the keys, apply function1 to them and then append them to full_data_set. At the end you have them all concatenated. However, Python dictionaries do not guarantee insertion order (prior to Python 3.7), so you don't control the order in which keys are chosen. data = { 'aaa': [1, 0, 6, 8, 0, 5, 9, 1, 1, 0], 'abc': [1, 1, 1, 2, 4, 0, 0, 0, 9, 8], 'cfg': [0, 0, 0, 4, 3, 1, 0, 0, 0, 1], 'cghjj': [7, 8, 9, 2, 3, 0, 0, 0, 0, 0] } def function1(l: list): # do something return l full_data_set = [] for key in data.keys(): full_data_set += function1(data[key]) If you do want to control the order of the keys, you need to make an ordered list of the keys somehow. key_order = ['aaa', 'abc', 'cfg', 'cghjj'] data = { 'aaa': [1, 0, 6, 8, 0, 5, 9, 1, 1, 0], 'abc': [1, 1, 1, 2, 4, 0, 0, 0, 9, 8], 'cfg': [0, 0, 0, 4, 3, 1, 0, 0, 0, 1], 'cghjj': [7, 8, 9, 2, 3, 0, 0, 0, 0, 0] } def function1(l: list): # do something return l full_data_set = [] for key in key_order: full_data_set += function1(data[key])
H: How to transform dictionary data into a string vector? I have key,value data where each record is in a Python string. An example record looks like this: record = { 'first_name': 'john', 'last_name': 'doe', 'age': '50', } To encode this into a neural net, I would like to firs have this data as a string vector. A sample output is like this: "first_name john last_name doe age 50" How can I make this transformation? AI: If you want output in any order, do this- >>> string_vector = '' >>> for k,v in record.items(): string_vector += k + ' ' + v + ' ' >>> string_vector = string_vector.rstrip(' ') >>> string_vector 'first_name john last_name doe age 50'
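A slightly more idiomatic version of the same transformation, assuming all values are already strings as in the example:

record = {'first_name': 'john', 'last_name': 'doe', 'age': '50'}
string_vector = ' '.join(k + ' ' + v for k, v in record.items())
# 'first_name john last_name doe age 50'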
H: How to show value of a classification model even though it doesn't get the desired performance? I developed a classification model for a telecom client. Where we classify between Dual-sim and non-Dual-Sim clients. After many iteration the best precision we can get is 60%. The contract says that the acceptance criteria is 75% precision. The client measures precision based on campaign results. In other words, they call sample data and ask them explicitly are a dual sim or not. Facts we know: Learning curve shows that more training data won't do the trick. We have got all variables that were used in the market before(and everything we could think of, but nothing improved the model). Training data don't even get 1% of the whole population I am generalizing on(and we can't get more than that). The random baseline is 35%, so we got a lift of around 1.9 (but they won't approve of it). We have tried many iterations from the most simple models to the most complex, and this is the best we can get. The dual sim and non-dual sim distribution across the variables, isn't that different. The language used were SQL, R. So the question is, what else can I do to prove that the model has a value but without getting the 75% precision? AI: Often customers don't understand the value behind the performance of machine learning models. 60% precision might sound not so good, since 50% is often related to nearly random for binary problems. However, it might well be a very difficult problem, and 60% might well be very good performance. One way you can show your customer that your model is indeed performing well, is to compare its precision against a human classification. You can: Define a random baseline, and explain why it is 35% and not 50% for your problem. Get a couple of humans from your client side to try the task manually. Let your model perform the task and show the improvement. More often than not, humans perform tasks more precisely than models (if in small scale). So, if your model performs better, hopefully, they will see the value your model is adding to their company.
H: Titanic Kaggle Data: Why am I getting lower accuracy on Kaggle submissions than on held-out data? I am going through my first solo machine learning project and would like to gain some insight into what I am doing wrong/what is going on here as I am a bit stuck. I have been applying machine learning to the Titanic data set with SKlearn and have been holding out 10% of the training data to calculate the accuracy of my fitted models. I also use K-fold cross-validation with 10 folds to evaluate the model performance and choose hyper-parameters. I have so far applied logistic regression and a linear kernel SVM, and in both cases I get 78-80% accuracy on the K-fold validation sets and when applying the fitted classifiers to my held-back, previously unseen testing data. However, when I predict on Kaggle's test data and submit my predictions, it comes back with values around 76%, which is significantly less than I'd expect and well outside the variance in the accuracy values I get with K-fold cross validation. A link to the Jupyter notebook where I do this is provided below: http://nbviewer.jupyter.org/github/AshleySetter/Kaggle_Competitions/blob/master/Titanic_project/Titanic_machine_learning_clean.ipynb Could anyone give me some insight into what is going on here and what I am doing wrong? AI: It could be because of differing class proportions. Imagine your data is 30% survived and 70% died, but in Kaggle's test data this ratio changes, e.g. to 50%-50%. In that case your model would not predict Kaggle's survived cases as well as it does on your own test split. Also, you may impute missing values with the mean; but if you use the test data when calculating that mean, it amounts to leakage (cheating) for your model.
H: Do Data Scientists prefer MACS? I see many Data Science (DS) tutorials done on MACS, and many DS blogs recommend MACS as the best developing platform, thus the quote "Data Science is statistics on a Mac" came more than once into my mind. I'm quite fascinated by MACS (be it iMAC or MacbookPro) but I never could get a valid reason to why Data Scientists in particular use them (if that is true, of course) All who I asked said: "because it has Unix", but I hardly manage to accept it as a valid reason (at least for me). The hardware is very weak for Machine Learning processes, and the OS is inefficient and cumbersome (no file path, no drag and drop photos and music files, windows don't fully open by default nor close, etc) The only benefit I found so far in MACoS (or Linux for this matter), is that when running parallel processes, the RAM is being shared, unlike on windows, but quoting one of my lecturers: "you need to be mad to switch OS for that reason only" Would be grateful if you could enlighten me on the subject. AI: This question is highly subjective, like asking what colour keyboard is best for programming faster. A large number of Computer Science research at universities is done using Unix based operating systems (normally a Linux variant like Ubuntu) because it is completely open source and allows the user to tweak things as they please. Windows never allowed this officially. Apple's OSX doesn't either. However, OSX is based on Unix like Linux, and so there are many more similarities in every day usage between Linux and OSX than Linux and Windows. Many of the poster-boys/girls from data science have an academic background, where they likely used Linux. Because many of the latest research ideas come from universities (where I claim most researchers use Linux), it is common to see open-source projects that have only been tested and packaged for Linux usage. Making them run on OSX is a smaller jump than going for Windows, as OSX and Linux are more alike. Other guesses as to why many pick Apple products, are that they generally have higher build quality, they have a reputation for many thing just working (though IMO, Microsoft and Linux have become almost as good in that respect) and they are marketed as being luxury consumer products - and so are visually appealing. I'm sure there are books written on how social psychology with respect to this. All this ends up with the price being somewhat inflated for the actual hardware parts that you get. In the end, someone who wants to get work done and has the necessary skills will manage it on a 10 year old netbook running Windows. Likewise, owning a Linux server with 8 GPUs and a Macbook Pro to remotely connect to that server is no guarantee of good work. I for one have worked on all three platforms, for data science as well as other tasks, and although I have my preferences, the operating system rarely poses the biggest problem.
H: Display images after augmentation in Keras How can I display all images after augmentation? How can I get the number of the trained data after augmentation? Thank you AI: Depending on the kind of data set you are using you can use .flow (if you have data as numpy arrays) or .flow_from_directory (if you have images in file system) to run through the data generator and save the output using save_to_dir argument.
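A minimal sketch with in-memory arrays (directory names and parameters here are just examples): every augmented image the generator yields is written to disk so you can inspect it. Note that with augmentation the generator can produce new images indefinitely, so the effective number of trained images is steps_per_epoch * batch_size per epoch rather than a fixed dataset size.

import os
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

x = np.random.rand(100, 64, 64, 3)          # stand-in for your images
y = np.random.randint(0, 2, size=(100,))    # stand-in for your labels

os.makedirs('augmented', exist_ok=True)
datagen = ImageDataGenerator(rotation_range=20, horizontal_flip=True)

batches = 0
for x_batch, y_batch in datagen.flow(x, y, batch_size=16,
                                     save_to_dir='augmented',
                                     save_prefix='aug', save_format='png'):
    batches += 1
    if batches >= 10:   # the generator loops forever, so stop manually
        break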
H: How to download a Jupyter Notebook from GitHub? This is a fairly basic question. I am working on a data science project inside of a Pandas tutorial. I can access my Jupyter notebooks through my Anaconda installation. The only problem is that the tutorial notebooks (exercise files) are on GitHub. My question: how do I download the exercise files from GitHub and then have them display in the Jupyter notebook section on my computer so that I can use them interactively? I am very new to Jupyter Notebooks. So far, I have been using PyCharm to execute all of my code. AI: If the tutorial is a GitHub repo, sure. Clone the repository. Run jupyter-notebook and open the notebook there.
H: How to train convolutional neural networks on unbalanced datasets of images? How can I train convolutional neural networks on unbalanced datasets of images? My dataset has around 400 classes and the classes have different number of images.. AI: By oversampling. When training a CNN, you generally use a mini-batch gradient descent method. The mini-batch method consists on taking some images at random and performing a gradient step with those images by backpropagating the loss of the model on those images. This random selection doesn't need to be uniform on all the images. Instead, you can make the images in the classes with few elements appear with more likelihood (oversampling them) than the other images. With this, you can help the CNN not only to learn about the most prevalent classes. In addition, if you have very few number of images of some classes, it is recommended to do image augmentation, thus generate more images in order to avoid overfitting those classes.
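A small NumPy sketch of what such oversampling can look like when you assemble mini-batches yourself (class counts and sizes are arbitrary):

import numpy as np

n_classes = 400
y = np.random.randint(0, n_classes, size=10000)   # stand-in for your labels

# sampling probability inversely proportional to class frequency
class_counts = np.maximum(np.bincount(y, minlength=n_classes), 1)
weights = 1.0 / class_counts[y]
probs = weights / weights.sum()

# indices of one mini-batch: images of rare classes are drawn more often
batch_idx = np.random.choice(len(y), size=64, replace=True, p=probs)

Many frameworks also offer a shortcut, e.g. the class_weight argument of fit() in Keras, which reweights the loss per class instead of resampling the images.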
H: What are the benefits and tradeoffs of a 1D conv vs a multi-input seq2seq LSTM model? I have 6 sequences, s1,..,s6. Using all sequences I want to predict a binary vector q = [0,0,0,1,1,1,0,0,0,1,1,1,...], which is a mask of the activity of the 6 sequences. I have looked at seq2seq lstm models, but am struggling with the multiple-sequence-input and single-sequence-output architecture. Am I headed down the right path, or should I shift my focus to a convnet with 6 non-spatial dimensions, and 1 spatial dimension? Thanks, AI: You will have to address the varying sequence length one way or another: you will likely have to perform some padding (e.g. using zeros to make all sequences equal to a max. sequence length). Other approaches, e.g. used within NLP to make training more efficient, are to splice series together (sentences in NLP), using a clear break/splitter (full-stops/periods in NLP). Using a convolutional network sort of makes sense to me in your situation, predicting a binary output. As the convolutions will be measuring correlations in the input space, I can imagine the success of the model will be highly dependent on the nature of the problem. For some intuition of conv nets used for sequences, have a look at this great introductory article. If your six sequences are inter-related, it allows the convolutions to pick up on those cross-correlations. If they are not at all related, I would probably first try Recurrent Networks (RNNs), such as the LSTM you mentioned. Getting your head around the dimensions of a multi-variate LSTM can be daunting at first, but once you have addressed the issue of varying sequence length, it becomes a lot more manageable. I don't know what framework you are using, but as an example in Keras/Tensorflow, the dimensions for your problem would be something like: (batch_size, sequence_length, num_sequences) batch_size can be set to None to give flexibility around your available hardware. sequence_length is where you need to decide on a length to use/create via padding/trimming etc. num_sequences = 6 :-) If helpful, check out these threads, where I explained that stuff in more detail. Multi-dimentional and multivariate Time-Series forecast (RNN/LSTM) Keras Keras LSTM with 1D time series
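To make those dimensions concrete, a minimal Keras sketch (layer sizes and sequence length are arbitrary) that takes the 6 parallel sequences as input and emits one binary value per time step:

from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed

seq_len = 100          # chosen/padded sequence length
num_sequences = 6

model = Sequential()
model.add(LSTM(32, return_sequences=True, input_shape=(seq_len, num_sequences)))
model.add(TimeDistributed(Dense(1, activation='sigmoid')))   # one 0/1 prediction per step
model.compile(optimizer='adam', loss='binary_crossentropy')
# input x has shape (batch_size, seq_len, 6); target q has shape (batch_size, seq_len, 1)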
H: Are there enough databases for all learning tasks? this might be a silly question but I guess the answer comes with experience in this field. I'm just wondering if today, with the internet overflowing with data and specifically with images (maybe not tagged), are there a lot of examples for learning tasks (specifically ones that takes images as inputs) without a proper database for those tasks? I'm asking because my experience in this field is limited and I only tried to work with some more standard learning tasks that have a proper size database and I'm wondering if the need for data (or tagged data) is still an issue. So, if you can give some examples for tasks that still require a proper database of images, that will be great. Thanks. AI: A naive observer might be tempted to think that with the vast amounts of data being collected, there's enough to answer all learning tasks. But this is misleading at least for the following reasons: There's no such thing as "all learning". Humanity has not yet perceived what could be the limits of knowledge. With the boldly stated ambitions of AI to match human intelligence, this also transfers to the lack of limits to what could machine learning aim to learn. One feature that a lot of data scientists downplay, but that comes as heritage from traditional science fields, is that sound research starts with identifying what you want to achieve and focus data collection on that objective. In other words, in an ideal data analysis setup, the analyst would have a say in how fit-for-purpose data is collected. A big part of machine learning is preprocessing data in order to clean, normalise or annotate it. This could also be done to pre-existing data as a way to adapt it for certain - previously unenvisaged - purposes. If the data is not collected with attention to how it is going to be used, it would need more preprocessing, and thus - generally speaking - would be less reliable due to the accumulation of steps where noise could be introduced. These three arguments are only a scratch on the surface of the topic, but they come to show that even if it is sometimes useful, the widespread data collection does not always respond to all questions one might want to ask. Even if in machine learning it sounds a bit more complicated, it all boils down to this amazing quote from Alice in Wonderland: “Alice: Would you tell me, please, which way I ought to go from here? The Cheshire Cat: That depends a good deal on where you want to get to. Alice: I don't much care where. The Cheshire Cat: Then it doesn't much matter which way you go. Alice: ...So long as I get somewhere. The Cheshire Cat: Oh, you're sure to do that, if only you walk long enough.” Examples: Comprehending information from random receipts or invoices. Automatically judging football situations. Telling Google (think captcha) if that thing on the image is a street sign, a bridge or a number.
H: How to Normalize a feature I have a feature that is the income of an individual. It ranges from 10k to 116 million. I have about 300k+ records. Clearly, I cannot use this feature as is, as it will distort the model output, and there are outliers. I was thinking of normalizing all values from 0 to 1 and also eliminating outliers. For example, if most values range from 30k to 200k, then people with an income of 200k would get a score around 1. I have 2 questions: 1) Is this approach reasonable or should I think about it differently? 2) Any suggestions on how to normalize and/or scale values to avoid outliers? AI: I can come up with many ideas, but whether they are going to be useful depends on what you are building: Take the logarithm of the variable, and use that as a feature; you can then normalize it or work with it directly. Transform it into a categorical variable using its deciles/quartiles. Crop the very high values to the 95th percentile, and then normalize it. I especially fancy the second one, as I really like tree-based methods and it shines with those methods.
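A short sketch of the first and third options (the numbers are made up):

import numpy as np

income = np.array([10000, 35000, 80000, 200000, 116000000], dtype=float)

# option 1: a log transform squashes the huge right tail
log_income = np.log(income)

# option 3: clip at the 95th percentile, then min-max scale to [0, 1]
cap = np.percentile(income, 95)
clipped = np.minimum(income, cap)
scaled = (clipped - clipped.min()) / (clipped.max() - clipped.min())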
H: Time series with only two values Could anyone help me learn about different approaches, methods or algorithms to build a model to forecast a time series which has only two values (0, 1), where each value persists for some time? Basically I have some on/off data; it tells me if there is an object in a place and how much time that object was in that specific place. So the output is a rectangular wave, but it is not periodic: if the object is in that place we get 1 every x seconds, and when there is nothing we get 0. Thanks in advance for your response! AI: You could just predict whether a room will be occupied in the next hour, based on whatever features you give it. Each example would be per room per hour. You can then use whatever machine learning model you want. It saves needing to go to the complexity of some sequence models - you could just use Logistic Regression, for example. Some examples of features: Was it occupied in the previous hour? How many of the previous hours was it occupied for? Of the past 10 hours, how many was it occupied for? Since the start of the 'working' day, how many hours has the room been occupied for? Was it occupied the same hour yesterday? Was it occupied the same hour a week ago? You could obviously add any others and incorporate extra information, if you have it; a sketch of building such lag features is shown below.
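A sketch of building a few of those lag features with pandas, assuming one row per hour for a single room and a binary occupied column (all names here are hypothetical; with several rooms you would build the same features per room, e.g. via groupby):

import pandas as pd
import numpy as np

df = pd.DataFrame({
    'hour': pd.date_range('2018-01-01', periods=48, freq='H'),
    'occupied': np.random.randint(0, 2, size=48),   # stand-in for your on/off data
})

df['prev_hour'] = df['occupied'].shift(1)                           # occupied in the previous hour?
df['occupied_last_10h'] = df['occupied'].shift(1).rolling(10).sum() # of the past 10 hours
df['same_hour_yesterday'] = df['occupied'].shift(24)
df['target'] = df['occupied'].shift(-1)                             # what we want to predict
df = df.dropna()                                                    # drop rows without full history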
H: What machine learning algorithms to use for unsupervised POS tagging? I am interested in an unsupervised approach to training a POS-tagger. Labeling is very difficult and I would like to test a tagger for my specific domain (chats) where users typically write in lower cases etc. If it matters, the data is mostly in German. I read about about old techniques like HMM, but maybe there are newer and better ways? AI: There are no unsupervised methods to train a POS-Tagger that have similar performance to human annotations or supervised methods. The current state-of-the-art supervised methods for training POS-Tagger are Long short-term memory (LSTM) neural networks.
H: Is it helpful to normalize target variables for a regression neural network? It is customary to normalize feature variables and this normally does increase the performance of a neural network, in particular a CNN. I was wondering if normalizing the target could also help increase performance? I did not notice an increase in performance with the data set I am using at the moment but was curious if anyone has tried it in the past. Of course the normalization happens only on the training data. AI: One reason for normalising the inputs is to make gradient descent more stable, as gradients spend more time in a comfortable region with meaningful updates and fewer neurons 'die' during training - getting stuck at one of the tails of e.g. the sigmoid non-linearity. Normalising the output distribution is perhaps not the best idea, as you are by definition altering the definition of the target. This means you are essentially predicting a distribution that doesn't mirror your real-world target (at least without some reverse non-linear transforms later on). One thing you could do is scale the target instead of normalising it. The shape of the distribution should remain almost identical, but the values themselves might be more easily attainable and therefore faster to optimise for; they are all closer in magnitude to the gradients that are being computed.
H: How to get windspeed when temperature is maximum for each city? Here is the sample dataset, I have Weather_Data, I want to calculate "What is the windspeed when temperature is maximum for each city" I have tried following code: df = pd.read_csv('weather_by_cities.csv') g = df.groupby('city') g.max() But Instead of getting corresponding Wind Speed, when temperature is maximum, it's giving maximum wind speed along with maximum temperature, which is little annoying. AI: try this: df = pd.read_csv('weather_by_cities.csv') max_df = df[['city', 'temperature']].groupby('city').max().reset_index() pd.merge(max_df, df[['city', 'temperature', 'windspeed']], how='inner', on=['city', 'temperature'])
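An alternative sketch for the same result (assuming the same column names as in the question): idxmax gives, per city, the index of the row with the maximum temperature, and .loc pulls back the whole rows including the corresponding windspeed (ties keep the first occurrence):

import pandas as pd

df = pd.read_csv('weather_by_cities.csv')
result = df.loc[df.groupby('city')['temperature'].idxmax(),
                ['city', 'temperature', 'windspeed']]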
H: Machine Learning for user modelling I have a dataset where each row is a interaction of a user with a content. I have user's features to represent the user (each user is uniquely represented through user.id): user.id, user.nationality, user.company, user.role and content's feature: content.id, content.type, content.activity.. My goal is to use ML techniques to predict a content (given by a user.id). My main problem is that, for a new prediction, I have the features of the user, but I do not have the features of the new content. AI: What you have is a classification problem. Given a user and their features, pick the best content for him. There are many classifiers that are for this task. I would say that your main problem is that most of your features are categorical and not numerical. Data preparation With Sklearn for example, you can use LabelEncoder to transform categorical to numerical. Below is an example of how to do this. You generate X, which is your features matrix. Each row is a user and the columns are the features. Here are some dummy users I created. You data structure might be different, so you can try to make it like this, or do some more work on it. users = [ { 'id': 1, 'nationality': 'american', 'company': 'xyz', 'role': 'ceo', 'content.id': 1001 }, { 'id': 2, 'nationality': 'american', 'company': 'abc', 'role': 'cto', 'content.id': 1002 }, { 'id': 3, 'nationality': 'canadian', 'company': 'fgh', 'role': 'cto', 'content.id': 1001 } ] Now, you can encode your categories. This means that for every word in your data, you assign a number to it. The LabelEncoder can do this for you. With this, you generate X, which is your feature matrix, where every row represents a user. from sklearn.preprocessing import LabelEncoder nationality_encoder = LabelEncoder().fit([user['nationality'] for user in users]) company_encoder = LabelEncoder().fit([user['company'] for user in users]) role_encoder = LabelEncoder().fit([user['role'] for user in users]) X = [] for user in users: X.append([ nationality_encoder.transform([user['nationality']])[0], company_encoder.transform([user['company']])[0], role_encoder.transform([user['role']])[0] ]) print(X) >> [[0, 2, 0], >> [0, 0, 1], >> [1, 1, 1]] Then you can generate y, which are your labels. The content.id of each user. Here I assume you have the user-content pairs like you said you did. This vector has the same length as X. Each row of X is associate to the ordered values in y. So the user in X[2] has the content.id in y[2]. y = [user['content.id'] for user in users] print(y) >> [1001, 1002, 1001] Classifier Now you have your X and y, your features and labels. Not all classifier will perform well with categorical data, but I would suggest you start with a DecisionTreeClassifier. Here is an example: from sklearn.tree import DecisionTreeClassifier decision_tree = DecisionTreeClassifier() decision_tree.fit(X, y) New predictions Now that you have a trained tree, you can predict the content.id for users that do not have it. Suppose you have this new user: new_user = { 'id': 999, 'nationality': 'american', 'company': 'fgh', 'role': 'ceo' } You can apply the same transformation to get the X for him. However, now you do not know y, which is what you are trying to find out. new_user_X = [[ nationality_encoder.transform([new_user['nationality']])[0], company_encoder.transform([new_user['company']])[0], role_encoder.transform([new_user['role']])[0] ]] Use the decision_tree you created to make a prediction. 
In this dummy example, the prediction will be one of the content ids seen during training (here 1001). new_user_content = decision_tree.predict(new_user_X) print(new_user_content) >> [1001]
H: Handling actions with delayed effect (Reinforcement learning) I am working on a problem where the action that I learn (using DQN) can be executed 'now' but it's effect on the environment is delayed by 'T' units of time. The environment however is active in that time and there are other conditions based on which rewards are computed and returned. How is this handled? I believe the Q value function (with gamma) handles 'delayed' effects of rewards but not actions. This is similar to the inventory management use cases. As an analogy consider that I sell cakes. As customers walk into my shop I consume cakes off the shelf. I must reorder to stock my shelf BUT this reordering can take time to take effect. I thought of just adding the quantity reordered to the shelf at a later time and let the agent learn it's effects. Will this suffice? As another approach I thought of Experience and Replay as a mechanism to handle this delayed effect. Appreciate the help. AI: I thought of Experience and Replay as a mechanism to handle this delayed effect. Experience Replay will not help directly here, it is a mechanism to make learning more stable and effective from limited data, but it will not address delayed returns. Any approach which learns Q should cope fine with delayed rewards. So Monte Carlo Control, SARSA, Q-learning, DQN and all their variants are in theory capable of learning the delayed reward. The total reward is the ultimate effect of any action. However, I'm guessing just because it is not mentioned and you are asking this question, that you may still have a problem . . . You claim that the action has a "delayed effect on the environment". You need to change that, because it implies that your state representation is incorrect for your problem. Clearly making an order for something changes the environment. An environment with an order in progress is different, in a way that is critically important to your problem, than one without an order in progress. Probably what you are missing is a state representation that captures what your action has actually done. Without that, there are hidden variables (orders currently being handled) that a Q function cannot learn about because it is not in the state, $s$ for $Q(s,a)$. For RL to be reliable, the state value $s$ has to capture all relevant information about how future state changes and rewards will progress. For instance, if you were writing an agent to control a swinging pendulum (a standard toy problem), then you don't just need the position of the pendulum, you also want its velocity in order to predict where the pendulum might end up before any action is taken. For your issue, the current stock is like the pendulum's position, and you need to track orders "in progress" as they are a bit like velocity in that they will cause further changes to state regardless of action. To address this, you will need to add a representation of recent stock orders to your state. I would guess something that describes the contents of the order (similar to your current stock representation) and a countdown for how many days left until it is delivered (or if they are not that reliable, something similar that correlates with likely arrival time). That state should be changed immediately in response to the action that caused the order, otherwise the agent will not learn the association between action and its effects, and will treat orders arriving as some kind of random effect from the environment. 
Once you have a representation like this, then I think that is enough, and the next steps are finding the right hyper-parameter values to learn effectively. This will be easier initially if you have a simulated environment, although you could also learn off-policy from historical data using something like DQN, provided you have a lot of historical data to work with. I thought of just adding the quantity reordered to the shelf at a later time and let the agent learn it's effects. Will this suffice? I think this is similar, but probably not enough. The most important thing to do is associate change to state with the action that caused it. Adding the new stock as it arrives you should do anyway, but the agent needs to learn that the earlier action is what caused this, and that is only possible if that action actually changes the state in some way.
H: Policy Gradients vs Value function, when implemented via DQN After studying Q-learning, Sarsa & DQN I've now discovered a term "Policy Gradients". It's a bit unclear to me how it differs to the above approaches. Here is my understanding, please correct it: From the moment I first encountered DQN, I always imagined DQN's input vector as only consisting of current state's features. On the output layer of DQN we have a vector of actions. We then take the index of the highest-scored action $a$ and execute it. It puts us into the next state $s'$. To compute the error (how incorrectly we estimated the score of $a$) we supply $s'$ into our DQN, then discover its highest action similar to 1. Once again, it will be on the output layer of DQN Compute the error by bootstrapping $a$ to $a'$. The "error" vector will have zeros everywhere except at the index of that chosen action $a$ Was this in fact a "Policy Gradient", and not a Value-function approach as I thought initialy? In that case, would the value-approach be a DQN with: [state_features; action_one_hot_encoded_vector] concatenation as input, and a single value on the output? I got this impression after reading this Link Is the basic idea of DQN Policy Gradient that simple, or am I getting things wrong? Edit: there is a really awesome lecture about policy grads. Unfortunatelly the video is deliberately unlisted, so that normal people can't really get to it - but I am for free education, so here it is: CS294-112 9/6/17 AI: Is the basic idea of DQN Policy Gradient that simple, or am I getting things wrong? This is not correct. DQN is a value-function approach, as you thought initially. Your confusion seems to stem from the two options for action representation possible in DQN. You can either estimate a single $Q(s,a)$ function by having $a$ as an input to a neural network, or can estimate all possible $Q(s, *)$ with multiple outputs, one for each $a$. However, that is an implementation detail in how you set up the neural network. It might change the efficiency of the resulting system, but it doesn't change the nature of the RL algorithm at all. Policy gradient methods are based around modifying a parametric policy function $\pi(a|,s,\theta)$ and learning $\theta$. The most basic policy gradient algorithm is REINFORCE, which requires an episodic problem, and updates on-policy after each episode. Importantly, you cannot use the relationship between $Q(s,a)$ and resulting policy in Q-learning to create a policy gradient approach. The main blocker to doing this is that the effective policy $\text{argmax}_a \hat{q}(s,a,\theta)$ is not differentiable with respect to $\theta$, so it is not possible to calculate a gradient and make an update to improve the policy. In general, if an algorithm learns a state value $V(s)$ or action value $Q(s,a)$ and then uses that to decide the policy, then it is value-based. If it learns the policy directly then it may be a policy gradient method (in RL terms it is very likely to be policy gradient, but you can also do policy search using e.g. genetic algorithms). Policy gradient methods also include Actor-Critic approaches which learn both the policy and an associated value function (usually state value $V(s)$). This is a more advanced algorithm than REINFORCE, in that it can be applied to continuous (non-episodic) problems, and updates estimates on every step. One popular Actor-Critic approach is A3C
H: Representing cyclical features as sin/cos components
I'm working on a prediction project where we have a lot of cyclical features such as hour of the day, weekday, month, day of year, etc. After some searching I decided to follow the advice here. Now I have the sin and cos components for every cyclical feature as separate features, so month becomes month_sin and month_cos. However, I don't know for sure whether the model can deal with this correlation, as both components need to be equally weighted in order for the feature to make sense. The model assigns different weights to the sin and cos components after training though. My intuition tells me that this is bad, but I'm not sure what to do about it. Currently gbm (R) gives the best results. For a gradient boosting model, is it better to force equal weights on the two correlated features, or is it better to let the model figure it out even if it results in different weights on the two components? Or would you suggest an entirely different approach?

AI: as both components need to be equally weighted in order for the feature to make sense

That is not the case. For instance, if $\text{sin}(\theta)$ of the cyclical feature is weighted strongly, it means that the original feature has the strongest positive effect on output at $\theta = \frac{\pi}{2}$. If the two features are weighted equally, then the focus is around $\theta = \frac{\pi}{4}$. In general, you should expect different weights to apply to the different components, depending on which hours of the day maximise the target variable and which minimise it.

For a gradient boosting model, is it better to force equal weights on the two correlated features, or is it better to let the model figure it out even if it results in different weights on the two components?

Definitely better to let the model figure it out in this case. Your main concern would be whether a one-hot-encoded representation might be better than a cyclical representation for your problem. One-hot allows for arbitrary relationships to each hour, but adds more dimensions - and so may require more examples of each time. Cyclical has fewer dimensions, but is more likely to involve non-linear effects, if for instance the min and max effects on the target variable are not exactly 12 hours apart - thus it may require a more complex model.
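For reference, the encoding itself is straightforward to build with numpy/pandas. A small sketch (my own illustration, assuming a dataframe with integer month and hour columns; adjust the period to each feature's cycle length):

    import numpy as np
    import pandas as pd

    def add_cyclical(df, col, period):
        """Add sin/cos encodings of a cyclical integer column."""
        df[col + "_sin"] = np.sin(2 * np.pi * df[col] / period)
        df[col + "_cos"] = np.cos(2 * np.pi * df[col] / period)
        return df

    df = pd.DataFrame({"month": [1, 6, 12], "hour": [0, 12, 23]})
    df = add_cyclical(df, "month", 12)
    df = add_cyclical(df, "hour", 24)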
H: How to standardize feature vectors with data on different scales?
Let's suppose I have a dataset with numerical attributes of different types, and that I want to employ a Neural Network for supervised classification with that dataset. For that, I need to extract feature vectors from that data. Those feature vectors must be suitable for NNs (they should be normalized/standardized vectors...). As an example, our dataset consists of data from football games.

    DATASET:
    -------------------------------------------------------------------
    | local_elo| vis_elo| local_pts | vis_pts | loc_goals | vis_goals |
    -------------------------------------------------------------------
    | 2820     | 3250   | 45        | 54      | 13        | 17        |
    -------------------------------------------------------------------
    | 4230     | 5125   | 87        | 81      | 67        | 65        |
    -------------------------------------------------------------------

The feature vectors for these two data points are:

    x_1 = [2820, 3250, 45, 54, 13, 17]
    x_2 = [4230, 5125, 87, 81, 67, 65]

but they are not suitable for feeding into a Neural Network. How could this dataset be preprocessed in order to extract feature vectors suitable for feeding into a Neural Network?

AI: Use the preprocessing.scale function from sklearn:

    from sklearn import preprocessing
    import numpy as np

    X_train = np.array([[ 1., -1.,  2.],
                        [ 2.,  0.,  0.],
                        [ 0.,  1., -1.]])
    X_scaled = preprocessing.scale(X_train)

http://scikit-learn.org/stable/modules/preprocessing.html
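One practical follow-up (not part of the snippet above, just a suggestion): if you will later need to apply exactly the same transformation to new or test data, StandardScaler is usually more convenient than preprocessing.scale, because it remembers the training mean and standard deviation. A sketch, assuming you also have an X_test array:

    from sklearn.preprocessing import StandardScaler

    scaler = StandardScaler()
    X_train_scaled = scaler.fit_transform(X_train)  # learn mean/std on training data only
    X_test_scaled = scaler.transform(X_test)        # reuse the same mean/std on unseen data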
H: Gini Index in Regression Decision Tree
I want to implement my own version of the CART Decision Tree from scratch (to learn how it works) but I have some trouble with the Gini Index, used to express the purity of a dataset. More precisely, I don't understand how the Gini Index is supposed to work in the case of a regression tree. The few descriptions I could find describe it as:

    gini_index = 1 - sum_for_each_class(probability_of_the_class²)

where probability_of_the_class is just the number of elements from a class divided by the total number of elements. But I can't use this definition in the case of regression, where I have continuous variables. Is there something I misunderstood here?

AI: The Gini index is a classification criterion; it is not used for regression trees. In regression trees, the sum of squared errors (SSE) is the criterion for a tree split. The first split is made on the feature/predictor and threshold value in your training set that yields the lowest SSE, and the same rule is then applied recursively for the further splits.
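As a rough sketch (my own illustration, not part of the original answer), the SSE of a candidate split on a given feature and threshold can be computed like this:

    import numpy as np

    def split_sse(X, y, feature, threshold):
        """SSE of a candidate split: sum of squared deviations from each child's mean."""
        left = y[X[:, feature] <= threshold]
        right = y[X[:, feature] > threshold]
        sse = 0.0
        for child in (left, right):
            if len(child) > 0:
                sse += np.sum((child - child.mean()) ** 2)
        return sse

    # pick the (feature, threshold) pair with the lowest SSE, then recurse on each child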
H: Why should re-sampling change the value of a model's coefficients?
I have the code below in Python to create a LinearRegression model. When I train the model with re-sampled data, I get different values for its coefficients. I can't understand why that happens. Can you help me with this please?

[Update] I assume that resampling is the same as shuffling, i.e. the order of the data is changed but not the data itself. In the use case presented, the number of rows is the same, as I inspected it, and as I understand it only the order of the data is changed. Thanks!

    from sklearn.linear_model import LinearRegression
    from sklearn.utils import resample

    model = LinearRegression(fit_intercept=False)
    model.fit(X, y)
    print('model.coef_',model.coef_)
    model.fit(*resample(X, y))
    print('model.coef_',model.coef_)
    model.fit(*resample(X, y))
    print('model.coef_',model.coef_)

AI: Resampling is not the same as shuffling here. By default, sklearn.utils.resample samples with replacement (replace=True), i.e. it draws a bootstrap sample: some rows appear more than once and others are left out, so each call gives you a genuinely different dataset of the same size. For each such sample you are fitting the line that describes that particular sample with the least error, so each fitted model can be different, because it minimises the cost for its own sample. If you only want to change the order of the rows, use sklearn.utils.shuffle (or resample(..., replace=False)); in that case the coefficients of a linear regression will not change, since ordinary least squares does not depend on row order.
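A quick way to convince yourself (my own illustrative snippet, assuming X and y are already defined as in your code):

    from sklearn.linear_model import LinearRegression
    from sklearn.utils import resample, shuffle

    model = LinearRegression(fit_intercept=False)

    # bootstrap sample (replace=True is the default): coefficients change
    print(model.fit(*resample(X, y)).coef_)

    # plain shuffle: same rows, new order - coefficients stay the same
    print(model.fit(*shuffle(X, y)).coef_)
    print(model.fit(X, y).coef_)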
H: In a CNN (Convolutional Neural Network), does the combination of the previous layer's filters make the next layer's filters?
I know that the first layer uses low-level filters to see the edge information. As the layers get deeper, they represent higher-level (abstract) information. Is it because the combinations of filters used in the previous layer are used as filters in the next layer? ("Does the combination of the previous layer's filters make the next layer's filters?") If so, are the combinations determined in advance?

AI: I think you are slightly misunderstanding what the filters operate on. After applying a layer's filters to its input, the output (the activation maps) is used as the input of the next layer - the filters themselves are not combined into new filters, and nothing is determined in advance; every filter's weights are learned during training. The first layer's filters try to find edges in the image, and their output indicates whether those edges exist at a given position or not. The next layer's filters then try to find patterns in the outputs of the previous layer, i.e. patterns over the detected edges. Because each filter is a window with a receptive field on its input, deeper layers end up finding patterns that are more abstract and more complicated than the previous layers' activations.
H: Influence of a data point on the regression result?
Let's say I perform multiple regression where y = income, x1 = education, x2 = sex, and x3 = religion, from 2003 to 2018, where the data is measured daily. Is there any way to quantify the impact of a single day's data (e.g. 2005-07-01) on the regression result?

AI: It is hard to quantify exactly what a single point is doing, but you would benefit from plotting the data to look for suspicious points, and then measuring the differences in your fitted statistics when you remove it. Don't forget about the value of plotting data! Anscombe's quartet is a great reminder of how important this is. https://en.wikipedia.org/wiki/Anscombe%27s_quartet
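If you want a number rather than a picture, a simple (illustrative) approach is to refit without that day and compare the coefficients - this is essentially what influence measures such as Cook's distance formalise. A sketch, assuming your data sits in a dataframe df with a date index and already numerically encoded predictors (the column names here are hypothetical):

    from sklearn.linear_model import LinearRegression

    features = ["education", "sex", "religion"]        # hypothetical, already numeric
    full_fit = LinearRegression().fit(df[features], df["income"])

    reduced = df[df.index != "2005-07-01"]             # drop the day in question
    reduced_fit = LinearRegression().fit(reduced[features], reduced["income"])

    print(full_fit.coef_ - reduced_fit.coef_)          # coefficient change attributable to that day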
H: Some confusions on model selection using the cross-validation approach
https://stats.stackexchange.com/questions/11602/training-with-the-full-dataset-after-cross-validation explains the procedure and the importance of doing cross-validation to assess the performance of the method/classifier. I have a few concerns which I could not clearly understand from that answer. It will be immensely helpful if these are clarified.

Consider that I am using Matlab's fisheriris dataset. The variable meas contains 150 examples and 4 features. The variable species contains the labels. I have put the data and labels into a variable: Data = [meas species]. According to the procedure outlined above, I have split the data set Data using cvpartition into 60/40, where 60% is the Xtrain subset and 40% is a separate Xtest subset. Using Xtrain I perform k-fold cross-validation, and inside each fold I validate the model using the indices from Xtrain. This loop is used to tune the hyperparameters of the model. I never use Xtest in selecting the hyperparameters. Is my understanding correct?

Confusion 1) The answer in the link says: You build the final model by using cross-validation on the whole set to choose the hyper-parameters and then build the classifier on the whole dataset using the optimized hyper-parameters. "use the full dataset to produce your final model as the more data you use the more likely it is to generalise well". I am a bit confused about which dataset and whole set we are referring to, and how building the final model by using cross-validation on the whole set with the selected hyper-parameters differs from building the classifier on the whole dataset using those hyper-parameters. I wanted to verify if my understanding of this part is correct or not. Does this statement mean that, using the cross-validated hyper-parameters obtained from Xtrain, the classifier should be built by re-training on the Xtrain subset or on Data? Should my final model be the one from Data?

Confusion 2) What is the role of the unseen Xtest data set? In papers, is the performance reported on Data or on the untouched Xtest?

AI: I'm not exactly sure of cvpartition's routine, so I'll try to provide a more generalised answer.

"Whole set" v "full data set": These are the same in this instance. Model hyperparameter tuning can be done by feeding the full / whole / complete data set into a cross validation process. Inside this cross validation, the data is split into k folds, where (k - 1) folds are used to build a model, and the remaining fold is used to test the model (since the remaining data is essentially unseen to the model). This is repeated k times; results can be averaged and the standard deviation calculated.

Seen v unseen data (a.k.a. modelling and hold out samples): This is essentially a variation on the cross validation approach above, except the split is done only once: the full data is split into two subsets - Xtrain and Xtest in your case. In this approach, you would build a model using Xtrain only and test it on Xtest (there is no repetition).

So, what's the difference? Both approaches attempt to do the same thing - somehow validate a model's performance and ability to generalise on unseen data. Cross validation is arguably the stronger technique, especially on smaller data sets. Once you've successfully parameterised your model (using either approach), building your model on the whole data set is advisable, since more data > less data.
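A sketch of that workflow, written in scikit-learn terms purely for illustration (in MATLAB you would use cvpartition and the corresponding fit functions, but the logic is the same; X and y stand for your full data and labels):

    from sklearn.model_selection import train_test_split, GridSearchCV
    from sklearn.svm import SVC

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4)

    # 1) tune hyper-parameters by cross-validation on the training subset only
    search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5).fit(X_train, y_train)

    # 2) report generalisation performance once, on the untouched test subset
    print(search.score(X_test, y_test))

    # 3) final model for deployment: refit with the chosen hyper-parameters on all data
    final_model = SVC(**search.best_params_).fit(X, y)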
H: How would you deal with inf or NA for a rate or ratio used as a feature variable?
I'm trying to create a feature for a churn model (binary classifier). The feature is the mean of sales growth rates over several months. But if I just take the mean of these growth rates, I often get NaN or inf, since sales are often zero. I could impute some number like 0 or the mean for the missing values, but I feel I'd be modifying the pattern/underlying distribution. How would you go about creating such a feature for a classification model? Thanks!

AI: First decide whether you want those missing values to carry information or not. If you do, impute them as 0 (or another constant) so the model can still use those rows. If you don't, make sure your mean calculation simply skips them and is computed only over the observed values.
H: I got 100% accuracy on my test set - is there something wrong?
I got 100% accuracy on my test set when trained using the decision tree algorithm, but only got 85% accuracy with random forest. Is there something wrong with my model, or is a decision tree best suited for the dataset provided?

Code:

    from sklearn.model_selection import train_test_split
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.20)

    # Random Forest
    from sklearn.ensemble import RandomForestClassifier
    rf = RandomForestClassifier(n_estimators = 1000, random_state = 42)
    rf.fit(x_train, y_train)
    predictions = rf.predict(x_test)
    cm = sklearn.metrics.confusion_matrix(y_test, predictions)
    print(cm)

    # Decision Tree
    from sklearn import tree
    clf = tree.DecisionTreeClassifier()
    clf = clf.fit(x_train, y_train)
    predictions = clf.predict(x_test)
    cm = sklearn.metrics.confusion_matrix(y_test, predictions)

Confusion Matrix:

    Random Forest:
    [[19937     1]
     [    8    52]]

    Decision Tree:
    [[19938     0]
     [    0    60]]

AI: There may be a few reasons this is happening. First of all, check your code: 100% accuracy seems unlikely in almost any setting. How many test data points do you have, and how many training data points did you train your model on? You may have made a coding mistake and compared two identical lists. Did you use a different test set for testing? The high accuracy may also be down to luck - try using one of the widely available k-fold cross-validation utilities. You can also visualise your decision tree to find out what is happening. If it has 100% accuracy on the test set, does it also have 100% accuracy on the training set?
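For example, a quick cross-validation check (illustrative, reusing the x, y and tree import from your code) would look like:

    from sklearn.model_selection import cross_val_score

    scores = cross_val_score(tree.DecisionTreeClassifier(), x, y, cv=10)
    print(scores.mean(), scores.std())
    # consistently ~1.0 across folds would suggest either a data leak
    # (e.g. a feature that encodes the label) or a trivially separable target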
H: Computational aspects are typically ignored by statisticians
In the introductory chapter of "Process Mining: Data Science in Action" (2016 - Van der Aalst, p. 11) the author says that:

Although data science can be seen as a continuation of statistics, the majority of statisticians did not contribute much to recent progress in data science. Most statisticians focused on theoretical results rather than real-world analysis problems. The computational aspects, which are critical for larger data sets, are typically ignored by statisticians. The focus is on generative modeling rather than prediction and dealing with practical challenges related to data quality and size.

The bold phrase is not clear to me. In fact, since a generative model is the model that generates the data, once we obtain it we can do predictions. So, to me, generative modeling and prediction are not opposing concepts. What do you think?

AI: I think what the author is speaking about is the time/memory complexity of algorithms, which statisticians may not care about. To a statistician's eyes, a model that is mathematically well proven may be more important than making approximations to render a model feasible in real life. I encourage you to look at the complexity of frequent mathematical operations. Operations like singular value decomposition, matrix inversion and matrix transposition are used often, and their cost is way higher than the rough "upper bound for scalable time complexity" of $O(n \log(n))$, which prevents any use on massive datasets. As an example, you can easily imagine that we rapidly hit time-complexity limits, because we cannot increase computational power - or wait - more than approximately linearly with the increase in problem size. Take a very common $O(n^2)$ complexity: you can't afford to wait $1000000$ times longer - or multiply your computational power by $1000000$ to keep the same run time - when you multiply your dataset size by $1000$ (except if your operations are really fast...).
H: How to select features for a text classification problem
I am working on a problem where we need to classify user queries into multiple classes.

Problem: Suppose we are running a website for selling products. The website has a form where the user can write any complaint or issue. In order to resolve users' issues, we want to classify issues into predefined classes so that we can understand what type of problems users are facing. Issue classes can be like:
Class 1: Payment issues
Class 2: Registration issues
Class 3: Order booking issues
Class 4: Problems accessing the website/app
...
I am thinking of applying random forest but cannot decide what features I should select. Any suggestions would be of great help, thank you.

AI: Since I am not sure where you are at the moment in your solution, let me give you a comprehensive, yet brief, view of what you should be looking to do (a concrete code sketch follows after this list):
1. Preparing the training data: You will need to collect complaints with the correct classifications. Once you have this, apply text mining techniques (bag of words, for instance) and end up with a document-term matrix (or term-document matrix, whichever is appropriate). Think of this as a list of all the different words (or word collections) possible in a complaint. These in turn serve as your features.
2. Training the data: This is where the algorithms you have stated above (RandomForest, for instance) come into the picture. I would also suggest trying other boosting algorithms here, subject to data availability.
3. Testing on a holdout sample: Test the model accuracy on a holdout sample and decide which models you would like to keep. Note that you might also have to tune the text mining step in case it does not produce appropriate features for your classification algorithm.
4. Classifying new complaints: Apply the same text mining transformation to the newer data to generate the features as before, and run it through the classifier.
Hope this gives you a good starting point for building your text classification solution.
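As a concrete starting point in scikit-learn (purely illustrative - the example complaints, labels and parameter choices below are made up), the "document-term matrix + classifier" pipeline described above could look like this:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.pipeline import Pipeline

    complaints = ["payment failed twice", "cannot register my account"]   # hypothetical examples
    labels = ["payment_issue", "registration_issue"]

    clf = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),  # bag-of-words / n-gram features
        ("rf", RandomForestClassifier(n_estimators=200)),
    ])
    clf.fit(complaints, labels)
    print(clf.predict(["issue while booking my order"]))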
H: Labeling hubs in a network
Let's say we did some analysis on a network dataset. We have an adjacency matrix which we can use to construct a graph, and we can find the nodes with the highest degrees. How would I go about giving the location of these nodes? Would it be okay to refer to the entries of the adjacency matrix (i,j)?

AI: I'm not sure if I understand your question. But you do not need to create a graph to work out node degree. Node degree can be obtained by taking the sum of a row or column of the adjacency matrix. E.g. in this 4-node graph each node has degree 2, and the row and column sums are also 2:

     0|1|2|3|        1----0
    0|0|1|0|1|       ¦    ¦
    1|1|0|1|0|       2----3
    2|0|1|0|1|
    3|1|0|1|0|

If we update the matrix to add a diagonal edge between nodes 1 and 3 (apologies for the rubbish graphic), those two nodes now have degree 3. (Note that the new entries are off-diagonal; adding values on the matrix diagonal would instead create self-loop edges.)

     0|1|2|3|        1----0
    0|0|1|0|1|       ¦  \ ¦
    1|1|0|1|1|       2----3
    2|0|1|0|1|
    3|1|1|1|0|

In relation to node location, yes, refer to node locations using the indices of the adjacency matrix $(i,j)$. In Python this is done by:

    import numpy as np
    a = np.matrix([[0,1,0,1],
                   [1,0,1,1],
                   [0,1,0,1],
                   [1,1,1,0]])

    a.sum(axis=0)  # column sums = node degrees
    a.sum(axis=1)  # row sums = node degrees (same values here, since the matrix is symmetric)

    a.sum(axis=0).argmax()  # index of the node with the highest degree
                            # (returns the first index if several nodes share the same degree)

    d = 2  # return nodes with degree greater than d
    np.where(a.sum(axis=0) > d)[1]  # indices of nodes with degree > d
H: Uninitialized Value Error in Tensorflow I am trying to build Encoder-Decoder which consists of down and upsampling Convolutional network with a reference to following article and its explanation: This is what I wrote down, but it keep returning uninitialized value error import tensorflow as tf import numpy as np tf.reset_default_graph() with tf.Graph().as_default(): # hyper-params learning_rate = 0.0002 epochs = 250 batch_size = 16 N_w = 11 #number of frames concatenated together channels = 9*N_w drop_out = [0.5, 0.5, 0.5, 0, 0, 0, 0, 0] def conv_down(x, N, stride, count): #Conv [4x4, str_2] > Batch_Normalization > Leaky_ReLU with tf.variable_scope("conv_down_{}_{}".format(N, count)) as scope: #N == depth of tensor with tf.variable_scope("conv_down_4x4_str{}".format(stride)) : #this's used for downsampling x = tf.layers.conv2d(x, N, kernel_size=4, strides=stride, padding='same', kernel_initializer=tf.truncated_normal_initializer(stddev=np.sqrt(0.2)), name=scope) x = tf.contrib.layers.batch_norm(x) x = tf.nn.leaky_relu(x) #for conv_down, implement leakyReLU return x def conv_up(x, N, drop_rate, stride, count): #Conv_transpose [4x4, str_2] > Batch_Normalizaiton > DropOut > ReLU with tf.variable_scope("{}".format(count)) as scope: x = tf.layers.conv2d_transpose(x, N, kernel_size=4, strides=stride, padding='same', kernel_initializer=tf.truncated_normal_initializer(stddev=np.sqrt(0.2)), name=scope) x = tf.contrib.layers.batch_norm(x) if drop_rate is not 0: x = tf.nn.dropout(x, keep_prob=drop_rate) x = tf.nn.relu(x) return x def conv_refine(x, N, drop_rate): #Conv [3x3, str_1] > Batch_Normalization > DropOut > ReLU x = tf.layers.conv2d(x, N, kernel_size=3, strides=1, padding='same', kernel_initializer=tf.truncated_normal_initializer(stddev=np.sqrt(0.2))) x = tf.contrib.layers.batch_norm(x) if drop_rate is not 0: x = tf.nn.dropout(x, keep_prob=drop_rate) x = tf.nn.relu(x) return x def conv_upsample(x, N, drop_rate, stride, count): with tf.variable_scope("conv_upsamp_{}_{}".format(N,count)) : with tf.variable_scope("conv_up_{}".format(count)): x = conv_up(x, 2*N, drop_rate, stride,count) with tf.variable_scope("refine1"): x = conv_refine(x, N, drop_rate) with tf.variable_scope("refine2"): x = conv_refine(x, N, drop_rate) return x def biLinearDown(x, N): return tf.image.resize_images(x, [N, N]) def finalTanH(x): return tf.nn.tanh(x) def T(x): #channel_output_structure down_channel_output = [64, 128, 256, 512, 512, 512, 512, 512] up_channel_output= [512, 512, 512, 512, 256, 128, 64, 3] biLinearDown_output= [32, 64, 128] #for skip-connection #down_sampling conv1 = conv_down(x, down_channel_output[0], 2, 1) conv2 = conv_down(conv1, down_channel_output[1], 2, 2) conv3 = conv_down(conv2, down_channel_output[2], 2, 3) conv4 = conv_down(conv3, down_channel_output[3], 1, 4) conv5 = conv_down(conv4, down_channel_output[4], 1, 5) conv6 = conv_down(conv5, down_channel_output[5], 1, 6) conv7 = conv_down(conv6, down_channel_output[6], 1, 7) conv8 = conv_down(conv7, down_channel_output[7], 1, 8) #upsampling dconv1 = conv_upsample(conv8, up_channel_output[0], drop_out[0], 1, 1) dconv2 = conv_upsample(dconv1, up_channel_output[1], drop_out[1], 1, 2) dconv3 = conv_upsample(dconv2, up_channel_output[2], drop_out[2], 1, 3) dconv4 = conv_upsample(dconv3, up_channel_output[3], drop_out[3], 1, 4) dconv5 = conv_upsample(dconv4, up_channel_output[4], drop_out[4], 1, 5) dconv6 = conv_upsample(tf.concat([dconv5, biLinearDown(x, biLinearDown_output[0])], axis=3), up_channel_output[5], drop_out[5], 2, 6) dconv7 = 
conv_upsample(tf.concat([dconv6, biLinearDown(x, biLinearDown_output[1])], axis=3), up_channel_output[6], drop_out[6], 2, 7) dconv8 = conv_upsample(tf.concat([dconv7, biLinearDown(x, biLinearDown_output[2])], axis=3), up_channel_output[7], drop_out[7], 2, 8) #final_tanh T_x = finalTanH(dconv8) return T_x # input_tensor X x = tf.placeholder(tf.float32, [batch_size, 256, 256, channels]) # batch_size x Height x Width x N_w # define sheudo_input for testing sheudo_input = np.float32(np.random.uniform(low=-1., high=1., size=[16, 256,256, 99])) # initialize_ init_g = tf.global_variables_initializer() init_l = tf.local_variables_initializer() with tf.Session() as sess: sess.run(init_g) sess.run(init_l) sess.run(T(x), feed_dict={x: sheudo_input}) The error detail is like below: FailedPreconditionError: Attempting to use uninitialized value conv_upsamp_3_8/conv_up_8/8/kernel [[Node: conv_upsamp_3_8/conv_up_8/8/kernel/read = IdentityT=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]] conv_upsamp_3_8 is the last part of the upsampling, just befor TanH applied. I think the problem might be derived from the part that I had defined convupsample referecing to other two functions - convup and refine but can't sure then why at the last step it occurs error. Any guess or hint? AI: T(x) is constructing the graph, but this is being called after the init tensors are made. This means the init tensors have no tensors to initialize. It could be fixed by changing the last lines to something like this: output_tensor = T(x) init_g = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init_g) sess.run(output_tensor, feed_dict={x: sheudo_input})
H: Why is it bad to use the same test dataset over and over again?
I am following this Google's series: Machine Learning Crash Course. On the chapter about generalisation, they make the following statement:

Good performance on the test set is a useful indicator of good performance on the new data in general, assuming that: - The test set is large enough. - You don't cheat by using the same test set over and over.

Why exactly is the second point a bad one? As long as one does not use the test set for the training stage, why is it bad to keep using the same test set to test a model's performance? It's not like the model will get a bias by doing so (the test set is not updating any of the model's parameters internally).

AI: You don't cheat by using the same test set over and over.

Correct point! The test set is for testing the performance of the model - how well your model performs on test data after learning from the training data. There is no problem in reusing it, until you start using the test set to guide changes to your model, e.g. building a new model or trying to increase the performance of an already-built model based on test-set results. Then it will introduce bias into your model selection, because you are effectively trying to learn about the test set as well. At that point you begin to optimize your model on the test set too; even if you are not training your model on the test set directly, it will happen. As given in this answer:

Performance estimates should always be done on completely independent data. If you are optimizing some aspect based on test data, then your test data is no longer independent and you would need a validation set.

So even if the test set does not update your model parameters, it will no longer be unseen data and its purpose will end. Your test set will effectively have become a validation set (as in cross-validation), and you would not be left with any actual test set.
H: How to deal with unbalanced classes in biological datasets?
When dealing with unbalanced classes, which is better: oversampling/undersampling of the classes, or randomly selecting equal numbers of positive and negative samples from the training dataset and combining them as training samples - replacing the imbalanced classification problem with multiple balanced classification problems? Does one of them have an advantage over the other? If so, which one? I am asking from a generalized point of view. If you had an unbalanced dataset, which option would you choose, 1 or 2?

AI: There can be two situations with an imbalanced dataset:
1. For example, in a cancer detection dataset the number of positive (cancer present) samples will be quite a lot smaller than the number of negative (cancer absent) samples. In this case you should not apply sampling techniques. You should rather focus on minimizing false negatives, as a cancer should not be missed while diagnosing a patient, even at the expense of a few false positives.
2. Second case: for example, the popular Iris dataset, where the classes should be balanced. If you get an imbalanced version of such a dataset, then you should apply sampling techniques, because the class proportion does not match the actual proportion - which was not the case in the cancer example.
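If you do end up resampling (case 2), a simple way to do it with plain scikit-learn is to upsample the minority class; the imbalanced-learn package offers smarter options such as SMOTE, and many classifiers also accept class_weight='balanced' so you can avoid resampling altogether. An illustrative sketch, assuming a dataframe df with a binary 'label' column (hypothetical names):

    import pandas as pd
    from sklearn.utils import resample

    minority = df[df["label"] == 1]
    majority = df[df["label"] == 0]

    minority_up = resample(minority, replace=True,
                           n_samples=len(majority), random_state=0)
    balanced = pd.concat([majority, minority_up]).sample(frac=1, random_state=0)  # shuffle rows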
H: How to handle NaNs in a ratio feature for a binary classifier?
I'm creating a churn model and would like to create a ratio (# customers / total transactions) for each merchant. About 70% of the data is NaN (zero/zero). I was wondering what I should impute for that 70%. I have other features and I don't want to delete 70% of the data. But if I impute 0s, the distribution would probably become different from the ground truth, since a lower ratio actually means a large transaction volume with few customers. If I impute the mean, it will also be misleading, since the (zero / zero) merchants fundamentally had no activity. I was about to impute -1 to distinguish the NaNs from the non-NaNs. Would that make sense as a feature for a binary classifier? Thanks!

AI: I would probably impute with a constant, 0 or -1 or something. However, I would also add another feature: a boolean flag indicating whether there were any transactions at all. This gives the model extra help in distinguishing between real values and imputed values.
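In pandas terms that would be something like the following (illustrative, assuming a 'ratio' column):

    import numpy as np

    df["ratio"] = df["ratio"].replace([np.inf, -np.inf], np.nan)  # treat inf like missing
    df["has_activity"] = df["ratio"].notna().astype(int)          # flag: any transactions at all?
    df["ratio"] = df["ratio"].fillna(0)                           # constant impute (0 or -1)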
H: Convert nominal to numeric variables?
I am trying to develop an algorithm with sklearn and TensorFlow to predict which car can be offered to each customer. To do that I have a database with the answers of a survey given to 1000 customers. Examples of questions/[answers] are:
Color/[Green, Red, Blue]
NumberOfPax/[2,4,5,6,7]
HorsePower/[Integer]
InsuranceIncluded/[yes/no/Don't know]
As you can see, all answers are typified in advance, and in case the answer can be open I validate the value to be an integer or a radio button. The purpose of that behaviour is that, despite the categorical variables, I can easily use sklearn for clustering. Would it be a good approach to translate these categories to numerical values as an internal procedure and then cluster on those values? For example: yes --> 0; No --> 1; Don't know --> 2. Then sklearn would cluster with all variables as numerical values. I have considered this possibility because I believe that sklearn cannot cluster nominal data. What do you think about this approach?

AI: Clustering on categories is not something sklearn can do by default. And assigning sequential values to categories like that certainly won't help - clustering tends to work based on distance, and by assigning 0, 1, 2 to Yes, No, Don't Know like that, you are suggesting Yes is 'closer' to No than it is to Don't Know. I highly recommend having a look at k-modes, a clustering algorithm for categorical data. Essentially it optimises according to how common the set of categories is within the cluster (modes, rather than means of values). E.g. it may find green people carriers (5/6/7 pax) in one cluster, and red sports cars (2/4 pax) in another. There is a Python library here, which also has links to some papers describing the algorithm. There is also k-prototypes, which clusters with combined numerical and categorical data.
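For illustration, using that library typically looks something like the following (a sketch only - check the kmodes package documentation for the exact current API and argument names; k-prototypes works analogously, except that you tell it which columns are numerical):

    import pandas as pd
    from kmodes.kmodes import KModes  # pip install kmodes

    df = pd.DataFrame({
        "Color": ["Green", "Red", "Red", "Blue"],
        "NumberOfPax": [5, 2, 4, 7],
        "InsuranceIncluded": ["yes", "no", "don't know", "yes"],
    })

    km = KModes(n_clusters=2, init="Huang", n_init=5)
    clusters = km.fit_predict(df.astype(str))   # treat everything as categorical for k-modes
    print(clusters)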
H: Why does prediction by a consensus of classifiers work better than prediction by a single classifier?
I have seen that a consensus of classifiers (taking, say, 5 separate classifiers) and obtaining the final labelling of the unknown sample based on voting (whichever class gets predicted the most is the class of the unknown sample) works better than taking a single classifier to predict the class of a sample. Why so? Any articles which show why this happens?

AI: For a very simple example, imagine you have three independent classifiers that each have 60% accuracy. If you use any one of them to classify a random sample, you have a 60% chance of getting it right. Now use an ensemble classifier that takes the majority vote. Below are the probabilities that exactly N of the individual classifiers are correct:

0 correct - 6.4%
1 correct - 28.8%
2 correct - 43.2%
3 correct - 21.6%

Your ensemble classifier has the correct output when 2 or 3 of the component classifiers are correct, which happens 64.8% of the time. Your ensemble classifier outperforms each of the individual component classifiers! You'll get this advantage by combining any classifiers that perform better than random, provided their errors are sufficiently independent of one another.
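You can verify these numbers with the binomial distribution (a small illustrative check):

    from scipy.stats import binom

    p, n = 0.6, 3
    print([round(binom.pmf(k, n, p), 4) for k in range(n + 1)])  # [0.064, 0.288, 0.432, 0.216]
    print(1 - binom.cdf(1, n, p))                                # P(majority correct) = 0.648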
H: What is the meaning of the term "pipeline" within data science?
People often refer to pipelines when talking about models, data and even layers in a neural network. What can be meant by a pipeline?

AI: A pipeline is almost like an algorithm, but at a higher level, in that it lists the steps of a process. People use it to describe the main stages of a project. This could include everything from gathering data and pre-processing it, right through to post-analysis of predictions. The pipeline is essentially a large chain of modules, which can be individually examined/explained. Here is an example image (source: DataBricks).

There is actually a nifty module (class, actually) in Scikit-Learn for building your own machine learning pipelines, which is literally called Pipeline. You can specify processing steps, models and other transformations, then wrap them into a pipeline, which carries out the start-to-finish process for you. This makes it much easier to work in a modular way and alter parameters, while keeping things organised. In the documentation, they use an ANOVA (analysis of variance) step to select variables, which are then fed into an SVM (support vector machine) to perform classification.

In the context of what might be considered a single model, a pipeline may refer to the various transformations performed on data. This might include dimensionality reduction, embeddings, encoding/decoding (GAN example), attention mechanisms, and so on. Here is an example of what might be referred to as a pipeline: the Spatial Transformer Network. Images are passed through a pipeline with three parts:
- a localisation network
- a grid generator
- a sampling mechanism
These three parts might be akin to one of the parts in the MLlib Pipeline displayed above.

Another area in which pipeline is used extensively is within data management. In this case, it refers to how and where data is transferred, and perhaps at which frequency. There are large packages dedicated to building such pipelines, e.g.:
- Apache Spark - can do a lot more than just pipelines these days (use case example)
- Luigi - manages complex batch processes (developed by Spotify)
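For instance, a minimal version of that ANOVA-then-SVM scikit-learn pipeline (illustrative only, with arbitrary parameter choices) looks like this:

    from sklearn.pipeline import Pipeline
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import SVC
    from sklearn.datasets import load_iris

    X, y = load_iris(return_X_y=True)

    pipe = Pipeline([
        ("anova", SelectKBest(f_classif, k=2)),  # feature selection step
        ("svc", SVC(kernel="linear")),           # classification step
    ])
    pipe.fit(X, y)
    print(pipe.score(X, y))

The nice part is that the whole chain can then be cross-validated or grid-searched as if it were a single estimator.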
H: Confusion about neural network architecture for the actor critic reinforcement learning algorithm I am trying to understand the implementation of the actor critic reinforcement learning algorithm. According to this, there should be just one neural network with two heads for the action probabilities and the state values. It is also true in their tensorflow implementation here. However, if I refer to the tensorflow implementations such as this and this, they use two nn and update them separately. So, my question is what is the correct way to go about the neural network for the actor critic algorithm? AI: Both architectures work. It is probably more common to use two separate networks for simpler problems, and a combined network for a more complex problem, such as one involving machine vision. In general, you need the two functions - policy and value - to be separate. There is no reason to expect them to have too much in common, in terms of the overall mapping from state to their output. However, if the state requires a lot of non-linear interpretation to get meaningful features, such as image, audio or video input, then it could be an overall benefit if the two functions share the lower-level feature representations. Not only will this encourage better generic learning of the low-level features (because they are effectively being trained with twice the data per time step compared to if they were separate), but the calculations should be faster too. For a similar reason, if the agent works from natural images, you could use either pre-processed features from upper layers of some model trained on ImageNet, or you could start the network initialised with the first layers of such a model. This might apply for other RL agents, such as DQN too - if this works, then clearly sharing that part of the network when you have more than one function to work on could work also. If your state data is simpler, e.g. a few positions and velocities, or one/both policy and value have a simple relationship to state, then a joint network may be less useful. When the policy and value functions are in a shared network, they may still both have more than one dedicated layer, as it is expected that there is not any simple linear relationship between the two functions (a NN could learn this by itself, but the assumption is good enough that it may as well be hard-coded by splitting the network into two branches with maybe a couple of hidden layers each before output).
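To make the "shared trunk, two heads" idea concrete, here is a rough Keras sketch (my own illustration, not tied to any of the linked implementations; layer sizes and names are arbitrary):

    from tensorflow import keras
    from tensorflow.keras import layers

    n_state_features, n_actions = 8, 4

    state_in = layers.Input(shape=(n_state_features,))
    shared = layers.Dense(128, activation="relu")(state_in)   # shared low-level feature layers
    shared = layers.Dense(128, activation="relu")(shared)

    policy = layers.Dense(n_actions, activation="softmax", name="policy")(shared)  # actor head
    value = layers.Dense(1, name="value")(shared)                                  # critic head

    actor_critic = keras.Model(inputs=state_in, outputs=[policy, value])

The two-network variant is simply the same code with no shared layers: two separate models, one ending in the policy head and one in the value head, each updated with its own loss.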
H: Algorithm for backpropagation through time
I am reading through this article trying to understand the BPTT algorithm, in the context of an RNN. However there is one part I don't understand:

    layer_1_delta = (future_layer_1_delta.dot(synapse_h.T) + layer_2_delta.dot(synapse_1.T)) + sigmoid_output_to_derivative(layer_1)
    # Located on line 99 in the article linked above

I can't seem to figure out which mathematical formula this links to in the BPTT algorithm to understand what it does, so could someone please provide it? Many thanks

AI: As far as I can make out, this is simply the error aggregated at layer_1, meaning it is the sum of the errors from the current hidden layer, plus the error from all future hidden layers (in this case, that is only one layer: layer_2) plus the error from the output (prediction) layer - sigmoid_out.

I think that line of code (#99) is performing the loss update with respect to a chosen layer and all layers that are ahead in time - so closer to the network's output layer, but as we are moving backwards during backpropagation. It corresponds then, I believe, to equations 3.24 and 3.25 in Ilya Sutskever's PhD thesis (on page 35). So you are summing up the gradients of all layers ahead of the current layer, as well as the gradient of the output layer (the sigmoid). Equation 3.25 from the link above looks like this:

$$ \frac{\delta L}{\delta W} = \sum_{t=1}^{T-1} \bigg( W'^T \Big(r_{t+1} \odot (1 - r_{t+1}) \odot \frac{\delta L}{\delta r_{t+1}}\Big) \bigg) v_t^T + \sum_{t=1}^{T-1} \frac{\delta \log \hat P (v_t | r_{t-1})}{\delta W} $$

... where $v_t$ are the input variables and $r_t$ are the RNN's hidden variables (all of which are deterministic). (from page 34)

The explanation given in the thesis:

The first summation in eq. 3.25 corresponds to the use of $W$ for computing $r_t$, and the second summation arises from the use of $W$ as RBM parameters for $\log \hat P(v_t |r_{t−1})$, so each $\delta \log \hat P(v_{t+1}|r_t)/\delta W$ term is computed with CD (contrastive divergence). Computing $\delta L/\delta r_t$ is done most conveniently with a single backward pass through the sequence. It is also seen that the gradient of the RTRBM (Recurrent Temporal Restricted Boltzmann Machine) would be computed exactly if CD were replaced with the derivatives of the RBM's log probability.

bold text added by me

The main leap here to relate to your example blog is that it equates the log probabilities of a Restricted Boltzmann Machine to the loss of a generative RNN. If you are interested in more details, I would recommend reading either all of Chapter 3, or perhaps just sections 3.9 and 3.10. I posted that equation because I could not see a version that so closely matches the code in that blog in either of the original papers that propose backpropagation through time. Additionally, none of those papers seem to be free to read; you can find them linked on the relevant Wikipedia page. Have a look at this related paper, written by one of the original authors of BPTT: Backpropagation Through Time: What It Does and How to Do It.
H: How are data in tensorflow.examples.tutorials.mnist formatted?
I am analyzing a sample code that implements a Restricted Boltzmann Machine (RBM) using Tensorflow. The input data seems to be the good old MNIST, except that apparently, it is now available in Tensorflow itself. So, instead of running this sample code on MNIST, I want to run it on my own data. But the problem is that I do not know how tensorflow.examples.tutorials.mnist has been formatted. Is it like a CSV file? If I want to feed my own data, which comes from Wizemann's horse images, how should I format my data so that it is compatible with how MNIST has been formatted? I'm very new to this stuff, so please enlighten me if I am mistaken or my question is absurd.

AI: Based on the official documentation, after loading the data,

    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

you can use the following line,

    mnist.train.images.shape

to see the shape of the images. As you can see, it is a rank-2 matrix, which means the images are unrolled (flattened): one dimension specifies the number of examples and the other specifies the number of pixels (features).
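So, to mimic that format with your own images, you essentially need a float array of shape (num_examples, height*width) for the pixels (the MNIST loader scales them to the range [0, 1]) plus a label array, one-hot encoded if you want to match one_hot=True. A rough sketch using Pillow and numpy (illustrative only - the file names, image size and two-class labels below are made up):

    import numpy as np
    from PIL import Image

    paths = ["horse_001.png", "horse_002.png"]   # hypothetical file names
    size = (32, 32)                              # pick one fixed size, like MNIST's 28x28

    images = np.stack([
        np.asarray(Image.open(p).convert("L").resize(size), dtype=np.float32).ravel() / 255.0
        for p in paths
    ])                                           # shape: (num_examples, 32*32), values in [0, 1]

    labels = np.array([[1, 0], [0, 1]], dtype=np.float32)  # one-hot labels, like one_hot=True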
H: R error condition has length > 1 Hi i am getting this error: Error in if (x < min) { : missing value where TRUE/FALSE needed In addition: Warning message: In if (x < min) { : the condition has length > 1 and only the first element will be used I have seen this error in others post but his solutions don't fix it, I think this may be a different case My code: min <- data[1,2] for(i in data){ if(data[i,2] < min){ min <- data[i, 2] } } The column that I am using(the second) all are numeric values Thanks! AI: If you change your for loop to for(i in 1:nrow(data)){... then it will work. I would not call your variable min because that is the name of the function which you should be using instead such as: tmp=min(data[,2])
H: R - Order each matrix inside a list of matrices I have a list where each element is a matrix, and I want to order each matrix individually Order a matrix is as easy as this: data <- data[order(data$value),] But I am struggling to order a list of matrix individually I have tried this with some variations: for(i in 1:lenght(data)){ data[i] <- data[i][order(data[i]$value),] } data is the list that contains matrices I have also tried with apply functions Some ideas? Thanks!! All the code: #The aim is to show one hospital for each state given an outcome #in the posotion num of the ranking, its a learning exercise rankall <- function(outcome, num = "best"){ if (outcome == "heart attack"){ col <- 11 }else if(outcome == "heart failure"){ col <- 17 }else if(outcome == "pneumonia"){ col <- 23 }else{ stop("invalid outcome") } data <- read.csv("outcome-of-care-measures.csv", colClasses = "character") data <- data[, c(2, 7,col)] colnames(data) <- c("Hospital", "State", "value") data$value <- as.numeric(data$value) data <- data[complete.cases(data),] data <- split(data, data$State) #data2 is the final result dataframe data2 <- data.frame(matrix(nrow = 0, ncol = 3)) colnames(data2) <- c("Hospital", "State", "value") #From here is what I do not know how to do it. #data is a list of matrices, each matrix corresponds to a state #data2 is the result dataframe, where I am going to add the row num of each matrix in data #1. Order each matrix in data by value and alfabetically if tie #2. Add to data 2 the row num of each matrix in data once they are sort for(i in 1:lenght(data)){ data[i] <- data[i][order(data[i]$Hospital),] data[i] <- data[i][order(data[i]$value),] if(num =="best"){ num = 1; }else if(num == "worst"){ num = nrow(data[i]) } num <- as.numeric(num) rbind(data2,data[i][num]) } return(data2) } Sorry about identations, it changed when I paste the code AI: Because I did not have your exact data (I don't have the csv file), I created a dummy dataframe (swiped from this tutorial): numPeople = 10 sex=sample(c("male","female"),numPeople,replace=T) age = sample(14:102, numPeople, replace=T) income = sample(20:150, numPeople, replace=T) minor = age<18 population = data.frame(sex=sex, age=age, income=income, minor=minor) population > population sex age income minor 1 male 34 130 FALSE 2 male 48 86 FALSE 3 female 101 36 FALSE 4 male 64 78 FALSE 5 female 97 92 FALSE 6 male 19 45 FALSE 7 female 42 124 FALSE 8 female 24 68 FALSE 9 female 24 96 FALSE 10 female 91 30 FALSE This means you will have to adjust the variable names and so on to match your example, but can learn how to order a list of matrices. More specifically, you really have a list of data.frame objects, after using the split function. it is important to notice the difference between a data.frame and a matrix, as they can behave differently. I will split the data.frame by the sex column: l = split(population, population$sex) You can test that each element of the resulting list is indeed a data.frame (and not a matrix) by running: > is.data.frame(l[[1]]) [1] TRUE Notice that you have to use the doublesquare brackets to access the actual element of the list. Single square brackets will return something else: > is.data.frame(l[1]) [1] FALSE Tt is a named list-element and needs to be referenced by using (in my case) one of the names of the group that we split on: male or female Below I comment the code heavily to explain what each step does. 
    > l[1]
    $female            # this is the name of the list element
         sex age income minor
    1 female  20     42 FALSE
    6 female  75    103 FALSE
    7 female  30    148 FALSE

So we can access the actual element in two ways:

by name:

    > l[1]$female
         sex age income minor
    1 female  20     42 FALSE
    6 female  75    103 FALSE
    7 female  30    148 FALSE

by double square brackets:

    > l[[1]]
         sex age income minor
    1 female  20     42 FALSE
    6 female  75    103 FALSE
    7 female  30    148 FALSE

That all being said, here is a working example of sorting a list (a vector holding data.frames):

    l = split(population, population$sex) # population is defined as shown above
    N = length(l)
    results <- vector("list", N)
    for(i in 1:N) {
        results[[i]] <- l[[i]][order(l[[i]]$age),]
    }

I specify the length of the vector when I create it, as we know how long the resulting list will be in advance.

... and check the results:

    > results[[1]]
         sex age income minor
    8 female  66     90 FALSE
    5 female  69    131 FALSE
    2 female  70     52 FALSE
    9 female  95     43 FALSE
    > results[[2]]
        sex age income minor
    6  male  18     87 FALSE
    4  male  61     97 FALSE
    3  male  64    131 FALSE
    7  male  72     93 FALSE
    1  male  82    107 FALSE
    10 male  92     42 FALSE

The results depend on the random choices made when creating population at the beginning, using sample().
H: How to choose the random seed? I understand this question can be strange, but how do I pick the final random_seed for my classifier? Below is an example code. It uses the SGDClassifier from SKlearn on the iris dataset, and GridSearchCV to find the best random_state: from sklearn.linear_model import SGDClassifier from sklearn import datasets from sklearn.model_selection import train_test_split, GridSearchCV iris = datasets.load_iris() X, y = iris.data, iris.target X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42) parameters = {'random_state':[1, 42, 999, 123456]} sgd = SGDClassifier(max_iter=20, shuffle=True) clf = GridSearchCV(sgd, parameters, cv=5) clf.fit(X_train, y_train) print("Best parameter found:") print(clf.best_params_) print("\nScore per grid set:") means = clf.cv_results_['mean_test_score'] stds = clf.cv_results_['std_test_score'] for mean, std, params in zip(means, stds, clf.cv_results_['params']): print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params)) The results are the following: Best parameter found: {'random_state': 999} Score per grid set: 0.732 (+/-0.165) for {'random_state': 1} 0.777 (+/-0.212) for {'random_state': 42} 0.786 (+/-0.277) for {'random_state': 999} 0.759 (+/-0.210) for {'random_state': 123456} In this case, the difference from the best to second best is 0.009 from the score. Of course, the train/test split also makes a difference. This is just an example, where one could argue that it doesn't matter which one I pick. The random_state should not affect the working of the algorithm. However, there is nothing impeding of a scenario where the difference from the best to the second best is 0.1, 0.2, 0.99, a scenario where the random_seed makes a big impact. In the case where the random_seed makes a big impact, is it fair to hyper-parameter optimize it? When is the impact too small to care? AI: TL:DR, I would suggest not to optimise over the random seed. A better investment of the time would be to improve other parts of your model, such as the pipeline, the underlying algorithms, the loss function... heck, even optimise the runtime performance! :-) This is an interesting question, even though (in my opinion) should not be a parameter to optimise. I can imagine that researchers, in their struggles to beat current state-of-the-art on benchmarks such as ImageNet, may well run the same experiments many times with different random seeds, and just pick/average the best. However, the difference should not be considerable. If your algorithms has enough data, and goes through enough iterations, the impact of the random seed should tend towards zero. Of course, as you say, it may have a huge impact. Imagine I am categorising a batch of images, into cat or dog. If I have a batch size of 1, and only 2 images that are randomly sampled, and one is correctly classified, one is not, then the random seed governing which is selected will determine whether or not I get 100% or 0% acuracy on that batch. Some more basic information: The use of a random seed is simply to allow for results to be as (close to) reproducible as possible. All random number generators are only pseudo-random generators, as in the values appear to be random, but are not. In essence, this can be logically deduced as (non-quantum) computers are deterministic machines, and so if given the same input, will always produce the same output. Have a look here for some more information and relative links to literature.
H: Database on time to churn in telecommunication
I would like to do research about time to churn1 in the telecommunication market. Does anyone have a link to such a database? The only ones I found did not include the time of churn, but only whether a customer is labeled as churn or non-churn; what I would need is time-to-event data. The papers I researched all seemed to use private databases.
1churn is defined here as the moment in time where a customer quits the service that he/she booked from the service provider.

AI: Instead of searching for a "database", try searching for a "dataset" - public telecom churn datasets are much easier to find under that term, e.g.:
https://www.google.com/search?q=telecomunication+churn+dataset&ie=utf-8&oe=utf-8&client=firefox-b-ab
H: Why is this TensorFlow CNN not generalising?
So I wrote a TensorFlow CNN by creating manual layers. It is not state of the art, but a simple experimental setup. The problem is that it is not generalising well - it is hardly generalising at all. This should not be the case; it should at least generalise somewhat. As you will see below, the loss hardly changes at all. I have considered adding a dropout layer, but what is bugging me is that the CNN does not generalise at all. What do you think is the problem here? Here is my code:

learning_rate = 0.0001
epochs = 50

X = tf.placeholder(dtype = tf.float32, shape = (200, 32, 32, 3), name = 'Dataset_Placeholder')
Y = tf.placeholder(dtype = tf.float32, shape = (200, 2), name = 'Results_Placeholder')
X_CV = tf.placeholder(dtype = tf.float32, shape = (40, 32, 32, 3), name = 'CVDataset_Placeholder')
Y_CV = tf.placeholder(dtype = tf.float32, shape = (40, 2), name = 'CVResults_Placeholder')

weights_conv1 = tf.get_variable(name = 'wc1', dtype = tf.float32, initializer = tf.random_normal(shape = (3, 3, 3, 20), mean = 0, stddev = 0.1))
weights_conv2 = tf.get_variable(name = 'wc2', dtype = tf.float32, initializer = tf.random_normal(shape = (3, 3, 20, 20), mean = 0, stddev = 0.1))
weights_conv3 = tf.get_variable(name = 'wc3', dtype = tf.float32, initializer = tf.random_normal(shape = (3, 3, 20, 20), mean = 0, stddev = 0.1))
weights_conv4 = tf.get_variable(name = 'wc4', dtype = tf.float32, initializer = tf.random_normal(shape = (3, 3, 20, 20), mean = 0, stddev = 0.1))
weights_conv5 = tf.get_variable(name = 'wc5', dtype = tf.float32, initializer = tf.random_normal(shape = (3, 3, 20, 20), mean = 0, stddev = 0.1))
filters = [weights_conv1] + [weights_conv2] + [weights_conv3] + [weights_conv4] + [weights_conv5]

bias1 = tf.get_variable(name = 'b1', dtype = tf.float32, initializer = tf.random_normal(mean = 0, stddev = 0.001, shape = (1, 1, 1, 20)))
bias2 = tf.get_variable(name = 'b2', dtype = tf.float32, initializer = tf.random_normal(mean = 0, stddev = 0.001, shape = (1, 1, 1, 20)))
bias3 = tf.get_variable(name = 'b3', dtype = tf.float32, initializer = tf.random_normal(mean = 0, stddev = 0.001, shape = (1, 1, 1, 20)))
bias4 = tf.get_variable(name = 'b4', dtype = tf.float32, initializer = tf.random_normal(mean = 0, stddev = 0.001, shape = (1, 1, 1, 20)))
bias5 = tf.get_variable(name = 'b5', dtype = tf.float32, initializer = tf.random_normal(mean = 0, stddev = 0.001, shape = (1, 1, 1, 20)))
biases = [bias1] + [bias2] + [bias3] + [bias4] + [bias5]

def convolutionForwardPropagation(X):
    c1 = tf.nn.conv2d(X, filters[0], strides =[1,1,1,1], data_format ='NHWC', padding = 'VALID')
    f1 = tf.nn.relu(c1 + biases[0])
    c2 = tf.nn.conv2d(f1, filters[1], strides =[1,1,1,1], data_format ='NHWC', padding = 'VALID')
    f2 = tf.nn.relu(c2 + biases[1])
    c3 = tf.nn.conv2d(f2, filters[2], strides =[1,1,1,1], data_format ='NHWC', padding = 'VALID')
    f3 = tf.nn.relu(c3 + biases[2])
    c4 = tf.nn.conv2d(f3, filters[3], strides =[1,1,1,1], data_format ='NHWC', padding = 'VALID')
    f4 = tf.nn.relu(c4 + biases[3])
    c5 = tf.nn.conv2d(f4, filters[4], strides =[1,1,1,1], data_format ='NHWC', padding = 'VALID')
    f5 = tf.nn.leaky_relu(c5 + biases[4])

    shape = f5.shape
    fr = tf.reshape(f5, (shape[0], shape[3] * shape[2] * shape[1]))
    fc1 = tf.contrib.layers.fully_connected(fr, 50, activation_fn = tf.nn.sigmoid, weights_regularizer = tf.contrib.layers.l2_regularizer(5.0))
    fc2 = tf.contrib.layers.fully_connected(fc1, 2, activation_fn = tf.nn.sigmoid, weights_regularizer = tf.contrib.layers.l2_regularizer(5.0))
    print(fc2.shape)
    return fc2

fc2 = convolutionForwardPropagation(X)
entropy = tf.nn.softmax_cross_entropy_with_logits_v2(logits = fc2, labels = Y, name = 'cross_entropy')
loss = tf.reduce_mean(entropy, name = 'loss')
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)

hypothesis = tf.nn.softmax(fc2)
y_pred_class = tf.argmax(hypothesis, axis = 1)
correct_preds = tf.equal(tf.argmax(hypothesis, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_sum(tf.cast(correct_preds, tf.float32))

fcCV = convolutionForwardPropagation(X_CV)
entropyCV = tf.nn.softmax_cross_entropy_with_logits_v2(logits = fcCV, labels = Y_CV, name = 'cross_entropy')
lossCV = tf.reduce_mean(entropyCV, name = 'loss')

hypothesisCV = tf.nn.softmax(fcCV)
correct_predsCV = tf.equal(tf.argmax(hypothesisCV, 1), tf.argmax(Y_CV, 1))
accuracyCV = tf.reduce_sum(tf.cast(correct_predsCV, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    l, h, c, ac = sess.run([lossCV, hypothesisCV, correct_predsCV, accuracyCV], feed_dict = {X_CV:CVDataset, Y_CV:y_CV})
    print(ac, " ", l)
    for i in range(0, epochs):
        #sess.run(fc)
        _, l, h, c, acc = sess.run([optimizer, loss, hypothesis, correct_preds, accuracy], feed_dict = {X:trainDataset, Y:y_train})
        print("Epoch :", i+1, ", loss : ", l, ", accuracy :", acc)
    l, h, c, ac = sess.run([lossCV, hypothesisCV, correct_predsCV, accuracyCV], feed_dict = {X_CV:CVDataset, Y_CV:y_CV})
    print(ac, " ", l)
    writer.close()

Here are my outputs (I removed some of them so the post does not contain too much code):

20.0   0.70181745
Epoch : 1 , loss :  0.71289825 , accuracy : 100.0
Epoch : 3 , loss :  0.70839673 , accuracy : 100.0
Epoch : 5 , loss :  0.70380384 , accuracy : 100.0
Epoch : 7 , loss :  0.6992179 , accuracy : 100.0
Epoch : 9 , loss :  0.6949341 , accuracy : 103.0
Epoch : 11 , loss :  0.69119203 , accuracy : 111.0
Epoch : 13 , loss :  0.6879886 , accuracy : 111.0
Epoch : 15 , loss :  0.6850215 , accuracy : 110.0
Epoch : 17 , loss :  0.6818766 , accuracy : 113.0
Epoch : 18 , loss :  0.680143 , accuracy : 117.0
Epoch : 19 , loss :  0.6782758 , accuracy : 119.0
Epoch : 21 , loss :  0.6741557 , accuracy : 126.0
Epoch : 23 , loss :  0.66965437 , accuracy : 128.0
Epoch : 37 , loss :  0.635115 , accuracy : 140.0
Epoch : 39 , loss :  0.62959635 , accuracy : 139.0
Epoch : 41 , loss :  0.6239494 , accuracy : 145.0
Epoch : 43 , loss :  0.6180825 , accuracy : 147.0
Epoch : 45 , loss :  0.61196554 , accuracy : 153.0
Epoch : 47 , loss :  0.6056536 , accuracy : 154.0
Epoch : 49 , loss :  0.5992168 , accuracy : 156.0
Epoch : 50 , loss :  0.59595585 , accuracy : 155.0
20.0   0.7038449
AI: When you do fcCV = convolutionForwardPropagation(X_CV) you create a separate graph from the one you train on, so the CV graph is never updated. To fix this you can change the X and Y placeholders to have a variable batch size by specifying None for the batch dimension:

X = tf.placeholder(dtype = tf.float32, shape = (None, 32, 32, 3), name = 'Dataset_Placeholder')
Y = tf.placeholder(dtype = tf.float32, shape = (None, 2), name = 'Results_Placeholder')

Then when you run your validation set, just use those X and Y placeholders.
EDIT: I realized you are actually reusing the weights you defined at the top for the convolution layers, but the fully connected layers are defined using tf.contrib.layers.fully_connected, which will create new weights the second time it is called, so the above should still fix it. Another way to fix this is to define the weights for the fully connected layer as you did with the convolution layers and use a tf.matmul, but using the same placeholder for train and val input is probably cleaner.
H: CNN only performs well when split into 2 models
I have 2 groups of data of equal shape (the main difference between the 2 is that one has half as many features - and consequently different labels, of course) that perform better when they are trained independently (both using the same code). Is there any canonical approach to create a single model that performs well on both from here? Of course, I have some ideas such as increasing the number of layers and neurons, but I was wondering if there is something I am ignorant of. (p.s. this is for a regression problem)
AI: You could try saving the model after training the one it performs best on. Then restore it and try feeding in the 2nd set.

saver = tf.train.Saver()

Then at the end of training, add the line

save_path = saver.save(sess, "/your/file/path.ckpt")

Then to restore:

with tf.Session() as sess:
    saver.restore(sess, "/your/file/path.ckpt")
H: Dealing with a dataset having target values on different scales?
I am currently working on a dataset having 10 features and one continuous target variable. One of the features is 'Country', in which there are seven unique values [Argentina, Denmark, France... etc]. The continuous target variable is sales of a given product in that particular country for a given month in a given year. It has been stated in the problem statement that the sales have been recorded in the local currency of each country, so now I have values on different scales and I am not sure how to deal with them. When I grouped the data by the different countries (using pandas's groupby function), I got at least 1000 observations for each country. So maybe I could train a model separately for each country? All kinds of approaches will be appreciated.
AI: TL;DR The most common and straight-forward approach would be to scale all the numerical data to be within a given range. This makes the currency differences irrelevant, as all fluctuation remains constant, relative to the original scale.
Rescaling
Here is an example function to scale the numerical columns of your dataset:

def rescale(data, new_min=-1, new_max=1):
    """Rescale the columns of a DataFrame to be in the range [new_min, new_max].

    Parameters
    ----------
    data : a Pandas DataFrame
    new_min : the target minimum for each of the columns, optional
    new_max : the target maximum for each of the columns, optional

    Returns
    -------
    out : the rescaled input data, with each column now in the range [new_min, new_max]
    """
    return (data - data.min()) / (data.max() - data.min()) * (new_max - new_min) + new_min

You can then do the following to see nice descriptions of each column in your dataframe, ensuring the min/max values are as desired:

your_dataframe.describe()

One thing to perhaps try out would be whether to scale *all* data to be within a fixed range, say [0, 1], or whether to scale each individual currency to the range [0, 1]. That equates to applying the above function on either single columns of your dataframe, or the entire dataframe.
Considerations
Things to keep in mind/check for would be: if some currencies are nominally huge compared to others (e.g. 1 euro = 17,000 Indonesian Rupiah), the smaller currencies will have time-series values all very close to zero - this makes computation/optimisation more difficult - especially if you use methods such as gradient descent. Scaling the currencies individually to a range would lose the relativity of their nominal values. To see this, plot the currencies before and after scaling.
Unify the Currencies
If you have (or could get) exchange rate data, another alternative would be to convert all of the currencies to the one of your choice. As the exchange rate changes over time, this would also inherently introduce new information into the dataset, namely the relative economic health of each of the countries. Depending on your use case, this could be something that really helps, or something you might really want to avoid. If you have the time to do it, it could definitely make for some interesting research! You can get currency data from Quandl - they have a Python API too, which is simple enough to use.
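As a rough sketch of the per-currency option (scaling each country's sales separately), assuming the data sits in a DataFrame with 'Country' and 'Sales' columns - hypothetical names, adjust to your own schema:

df['Sales_scaled'] = df.groupby('Country')['Sales'].transform(
    lambda s: (s - s.min()) / (s.max() - s.min())  # rescale each country's sales to [0, 1]
)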
H: Calculate Q parameter for Deep Q-Learning applied to videogames
I am working on Deep Q-learning applied to Snake, and I am confused about the methodology. Based on the DeepMind paper on the topic and other sources, the Q-value with the Bellman equation needs to be calculated as follows:
Q(s,a) = r + γ * max_a'(Q(s',a'))
While calculating the Q-value for a Q-table is straightforward, the process is not clear to me in Deep Q-learning. According to some sources, the future possible states for the actual state need to be processed with deep learning and the highest Q-value needs to be chosen (input: future state, output: Q-value, or input: current state, output: Q-values for future states). Then, a table containing [state, action, reward, future_state] is stored in memory to reduce catastrophic forgetting. I don't understand how we get the Q-values to predict in the first place, if we need the states in order to compute the Q-values. Is this approach correct, or am I missing something? Thank you!
AI: The neural network you need to implement for Q-learning must approximate the Q function $Q(s,a)$. There are two ways to do this:

Using $(s,a)$ as input. A lot of the literature will assume this for simplicity, and you will see notations like $\hat{q}(s,a,\theta)$ to show you are approximating the function, and have parameters $\theta$ to learn for the neural network.
Using $(s)$ as input, and having the network output multiple $Q(s,a)$ values - one for each possible action. E.g. if you have four possible actions, then your output might be $[\hat{q}_0, \hat{q}_1, \hat{q}_2, \hat{q}_3]$. This can be more efficient, since in order to find the best, maximising, action, you need all the action values calculated.

Once you have a neural network set up like this, and a table of history (that grows on each action actually taken), this is how you make use of it:

For each sampled [state, action, reward, future_state]:
    Calculate td_target = reward + gamma * max_a'(Q(future_state, a')):
        Run NN(future_state, a') forward for each possible action a' and find the max
    Train the NN using the inputs (state, action) and desired output td_target

You would use this variant if the network outputs multiple Q values at once:

For each sampled [state, action, reward, future_state]:
    Calculate td_target = reward + gamma * max_a'(Q(future_state, a')):
        Run the NN(future_state) forward and take the max output
    Construct the desired output:
        Run the NN(state) forward to get an array e.g. [q0, q1, q2, q3]
        Substitute td_target depending on the action e.g. [q0, q1, td_target, q3]
    Train the NN using the input (state) to learn the desired output

Although this second approach looks more complex, it avoids looping (or mini-batching) over multiple a' values to find a max, so it can be more efficient.
If you are using a frozen copy of the NN to help with stability (a common feature in DQN), then you should use the frozen copy exclusively to calculate TD targets, and the learning copy to calculate current outputs.
Note it is important that you don't store, but instead re-calculate Q values at all times. That is both because the initial values will be incorrect (depending on how the NN was initialised), and also because they should improve over time as the agent learns a better policy.
Another important detail is that you should not use the NN to calculate the Q value for terminal states. If future_state is terminal, then instead of using the NN, treat max_a'(Q(future_state, a')) as zero. Typically just detect that this is a terminal state and hard-code a 0 somehow for it.
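A minimal vectorised sketch of the second variant, assuming Keras-style models (model for learning, target_model for the frozen copy) and a sampled minibatch of numpy arrays - all of these names are placeholders for whatever you actually use:

import numpy as np

# states, actions, rewards, next_states, dones: arrays sampled from replay memory
q_next = target_model.predict(next_states)                 # Q(s', a') for every action
td_targets = rewards + gamma * q_next.max(axis=1)          # Bellman backup
td_targets[dones] = rewards[dones]                         # terminal states: no bootstrap

q_current = model.predict(states)                          # current outputs, e.g. [q0, q1, q2, q3]
q_current[np.arange(len(actions)), actions] = td_targets   # substitute only the taken action
model.fit(states, q_current, verbose=0)                    # one training step towards the targets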
H: Laptops which are suitable for heavy image and video processing
I am a graduate student and my thesis is based on deep learning, vision, video and image processing. I am going to do heavy computing and processing, so I am looking for a laptop with a fast, CUDA-enabled NVIDIA GPU with good TFLOPS. It must also have an SSD. Anything by HP, Dell, Microsoft, Acer. Do you have any recommendations? My budget is CAN$1800. Which NVIDIA GPU is suitable for this kind of programming?
AI: The answer to such a question is opinion based and the question itself is very broad. I have used an HP Envy in the past, with a 4GB Nvidia 950M GPU, which worked well with Linux installed. In general: the higher the compute capability of the GPU the better (look here at the list for Nvidia GPUs under GeForce products for notebooks). Another option worth considering would be to build a desktop machine - you will lose portability, but get a vast improvement in power for the same money. Finally, you could buy a cheap laptop and then use the rest of your budget on cloud services, such as Google Cloud, Amazon, etc. This might be the best option to get started (with some free credit they usually offer for new accounts) - especially if you think you might only be using the GPU for your thesis, and not training huge models for days at a time after that period. If you know you'll be training large models over long periods of time, it's generally more cost effective to buy the hardware.
H: Keras - LSTM answers of different size
Imagine the following situation: the input is a text file and the information is spread across its rows. I want to use the rows as the input to the model. The model is planned as an LSTM with softmax as the output layer. My problem is the output: I want to classify the text file into one of several categories. So I imagine it like this: I open the text file, do the necessary preprocessing, feed it row by row into the model, and after I have applied all the rows of the file, I tell the softmax which category is right for this document. But I don't know how to make the softmax layer "wait" until the last row of the file has been processed by the LSTM layers. For example, I have a file with 1000 rows and every row has 5 features. So my dataset is of size [1000x5], but I don't have 1000 y values - I have 1 y, which tells me the category of this document. Documents can be of different sizes, so I can't use the whole document as input. I also don't see making y of size 1000, where every entry is the category index, as an option. When I think about it, it seems to me that when someone implements an LSTM from scratch, it is possible to run the LSTM (saving every state), and after several iterations apply the softmax, then BPTT (or a more suitable algorithm), and perform the weight update. Every Keras LSTM tutorial I saw uses a y which has the same shape as the number of samples. Maybe I didn't see the right ones.
AI: Your dataset is not [1000x5]; one of your documents is that size. LSTMs can deal with variable sized input directly or by padding (this is the case with Keras). In Keras you can set the parameter return_sequences to False in your LSTM, which means that only after the last part of your sequence has been parsed will it transform the result with a softmax and run the classification. What I would do is pad your documents to a certain size, for example 1000 (though this is high for LSTMs), so that you will have an [n x 1000 x 5] feature set and an [n] target set. You can feed this to your model now.
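A minimal Keras sketch of that setup, assuming the documents are already padded to max_len rows of 5 features and there are num_classes categories (the layer sizes and names here are illustrative, not a recommendation):

from keras.models import Sequential
from keras.layers import LSTM, Dense

max_len, n_features, num_classes = 1000, 5, 4   # assumed shapes

model = Sequential()
# return_sequences=False: only the state after the last row is passed on
model.add(LSTM(64, input_shape=(max_len, n_features), return_sequences=False))
model.add(Dense(num_classes, activation='softmax'))   # one prediction per document
model.compile(loss='categorical_crossentropy', optimizer='adam')
# X: shape (n_documents, max_len, n_features); y: one-hot, shape (n_documents, num_classes)
# model.fit(X, y, epochs=10)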
H: How to calculate precision and recall in an unsupervised ranking method in order to be compared with a supervised one? I am working on an ML supervised learning project for Key-phrase extraction. The evaluation dataset (different from training-validation datasets) contains around 200 phrases labeled as key or not key. The problem is that I want to compare the results obtained (in terms of precision, recall and f1) via different classifier's algorithms with existing unsupervised methods. These methods are based on ranking and extract the top-k key phrases. Using the dataset that I mention, what is the proper way to calculate precision and recall for these unsupervised ranking methods? AI: You first need to decide what your "label" is, and this label will apply to both your supervised and unsupervised models. Obviously, in a supervised world, knowing your label make training the model and evaluating it easy, so I won't go into that because it seems like you have a good handle. In the unsupervised world, you'll have to think really hard about the mapping of the unsupervised outputs to the label you've decided to use for evaluation. A lot of this is going to come down to how you structure that unsupervised problem. Like, if you're going to be using unsupervised clustering, then maybe you run the model, see what the clusters look like, and see if there's a way to map those clusters the label you're evaluating. It sounds like you're using unsupervised ranking and extracting top key phrases after the unsupervised ranking. In this case, you'd probably want to map the possible top key phrases to the labels you want to evaluate. Then after ranking and key phrase extraction, you can get a mapped "prediction label" based on the key phrases for each prediction. Of course, you'll likely have a messy collection of "prediction labels" because it's unlikely that each test prediction will output N-key phrases that perfectly match 1 label which means you'll have to come up with a way to round those maps (maybe you take the mode, maybe you round the average, it all depends on what you value most) to a singular prediction label. From there, doing recall and precision and f1 is straight forward. I will say though, that trying to bend an unsupervised model to evaluation like a supervised model is only useful if recall, precision, and f1 are the best way to evaluate your model's usefulness in its problem space. This is very different from evaluating its effectiveness. This question is: Does this models output fit the use case of the problem best? When I implement this output to an end user, would they appreciate this model more than another model? I'd encourage you to, on top of using classic evaluation scores to measure model effectiveness, also find a way to measure or understand model usefulness in the application that you have.
H: What can I use to post process an NLP tree generated from the python library `spaCy`? Using spaCy as the NLP engine for a chatbot, I call nlp("Where are the apples?").print_tree() and receive: [{'word': 'are', 'lemma': 'be', 'NE': '', 'POS_fine': 'VBP', 'POS_coarse': 'VERB', 'arc': 'ROOT', 'modifiers': [{'word': 'Where', 'lemma': 'where', 'NE': '', 'POS_fine': 'WRB', 'POS_coarse': 'ADV', 'arc': 'advmod', 'modifiers': []}, {'word': 'apples', 'lemma': 'apple', 'NE': '', 'POS_fine': 'NNS', 'POS_coarse': 'NOUN', 'arc': 'nsubj', 'modifiers': [{'word': 'the', 'lemma': 'the', 'NE': '', 'POS_fine': 'DT', 'POS_coarse': 'DET', 'arc': 'det', 'modifiers': []}]}, {'word': '?', 'lemma': '?', 'NE': '', 'POS_fine': '.', 'POS_coarse': 'PUNCT', 'arc': 'punct', 'modifiers': []}]}] I can easily enough parse out (arc, lemma) pairs for where (advmod, where) and (apple, nsubj), and call a function where(apple). However, this is a really naive way of handling the parsed tree. Any suggestions for how to handle processing this tree? I don't think something like a multilevel SVM would work. Maybe a NN of some kind? AI: What do you want to do with the chat bot? How you parse it will depend on the final use case and, believe it or not, many people get the job done by simply collecting the POS they want and using some filtering. If you want to try to maintain more of the data and perhaps abstract it makes sense to try clustering of some kind, perhaps using hierarchical methods, such as the (relateively new) hdbscan. The features on which you cluster will again depend on what you want to achieve. If you haven't already, check out the spaCy examples for some inspiration! Once you have a corpus with word all tagged, you can try training models that might be able to answer questions, based on user input. This will involve steps such as creating encoding of the words (or entire user questions), using embeddings such as Word2Vec, GLoVe, or simple sparse one-hot encodings. You basically need to transform words into numerical input somehow. I hope this gives you some keywords to help you on your search :-)
H: Is it possible to make tensorflow print out everything it sees in a given image and not just the top five results?
I'm working through the python API tutorials for Tensorflow and I'm seeing the results that are normally displayed, but it's always giving me the top five results. I'm trying to discern all possibilities within a certain list of basic attributes, like if I'm given a picture of a forest, I want to ask tensorflow if the picture contains oak trees, pine trees, bushes, rivers, etc. I don't need to know if the image is a picture of a forest. Is this possible? I'm not saying give me results it hasn't been trained to see, I'm saying I'm going to train the model with different types of trees/bushes/etc and I want to know if the given image contains any of those attributes (or the probability it thinks of for a given attribute).
AI: Sounds like you want to see the predicted probabilities of a softmax function. You can assign the values to a list during training so you can see the probabilities of each epoch if you'd like. As written below, in_top_k will select the single top prediction of the softmax cross entropy function, but if you have multiple targets in each picture you will want to change that "1" to the desired amount.

xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits = logits, labels = y)
logit = tf.nn.in_top_k(logits, y, 1)
y_one_prob = tf.sigmoid(logit)
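As a sketch of getting the full probability vector for every class, rather than only the top entries (this assumes logits is the final layer's output and that sess/feed_dict are set up as in your training script, with num_classes defined):

probs = tf.nn.softmax(logits)                        # one probability per attribute/class
values, indices = tf.nn.top_k(probs, k=num_classes)  # ranked list over all classes, not just 5
all_probs = sess.run(probs, feed_dict=feed_dict)     # full probability vector per image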
H: Retraining a machine learning model
I created a machine learning model on some data and used this model for predicting on test data. The model has learnt only from the training data. How can this model be retrained as new data comes in for prediction? I don't know whether it is true that retraining a model is necessary so that it can predict new data more accurately.
AI: It depends on a lot of factors:

Does the function you're learning change over time? If you try to detect dogs in pictures, and you already have a good accuracy, then relearning is not necessary. But if you try to detect topics in news streams, then you'll need to update your model regularly.
Do you initially have enough data? If you receive a stream of training data, then you should find an algorithm that can be trained in an online manner. If you are looking to constantly improve the accuracy, then the more data you feed the better results you'll get.
H: Aggregating standard deviations
Imagine I have a collection of data, let's say the travel time for a road segment. On this collection I want to calculate the mean and the standard deviation. Nothing hard so far. Now imagine that instead of having my collection of values for one road segment, I have multiple collections of values that correspond to the multiple sub-segments that compose the road segment. For each of these collections, I know the average and the standard deviation. From that, I want to aggregate these multiple averages and standard deviations in order to get the average and standard deviation for the whole road segment. For example, let's suppose I have the following dataset:

         subSegmentA   subSegmentB   subSegmentC   subSegmentD
values        20            45            25            70
              30            55            10            60
              10            10            10            80
              15            50            30            75
              15            40            15            75
              20            40            20            80
              30            45            20            65
              10            40            25            70
average     18.75        40.625        19.375        71.875
stddev   7.90569415   13.47948176   7.288689869   7.039429766

expected_global_average : 150.625
expected_global_stddev  : 18.40758772

For the average there is no problem, a simple sum does the job, but I have trouble with the global_stddev. I tried multiple solutions from here, without success.
Edit: After further research, it seems mathematically impossible to calculate the standard deviation of a set based only on the standard deviation and average of its subsets. So I am trying to calculate a new metric that would approximate this global standard deviation. To do so, I can use, in addition to the avg/stddev per sub-segment, the length ratio of the sub-segment to the road.
AI: It is not perfect, but you can try to re-create synthetic data based on the mean and sd. In R, you can use the rnorm function to create a normal distribution from the mean and sd. The following is one way to do it. Hope it helps!
P.S. I just chose n = 1000 to illustrate how it can be done; you can try using different numbers.

a <- c(20,30,10,15,15,20,30,10)
b <- c(45,55,10,50,40,40,45,40)
c <- c(25,10,10,30,15,20,20,25)
d <- c(70,60,80,75,80,60,65,70)

n <- 1000
e <- c(rnorm(n, mean(a), sd(a)) + rnorm(n, mean(b), sd(b)) + rnorm(n, mean(c), sd(c)) + rnorm(n, mean(d), sd(d)))

mean(e)
sd(e)

Try it online
H: How to populate pandas series w/ values from another df?
I need help figuring out how to populate a series of one dataframe w/ specific values from another dataframe. Here's a sample of what I'm working with:

df1 = pd.DataFrame({'Year':[1910, 1911, 1912], 'CA':[2.406, 2.534, 2.668], 'HI':[0.804, 0.821, 0.832]})
df2 = pd.DataFrame({'State':['CA', 'CA', 'CA', 'HI', 'HI'], 'Year':[1910, 1910, 1911, 1911, 1911]})
df2['Population'] = pd.Series()

I'm trying to populate df2['Population'] w/ the corresponding populations from df1 (i.e. the population of a specific state from a specific year). How can I do this?
AI: Here is one solution:

df2['Population'] = df2.apply(lambda x: df1.loc[x['Year'] == df1['Year'], x['State']].reset_index(drop=True), axis=1)

The idea is that for each row of df2 we use the Year column to tell us which row of df1 to access, and then State to select the column. Afterwards we reset the index of the result to prevent pandas from keeping the columns separate.
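An alternative sketch that avoids apply, by reshaping df1 to long format and merging (this assumes the same df1 and df2 as above, and is offered as an option rather than the accepted approach):

long = df1.melt(id_vars='Year', var_name='State', value_name='Population')   # one row per (Year, State)
df2 = df2.merge(long, on=['State', 'Year'], how='left')                      # adds the Population column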
H: Setting up network communication between R and Python I want to share data between R and python via network sockets (I expect the solution to also work for R-R and python-python). I am interested in server being both R and Python side. For now I am interested in a simpler case of synchronous connections. What are the best approaches, packages and formats to use for this task? AI: Apache Thrift or Protocol buffers are common choices for serialization. If you want to share data in-memory, there is the Apache Arrow format which interoperates between many data types. If you want to share dataframes on disk, there is Feather.
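For example, a minimal Feather round trip might look like this on the Python side (the file name is arbitrary, and the feather support assumes pandas with pyarrow installed):

import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3], "y": ["a", "b", "c"]})
df.to_feather("shared.feather")   # write once from Python

On the R side, arrow::read_feather("shared.feather") (or feather::read_feather) should read the same file back into a data frame, which gives you file-based sharing without a socket; for the socket-based, synchronous case you would still serialize each message with one of the formats above.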
H: Data scaling before or after PCA
I have seen senior data scientists doing data scaling either before or after applying PCA. Which is more correct to do, and why?
AI: I once heard a data scientist state at a conference talk: "Basically, you can do what you want, as long as you know what you are doing." This also applies here. The more statistically sound way would be to transform all variables prior to additional steps such as PCA or factor analysis. Then you still know the scale of your variables and can interpret the rescaling in the context of your application. If you have no such interpretation, but good reasons for rescaling your principal components due to computational issues arising if some values are too close to zero while others are quite large, rescaling the components makes sense. However, reversing this process and still being able to interpret the effect of the rescaling operation in your context will become almost impossible.
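A minimal scikit-learn sketch of the more common order (scale first, then PCA); the number of components is arbitrary here and X stands for your feature matrix:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

pipe = Pipeline([
    ('scale', StandardScaler()),    # standardise each variable first
    ('pca', PCA(n_components=2)),   # then project onto the principal components
])
X_reduced = pipe.fit_transform(X)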
H: Categorical data for sklearns Isolation Forrest I'm trying to do anomaly detection with Isolation Forests (IF) in sklearn. Except for the fact that it is a great method of anomaly detection, I also want to use it because about half of my features are categorical (font names, etc.) I've got a bit too much to use one hot encoding (about 1000+ and that would just be one of many features) and I'm anyway looking for a more robust way of data representation. Also, I want to experiment with other clustering techniques later on, so I don't want to necessarily do label encoding as it will misrepresent the data in euclidean space. I have thus a two part question: How will label encoding (ie. ordinal numbers) affect tree based methods such as the Isolation Forest? Seeing as they aren't distance based, they shouldn't make assumptions about ordinal data, right? What other feature transformations can I consider for distance based models? AI: I would really try not to use ordinal numbers for categorical data. It imposes a false magnitude and ordering in the model, especially when you have 1,000 examples. For example, the difference between Brush Script and Calibri could be very small and the difference between Calibri and Times New Roman UNBELIEVABLY HUGE (assuming lexicographical assignment), when really they're all just different fonts. You could: Try to figure out groupings of similar features that make sense, then one-hot those groupings so you wouldn't end up with too many columns. One-hot the whole thing and then try some dimensionality reduction techniques to get the feature space back down to something sensible. Try to use an autoencoder or neural method to learn an embedding of fixed dimension. One thing you should definitely be careful of is how you combine the result of this process with whatever the other half of your features are.
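A rough sketch of the second option (one-hot the categoricals, reduce the dimensionality, then fit the Isolation Forest) with scikit-learn - the column names and the number of components are made up, so treat this as a starting point rather than a recipe:

import pandas as pd
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import IsolationForest

X_onehot = pd.get_dummies(df[['font_name', 'other_category']])          # hypothetical columns
X_reduced = TruncatedSVD(n_components=50).fit_transform(X_onehot)       # shrink the sparse one-hot space
scores = IsolationForest().fit(X_reduced).decision_function(X_reduced)  # lower score = more anomalous

You would then concatenate X_reduced with your numerical features before fitting, if you want a single model over both halves.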
H: Confusion regarding epoch and accuracy I have been learning keras and TensorFlow for some weeks now, and get confused with epoch. I trained my network for 50 epochs, the test data and training data are randomly split (80% train, 20% test). The training data's accuracy grows nicely, but the test data's accuracy goes up and down. I am not sure why it happens like that. Say in epoch 10, the test data's accuracy is 92%, in next epoch, how can accuracy drop? The thing I can think of is that maybe for each epoch, the test data and training data are randomly selected again, so the system sees new data which doesn't fit previous parameters? Is it the case? AI: Variation in test data accuracy is perfectly normal. Your network is fitting for your train data. If you see (big) variation in train accuracy, then you should be worried about learning rate tuning, regularization etc. But here, everything seems to be going fine. Remember your network only fits on your train data, so it modifies its weights according to that data. Since your test data -which the network never uses for weight update- is not exactly your train data (even though it should come from the same distribution ... so test accuracy should globally improve over iterations if the network is learning), test accuracy is bound not to improve exactly at each iteration.
H: Should we keep all channels when doing image classification? I am discovering the world of image recognition and now trying to build an image classifier. The set of images I have have the shape (101,101,3) which means that it has 3 channels. If I'm not mistaken the channels corresponds to the Red, Green and Blue channels (Please correct me if I am wrong). In which classification tasks should we keep the three channels ? I've seen people converting the Image to a grayscale so that the images shape will be (101,101,1). What difference does it make when you do so ? I am aware that when you keep the 3 channels the computation needs grow with it, here I am more looking for an answer about the intuition behind keeping the 3 channels, move to grayscale or add new custom channels. AI: Well, it depends on your images and how you want to classify them. If we take a really basic example, if you want to classify images by their color, you obviously should keep all 3 channels. In the other hand, if you want to classify images by a shape. For example squares and circle, you don't necessarily need to use all 3 channels. But changing the number of channel isn't the only preprocessing you can do on your images before classifying them. You can also change the color space for example (the colorspace LAB can be usefull in some case) , or even use Morphological Image Processing methods to improve your image by, for exemple, bringing out the contrasts of your images or cleaning impurities. All of this depends of your images, and what you want to do with them.
H: Is there a general guideline for experience replay size, and how to store? I am trying to use deep Q-learning on color images of size (224 x 224 x 3). I have read that implementations of DQN use experience replay sizes around 50,000. If my calculation is correct, this is over 56 Gigabytes for my data (two images per tuple, one image for state and next_state, totaling 100,000 images). Is this correct? If yes, how can I go about this without that much RAM? AI: If yes, how can I go about this without that much RAM? Your value of 56GB seems correct to me, assuming you include a multiplier of 4 for the "4 frames per state representation" used in the DQN/Atari paper. However you should note that in the original paper, the images were converted to greyscale and downsampled to 110×84 prior to use in state representation. This made them 16 times smaller than the frames in your problem, so the whole data set would fit into 4GB. If you must use larger images, you could store them on disk - maybe in a database - and load on demand. That may unfortunately create an I/O bottleneck that slows learning, although you should still be able to work with it. You can parallelise fetching from the database for mini-batches with the learning process, and this is similar to the mini-batch generators used for things like ImageNet training. You can also work on improving disk performance using optimisations such as parallel disk arrays or SSDs. You could also pre-process the frames using a hidden layer embedding from a generic computer vision network trained on e.g. ImageNet, and store that representation, not the raw pixel values. This may limit self-discovery of important low-level features by the learning agent, but then again it may still be worth a shot, as lower-level image features are often very similar across different problems, and transfer learning in e.g. image classification has been quite successful using that approach. More likely practical answers used by RL researchers at least up to a certain scale is one of: Store less states in replay memory. Base the size of replay on the memory you have available. Yes this may compromise the learning, but there is no special magic about the number 50,000 and if you are optimising resource use you may have to decide between how efficiently a system learns with 10,000 fast replay memory size or 50,000 slower I/O-based replay memory size. Buy more memory. The big name research labs working in deep RL are funded well enough that they can afford to throw money at this problem. One estimate for how much AlphaGo Zero hardware cost is $25million, so you can imagine that loading a few machines with 128GB+ RAM if they thought it necessary for any reason on any other flagship project would not be a major blocker. If you look at what OpenAI are doing at the cutting edge of video game playing, you can see that their hardware setup is equally monstrous. It is not clear whether they have an issue with storing experience as they use a different algorithm, or needing RAM in general, but if they did, it is also clear they could quite happily finance maximum RAM on their training rigs. Do note I am not a RL researcher myself (just a hobbyist) and have not asked anyone else what they would do when faced with this problem.
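For reference, the greyscale-and-downsampling preprocessing mentioned above can be sketched like this (using OpenCV is my own choice, not something prescribed by the paper; the 110×84 target follows the DQN setup, and uint8 storage keeps each frame at roughly 9 KB instead of ~150 KB):

import cv2
import numpy as np

def preprocess(frame):
    """Reduce a (224, 224, 3) uint8 RGB frame to a small greyscale image for replay storage."""
    gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
    small = cv2.resize(gray, (84, 110))    # cv2.resize takes (width, height)
    return small.astype(np.uint8)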
H: What is coadaptation of neurons in neural networks? Looking for a bare minimum example (3 hidden units only maybe?) for what weights of a neural network with heavily coadapted weights would look like and showcase why they are bad. Also, how is coadaptation a sign of overfitting? AI: Co-adaptions in simple English term would mean co-operation. If you think nodes of a NN as workers it would mean missing even a few workers would result in failure of the NN to do something substantial. This happens mainly due to few nodes of NN outputting values which get cancelled by other nodes (so we can remove those nodes altogether - essentially redundant units), thus resulting in inefficient use of the NN. Thus instead of say 2 nodes, 8 nodes are performing the same job with 6 nodes cancelling each other out. Here is the best illustration of the same. In case of XOR approximation you need minimum (I'm not sure) 2 nodes to approximate the function correctly, but you can use 100 nodes and most of them will be redundant. Also this will clearly cause over-fitting as the nodes instead of generalising on the training data are micro-managing the training data. Now we do not want our NN's to behave in such a manner, we want the other nodes to take over jobs of missing nodes (if any are missing) just like our brain takes over function when one centre (Say speech, hearing is damaged) and the state of art mechanism currently is dropout in layers.
H: How to apply StandardScaler and OneHotEncoder simultaneously in Spark Machine learning?
I am trying to create a machine learning model, a linear regression, to predict the price of diamonds. All examples that I found online do not have a step with scaling of the data, using MinMaxScaler or StandardScaler, but I personally think that this is an important step in ML. The Spark guide shows this feature, but separately from OneHotEncoding. Moreover, because the result of OneHotEncoding in Scala is different from Python (for instance, it looks like this: (4,[3],[1.0]) in Spark, while in Python it is a combination of 1s and 0s), I am confused where to apply StandardScaler - after indexing and before OneHotEncoder, after OneHotEncoder, or in some other step? Here is my code after I cleaned the data a little bit. This code works, but the predictions are far from being good. I use a Scala dataframe and pipeline:

import org.apache.spark.ml.feature.{VectorAssembler,StringIndexer,VectorIndexer,OneHotEncoder}
import org.apache.spark.ml.linalg.Vectors

val cutIndexer = new StringIndexer().setInputCol("cut").setOutputCol("cutIndex")
val colorIndexer = new StringIndexer().setInputCol("color").setOutputCol("colorIndex")
val clarityIndexer = new StringIndexer().setInputCol("clarity").setOutputCol("clarityIndex")

val cutEncoder = new OneHotEncoder().setInputCol("cutIndex").setOutputCol("cutVec")
val colorEncoder = new OneHotEncoder().setInputCol("colorIndex").setOutputCol("colorVec")
val clarityEncoder = new OneHotEncoder().setInputCol("clarityIndex").setOutputCol("clarityVec")

val assembler = (new VectorAssembler()
  .setInputCols(Array("carat", "cutVec", "colorVec", "clarityVec", "depth", "table", "x", "y", "z"))
  .setOutputCol("features"))

val scaler = new StandardScaler().setInputCol("features").setOutputCol("scaledFeatures").setWithStd(true).setWithMean(false)

val Array(training, test) = df_label.randomSplit(Array(0.75, 0.25))

import org.apache.spark.ml.Pipeline

val lr = new LinearRegression()

val pipeline = new Pipeline().setStages(Array(cutIndexer, colorIndexer, clarityIndexer, cutEncoder, colorEncoder, clarityEncoder, assembler, scaler, lr))

val model = pipeline.fit(training)
val results = model.transform(test)

The subset of data:

+-----+-----+---------+-----+-------+-----+-----+----+----+----+
|label|carat|      cut|color|clarity|depth|table|   x|   y|   z|
+-----+-----+---------+-----+-------+-----+-----+----+----+----+
|  326| 0.23|    Ideal|    E|    SI2| 61.5| 55.0|3.95|3.98|2.43|
|  326| 0.21|  Premium|    E|    SI1| 59.8| 61.0|3.89|3.84|2.31|
|  327| 0.23|     Good|    E|    VS1| 56.9| 65.0|4.05|4.07|2.31|

Thanks in advance!
AI: Okay, I came up with a solution by myself. I applied OneHotEncoderEstimator, which was added recently, instead of OneHotEncoder on every categorical feature. Then I used MinMaxScaler on the features from VectorAssembler.

val cutIndexer = new StringIndexer().setInputCol("cut").setOutputCol("cutIndex")
val colorIndexer = new StringIndexer().setInputCol("color").setOutputCol("colorIndex")
val clarityIndexer = new StringIndexer().setInputCol("clarity").setOutputCol("clarityIndex")

import org.apache.spark.ml.feature.OneHotEncoderEstimator
val encoder = new OneHotEncoderEstimator().setInputCols(Array("cutIndex", "colorIndex", "clarityIndex")).setOutputCols(Array("cutIndexEnc", "colorIndexEnc", "clarityIndexEnc"))

val assembler = (new VectorAssembler()
  .setInputCols(Array("carat", "cutIndexEnc", "colorIndexEnc", "clarityIndexEnc", "depth", "table", "x", "y", "z"))
  .setOutputCol("features_assem"))

import org.apache.spark.ml.feature.MinMaxScaler
val scaler = new MinMaxScaler().setInputCol("features_assem").setOutputCol("features")

val Array(training, test) = df_label.randomSplit(Array(0.75, 0.25))

import org.apache.spark.ml.Pipeline
val lr = new LinearRegression()

val pipeline = new Pipeline().setStages(Array(cutIndexer, colorIndexer, clarityIndexer, encoder, assembler, scaler, lr))

val model = pipeline.fit(training)
val predictions = model.transform(test)
H: How to adjust the hyperparameters of an MLP classifier to get better performance
I am just getting in touch with Multi-layer Perceptrons. I got the accuracy below when classifying the DEAP data with MLP. However, I have no idea how to adjust the hyperparameters to improve the result. Here is the detail of my code and result:

from sklearn.neural_network import MLPClassifier
import numpy as np
import scipy.io

x_vals = data['all_data'][:,0:320]
y_vals_new = np.array([0 if each=='Neg' else 1 if each =='Neu' else 2 for each in data['all_data'][:,320]])
y_vals_Arousal = np.array([3 if each=='Pas' else 4 if each =='Neu' else 5 for each in data['all_data'][:,321]])

DEAP_x_train = x_vals[:-256]   # using 80% of whole data for training
DEAP_x_test = x_vals[-256:]    # using 20% of whole data for testing
DEAP_y_train = y_vals_new[:-256]        ## Valence
DEAP_y_test = y_vals_new[-256:]
DEAP_y_train_A = y_vals_Arousal[:-256]  ### Arousal
DEAP_y_test_A = y_vals_Arousal[-256:]

mlp = MLPClassifier(solver='adam', activation='relu', alpha=1e-4, hidden_layer_sizes=(50,50,50), random_state=1, max_iter=11, verbose=10, learning_rate_init=.1)
mlp.fit(DEAP_x_train, DEAP_y_train)
print (mlp.score(DEAP_x_test, DEAP_y_test))
print (mlp.n_layers_)
print (mlp.n_iter_)
print (mlp.loss_)

AI: If you are using SKlearn, you can use their hyper-parameter optimization tools. For example, you can use:

GridSearchCV
RandomizedSearchCV

If you use GridSearchCV, you can do the following:
1) Choose your classifier

from sklearn.neural_network import MLPClassifier
mlp = MLPClassifier(max_iter=100)

2) Define a hyper-parameter space to search. (All the values that you want to try out.)

parameter_space = {
    'hidden_layer_sizes': [(50,50,50), (50,100,50), (100,)],
    'activation': ['tanh', 'relu'],
    'solver': ['sgd', 'adam'],
    'alpha': [0.0001, 0.05],
    'learning_rate': ['constant','adaptive'],
}

Note: the max_iter=100 that you defined in the initializer is not in the grid. So, that number will be constant, while the ones in the grid will be searched.
3) Run the search:

from sklearn.model_selection import GridSearchCV
clf = GridSearchCV(mlp, parameter_space, n_jobs=-1, cv=3)
clf.fit(DEAP_x_train, DEAP_y_train)

Note: the parameter n_jobs defines how many CPU cores from your computer to use (-1 is for all the cores available). The cv is the number of splits for cross-validation.
4) See the best results:

# Best parameter set
print('Best parameters found:\n', clf.best_params_)

# All results
means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
    print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params))

5) Now you can use clf to make new predictions. For example, check the performance on your test set.

y_true, y_pred = DEAP_y_test, clf.predict(DEAP_x_test)

from sklearn.metrics import classification_report
print('Results on the test set:')
print(classification_report(y_true, y_pred))
H: Improve results of a clustering
I'm a beginner and I'm trying to do a clustering of a multi-sentence text, but my results are horrible. Any tips to improve my result?

import pandas
import pprint
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
from sklearn.feature_extraction.text import TfidfVectorizer

dataset = pandas.read_csv('text.csv', encoding = 'utf-8')
comments = dataset['comments']
comments_list = comments.values.tolist()

vetorize = TfidfVectorizer()
X = vetorize.fit_transform(comments_list)

clusters_number = 6
model = KMeans(n_clusters = clusters_number, init = 'k-means++', max_iter = 300, n_init = 1)
model.fit(X)
centers = model.cluster_centers_
labels = model.labels_

clusters = {}
for verbatim, label in zip(verbatim_list, labels):
    try:
        clusters[str(label)].append(verbatim)
    except:
        clusters[str(label)] = [verbatim]
pprint.pprint(clusters)

#Top terms for cluster
print("Top termos par cluster:")
ordem_centroides = model.cluster_centers_.argsort()[:, ::-1]
termos = vetorizar.get_feature_names()
for i in range(clusters_number):
    print ("Cluster %d:" % i,)
    for ind in ordem_centroides[i, :10]:
        print (' %s' % termos[ind],)
    print()

I have many different themes present in various clusters. I pre-processed my data (stopwords, lowercase, I removed the punctuation...). But I still have 'like to cancel order' in one cluster and 'love cancel order' in another, when, in fact, the ideal would be to join all 'cancel order' comments in a single cluster.
AI: The very obvious tip I would give is that K-means is not the algorithm you use for clustering text data in general. The nature of text data is a bit more complicated than the structured data for which K-means is one of the basic but still working algorithms (of course it also depends on the way you model the text, i.e. how you convert a text dataset to a set of numbers). Let me propose two things with one hint:
Hint: Document Clustering is also referred to as Topic Modeling. So you really need to have a look at this, as I assume you haven't yet (judging by the use of k-means for this problem). Below are two standard algorithms for topic modeling, but I strongly recommend you look at other algorithms as well.
NMF: Non-negative Matrix Factorization is a well-known decomposition method for non-negative matrices like TF-IDF or other variants of Bag-of-Words. You can simply apply it to your problem using SKLearn.
LDA, or Latent Dirichlet Allocation (don't get it confused with Linear Discriminant Analysis). This is a pretty standard algorithm for topic modeling and implementations can be found in SKLearn, Gensim, NLTK, Spacy and other NLP/ML libraries.
Hope it helped! Good Luck :)
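A minimal NMF sketch on the TF-IDF matrix built in your script (the number of topics mirrors your clusters_number, and the top-terms loop mirrors your K-means code - adjust freely):

from sklearn.decomposition import NMF

n_topics = 6
nmf = NMF(n_components=n_topics, random_state=0)
doc_topic = nmf.fit_transform(X)              # X: the TF-IDF matrix from TfidfVectorizer
terms = vetorize.get_feature_names()          # same vectorizer object as above
for k, component in enumerate(nmf.components_):
    top = component.argsort()[::-1][:10]      # ten strongest terms for topic k
    print("Topic %d:" % k, [terms[i] for i in top])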
H: Do we need the testing data to evaluate the model performance - regression
I have been working with classification modelling in R and Python for the last 6 months. With classification, the evaluation of the model was based on precision, recall, Hamming loss, accuracy etc., and these classification models needed the testing data to calculate these evaluation metrics. Is it the same case with regression, when we calculate SSR, SSE, RMSE and other evaluation metrics? From an R point of view, summary(LmRegressionModel) gives these evaluation metric figures one way or another. Why then do we need the testing data to evaluate the model in regression?
AI: The ML community has many more metrics than the ones you listed here, both for regression and classification. But the principle remains: calculating the metrics on the training set would likely lead to overfitting.
H: Is K-NN applicable for binary variables? I need help because I'm just new to machine learning and I do not know if k-nearest neighbors algorithm can be used to identify the appropriate program(s) for Student 11 in the table below. The school subjects (Math, English, etc.) are the features, while the 'Program' column has the class labels A, B, and C. The binary values represent the interest of a student in the subjects. Can K-NN algorithm find the similar students based on the binary variables in the table? AI: Yes, you just have to find a suitable distance metric, instead of using the default Euclidean distance. Euclidean distance will work, but it loses a lot of its positive points when used on a non-euclidean space. For you specific case, the Jaccard distance basically measures how many 1's are equal on both instances, ignoring the dimensions where both are 0's. This gives an interpretation like "if a program has a course, but the other doesn't, then they are dissimilar". Jaccard index is very useful in high-dimensional boolean matrices, such as generated by one-hot encodings. Other more intuitive choice is the perfect match distance, which would simply measure how many dimensions are different for the two instances and can be easily computed by $\sum_{i=1}^n |X_i - Y_i|$. In this case, the interpretation becomes similar to "if both programs have a course, or both program don't have a course, then they're similar". However, be careful with the K value of your K-NN. You only have two instances of class B, so you will have to choose 1 or 2 (technically at most 3) for the value of K.
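A small scikit-learn sketch with a binary-friendly metric; the feature matrix and labels below are dummy stand-ins for your table of subject interests and program labels:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.random.randint(0, 2, size=(10, 6))    # 10 students x 6 subjects (dummy binary data)
y = np.random.choice(list("ABC"), size=10)   # program labels (dummy)

knn = KNeighborsClassifier(n_neighbors=3, metric='hamming')   # or metric='jaccard'
knn.fit(X, y)
print(knn.predict(np.random.randint(0, 2, size=(1, 6))))      # predicted program for "Student 11"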
H: Active Mask - image segmentation
I'm trying to implement an active mask code derived from Active mask segmentation for the cell-volume computation and golgi-body segmentation of hela cell images, Srinivasa et al 2008. I have found code that will allow me to do this; however, there is one section of the code that is giving me errors and I don't know why. Here is the section of the code:

def _converge(P, R, b, max_iters_converge=1000):
    Pm = numpy.empty(P.shape, numpy.float32)  # Pre-allocate: saves time
    argmax = numpy.zeros_like(P)
    maxval = numpy.zeros(P.shape, numpy.float32)
    for i in range(max_iters_converge):
        maxval *= 0
        maxval -= 1e8
        for m in range(int(P.max()) + 1):
            Pm[:, :] = (P == m)
            Pm = ndimage.gaussian_filter(Pm, b) + R[m]
            # Pm=ndimage.convolve(Pm,numpy.ones((2*b+1,2*b+1)))+R[m]
            argmax[maxval < Pm] = m
            maxval = maxval * (argmax != m) + (argmax == m) * Pm
        if (P == argmax).all():
            break
        P = argmax.copy()
    mis = defaultdict(range(1, P.max() + 1).__iter__().__next__())
    mis[0] = 0  # set 0 to 0, because it is special
    for i in range(P.size):
        P.flat[i] = mis[P.flat[i]]
    return P

The area that is giving me the most problems is:

mis = defaultdict(range(1, P.max() + 1).__iter__().__next__())

Originally the code was written for Python 2.4, but I made some changes so that it will work on Python 3.6. However, I don't really understand what the defaultdict is used for. Here is a link to the full original source file: active mask
AI: The defaultdict in this case is meant to return the next increasing integer each time a new element is encountered in P.flat(), but to also return the same integer each time that same element is found again. You should have encountered the error:

TypeError: first argument must be callable or None

defaultdict expects a callable. It will call the passed-in function each time there is a lookup miss, and set the dict entry to that value. To fix that error you need to remove the () from the __next__() to leave:

mis = defaultdict(range(1, P.max() + 1).__iter__().__next__)

However, this is overly complicated for what it is doing, and can be simplified a bit by recognizing that there is no reason to have the upper limit in the range; it is just an increasing integer, which can be done with itertools.count() like:

import itertools as it
mis = defaultdict(it.count(1).__next__)
H: Image classification with a different number of training images for each class
I'm trying to train a neural network for image classification with 4 different classes:

Cars (22k training examples)
Buildings (8k training examples)
Pedestrians (5k training examples)
Trees (1k training examples)

The problem is that the number of training images is biased toward one or two classes. I'm wondering if there is a way to train the neural network that accounts for the number of training examples per class? Do I have to limit the number of training examples per class to the minimum number across all classes?
AI: It is not necessarily a problem that the classes in your data are unbalanced. What is important is that your data follows the real distribution: the distribution of your training data should be the same as your test environment. As you can read here, your data should be well suited for the task it is going to be used for. Consequently, if the distribution of your samples is realistic, there will be no problem. Consider the point that for unbalanced datasets we should use appropriate evaluation metrics like the F1 score. Finally, if this is the real distribution of your data, I highly recommend you not to change the real distribution, even if you want to augment your data. You should not change the relative ratio of the different classes.
H: Cosine similarity between two folders (1 and 2) with documents, and find the most relevant set of documents (in folder 2) for each doc (in folder 1)
I have one folder named iir, which has 500 txt files. I have another json file named video (with a dictionary structure). I wish to compute: for each of the 500 txt files, the cosine similarity with all the videos. After this, rank the videos for each txt file and save that ranking list in another file. So far, I am able to read all the txt files. This is the code:

import gensim
import glob

# go to directory
all_files = glob.glob("ExtratingConceptFromVideoScripts/iir/*.txt")

# go to files
indata = tuple()
for filename in all_files:
    with open(filename, 'r') as file:
        for line in file.readlines():
            d = line.strip().split( ',' )
            indata = indata + (d[0], )

print(indata[0])  # print the contents of first file

This is the output:

Online edition (c)2009 Cambridge UP An Introduction to Information Retrieval Draft of April 1

Now, I read the contents of the json file:

import gensim
import glob

# go to directory
all_files = glob.glob('ExtratingConceptFromVideoScripts/data/corpus.json')

# go to files
vid = tuple()
for filename in all_files:
    with open(filename, 'r') as file:
        for line in file.readlines():
            d = line.strip().split( ',' )
            vid = vid + (d[:8], )

print(vid)

Output:

(['{"0": {"metadata": {"id": "fQ3JoXLXxc4"', ' "title": "| Board Questions | 12 Maths | Equivalence Class | Equivalence Class Board Questions |"', ' "tags": ["Board Questions"', ' "12 maths"', ' "12 maths Board Questions"', ' "Previous Year Board Questions"', ' "Maths Board Questions"', ' "Board questions based on Equivalence Classes"'],)

Now, my problem is: the cosine similarity matrix finds the cosine similarity between the set of documents in one tuple (i.e. it finds the cos sim between all files in the iir folder). How do I compute the cos sim of each iir doc with each doc in vid? I tried to convert the vid json file to a tuple and then compute the cos sim between iir and vid, but it gives an error. Can anyone help me by explaining how to find cos sim between files with different formats, how to loop through files and compute cos sim, and how to rank docs on the basis of highest cos sim?
AI: Similarity is computed in a vector space model, with the primary assumption that the pair of objects for which you want to compute similarity must be represented in the same vector space using some pre-processing operation. The second dataset is JSON, so you'll need to extract text from it. One way to do it: you can use the title and tags strings for each video and concatenate them into a string to find a string representation for each video. Use any text vectorization method post that.
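A rough sketch of the core idea (one shared vector space, then a similarity matrix and a per-document ranking); txt_docs and video_docs are assumed to already be lists of strings extracted as described above:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

vectorizer = TfidfVectorizer()
vectorizer.fit(txt_docs + video_docs)      # one shared vocabulary for both collections
A = vectorizer.transform(txt_docs)         # shape: (n_txt_files, n_terms)
B = vectorizer.transform(video_docs)       # shape: (n_videos, n_terms)

sims = cosine_similarity(A, B)             # sims[i, j]: similarity of txt file i and video j
ranking = sims.argsort(axis=1)[:, ::-1]    # videos ranked per txt file, most similar first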
H: What does this doc2vec based ML program predict?
I'm trying to understand what this ML program - which is based on doc2vec - predicts:

import logging, gensim
from gensim.models.doc2vec import TaggedDocument
from gensim.models import Doc2Vec
import re
import os
import random
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
import numpy as np

model = Doc2Vec.load('reviews_model.d2v')  # Already trained

sent = []
answer = []
docvec = []

for fname in ['yelp', 'amazon_cells', 'imdb']:
    with open('sentiment labelled sentences/%s_labelled.txt' % fname, encoding=('UTF-8')) as f:
        for i, line in enumerate(f):
            line_split = line.strip().split('\t')
            sent.append(line_split[0])
            words = extract_word(line_split[0])
            answer.append(int(line_split[1]))
            docvec.append(model.infer_vector(words, steps=10))
            print(str(docvec) + 'time')

combined = list(zip(sent, docvec, answer))
random.shuffle(combined)
sent, docvec, answer = zip(*combined)

clf = KNeighborsClassifier(n_neighbors=9)
score = cross_val_score(clf, docvec, answer, cv=5)
print(str(np.mean(score)) + str(np.std(score)))

The output is something like:

0.7903333333333334

So what does it actually mean that it's 79% correct? Correct at predicting what exactly?
P.S: the documents learned are positive and negative reviews.
AI: The code is trying to predict the sentiment of documents (Amazon reviews or Yelp reviews). The KNeighbors classifier works with vectors as input. A pre-trained doc2vec model is used to convert each review to a vector. The classifier is trained to predict the sentiment label from the vector representing each review. The accuracy is the mean accuracy on the dataset: the percentage of reviews which were correctly classified (as positive or negative).
H: Sparse Data Interpolation
I have a price data set where on some days there are up to five data points and some days none at all. For example:

2.110   2017-04-19
1.910   2017-04-23
1.980   2017-04-24
1.980   2017-04-24
1.980   2017-04-24
1.230   2017-04-24
2.100   2017-05-04
1.920   2017-05-08

The total data set is around 500 points. What is the best way to go about interpolating this data and making its frequency daily? I'm considering using Gaussian Process Modelling using PyMC3 but would really appreciate any ideas or advice. Additionally, the volatility of the price data must be preserved as well as possible, as this will be input to a volatility model later on.
AI: If you know that the multiple prices on a single day are chronological (and you don't have the actual timestamps), I would suggest simply taking the last price. This is then as close as you can get to using the Closing Price, which is the most common. Often you have Open, High, Low, Close, for example, but just use the Close prices.
Interpolating, or imputing, the data can be done in many ways. One factor that might help decide on a method is how many of the points are missing. You have ~500 data points: how many days in total? How many of those require imputing? For a relatively low number of missing values, scattered more or less randomly over your time-series, I would suggest going for simpler imputation methods rather than model-based ones. A model-based approach can introduce other biases/problems that would potentially be harder to debug and more difficult to understand, so I would hold off there initially.
Forward-filling
This is my go-to method, to just get started. If only a few percent of the entire time-series are missing, something as simple as fill-forward might be acceptable. Even if the gaps in the data are randomly or evenly spaced along the timeline, this is also a reasonable approach. A nice property of this method is that it agrees with the theory on stochastic price-paths. These are usually modelled as Gaussian random variables, following the principles of Brownian motion. This all culminates in the idea of a Martingale Process, which states: at a particular time in the realized sequence, the expectation of the next value in the sequence is equal to the present observed value, even given knowledge of all prior observed values.
Random sampling
One way to fill a seemingly stochastic price-path would be to compute the mean and variance of the dataset, then draw random samples from that distribution to fill missing values. This should preserve the mean and variance of the dataset's distribution quite well (and fairly). Here it is important to realise that you may inadvertently be inducing a bias within that dataset, namely that you have used population statistics to alter the data. This means, depending on your model testing method later on, that you violate information flow through time. E.g. when predicting the 100th step, using the 99 preceding steps, it could be the case that the 99th step was one you imputed. Using the mean and variance of the entire dataset to do so has inherently encoded some information from the 100th step (and all following steps) into that 99th step. This is a subtle detail, but something to be aware of.
If you can put the time-series in a Pandas DataFrame, it might give you some more ideas by skimming the DataFrame's built-in methods to handle missing data. Here is a more in-depth analysis of possibilities with some good explanations.
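As a concrete sketch of the "last price per day, then forward-fill" idea in pandas (the column names 'price' and 'date' are assumed, not taken from your data):

import pandas as pd

df['date'] = pd.to_datetime(df['date'])
daily = (df.sort_values('date')
           .set_index('date')['price']
           .resample('D').last()     # closing (last) price per calendar day
           .ffill())                 # forward-fill the days with no observations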
H: How the term "R-squared" in VIF(variance inflation factor) is different from normal R-squared calculation? In normal calculation of R2 , more the value of R2 , it indicates variable represents more variance across the dataset. But in the calculation of VIF (variance inflation factor), higher the value of R2 , more will be the multi-collinearity present in the variables, hence more unstable the dataset is. My question is whether R2 is the same on both cases or will it be different calculation? If different, how? If its the same, why R2 value treated as good and bad in both cases? Can someone explain this for me? Thank you! AI: It is calculated in the same way. Instead of the response variable $y$ of the original model with numeric regressors $x_1, \dots, x_p$, the response of the VIF-model is $x_i$ and the regressors all other $x_j, j \ne i$.
H: Maths question on mean squared error being derived to bias and variance I am reading a book and have difficulty understanding the math on the bias-variance tradeoff. Below is the section that I am having trouble with: Given a set of training samples $x_1, x_2, ..., x_n$ and their targets $y_1, y_2, ..., y_n$, we want to find a regression function, $\hat{y}(x)$, which estimates the true relation $y(x)$ as correctly as possible. We measure the error of estimation, i.e. how good (or bad) the regression model is, by the mean squared error ($MSE$): $MSE = E[(y - \hat{y})^2]$.
I can derive the mean squared error with partial derivatives and the concept of slope. I also understand that $MSE$ is meant to minimize the total error, and I understand basic statistics on expected values. Yet, I have been stuck for a week trying to find the relevant math and statistical concepts behind this formula. The question is, what are the relevant math and statistical concepts behind this formula? For example, how does $MSE = E[(y-\hat{y})^2]$ become $E[(y - E[\hat{y}] + E[\hat{y}] - \hat{y})^2]$? Thank you!
I can see that the expression is unchanged after adding and subtracting $E[\hat{y}]$. Then the formula is expanded according to $(a+b)^2 = a^2 + 2ab + b^2$, where $2ab = E[2(y - E[\hat{y}])(E[\hat{y}] - \hat{y})]$. Why does $2ab$ become $2(y - E[\hat{y}])(E[\hat{y}] - E[\hat{y}])$?
AI: Because it is a constant: anything that is a constant remains unchanged by the expectation, which is why you can "move" it outside. For example, if $y$ is a constant or is known, the expectation does not affect it, so you can write $E[y] = y$. The only quantity that is unknown (random) is the estimator $\hat{y}$; that is why you end up with $E[\hat{y}]$ and not just $\hat{y}$. That is what is happening here: $y - E[\hat{y}]$ is a constant, so it can be pulled out of the expectation, and the remaining factor is $E[E[\hat{y}] - \hat{y}] = E[\hat{y}] - E[\hat{y}] = 0$, so the whole cross term vanishes. I found this on CrossValidated, which might be clearer.
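To make the step explicit, here is a short worked derivation, treating $y$ as fixed and $\hat{y}$ as the only random quantity, as in the question:

$$
\begin{aligned}
E[(y-\hat{y})^2] &= E\big[\big((y - E[\hat{y}]) + (E[\hat{y}] - \hat{y})\big)^2\big] \\
&= (y - E[\hat{y}])^2 + 2\,(y - E[\hat{y}])\,E[E[\hat{y}] - \hat{y}] + E[(E[\hat{y}] - \hat{y})^2] \\
&= (y - E[\hat{y}])^2 + 2\,(y - E[\hat{y}])(E[\hat{y}] - E[\hat{y}]) + E[(\hat{y} - E[\hat{y}])^2] \\
&= \underbrace{(y - E[\hat{y}])^2}_{\text{bias}^2} + \underbrace{E[(\hat{y} - E[\hat{y}])^2]}_{\text{variance}}
\end{aligned}
$$

The cross term vanishes because $E[E[\hat{y}] - \hat{y}] = E[\hat{y}] - E[\hat{y}] = 0$. If $y$ itself contains noise, an additional irreducible-error term appears in the decomposition, but that does not change this particular step.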
H: storing a huge dataset in h5py file format I am preparing the luna16 dataset for feeding into a CNN model. After reading all '.mhd' files and the labels (0, 1) in the annotated CSV file, I get a memory error. I know the problem is that the data needs more RAM than I have on my computer. Is there any solution to that?
AI: You can use h5py files, but because they store the full arrays (plus their dimensions), they take even more disk space, and if you fetch everything at once you still have to supply enough memory, which is not possible in your case; so you have to read them chunk by chunk. There is also a simpler approach for the CSV part: pandas can read a file in chunks via its chunksize argument, so try using that.
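A minimal sketch of both ideas, assuming the scans have already been cut into fixed-size patches; the file names, shapes, label column, and the load_mhd_patch helper are placeholders for your own preprocessing code:

import h5py
import numpy as np
import pandas as pd

n_samples, patch_shape = 100000, (64, 64, 64)   # assumed sizes

# 1) Write patches to HDF5 incrementally, so the full array is never in RAM
with h5py.File("luna16_patches.h5", "w") as f:
    images = f.create_dataset("images", shape=(n_samples,) + patch_shape,
                              dtype="float32", chunks=(32,) + patch_shape)
    labels = f.create_dataset("labels", shape=(n_samples,), dtype="int8")
    for i, row in enumerate(pd.read_csv("annotations.csv").itertuples()):
        images[i] = load_mhd_patch(row)          # placeholder: your .mhd reader
        labels[i] = row.label                    # placeholder column name

# 2) Read back mini-batches during training; only the requested slice is loaded
with h5py.File("luna16_patches.h5", "r") as f:
    for start in range(0, n_samples, 32):
        x_batch = f["images"][start:start + 32]
        y_batch = f["labels"][start:start + 32]

For a very large annotation CSV, pd.read_csv("annotations.csv", chunksize=10000) returns an iterator of DataFrames instead of loading the whole file at once.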
H: How do I remove outliers from my data? Should I use RobustScaler? I am aware I can use DecisionTree but I want to use XGBoost
How do I remove outliers from my data? Should I use RobustScaler? I am aware I can use a DecisionTree, but I want to use XGBoost. Please can you help me; this is a bit urgent. I am not sure how to do it. I have researched and looked at previous questions, but they did not work well and were not helpful. Thank you. Cheers
AI: First of all, you don't need to remove outliers, because decision-tree-family algorithms like XGBoost can handle them. Secondly, you can use the Tukey method (Tukey JW., 1977):

import numpy as np
from collections import Counter

def detect_outliers(df, n, features):
    outlier_indices = []
    # iterate over features (columns)
    for col in features:
        # 1st quartile (25%)
        Q1 = np.percentile(df[col], 25)
        # 3rd quartile (75%)
        Q3 = np.percentile(df[col], 75)
        # Interquartile range (IQR)
        IQR = Q3 - Q1
        # outlier step
        outlier_step = 1.5 * IQR
        # Determine a list of indices of outliers for feature col
        outlier_list_col = df[(df[col] < Q1 - outlier_step) | (df[col] > Q3 + outlier_step)].index
        # append the found outlier indices for col to the list of outlier indices
        outlier_indices.extend(outlier_list_col)
    # select observations containing more than n outliers
    outlier_indices = Counter(outlier_indices)
    multiple_outliers = list(k for k, v in outlier_indices.items() if v > n)
    return multiple_outliers

Outliers_to_drop = detect_outliers(data, 2, ["col1", "col2"])
data.loc[Outliers_to_drop]  # Show the outlier rows
# Drop outliers
data = data.drop(Outliers_to_drop, axis=0).reset_index(drop=True)

https://www.kaggle.com/yassineghouzam/titanic-top-4-with-ensemble-modeling
And thirdly, I suggest you try discretizing (binning) continuous variables instead of removing outliers for XGBoost.
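As a quick illustration of the third point, here is a minimal binning sketch with pandas, assuming the same data DataFrame as in the snippet above (the column name col1 and the number of bins are placeholders):

import pandas as pd

# Equal-frequency binning: extreme values simply land in the first/last bin,
# so their exact magnitude no longer matters to the model
data["col1_binned"] = pd.qcut(data["col1"], q=10, labels=False, duplicates="drop")

pd.cut would give equal-width bins instead, but equal-frequency bins are usually the safer choice when outliers are the concern.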
H: Are dimensionality reduction techniques useful in deep learning I have been working on machine learning and noticed that most of the time, dimensionality reduction techniques like PCA and t-SNE are used in machine learning, but I rarely notice anyone doing it for deep learning projects. Is there a specific reason for not using dimensionality reduction techniques in deep learning?
AI: It highly depends on your task, your data and your network. Basically, PCA is a linear transformation of the current features. Suppose your data are images, or any kind of data where locality is important. If you use PCA, you throw away that locality information. Consequently, it is clear that people usually do not use it with convolutional networks. For sequential tasks, it again depends on whether your agent is online or not. If it is online, you don't have the entire signal from the beginning. Even if you do have it, as in offline tasks, such dimension-reducing transformations again throw away sequential information, so I have to say I have not seen them used there. I guess their main use is in tasks that can be solved with simple MLPs, where you don't need to keep sequential or local information. In those tasks, PCA reduces highly correlated features, so the number of parameters of your model can be reduced significantly.
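For that last case, here is a minimal sketch of PCA feeding a simple MLP; the number of components and the network size are arbitrary choices for illustration, not recommendations:

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# PCA compresses the 64 correlated pixel features into 20 components,
# shrinking the first weight matrix of the MLP accordingly
model = make_pipeline(
    PCA(n_components=20),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0))
model.fit(X_train, y_train)
print(model.score(X_test, y_test))

Putting PCA inside the pipeline also means it is fitted on the training data only, so no information leaks from the test set.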
H: how to represent location-code as a feature in machine learning model? I am trying to predict the damage to buildings after an earthquake on a dataset which contains "district number" as a feature. I think the feature will have significant importance in predicting the label, but I am not sure how best to represent it. Any thoughts?
AI: You can get as creative as you want, but here are two general approaches that work for me.
Cluster the data into known geographical divisions and create dummy variables. For example, in the United States you can use zip codes.
Find the center of each known division (e.g. zip code), or of clusters produced by some unsupervised clustering method, and use the longitude and latitude as features.
How you choose to augment that information depends on what exactly you're trying to predict.
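A minimal sketch of the dummy-variable idea, assuming the dataset is a pandas DataFrame with a district_number column (both names are placeholders for your actual data):

import pandas as pd

# One-hot encode the district code so the model does not treat it as ordinal
dummies = pd.get_dummies(df["district_number"], prefix="district")
df = pd.concat([df.drop(columns="district_number"), dummies], axis=1)

With many distinct districts this creates a lot of columns, in which case the latitude/longitude approach from the second point may scale better.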
H: Error not decreasing in a 3 layer deep CNN using TensorFlow I'm trying to train a CNN to play an online game by feeding images of the game along with the keyboard input. By playing the game for some time and collecting the data, I gathered 342 images with size 110x42. I'm feeding these images into the network like so:

def convolutional_neural_network(x):
    weights = {'W_conv1': tf.Variable(tf.random_normal([3, 3, 1, 16])),
               'W_conv2': tf.Variable(tf.random_normal([5, 5, 16, 32])),
               'W_conv3': tf.Variable(tf.random_normal([5, 5, 32, 64])),
               'W_conv4': tf.Variable(tf.random_normal([5, 5, 64, 64])),
               'W_fc': tf.Variable(tf.random_normal([7 * 3 * 64, 1024])),
               'out': tf.Variable(tf.random_normal([1024, n_classes]))}

    biases = {'b_conv1': tf.Variable(tf.random_normal([16])),
              'b_conv2': tf.Variable(tf.random_normal([32])),
              'b_conv3': tf.Variable(tf.random_normal([64])),
              'b_conv4': tf.Variable(tf.random_normal([64])),
              'b_fc': tf.Variable(tf.random_normal([1024])),
              'out': tf.Variable(tf.random_normal([n_classes]))}

    x = tf.reshape(x, shape=[-1, 110, 42, 1])

    conv1 = tf.nn.relu(conv2d(x, weights['W_conv1']) + biases['b_conv1'])
    conv1 = maxpool2d(conv1)

    conv2 = tf.nn.relu(conv2d(conv1, weights['W_conv2']) + biases['b_conv2'])
    conv2 = maxpool2d(conv2)

    conv3 = tf.nn.relu(conv2d(conv2, weights['W_conv3']) + biases['b_conv3'])
    conv3 = maxpool2d(conv3)

    conv4 = tf.nn.relu(conv2d(conv3, weights['W_conv4']) + biases['b_conv4'])
    conv4 = maxpool2d(conv4)

    fc = tf.reshape(conv3, [-1, 7 * 3 * 64])
    fc = tf.nn.relu(tf.matmul(fc, weights['W_fc']) + biases['b_fc'])
    fc = tf.nn.dropout(fc, keep_rate)

    output = tf.sigmoid(tf.add(tf.matmul(fc, weights['out']), biases['out'], name='pred'))

    return output

def train_neural_network(x):
    prediction = convolutional_neural_network(x)
    cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=prediction, labels=y))
    optimizer = tf.train.AdamOptimizer(learning_rate=1).minimize(cost)

    hm_epochs = 6
    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())

        for epoch in range(hm_epochs):
            epoch_loss = 0
            for epoch_x, epoch_y, i in dataset.create_batches():
                epoch_x = epoch_x.reshape(-1, 4620)
                _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
                epoch_loss += c
            print('Epoch', epoch, 'completed out of', hm_epochs, 'loss:', epoch_loss)

        #correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        correct = tf.equal(tf.greater(prediction, 0.5), tf.equal(y, 1.0))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval({x: dataset.dataset['test']['x_test'], y: dataset.dataset['test']['y_test']}))

train_neural_network(x)

The errors are getting stuck at a particular value and floating up and down around this value by a small amount. I've tried increasing/decreasing the learning rate, improving the quality of images, changing the size of the batches... and nothing seems to make the network stable. Do you guys know what I'm doing wrong?
AI: Two things:
You're squashing the outputs via the sigmoid function before calculating the loss via tf.nn.sigmoid_cross_entropy_with_logits. This loss function takes the class logits as inputs, meaning you should pass in the output of the linear layer without any non-linear activation function applied afterwards. NOTE that applying a sigmoid function to the logits is a regularization technique used to diminish the effect of the model outputting very large logits, but in a simple three-layer CNN like you've got going on here, using this technique is going to hurt more than help.
You should be annealing the learning rate (rather than using a constant learning rate every epoch). As optimization starts to converge on a set of parameters, the parameter update magnitudes should get smaller and smaller; otherwise you're very likely to jump past and around where you want to be. Out of curiosity, what accuracy/loss are you getting?
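As a rough illustration of both points, here is a minimal sketch in TF1-style code; the variable names mirror yours, and the base learning rate and decay-schedule numbers are arbitrary examples, not tuned values:

# Keep the last layer linear: return the raw logits from the network
logits = tf.add(tf.matmul(fc, weights['out']), biases['out'], name='pred')

# The loss applies the sigmoid internally, so it must receive logits, not probabilities
cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=y))

# Anneal the learning rate instead of keeping it constant
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(1e-3, global_step,
                                           decay_steps=100, decay_rate=0.96, staircase=True)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost, global_step=global_step)

# Apply the sigmoid only where you need probabilities, e.g. for the accuracy check
probs = tf.sigmoid(logits)
correct = tf.equal(tf.greater(probs, 0.5), tf.equal(y, 1.0))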
H: classification performance metric for high risk medical decisions What is the best classification performance metric for risky medical treatments like surgery? For example, a patient should NOT be recommended surgery (negative) if he/she can be treated with medicine (positive). Does negative predictive value (TN/(TN+FN)) work for this situation?
AI: Minimizing false negatives is definitely a good strategy. You can also generalize to using a weighted F-measure, which lets you put a tunable weight on recall versus precision.
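For example, a minimal sketch of the weighted F-measure (F-beta) with scikit-learn, where beta > 1 puts more weight on recall, i.e. on avoiding false negatives; the labels below are made up purely for illustration:

from sklearn.metrics import fbeta_score, recall_score

y_true = [1, 1, 1, 0, 0, 1, 0, 1]   # 1 = treatable by medicine (positive)
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]

print(recall_score(y_true, y_pred))            # sensitivity: how few false negatives
print(fbeta_score(y_true, y_pred, beta=2.0))   # F2 emphasises recall over precision

Negative predictive value can still be reported alongside it; it is not a built-in scikit-learn metric, but it is easy to compute from sklearn.metrics.confusion_matrix.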