H: MSE to predict values 0 and 1 I am building a deep neural network to predict values 0 and 1. My training data contains class labels 0 and 1. Currently I am getting a cross-validation loss of around 0.25. How well is the model performing? AI: Predicting $0.5$ for all items in your case would also give you an MSE of $0.25$. That is because, independently of whether the true label is $0$ or $1$, the squared error for each example will be $0.5^2 = 0.25$. Your model is performing badly under an MSE measure when such a simple model, one that does not take the input data into account at all, can get the same score. However, it is debatable whether MSE gives you a useful metric here.
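A minimal sketch (assuming the labels are stored in a NumPy array) showing that the constant prediction 0.5 already reaches an MSE of 0.25 on any set of 0/1 labels, which is why 0.25 is the baseline to beat:
import numpy as np

y_true = np.array([0, 1, 1, 0, 1, 0])             # hypothetical 0/1 labels
y_pred = np.full_like(y_true, 0.5, dtype=float)   # constant prediction of 0.5

mse = np.mean((y_true - y_pred) ** 2)             # squared error is 0.25 for every example
print(mse)                                        # 0.25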
H: Basic method of optimizing hyperparameters I recently read the LIPO blog post on the dlib blog: http://blog.dlib.net/2017/12/a-global-optimization-algorithm-worth.html It mentions that it can be used for optimizing hyperparameters of e.g. metaheuristic algorithms like simulated annealing or genetic algorithms. I looked for info on how optimizing hyperparameters works in general, and the Wikipedia page is the most informative I found, but it doesn't answer my basic questions: https://en.m.wikipedia.org/wiki/Hyperparameter_optimization My question is just: what is the basic idea for optimizing hyperparameters? If I have some problem I'm trying to solve with simulated annealing, I know that the starting temperature and the cooldown rate are important in determining how well the algorithm does at finding a solution. I know that I could completely run the algorithm with one set of parameters, modify one of the parameters, completely run it again, then reset the parameters, modify the other parameter and run it again. This could give me a numerical gradient that I could use to modify the hyperparameters via gradient descent. However... at this point I had to run the whole algorithm 3 times just to get a single modification of the hyperparameters. I feel like I must be missing something obvious, because optimizing the hyperparameters would take many hundreds or thousands of times (or more) the cost of running the whole thing once, which doesn't seem useful at all. Can someone clue me in? AI: Hyperparameter optimization follows the same rules as model selection. Each set of hyperparameters effectively represents a different model you are considering, so the data you use to fit the model with some set of hyperparameters needs to be different from the data you use to evaluate which set of hyperparameters you want to ultimately use. A common approach for evaluating hyperparameters is nested cross-validation. This basically means that you need to treat hyperparameter selection as part of your model training process, and when you evaluate your model you evaluate the entire process front to back, i.e. treating hyperparameter tuning as a component of model training with respect to cross-validating over your training process. There's an excellent discussion of this in section 7.10.2 ("The Wrong and Right Way to Do Cross-validation") of Elements of Statistical Learning, which you can read online and/or download for free. The general idea is that if you're not careful, you can actually overfit to your evaluation data. Play with this demo to see for yourself. But yes, your intuition is correct: hyperparameter tuning is often very computationally expensive. One way people sometimes try to minimize this cost is by limiting the search space of feasible parameters to a small discrete set, e.g. grid search. Another approach is to use Gaussian processes or KDEs to approximate the cost surface in parameter space. You can even use a multi-armed bandit approach. Frankly, you can use pretty much any non-linear optimization technique for hyperparameter tuning, as long as you follow the appropriate cross-validation rules. The trick is figuring out the bang for your buck in terms of how much time/effort/compute you are willing to expend exploring parameters vs. the potential improvement to your model. Additionally, there are concerns about the limits of repeated cross-validation/holdout evaluations, but that's a whole other rabbit hole.
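A minimal sketch of nested cross-validation with scikit-learn (the dataset, model, and parameter grid are just placeholders): the inner loop (GridSearchCV) selects hyperparameters, while the outer loop (cross_val_score) evaluates the whole tune-then-fit procedure, so the reported score is not biased by the tuning itself.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Inner loop: hyperparameter search, treated as part of "training"
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
inner = GridSearchCV(SVC(), param_grid, cv=3)

# Outer loop: evaluates the entire search-plus-fit process
outer_scores = cross_val_score(inner, X, y, cv=5)
print(outer_scores.mean())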
H: Classify sentences containing typos into groups My data is a list of sentences, where each sentence contains between 1 and 4 words. These sentences are typed in manually, so some of them contain typos and some additional words such as GmbH, GER, etc. However, I do know the set of valid sentences. As an example, assume this valid set is given by {Hello human, Horse, Hello bird} and the data (where some sentences contain typos and extra words) is given by
Hello human
Horse
Hello human GmbH
Hello human GmbH, GER
Horse GmbH
Horse
Hello humn
Hell humn
Hello human
Hello bird
I would like to give each sentence above an ID 1, 2 or 3, where 1 is for Hello human, 2 is Horse and 3 is Hello bird. But due to the typos and extra words such as GmbH and GER, I cannot make a simple comparison between strings. Is there a numerical technique within NLP or a related field that I can use to achieve this task? AI: If I understand correctly, you're looking for string similarity. There are several techniques available; the simplest is "edit distance" (aka Levenshtein distance), which is the count of the minimum insertion/deletion/substitution/transposition operations needed to get from one string to the other. For your particular task, I suspect "Jaro-Winkler similarity" would be better. JW is similar to edit distance, but was specifically designed for "entity resolution" (i.e. "record linkage"), which looks like what you're trying to accomplish. You can see a short demonstration of how this would work here
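A minimal sketch of the matching idea, using Python's standard-library difflib ratio as the similarity measure (a stand-in for Jaro-Winkler; a dedicated library such as jellyfish could be swapped in): each noisy sentence gets the ID of the most similar valid sentence.
from difflib import SequenceMatcher

valid = {1: "hello human", 2: "horse", 3: "hello bird"}
noisy = ["Hello human GmbH, GER", "Horse GmbH", "Hell humn", "Hello bird"]

def similarity(a, b):
    # ratio() returns a value in [0, 1]; higher means more similar
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for sentence in noisy:
    best_id = max(valid, key=lambda i: similarity(sentence, valid[i]))
    print(sentence, "->", best_id)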
H: How to treat input that inherently has a tree structure? When you have a single vector, you use an MLP neural network. When you have a 2D structure, you use a CNN. When you have a sequence, you use an RNN. Now suppose you have preprocessed an instance and the result is a tree structure. Let's say for simplicity that the tree structure is always the same tree; only the node values differ among instances. What kind of neural network architecture would be required to consume the information of a tree structure while also leveraging the connections between the tree nodes? AI: As @Emre mentioned, an RNN is a good option. It's worth noting that if the number of possible nodes in each tree is the same, or at least has the same upper bound, you could use literally any architecture you want and just pass in the adjacency matrix. Alternatively, you could build an intermediate model to convert your graph into a graph embedding, and then, once again, you can do basically whatever you want with that. A pretty big piece of potentially important information here is what you are trying to accomplish, which could have significant consequences for how you want to represent your inputs.
H: How would the logistic functions associated with a multiclass logistic classifier be plotted in an X-Y plot? The logistic function is well known to be a good binary classifier, as can easily be shown with this image (let x be the dot product of [x1,x2] and [w1, w2]). I am currently learning how a multiclass logistic classifier works, and this time, with x1 and x2 as axes, three classes can be represented as follows (from this link). According to several sites, both the "one vs the rest" and the multinomial logistic classifiers would have one logistic regression for each of these 3 classes, with the difference of training them "together" or separately. If this is true, and we assume the next figure to be related to the previous example, I can't figure out how these 3 regressions (as in the first example) would be plotted in there. As far as I could figure out, in the "one vs the rest" case, I can see how to draw the logistic function for the blue and yellow categories against the rest, but how would it be for the red category against the rest? AI: If you are asking how to reproduce those plots, the following technique works to visualize the decision boundaries for any model:
Generate a dense grid of coordinates that fill your plotting area.
Score each point on your grid.
Plot your results, coloring observations based on your model's predictions. This serves as the background.
Overlay your test data as a scatter plot.
In your link, they generate the grid using np.meshgrid and construct the background as a contour plot. The lines representing the decision boundaries of each respective 1-vs-all classifier are plotted using a closed-form solution for logistic decision boundaries. EDIT: To answer the question in your comment, you don't have a single $X$ dimension, and consequently your model output doesn't correspond to a curve as simple as this: it's a 3D surface. The simplest solution would be to just apply the strategy I suggested earlier (and which is also described in your link), but instead of calling .predict() to construct the background coloration, you'd call .predict_proba() and use a sequential color map (e.g. the default viridis) so color intensity corresponds to your class likelihood, giving you something which should look like this. Alternatively, you could plot surface curves. Both of these solutions are projections of the $Y$ axis, $P(Y|x_1,x_2)$, onto the $X_1$-$X_2$ plane. If that isn't satisfactory, you could pass the scored results through PCA to combine your $X$ dimensions into a single feature -- call this $PC_1$ -- and then plot $PC_1$ vs. $P(Y|x_1,x_2)$. Personally I think this is significantly less informative, but it would give you an x vs. y plot.
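A minimal sketch of the grid-scoring approach described above (the model and data are placeholders; any fitted scikit-learn classifier with predict_proba would work): build a dense grid over the plotting area, score it, and draw class probabilities as the background with the data overlaid.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=300, centers=3, random_state=0)
clf = LogisticRegression().fit(X, y)

# Dense grid covering the plotting area
xx, yy = np.meshgrid(np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 300),
                     np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 300))
grid = np.c_[xx.ravel(), yy.ravel()]

# Probability of one class, drawn as the background; data overlaid on top
proba = clf.predict_proba(grid)[:, 0].reshape(xx.shape)
plt.contourf(xx, yy, proba, levels=20, cmap="viridis")
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolor="k")
plt.show()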
H: Binary classification toy problem I'm trying to build a toy model which can identify a constant difference between two variables (if variable1 - variable2 > 10 then 1 else 0). This should be quite a simple task for any regression model, but I want to solve it with an NN. However, all the simple NNs I built cannot give me more than 51% accuracy. Is there something that I do not understand? My code:
seed = 7
np.random.seed(seed)
x1 = np.arange(50000)
x2 = x1 + 10 + (0.5 - np.random.rand(len(x1)))
X = np.column_stack((x1, x2))
Y = (x2 - x1) > 10
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
train_X, test_X, train_y, test_y = train_test_split(X, Y, train_size=0.9, random_state=0)
model = Sequential()
model.add(Dense(2, input_dim=2, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer=RMSprop(lr=0.001), loss=binary_crossentropy, metrics=[binary_accuracy])
history = model.fit(train_X, train_y, epochs=1000, batch_size=10, validation_data=(test_X, test_y), verbose=1)
AI: Your problem is that neural networks work poorly when the input is not scaled to a simple range. A usual choice is to scale and offset each column so that it has mean 0 and standard deviation 1. In your case, x1 and x2 vary from 0 to 49999 and roughly 10 to 50009. This range for inputs will cause lots of numeric issues. With a balanced dataset as you have, 51% accuracy is basically just guessing (within experimental error), so the network has learned nothing. Try again with scaling - e.g.
x1 = (x1 - 25000) / 14433
x2 = (x2 - 25000) / 14433
I have tested your code with this addition, and it gains 100% validation accuracy within the first epoch. If you want to assess other values in testing later, you will need to scale them in the same way. Note your predictions may be off in testing when x1 and x2 are not close to 10 apart, because you have only trained with examples which are close to exactly 10 apart. How the network behaves when this is not the case - e.g. for inputs of x1 = 100 and x2 = 1000, or x1 = 90 and x2 = 15 - may not generalise well compared to the original comparison function.
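A minimal sketch of applying the suggested scaling before training, using scikit-learn's StandardScaler so the same transform can be reused on the test split (the Keras model itself is unchanged from the question):
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

x1 = np.arange(50000)
x2 = x1 + 10 + (0.5 - np.random.rand(len(x1)))
X = np.column_stack((x1, x2)).astype(float)
Y = ((x2 - x1) > 10).astype(int)

train_X, test_X, train_y, test_y = train_test_split(X, Y, train_size=0.9, random_state=0)

# Fit the scaler on the training data only, then apply it to both splits
scaler = StandardScaler()
train_X = scaler.fit_transform(train_X)
test_X = scaler.transform(test_X)
# train_X / test_X now have roughly zero mean and unit variance per column
# and can be fed to the Keras model from the question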
H: unique column value in python numpy My array looks like this:
a=np.array([[ 25, 29, 19, 93],
 [ 27, 59, 23, 345],
 [ 24, 426, 15, 593],
 [ 24, 87, 50.2, 139],
 [ 13, 86, 12.4, 139],
 [ 13, 25, 85, 142],
 [ 62, 62, 68.2, 182],
 [ 27, 25, 20, 150],
 [ 25, 53, 71, 1850],
 [ 64, 67, 21.1, 1570],
 [ 64, 57, 73, 1502]])
I want to return the lowest value of column 2 based on the unique values of column 0; column 0 should contain only unique values. I tried the following code, but it was not giving me the exact result. Can someone help me sort this out? Thanks.
sidx = np.lexsort(a[:,[2,0]].T)
dx = np.append(np.flatnonzero(a[2:,0] > a[:-2,0]), a.shape[0]-1)
result = a[sidx[idx]]
print result
I want to get a result like [25... 27 24 13 62 64...]
a=[[196512 28978 Decimal('12.7805170314276')]
 [196512 34591 Decimal('12.8994111000000')]
 [196512 13078 Decimal('12.9135746000000')]
 [196641 114569 Decimal('12.9267705000000')]
 [196641 118910 Decimal('12.8983353775637')]
 [196641 100688 Decimal('12.9505091000000')]]
This is part of the big list I used. With
df = pd.DataFrame(a)
df.columns = ['a','b','c']
df.index = df.a.astype(str)
dd=df.groupby('a').min()['c']
I am getting
195556    12.7805170314276
195937    12.7805170314276
196149    12.7805170314276
196152    12.7805170314276
196155    12.7805170314276
196262    12.7805170314276
AI: Here's an easy solution. The sort order changes, but that shouldn't be difficult to address if you really care:
import pandas as pd
df = pd.DataFrame(a)
df.columns = ['a','b','c','d']
df.index = df.a.astype(str) # to preserve correspondence
df.groupby('a').min()['b']

a
13.0    25.0
24.0    87.0
25.0    29.0
27.0    25.0
62.0    62.0
64.0    57.0
Name: b, dtype: float64
Edit: I think you meant to name your array y instead of a. This works for me:
from decimal import Decimal
y=np.array([[196512, 28978, Decimal('12.7805170314276')],
 [196512, 34591, Decimal('12.8994111000000')],
 [196512, 13078, Decimal('12.9135746000000')],
 [196641, 114569, Decimal('12.9267705000000')],
 [196641, 118910, Decimal('12.8983353775637')],
 [196641, 100688, Decimal('12.9505091000000')]])
df = pd.DataFrame(y)
df.columns = ['a','b','c']
df.index = df.a.astype(str)
dd=df.groupby('a').min()['c']

In [210]: dd
Out[210]:
a
196512    12.7805170314276
196641    12.8983353775637
Name: c, dtype: object
H: How can I fit categorical data types for random forest classification? I need to find the accuracy of a training dataset by applying the random forest algorithm. But my dataset contains both categorical and numeric data types. When I tried to fit the data, I got an error: 'Input contains NaN, infinity or a value too large for dtype('float32')'. Maybe the problem is with the object data types. How can I fit categorical data without transforming it when applying RF? Here's my code. AI: You need to convert the categorical features into numeric attributes. A common approach is to use one-hot encoding, but that's definitely not the only option. If you have a variable with a high number of categorical levels, you should consider combining levels or using the hashing trick. Sklearn comes equipped with several approaches (check the "see also" section): One Hot Encoder and Hashing Trick. If you're not committed to sklearn, the h2o random forest implementation handles categorical features directly.
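A minimal sketch of one-hot encoding the categorical columns with pandas before fitting the random forest (the column names here are made up for illustration):
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({
    "age": [25, 32, 47, 51],                         # numeric feature
    "job": ["teacher", "nurse", "teacher", "chef"],  # categorical feature
    "label": [0, 1, 0, 1],
})

X = pd.get_dummies(df.drop(columns="label"))  # expands "job" into 0/1 indicator columns
y = df["label"]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))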
H: After a Feature Scaling do i have the same initial information? I'm studying the gradient descent algorithm for single hidden layer neural networks. Suppose that I have an initial dataset and then I use mean normalization in order to scale the features. Why mathematically do the normalized features carry the same information of the initial features? AI: I assume by mean normalization, you mean scaling each feature by subtracting the mean and dividing by standard deviation: $$ x_{\text{scaled}} = \frac{x - \bar{x}}{\sigma} $$ where $x$ is a feature. Even though you are changing all of the values of $x$, they are each being scaled by the same amount - initially, a translation by a constant (subtracting the mean), and then scaling by a constant (dividing by standard deviation). Here's a two-dimensional, randomly generated dataset generated with scikit-learn's make_blobs function (left) and the scaled version using the above equation for the $x$- and $y$-coordinates (right): The $x$- and $y$-values for each point have changed, but they are still all in the same place relative to each other. If you look closely, you can see that the structure of the data is identical, even though it has been scaled a bit, and you could go back to something that looks more like the original by simply 'zooming in' on the data. Because the structure of the data is the same, we say no information was lost. Now consider a transformation where we only take the $x$-value, and set all $y$-values to $0$: The structure of the data has changed, and there is no way to return to the original data by scaling or stretching space uniformly, so we say information was lost here. This is an extreme example, but hopefully it illustrates the point. One way to think about it is to think if it is more or less difficult for a classifier to distinguish between the classes after the transformation. In the first case, we can draw a line that perfectly separates the two clusters just as easily with the original data or the normalised data, but in the second case, there is no such line that separates the transformed data. By the way, if you normalise each example rather than each feature (as asked in your comment), for this data, you end up with something that looks like this: where all points land on either $(-1,1)$ or $(1,-1)$. This makes sense, because normalisation makes the range of the values span from $-1$ to $1$. When there are only two dimensions, one of them has to become $-1$ and the other has to become $1$. Hopefully it's fairly obvious that information is lost here, and it's generally not a good idea to do this. This is quite a hand-wavy explanation and doesn't really cover any actual information theory concepts, but hopefully it gives you some intuition for this. If you want to dive deeper into the mathematical side of things, have a look at the Wikipedia article for information theory.
H: Converting similarity matrix before inputting to t-sne I have a cosine similarity matrix that I want to adjust so it can be used as input to t-SNE. I found the following explanation in a FAQ. As mentioned there, I have set the diagonal to zero. What does it mean to symmetrize the pairwise similarity matrix, and to normalize it to sum up to one? Can I use a pairwise similarity matrix as input into t-SNE? Yes you can! For instance, we successfully applied t-SNE on a dataset of word association data. Download the Matlab implementation, make sure the diagonal of the pairwise similarity matrix contains only zeros, symmetrize the pairwise similarity matrix, and normalize it to sum up to one. You can now use the result as input into the tsne_p.m function. AI: You need to scale the values by some constant factor so that the sum of every entry in the matrix results in 1.0. You can achieve this by using mat /= mat.sum(), where mat is your matrix.
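A minimal sketch of the full preparation described in the FAQ (assuming mat is a square NumPy array of pairwise similarities): zero the diagonal, symmetrize by averaging with the transpose, and normalize so all entries sum to one.
import numpy as np

mat = np.random.rand(5, 5)        # placeholder similarity matrix

np.fill_diagonal(mat, 0.0)        # diagonal must contain only zeros
mat = (mat + mat.T) / 2.0         # symmetrize: mat[i, j] == mat[j, i]
mat /= mat.sum()                  # normalize so the entries sum to 1

print(mat.sum())                  # 1.0 (up to floating-point error)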
H: after grouping to minimum value in pandas, how to display the matching row result entirely along min() value The dataframe contains >> df A B C A 196512 196512 1325 12.9010511000000 196512 196512 114569 12.9267705000000 196512 196512 118910 12.8983353775637 196512 196512 100688 12.9505091000000 196795 196795 28978 12.7805170314276 196795 196795 34591 12.8994111000000 196795 196795 13078 12.9135746000000 196795 196795 24173 12.8769653100000 196341 196341 118910 12.8983353775637 196341 196341 100688 12.9505091000000 196641 196641 28972 12.7805170314276 196641 196641 34591 12.8994111000000 196346 196341 118910 12.8983353775637 196346 196341 100688 12.9505091000000 196646 196641 28980 12.7805170314276 196646 196641 34591 12.8994111000000 I tried to get minimum value for each group and display using the following code, df.columns = ['a','b','c'] df.index = df.a.astype(str) dd=df.groupby('a').min()['c'] it gives the result 196512 12.7805170314276 196795 12.7805170314276 196341 12.7805170314276 196346 12.7805170314276 but after grouping, I want to get the row with the minimum 'c' value, grouped by column 'a' and display that full matching row in result like, 196512 118910 12.8983353775637 196795 28978 12.7805170314276 196341 28972 12.7805170314276 196346 28980 12.7805170314276 AI: You can do this. But I doubt the efficiency. >> import pandas as pd >> df = pd.DataFrame({'a':[1,1,3,3],'b':[4,5,6,3], 'c':[1,2,3,5]}) >> df a b c 0 1 4 1 1 1 5 2 2 3 6 3 3 3 3 5 >> df[df['c'].isin(df.groupby('a').min()['c'].values)] a b c 0 1 4 1 2 3 6 3
H: Ordinal Integer variable vs Continuous Integer variable I am working on the Titanic dataset. I have one feature, Pclass, which I understand is an ordinal variable having values 1, 2 and 3. I have created a new feature, cabin_int, from the feature Cabin, which is essentially the number of cabins allotted to a passenger. So it has values like 0, 1, 2, 3 and 4. Now this new feature is not ordinal; it is just a continuous variable taking only integer values. My question is: how does a machine learning algorithm understand the difference between the two if I pass these two columns (Pclass and cabin_int) as they are during training of my model? If some more feature engineering needs to be done, please tell me. AI: There is a rule called "there is no free lunch". It means that there isn't a single learning algorithm that solves all problems. You, as a machine learning practitioner, should decide when and how to use which algorithm. Suppose that you want to recognize faces. This is a learning problem where, if you increase the amount of training data, you will get better results. In such cases neural nets and deep nets are highly recommended. In this case it is not logical to use a non-linear SVM because it will be very costly and you may not even get good answers; the reason is that deep nets care about local patterns while an SVM considers the whole input pattern simultaneously. Actually, in your case I guess your data is categorical. For categorical data, people often use decision trees. To give an example, I once decided to train a simple MLP to distinguish whether an input pattern is a correct position for the 8-queens problem. I solved the puzzle using a genetic algorithm and used the solutions to make training data for the net. The data I fed to the net was categorical to some extent. The net was very good on the training data, but input features that were only slightly different from the training data had a bad recall rate. When I trained a decision tree instead, I got much better results. Which algorithm to use depends on your task and your input features.
H: Gradient Exchange I read a paper on deep neural network compression (link: https://openreview.net/forum?id=SkhQHMW0W) and came across the term "gradient exchange". I tried making sense of it but couldn't exactly understand what it intuitively refers to. What does the term gradient exchange mean? AI: Gradient exchange occurs in distributed learning systems that perform gradient descent, when one part of the distributed system needs to use the gradient values from another part in order to complete a task. For example, you may distribute a large data set between multiple nodes, and want to calculate a gradient descent step as part of optimisation. One way to do so is to calculate a subset of batch gradients on each node and collate them at a single node in order to alter parameters synchronously. This means it is necessary to fetch gradients from all nodes into a single node so that a combined gradient for some weight parameters can be calculated and the parameters updated consistently in the update step. Gradient exchange is just a term to describe that event - node A needs some gradients that node B has calculated, so they are requested (or pushed) and have to travel between the nodes. This is a relatively slow I/O process - it is necessary for a distributed system to work, but for high performance you want to minimise time spent moving the data. Other data (such as the parameters) also needs to be shared between nodes. This particular piece of data regarding the gradients is singled out in the paper because the authors have discovered a way to compress it significantly without losing performance of the learning algorithm. This is partly because gradients can be treated approximately in the first place. Many learning algorithms further adjust or normalise gradients after they have been calculated, so using super-precise values is not as important as you might think. There may also be clever ways of splitting the update work so that each node only needs some of the gradients and only updates some of the parameters at each step. That will keep each node's CPU busy, possibly at the expense of more complicated communications. I do not know the details of any optimised distributed learning system in order to tell you the precise data exchanges and optimisations taking place. There are likely to be a few variations possible, depending on the framework and which algorithm is being implemented.
H: How to extract specific parts of text from a string? For example: Here is a textual input: "ALL imagery SINCE 1952 20 MULE T aerial BOOSTER & Multi-Purpose Neutralizer.MAY BE “AHMFLL |-E SWALL """""" -5 NETWT4LBS1 DZ (65 OZ) 1.84 kg " Output desired: 4 LBS 65 OZ 1.84 kg " AI: It is a specific pattern that can be detected: a number followed by a unit name of two or three letters. In other words, detect the numbers first, and then keep the numbers that are followed by two or three letters. Moreover, you can check whether these letters are in a predefined list of units or not.
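A minimal sketch of that idea with a regular expression (the unit list is just an illustrative assumption and would need extending for real data): match a number, optional whitespace, then a known unit.
import re

text = "NETWT4LBS1 DZ (65 OZ) 1.84 kg"
units = ["lbs", "lb", "oz", "kg", "g"]          # hypothetical unit whitelist

# number (with optional decimal part), optional whitespace, then one of the units
pattern = re.compile(r"(\d+(?:\.\d+)?)\s*(" + "|".join(units) + r")", re.IGNORECASE)
matches = pattern.findall(text)
print(matches)   # [('4', 'LBS'), ('65', 'OZ'), ('1.84', 'kg')]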
H: Which one of these tasks will benefit the most from Spark? My company processes data (I am only an intern). We primarily use Hadoop. We're starting to deploy Spark in production. Currently we have two jobs, and we will choose just one to begin with on Spark. The tasks are: The first job does analysis of a large quantity of text to look for ERROR messages (grep). The second job does machine learning and calculates model predictions on some data in an iterative way. My question is: which one of the two jobs will benefit from Spark the most? Spark relies on memory, so I think that it is more suited to machine learning. The quantity of data isn't that large compared to the logs job. But I'm not sure. Can someone here tell me if I have neglected some piece of information? AI: I think the second job will benefit more from Spark than the first one. The reason is that machine learning and predictive models often run multiple iterations over the data. As you have mentioned, Spark is able to keep data in memory between two iterations, while Hadoop MapReduce has to write and read data to the file system. Here is a good comparison of the two frameworks: https://www.edureka.co/blog/apache-spark-vs-hadoop-mapreduce
H: What is the shape of conv3d and conv3d_transpose? I want to build a GAN with coloured pictures. This means I need a three-dimensional input and therefore I would like to use conv3d and conv3d_transpose. Unfortunately, in the TensorFlow documentation I can't find any formula for the output shape. Can anyone give me a hint on how to find the shape of these functions' results? AI: Although your input data is three-dimensional, you have to use Conv2D for your task. I guess Conv3D is used for data with a temporal characteristic; yours is just a simple picture. To illustrate why you should use Conv2D, suppose your input image is 224 * 224 * 3 and you employ a Conv2D layer with 10 filters. You have to specify the stride and padding in order to determine the output shape, and you have to specify the height and width of your filters, also known as kernels; the filter size will affect the output size if you set padding to 'valid'. There is one point to note here. Suppose you have specified a 10 * 10 filter; if the input shape were 224 * 224 * 1, each filter would be of size 10 * 10 * 1 to fit the input area. Now that the input is of size 224 * 224 * 3, the size of each kernel is 10 * 10 * 3 to fit the input volume. Note that in all cases the output of each convolution operation (better to say, cross-correlation) is a scalar. For more information take a look at the videos here, and for your case I encourage you to watch Convolution Over Volume.
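A minimal sketch of the standard output-shape calculation for a 2D convolution (the same formula applies per spatial dimension for 3D convolutions): with input size n, filter size f, padding p and stride s, the output size is floor((n + 2p - f) / s) + 1.
def conv_output_size(n, f, p=0, s=1):
    # floor((n + 2p - f) / s) + 1 per spatial dimension
    return (n + 2 * p - f) // s + 1

# Example: 224x224x3 input, 10x10 filters, 'valid' padding (p=0), stride 1
h = conv_output_size(224, 10)   # 215
w = conv_output_size(224, 10)   # 215
print(h, w)                     # with 10 filters the output volume is 215 x 215 x 10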
H: Scikit alternative for categorical data modeling? So, sklearn doesn't support categorical data in its models. Is there a known alternative for categorical data modeling (such as random forests, etc.) for Python? AI: There are definitely ways to process your data to make categorical data compatible with sklearn (e.g. one-hot encoding). An alternative you can look into is h2o, which supports categorical features natively (although it doesn't offer the breadth of models of sklearn).
H: Find similar observations in two datasets I have two datasets A and B. What I would like to do is, for each observation in A, find the 5 observations from B that are closest to it. How should I start? Thank you for your help! AI: Look at the "unsupervised nearest neighbors" algorithm. This algorithm needs the records to first be expressed as vectors, so that it makes sense to talk about the distance between two points. For each point in dataset A, you can then look for the K nearest neighbors from dataset B, after expressing all observations in a common vector space. You will have to handle categorical columns correctly (say, using one-hot encoding), as there's no concept of (direct) distance between categorical data. The Python scikit-learn library has a good implementation of this algorithm. Reading the API documentation is a good place to start.
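A minimal sketch with scikit-learn's NearestNeighbors (A and B are placeholder arrays; in practice both would be the numeric, scaled representations of the two datasets): fit on B, then query the 5 nearest neighbors for every row of A.
import numpy as np
from sklearn.neighbors import NearestNeighbors

A = np.random.rand(100, 4)   # observations to match
B = np.random.rand(500, 4)   # candidate observations

nn = NearestNeighbors(n_neighbors=5).fit(B)
distances, indices = nn.kneighbors(A)

# indices[i] holds the row numbers in B of the 5 closest matches to A[i]
print(indices[0], distances[0])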
H: Performing machine learning on small datasets As a beginner at machine learning, I wanted to work on a small project in which the dataset has only 80 rows and 5 columns. The dataset I am working with is related to a medical condition, with 4 columns as biomarkers, and the 5th column indicates whether the row (a patient) has the condition or not. So far, I have fitted the following 5 models (with accuracy and MCC scores): KNN (Accuracy: 43.5%, MCC:-0.164) Logistic Regression (Accuracy: 65.2%, MCC: .312) SVM (Accuracy: 60.9%, MCC: .214 Random Forest (Accuracy: 86.95%, MCC: .769) Decision trees (Accuracy: 65.2%, MCC: .312) I have used 5-fold cross validation to prevent overfitting, and yet most of my models are underperforming. I was also considering ensembling and bootstrapping, but with these lacking results, I am not sure how effective they would be. Do you have any tips concerning either: Better algorithms for small datasets Improvements I could make on the algorithms I have so far Another method (e.g. Regularization) AI: With 5-fold cross validation, in each fold you're reducing your training dataset to 64 observations and evaluating against 16 observations. Assuming your data is balanced and you're stratifying folds, you're only giving your models 32 observations from each class to learn from, and misclassification of a single test set observation results in a 6.25% point change in accuracy for that fold. Even if it's classified correctly in the other folds, that single misclassification will still have a 1.25% point change on the across-folds average. Yikes. It should be no surprise your models aren't performing well: not only do they not have much data to train on, but your evaluation methodology is extremely unforgiving. So yeah, your data is pretty darn small. Off the top of my head, I can think of a handful of general strategies you can potentially use to address this: Use more of your data for training. Instead of 5-fold CV, try leave-one-out or bootstrap. Generate fake data. This probably won't get you very far, but it's at least an option. Check out the SMOTE algorithm. Transfer learning. Depending on what you're doing, it might be possible to leverage a pre-trained model and then tweak it slightly to suit your needs. If this is an option it can be extremely powerful, but chances are this won't be something you can reasonably pursue. Anyway, here's an article demonstrating this on another small data medical example: a pre-trained general purpose image classifier was trained to detect cancer from just 600 images! https://arxiv.org/abs/1711.10752 Go Bayesian. Bayesian methods allow you to incorporate outside information as a "prior belief". If you have subject matter expertise (or better yet, citations) that you can use to set your expectations for values of the model parameters or hyperparameters, bayesian methods will allow you to incorporate that information explicitly, and this can in turn help your model find good parameters faster since it doesn't need to learn everything about your problem directly from the little data available to it. If you're not careful, this can result in you giving yourself the model you want to have, in which case your evaluation metrics may not be as informative as you think they are. Here there be dragons: this approach is powerful, but it can be hard to do correctly, and playing with the prior is basically an invitation for people to be skeptical of your methods even if what you're doing is sound. Get more data. 
I'm guessing this isn't an option or you would have done so already. But if there's a chance it's out there: go find it. Try poking around the literature associated with this condition, maybe you'll get lucky and find a public dataset. If you're feeling bold, you could try emailing other researchers and just ask them politely if you can use their data.
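A minimal sketch of the leave-one-out evaluation suggested above (the classifier and the random data are placeholders shaped like the question's dataset), which trains on 79 of the 80 rows in every split instead of the 64 used by 5-fold CV:
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.ensemble import RandomForestClassifier

# Placeholder data shaped like the question: 80 rows, 4 biomarkers, binary label
X = np.random.rand(80, 4)
y = np.random.randint(0, 2, size=80)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(scores.mean())   # each split trains on 79 rows and tests on the held-out one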
H: When to use different Word2Vec training approaches? So I am learning Word2Vec for the first time and my question is quite basic: How to know what approach to use? I.e, Word2Vec in Tensorflow or Word2Vec trained with Gensim ? In what cases would implementing it through the more manual first approach be useful vs. the second one? If there is already an easier way to train a word2vec model using gensim, why is that not used always? Furthermore, what is the benefit in using a pre-trained model like the Google News dataset? What happens when there are words that are not included in the news dataset? Sorry if this question is basic, I just want to get a clearer grasp of the overall picture. AI: The advantage of using pre-trained vectors is being able to inject knowledge from a larger corpus than you might have access to: word2vec has a vocabulary of 3 million words and phrases trained on the google news dataset comprising ~100 billion tokens, and there's no cost to you in training time. In addition, they are fast and easy to use, just load the embeddings and look them up. It's straightforward to substitute different sets of pre-trained vectors (fastText, GloVe etc) as one might be more suited to a particular use case. However, when your vocabulary does not have an entry in word2vec, by default you'll end up with a null entry in your embedding layer (depending on how you handle it). You'll need to consider the scale/impact and how to address it (keep/discard/consider online training). As yazhi says a decision must be made about how to handle out of vocabulary words. The advantage of learning word vectors from your own corpus is that they would be derived from your dataset, so if you have reason to believe that the composition of your data is significantly different from the corpus used for the pre-trained vectors then that may result in better downstream performance. However, that comes at a cost in time taken to train your own vector representations.
H: Can I use cosine similarity as a distance metric in a KNN algorithm Most discussions of KNN mention Euclidean, Manhattan and Hamming distances, but they don't mention the cosine similarity metric. Is there a reason for this? AI: Short answer: cosine distance is not the overall best performing distance metric out there. Although similarity measures are often expressed using a distance metric, similarity is in fact a more flexible measure as it is not required to be symmetric or fulfill the triangle inequality. Nevertheless, it is very common to use a proper distance metric like the Euclidean or Manhattan distance when applying nearest neighbour methods, due to their proven performance on real world datasets. They will therefore often be mentioned in discussions of KNN. You might find this review from 2017 informative; it attempts to answer the question "which distance measures to be used for the KNN classifier among a large number of distance and similarity measures?" They also consider inner-product metrics like the cosine distance. In short, they conclude that (no surprise) no optimal distance metric can be used for all types of datasets, as the results show that each dataset favors a specific distance metric, and this result complies with the no-free-lunch theorem. It is clear that, among the metrics tested, the cosine distance isn't the overall best performing metric and even performs among the worst (lowest precision) at most noise levels. It does, however, outperform the other tested distances in 3/28 datasets. So can I use cosine similarity as a distance metric in a KNN algorithm? Yes, and for some datasets, like Iris, it should even yield better performance (p. 30) compared to the Euclidean distance.
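A minimal sketch showing that scikit-learn's KNN classifier accepts the cosine metric directly (for this metric it falls back to the brute-force neighbor search, which is fine for small datasets like Iris):
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

knn_cosine = KNeighborsClassifier(n_neighbors=5, metric="cosine")
knn_euclid = KNeighborsClassifier(n_neighbors=5, metric="euclidean")

print(cross_val_score(knn_cosine, X, y, cv=5).mean())
print(cross_val_score(knn_euclid, X, y, cv=5).mean())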
H: Beginner math books for Machine Learning I'm a Computer Science engineer with no background in statistics or advanced math. I'm studying the book Python Machine Learning by Raschka and Mirjalili, but when I tried to understand the math behind machine learning, I wasn't able to follow the great book that a friend suggested to me, The Elements of Statistical Learning. Do you know any easier statistics and math books for machine learning? If you don't, how should I proceed? AI: Although you asked for books, I recommend the following courses, in that order, for understanding the statistics used for machine learning and other tasks in data science. They are free.
Learn Statistics - Intro to Statistics Course
Intro to Descriptive Statistics
Inferential Statistics: Learn Statistical Analysis
If I had to recommend a book, I would recommend the following one, which is free under a CC license. It has nice examples and is very practical; moreover, there is a lot of code in it which helps you get a feel for statistics through real-world examples.
Think Python by Allen B. Downey
Python Data Science Handbook
Also, the following links may help:
From Google Itself
Good And Concise
H: Use cases for graph algorithms and graph data structures in finance and banking I work in a bank and most data is in tabular format in relational databases. I have been reading about graph algorithms (PageRank), graph libraries (Spark GraphX) and graph databases (Neo4j). I would like to pick a use case from my field (finance). What use cases would be suitable for graph algorithms and databases? AI: There are many use cases of graph theory in the finance industry, and this is a very broad question. As Emre said, it can be used for fraud detection, risk modelling, economic networks, etc. The links below can give you a better understanding of the different applications; please go through them:
Applications of Graph Theory In Finance
Graph Theory for Systemic Risk Models
Graph theory: connections in the market
Analysis of Equity Markets: A Graph Theory Approach
Portfolio Diversification
From Graph Theory to Models of Economic Networks. A Tutorial
Do let me know if you need any additional information.
H: Is my understanding of On-Policy and Off-Policy TD algorithms correct? After reading several questions here and browsing some pages on the topic, here is my understanding of the key difference between Q-learning (as an example of off-policy) and SARSA (as an example of on-policy) methods. Please correct me if I am mistaken. 1) With an on-policy algorithm we use the current policy (a regression model with weights W, and ε-greedy selection) to generate the next state's Q. 2) With an off-policy algorithm we use a greedy version of the current policy to generate the next state's Q. 3) If an exploration constant ε is set to 0, then the off-policy method becomes on-policy, since Q is derived using the same greedy policy. 4) However, an on-policy method uses one sample to update the policy, and this sample comes from online world exploration, since we need to know exactly which actions the policy generates in the current and next states, while an off-policy method may use experience replay of past trajectories (generated by different policies) to provide a distribution of inputs and outputs to the policy model. Source: https://courses.engr.illinois.edu/cs440/fa2009/lectures/Lect24.pdf One more reading: http://mi.eng.cam.ac.uk/~mg436/LectureSlides/MLSALT7/L3.pdf AI: 1) With an on-policy algorithm we use the current policy (a regression model with weights W, and ε-greedy selection) to generate the next state's Q. Yes. To avoid confusion, it may be better to use the terms "behaviour policy" for the policy that controls current actions and "target policy" for the policy being evaluated and/or learned. 2) With an off-policy algorithm we use a greedy version of the current policy to generate the next state's Q. Sort of. The only requirement for an algorithm to be off-policy is that the target policy is different from the behaviour policy. The usual target policy in Q-learning is not necessarily a greedy version of the behaviour policy, but is the maximising policy over Q. However, if the behaviour policy is ε-greedy over Q, and adapting to updates in Q, then yes, your statement holds. 3) If an exploration constant ε is set to 0, then the off-policy method becomes on-policy, since Q is derived using the same greedy policy. This is true when comparing SARSA with Q-learning, but may not hold when looking at other algorithms. This greedy-only action selection would not be a very efficient learner in all environments. 4) However, an on-policy method uses one sample to update the policy, and this sample comes from online world exploration, since we need to know exactly which actions the policy generates in the current and next states, while an off-policy method may use experience replay of past trajectories (generated by different policies) to provide a distribution of inputs and outputs to the policy model. Experience replay is not directly related to on-policy vs off-policy learning. Technically though, yes: when the experience is stored and used later, that makes it off-policy for SARSA if the Q values have changed enough between the sample and the current parameters of the learning agent. However, you will see experience replay used more often with off-policy methods, since off-policy learners that bootstrap (i.e. use the Q value of the next state/action to help estimate the current Q value) are less stable when used with function approximators. Experience replay helps to address that problem.
H: Why is ReLU used as an activation function? Activation functions are used to introduce non-linearities in the linear output of the type w * x + b in a neural network. This I am able to understand intuitively for activation functions like sigmoid. I understand the advantages of ReLU, which is avoiding dead neurons during backpropagation. However, I am not able to understand why ReLU is used as an activation function if its output is linear. Doesn't the whole point of an activation function get defeated if it doesn't introduce non-linearity? AI: In mathematics (linear algebra), a function $f: A \rightarrow B$ is considered linear whenever, for every $x$ and $y$ in the domain $A$, it has the following property: $f(x) + f(y) = f(x+y)$. By definition the ReLU is $\max(0,x)$. Therefore, if we restrict the domain to $(-\infty, 0]$ or to $[0, \infty)$, then the function is linear on that piece. However, it's easy to see that $f(-1) + f(1) \neq f(0)$. Hence, by definition, ReLU is not linear. Nevertheless, ReLU is so close to linear that this often confuses people, who wonder how it can be used as a universal approximator. In my experience, the best way to think about them is like Riemann sums. You can approximate any continuous function with lots of little rectangles, and ReLU activations can produce lots of little rectangles. In fact, in practice, ReLU can make rather complicated shapes and approximate many complicated domains. I also feel like clarifying another point. As pointed out by a previous answer, neurons do not die with the sigmoid; rather, their gradients vanish. The reason for this is that the maximum of the derivative of the sigmoid function is $0.25$. Hence, after so many layers you end up multiplying these gradients, and the product of very small numbers less than 1 tends to go to zero very quickly. Hence, if you're building a deep learning network with a lot of layers, your sigmoid functions will essentially stagnate rather quickly and become more or less useless. The key takeaway is that the vanishing comes from multiplying the gradients, not from the gradients themselves.
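A minimal sketch illustrating both points: the additivity check that fails for ReLU, and a small sum of shifted ReLUs bending into a piecewise-linear (clearly non-linear) curve, the way the "little rectangles" intuition suggests.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Additivity fails, so ReLU is not linear: f(-1) + f(1) != f(0)
print(relu(-1) + relu(1), relu(-1 + 1))   # 1.0 vs 0.0

# A few shifted ReLUs added together already produce a bent, piecewise-linear curve
x = np.linspace(-2, 2, 9)
y = relu(x) - 2 * relu(x - 0.5) + 2 * relu(x - 1.0)
print(np.round(y, 2))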
H: Filter rows depending on specific object value and delete those instances I have some categorical data which also contains '?' as a value in some rows. I need to filter the rows depending on '?': any row that contains such an instance should be deleted. I tried to drop those rows by applying these commands, but I failed.
train = train.drop[~train.str.contains('\?')]
train = train.drop[train['?']]
How can I identify the rows which contain a '?' instance and drop those rows? AI: You can replace ? with nan and use dropna(). This will work if you don't already have rows with nan entries that you want to keep.
train = train.replace('?', np.nan).dropna()
Another option is to filter rows where any value is ?.
train = train[~(train == '?').any(axis=1)]
Update: After looking at your data I found the problem: your csv file has spaces after the commas, so the rows containing ? have a leading space. If you use
train = pd.read_csv('adult.data', sep=', ', engine='python')
to read your data, then either of the above methods will work.
H: rows to columns in data.table R (or Python) This is something I can't achieve with the reshape2 library for R. I have the following data: zone code literal 1: A 14 bicl 2: B 14 bicl 3: B 24 calso 4: A 51 mara 5: B 51 mara 6: A 125 gan 7: A 143 carc 8: B 143 carc i.e.: each zone has 4 codes with its corresponding literal. I would like to transform it to a dataset with one column for each of the four codes and one column for each of the four literals: zone code1 literal1 code2 literal2 code3 literal3 code4 literal4 1: A 14 bicl 51 mara 125 gan 143 carc 2: B 14 bicl 24 calso 51 mara 143 carc Any easy way to achieve this in R? If not, I would also be comfortable with a solution in Python. AI: Here is a python solution, given a dataframe (df) containing the data you have above: >>> from itertools import chain >>> data = [] >>> for zone in df.zone.unique(): ... codetuples = [(row[2], row[3]) for row in df[df['zone']==zone].itertuples()] ... data.append([zone] + list(chain.from_iterable(codetuples))) ... >>> df = pandas.DataFrame(data, columns=['zone', 'code1', 'literal1', 'code2', 'literal2', 'code3', 'literal3', 'code4', 'literal4']) >>> df zone code1 literal1 code2 literal2 code3 literal3 code4 literal4 0 A 14 bicl 51 mara 125 gan 143 carc 1 B 14 bicl 24 calso 51 mara 143 carc Explanation df.itertuples() returns an iterator through the rows of a dataframe as tuples. The first entry (0 indexed in the tuple) will be the index, so the 2nd and 3rd columns of the df will be the two you are interested in. There is no guarantee of order for code1 vs code2; I stored the data from the df in a variable codetuples so you can sort or something. There is also no guarantee that you will have exactly 4 pairs of code and literal, so you could put error checking in there, if you needed to. Once you have an acceptable list of four tuples, from_iterable() flattens this list. Then append the zone number to the front and store it as another dataframe.
H: Big Data - Data Warehouse Solutions? I have a dozen databases that store different data, and each of them is 100 TB in size. All of the data is stored in AWS services such as RDS, Aurora and Dynamo. Many times I find myself needing to perform "joins" across databases, for example on a student ID that appears in multiple databases with data that I want to gather. The joins are usually done after data is streamed out of the database, since the data is not located in the same database, and this sometimes requires hours just for thousands of records. Can services such as AWS Redshift or Google BigQuery allow you to somehow "import" data from many data sources so that you can then perform SQL queries to join them? How about Hadoop and Hive, where we stream data out from the databases and place it as files in Hadoop, and let Hive query the data? AI: Can services such as AWS Redshift or Google BigQuery allow you to somehow "import" data from many data sources so that you can then perform SQL queries to join them? It depends on your data and the type of joins you are performing. But, yeah, databases like Redshift can definitely perform better in your use case as they are column-based databases. Read this post and the associated answers for understanding how columnar data stores handle data. How about Hadoop and Hive? Hadoop + Hive is mostly a DIY hosted/cloud version of what Redshift gives you on the cloud.
H: Bootstrapping or Randomly Dividing Dataset to reduce variance? If I have 10,000 training samples, then what should I do: bootstrap and train 10 classifiers on the bootstrap samples and then aggregate, or randomly divide the dataset into 10 parts, train 10 classifiers on them and then aggregate? Which will be better? Will the 2nd method reduce variance, and will it be better than the 1st method? AI: I think the second method will yield less correlated models than the first method. This is particularly true with decision trees, which tend to overfit quickly in the bottom nodes, and it will help reduce the variance. However, by using the second approach, you will end up with 10 smaller datasets, and so you risk introducing variance error due to the overly small number of observations. Talking about decision trees again, this means that your tree algorithms will tend to overfit higher up in the tree, and so you will increase your variance error. In my opinion, for most datasets it is still better to use the first approach than the second. I don't think the less correlated estimators will bring a big enough improvement over the first method. We can also observe that the differences between the two approaches also depend on the number of observations, the number of features, and the kind of estimators you are using. A benchmark would be really interesting!
H: Is there a maximum limit to the number of features in a Neural Network? I have created a dataset which has a rather large number of features, for example 100,000. Is this too large for a decent computer to handle (I have a 1080 Ti)? AI: It highly depends on your data. If it's image data, that number is somewhat reasonable, but if not, I recommend constructing the covariance matrix and checking whether features are correlated or not. If you see that many features are correlated, it is better to discard the correlated features. You can also employ PCA to do this. Correlated features cause a larger number of parameters for the neural network. Also, if your inputs are images, maybe you can reduce the number of parameters by resizing them. In popular nets the width and height of input images are usually less than three hundred, which makes the number of input features about 90,000. You can also employ max-pooling after some convolution layers, if you are using convolutional nets, to reduce the number of parameters. Refer here, which may be helpful.
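A minimal sketch of the correlation check and PCA reduction suggested above (random data stands in for the real feature matrix):
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(1000, 200)            # placeholder: 1000 samples, 200 features

# Inspect pairwise feature correlations (rowvar=False treats columns as variables)
corr = np.corrcoef(X, rowvar=False)
strong_pairs = np.sum(np.abs(corr) > 0.9) - X.shape[1]   # off-diagonal entries only
print("strongly correlated pairs (each counted twice):", strong_pairs)

# Reduce dimensionality while keeping 95% of the variance
X_reduced = PCA(n_components=0.95).fit_transform(X)
print(X_reduced.shape)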
H: How to estimate probabilities of different classes for a Text Suppose I have a piece of writing and I want to assign probabilities to different genres (classes) based on its contents. For example Text #1 : Comedy 10%, Drama 50%, Fiction 20%, Romance 1%, Mythology 5%, Adventure 10% Text #2 : Comedy 40%, Drama 3%, Fiction 2%, Romance 30%, Mythology 5%, Adventure 10% We have given keywords possibly ngrams in each class through which we make a comparison Class 1 Comedy : k11, k12, ..., k1m Class 2 Drama : k21, k22, ..., k2n Class 3 Fiction : k31, k32, ..., k3o Class 4 Romance : k41, k42, ..., k4p Class 5 Mythology : k51, k52, ..., k5q Class 6 Adventure: k61, k62, ..., k6r What can be the best probabilistic model that we can use for this task AI: If I understand correctly, we are interested in soft multilabel classification, where a single text can have multiple correct genres. According to your comment, we don't have any training data, just a list of keywords associated with each genre. We can try computing the similarity between each document and each keyword list: Normalize the document (convert to lowercase, remove punctuation, diacritics, non-alphanums, etc) Remove stopwords Convert the document to tf-idf vector over our genre keyword vocabulary: Each document gets an n-length vector where each entry is the frequency of the ith genre keyword in the document. Normalize this vector to magnitude 1. Convert each genre keyword list to a tf-idf vector in the same way (again over the keyword vocabulary for all genres). Compute the cosine similarity between the document vector and each genre vector. For each document, this will give us a number in the range [0,1] for each genre. For example: Comedy Drama Fiction Romance Mythology Adventure Text #1: 0.15 0.11 0.03 0.00 0.00 0.07 If we were doing single label classification we could normalize each row to add up to 1 and we might have a working model. However there is no such trick for multilabel classification here. We don't have a good way to calibrate these values into probability estimates. At this point the only solution I see is to build a small training set so we can fit our model to actual data. After gathering some training examples, we can run a multilabel regression with sigmoid activation and binary crossentropy loss with the cosine similarities as input features to get a probability estimate for each class. Using this method our list of genre keywords will at least save us having to build a large training set to solve the problem directly with bag-of-words or similar approaches.
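A minimal sketch of the similarity step using scikit-learn (the genre keyword lists and the document are toy placeholders): both documents and genre keyword lists are turned into tf-idf vectors over a shared keyword vocabulary, and cosine similarity gives one score per genre. Calibrating these scores into probabilities would still require the small labeled set discussed above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

genres = {
    "comedy":  "joke funny laugh humor witty",
    "drama":   "conflict tragedy family loss tension",
    "romance": "love heart kiss passion wedding",
}
document = "a funny story about love and a wedding full of laughter"

vectorizer = TfidfVectorizer()
genre_matrix = vectorizer.fit_transform(genres.values())   # one row per genre
doc_vector = vectorizer.transform([document])              # same keyword vocabulary

scores = cosine_similarity(doc_vector, genre_matrix)[0]
print(dict(zip(genres.keys(), scores.round(3))))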
H: How do I get the name of Sagemaker Estimator's job I've hit a stumbling block with SageMaker. How do I know what my job name is? For example:
mnist_estimator = MXNet(entry_point='/home/ec2-user/sample-notebooks/sagemaker-python-sdk/mxnet_mnist/mnist.py',
    role=role,
    output_path=model_artifacts_location,
    code_location=custom_code_upload_location,
    train_instance_count=1,
    train_instance_type='ml.m4.xlarge',
    base_job_name='foo',
    hyperparameters={'learning_rate': 0.1})
and then when I call fit() it prints out:
INFO: Creating training job with name: foo-2018-01-10-20-13-57-893
and when I look in my S3 bucket I see:
2018-01-10 15:20:45     411784 artifacts/foo-2018-01-10-20-13-57-893/output/model.tar.gz
So my job name is "foo-2018-01-10-20-13-57-893" and not "foo", but how can I get this from Python? I guess I'm looking for a way of extracting that from the Estimator itself, but I'm just blanking on it. I'm reading the Python source, but that's not helping me, probably because I'm also just learning Python. AI: The _current_job_name attribute contains the name of the job. So, in the example from the question,
print(mnist_estimator._current_job_name)
would print foo-2018-01-10-20-13-57-893
H: Backpropagation with multiple different activation functions How does back-propagation handle multiple different activation functions? For example, in a neural network with 3 hidden layers, each with a separate activation function such as tanh, sigmoid and ReLU, the derivatives of each of these functions would be different, so do we just compute the output error of each of these layers with the derivative of the activation function of that layer? Also, will the gradient of the cost function used to compute the output error of the last layer change at all, or is the activation function of that layer used to compute the gradient of the cost function? AI: In short, all the activation functions in the backpropagation algorithm are evaluated independently through the chain rule; thus, you can mix and match to your heart's content. What are we optimizing in backpropagation? Backpropagation allows you to update your weights as a gradient function of the resulting loss. This will tend towards the optimal loss (the highest accuracy). After each forward pass of your training stage, you get an output at the last layer. You then calculate the resulting loss $E$. The contribution of each weight to that final loss is computed using its partial derivative; in other words, this is how much of the loss can be attributed to that weight. The larger this value is, the more the weight will change to correct itself (training). $\frac{\partial E}{\partial w^k_{i, j}}$ How can we compute such an arbitrary partial derivative? Using the chain rule of derivatives, and putting together everything that led to our output during the forward pass. Let's look at what led to our output before getting into the backpropagation. The forward pass In the final layer of a 3-layer neural network ($k = 3$), the output ($o$) is a function ($\phi$) of the outputs of the previous layer ($o^2$) and the weights feeding into the final layer ($w^3$). $y_1 = o^3_1 = \phi(a^3_1) = \phi(\sum_{l=1}^n w^3_{l,1}o^2_l)$ The function $\phi$ is the activation function for the current layer, typically chosen to be something with an easy-to-calculate derivative. You can then see that the previous layers' outputs are calculated in the same way. $o^2_1 = \phi(a^2_1) = \phi(\sum_{l=1}^n w^2_{l,1}o^1_l)$ So the outputs of the third layer can also be written as a function of the outputs of layer 1 by substituting the outputs of layer 2. This point becomes important for how backpropagation propagates the error along the network. Backpropagation The partial derivative of the error in terms of the weights is broken down using the chain rule into $\frac{\partial E}{\partial w^k_{i, j}} = \frac{\partial E}{\partial o^k_{j}} \frac{\partial o^k_{j}}{\partial a^k_{j}} \frac{\partial a^k_{j}}{\partial w^k_{i,j}}$. Let us look at each of these terms separately. 1. $\frac{\partial E}{\partial o^k_{j}}$ is the error attributed to the output of layer $k$. For the last layer, using the squared-error (L2) loss, the error of the first output node is $\frac{\partial E}{\partial o^3_{1}} = \frac{\partial E}{\partial y_{1}} = \frac{\partial }{\partial y_{1}} \frac{1}{2}(\hat{y}_1-y_1)^2 = y_1 - \hat{y}_1$ In words, this is how far our result, $y_1$, is from the actual target $\hat{y}_1$.
This is the same for all previous layers, where we need to substitute in the errors propagating back through the network; this is written as $\frac{\partial E}{\partial o^k_{j}} = \sum_{l \in L} (\frac{\partial E}{\partial o^{k+1}_{l}} \frac{\partial o^{k+1}_{l}}{\partial a^{k+1}_{l}} w^{k+1}_{j,l}) $ where $L$ is the set of all neurons in the next layer $k+1$. 2. $\frac{\partial o^k_{j}}{\partial a^k_{j}}$ This is where the current layer's activation function makes a difference, because we are taking the derivative of the output with respect to its input, and the output is related to the input through the activation function $\phi$. $\frac{\partial o^k_{j}}{\partial a^k_{j}} = \frac{\partial \phi(a^k_{j})}{\partial a^k_{j}}$ So just take the derivative of the activation function. For the logistic function this is easy and it is $\frac{\partial o^k_{j}}{\partial a^k_{j}} = \frac{\partial \phi(a^k_{j})}{\partial a^k_{j}} = \phi(a^k_j)(1-\phi(a^k_j))$ 3. $\frac{\partial a^k_{j}}{\partial w^k_{i,j}}$ $a^k_j$ is simply a linear combination of the weights and the previous layer's outputs. Thus, $\frac{\partial a^k_{j}}{\partial w^k_{i,j}} = o^{k-1}_i$ Finally You can see that the activation functions of your layers are evaluated separately in the backpropagation algorithm. They are just added onto your ever-growing back-chain as independent terms within the chain rule.
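To make the "each layer uses its own derivative" point concrete, here is a small illustrative NumPy sketch (not code from the question, just an example under common assumptions): the hidden layers use tanh, sigmoid and ReLU respectively, and the only place the choice of activation enters the backward pass is the per-layer derivative term.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def d_sigmoid(z):
    s = sigmoid(z)
    return s * (1.0 - s)

def tanh(z):
    return np.tanh(z)

def d_tanh(z):
    return 1.0 - np.tanh(z) ** 2

def relu(z):
    return np.maximum(0.0, z)

def d_relu(z):
    return (z > 0).astype(float)

rng = np.random.RandomState(0)
sizes = [4, 8, 8, 8, 1]                       # input, 3 hidden layers, output
acts = [(tanh, d_tanh), (sigmoid, d_sigmoid), (relu, d_relu), (sigmoid, d_sigmoid)]
W = [rng.randn(m, n) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]
b = [np.zeros((1, n)) for n in sizes[1:]]

def forward(x):
    zs, outs = [], [x]
    a = x
    for (phi, _), Wk, bk in zip(acts, W, b):
        z = a @ Wk + bk
        a = phi(z)
        zs.append(z)
        outs.append(a)
    return zs, outs

def backward(x, y, lr=0.1):
    zs, outs = forward(x)
    # output layer: dE/do * do/da for the squared-error loss
    delta = (outs[-1] - y) * acts[-1][1](zs[-1])
    for k in reversed(range(len(W))):
        grad_W = outs[k].T @ delta
        grad_b = delta.sum(axis=0, keepdims=True)
        if k > 0:
            # propagate the error back, then apply *that* layer's own derivative
            delta = (delta @ W[k].T) * acts[k - 1][1](zs[k - 1])
        W[k] -= lr * grad_W
        b[k] -= lr * grad_b

x = rng.randn(16, 4)
y = rng.rand(16, 1)
for _ in range(100):
    backward(x, y)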
H: Q Learning Neural network for tic tac toe Input implementation problem I've recently become interested in machine learning, specifically neural networks, and after creating ones to solve basic problems such as XOR and Sin and Cos graphs, I am now looking into reinforcement learning and specifically Q learning with neural networks. To try this out and see if I could implement it, I used a Q learning neural network for tic tac toe. However, I am unsure as to why my program does not learn correctly. One idea I have is that I have implemented the input neurons incorrectly. I am using a Visual Basic console application, so just let me know if you need me to post any code. My view on how to implement this: use 18 input neurons and have the first 9 being the state before placing a move, and the next 9 being the state after placing a move. Another question I have is, in terms of when you teach the neural network, would you feed it the old state and then the state after both you and the opponent have made a move? I am also training two neural networks, so they can play against each other and teach themselves. Second Problem This next potential problem I have is how I use my threshold and exploration values to choose whether to pick a random action or one the neural network chooses. I increase my threshold linearly from 0 to 1 throughout the iterations, while exploration is a random value between 0 and 1. Is this correct, or is there a better way of doing it? Any feedback would be greatly appreciated, and if anyone has any problems with my question such as it being unclear, not making sense or anything else please let me know so I can fix it. Thank you to all who take the time out to try and help. AI: One idea I have is that I have implemented the input neurons incorrectly. There is no single correct way to implement the solution to this problem. There are less efficient and more efficient ways for specific problems. Use 18 input neurons and have the first 9 being the state before placing a move, and the next 9 being the state after placing a move. This should work. The first set of 9 neurons will receive a representation of the current game state. The second set of 9 neurons will receive a representation of the intended action. For predicting Q values from a state, action pair, this is a reasonable approach. Staying with the same architecture of 18 inputs, you could also have the second set of neurons be one-hot encoded to where the agent will place the 'X' or 'O'. In the game Tic Tac Toe however, you can look for more compact representations if you like. You have already noticed that because the game is simple and deterministic, you can represent the agent's action as simply the "desired next state". And in fact, the current state does not actually matter to a player, other than to enforce the rules of what are valid next states. You won't be implementing the rules of the game into the agent - they are part of the environment. Therefore, you can do away with the initial state altogether in your estimate, and work with the end state of each move - this is called the afterstate in the RL literature, and you will find it used a fair bit in deterministic games, or even in non-deterministic games where the randomness happens before the action choice (e.g. in Backgammon). An afterstate representation is more efficient because it encodes the fact that you don't care what route a player took to get to a certain board position, you just care about the value of that position as the game continues. 
Having said all that, if your goal is to learn basic RL, then you don't need to be looking for the most efficient solution, just one that works. Don't expect your NN-based learner with state-action logic to be the most efficient learner however. Another question I have in terms of when you teach the neural network, would you feed it the old state and then the state after both you and the opponent have made a move? Not as inputs to the network at the same time, no. In your (state, action [=next_state]) representation, your action representation should be the board state for the current player's move and before the other player takes any action. The resulting next state, however, will be after the other player takes their action. If you want to train two separate bots against each other, then each would see the current state, then it would choose an action, then it would either get the reward for winning, or the opponent would take a turn. If the opponent won, then the first bot should receive the (negative) reward. If the opponent's move was not final, then the first bot should see the state after the opponent's move and get to choose its next action. Again, this is inefficient. For a win/draw/lose game like Tic Tac Toe, you don't need two separate agents, each with their own learning algorithm, in order to train through self-play. You can instead alter Q-Learning slightly to work with the minimax algorithm. In brief this means alternating between the agent selecting actions that maximise the expected reward (for player 1) or minimise it (for player 2). However, like before, your two-network setup should be able to work, and it makes for quite an interesting dynamic - you could try different learning parameters, different NNs etc., and see which learns to win quickly (but don't forget the starting player has an advantage for early random play, so you'd want to switch which learning algorithm was used for which player to get a fair assessment). The difference again is in terms of efficiency - a single network inside a modified RL algorithm with minimax will typically learn faster than two separate networks.
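As an illustration of the afterstate idea (a hypothetical sketch, not code from the question - the board encoding and value_fn are placeholders for whatever representation and network you use), move selection reduces to enumerating the legal afterstates and letting the value estimator score each one:

def legal_afterstates(board, player):
    """board: list of 9 ints (0 = empty, 1 = 'X', -1 = 'O'); returns (move, afterstate) pairs."""
    options = []
    for i, cell in enumerate(board):
        if cell == 0:
            after = list(board)
            after[i] = player
            options.append((i, after))
    return options

def greedy_move(board, player, value_fn):
    """Pick the move whose afterstate the estimator values most highly."""
    return max(legal_afterstates(board, player), key=lambda pair: value_fn(pair[1]))

# toy usage with a dummy value function standing in for the network
move, afterstate = greedy_move([0] * 9, player=1, value_fn=lambda s: sum(s))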
H: TypeError: unsupported operand type(s) for *: 'float' and 'dict_values' I'm attempting to compute the class_weights for an highly imbalanced set of 9 classes based on the examples discussed in How to set class weights for imbalanced classes in Keras?. Here is the code: import numpy as np import math # labels_dict : {ind_label: count_label} # mu : parameter to tune def create_class_weight(labels_dict,mu=0.15): total = np.sum(labels_dict.values()) keys = labels_dict.keys() class_weight = dict() for key in keys: score = math.log(mu*total/float(labels_dict[key])) class_weight[key] = score if score > 1.0 else 1.0 return class_weight # random labels_dict labels_dict = {0: 3400, 1: 1700, 2: 4700, 3: 6800, 4: 3400, 5: 2300, 6: 8300, 7: 1000, 8:9600} create_class_weight(labels_dict) I'm getting an error log like this: File "<ipython-input-34-7a9feda1053b>", line 20, in <module> create_class_weight(labels_dict) File "<ipython-input-34-7a9feda1053b>", line 11, in create_class_weight score = math.log(mu*total/float(labels_dict[key])) TypeError: unsupported operand type(s) for *: 'float' and 'dict_values' I'm running the code with Python 3.6.3. What modifications am I supposed to make? AI: You can't use numpy to sum the values of a dictionary. You have to use sum function. total = sum(labels_dict.values()) Now you can check that total is an integer : print(type(total)) class 'int'
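Putting that fix into the function from the question (only the line computing total changes; np.sum(list(labels_dict.values())) would also work if you prefer to keep NumPy):

import math

def create_class_weight(labels_dict, mu=0.15):
    total = sum(labels_dict.values())   # built-in sum works directly on dict_values
    class_weight = dict()
    for key in labels_dict.keys():
        score = math.log(mu * total / float(labels_dict[key]))
        class_weight[key] = score if score > 1.0 else 1.0
    return class_weight

labels_dict = {0: 3400, 1: 1700, 2: 4700, 3: 6800, 4: 3400,
               5: 2300, 6: 8300, 7: 1000, 8: 9600}
print(create_class_weight(labels_dict))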
H: Should I Impute target values? I am new to data science and I am currently playing around a bit. Data exploration and preparation is really annoying, even though I use pandas. I managed to impute missing values in the independent variables: for numerical data by using the Imputer with the mean strategy, and for one categorical variable I used the LabelEncoder and afterwards imputed with the mode strategy. But now I face the issue that the dependent variable $y$ also contains missing values. Should I delete those rows or should I impute $y$, which is numerical? AI: For the missing data problem, one thing to be aware of is the missingness mechanism. Depending on the dataset, the NA's (missing values) you have could be a result of a condition of the phenomenon itself, in which case you shouldn't simply impute them using the mean. Besides, for the dependent variable: if you want to train a model with the independent variables to predict it, let's say Y, you wouldn't train a model using observations with NA on the dependent (target) variable. You would then drop those rows, or maybe use another technique which takes into account the dependence on the other variables. I think a good start is to have a look at this: Missing-data imputation. It shows the limitations of using some approaches like yours and defines the mechanisms of missing data.
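A minimal pandas sketch of the "drop rows where the target is missing" option (the column names here are made up for illustration):

import pandas as pd

df = pd.DataFrame({"x1": [1.0, 2.0, None, 4.0],
                   "x2": ["a", "b", "b", None],
                   "y":  [10.0, None, 30.0, 40.0]})

# Keep only the rows where the target y is present; impute the features separately
train = df.dropna(subset=["y"])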
H: General Machine Learning Workflow Question I'm very new to machine learning and want to understand the general process by which it is carried out. I've worked through the famous 'iris' tutorial and want to ask if the principles in that tutorial are applicable to every future machine learning project I undertake. I work in biological sciences, and am interested in applying machine learning algorithms to biological sequence data as a way of categorizing classes or doing unsupervised clustering. From my understanding, each project starts with: Me defining the 'scope' or aim of what I want to learn from the raw data. Generating features/attributes from my OWN code/algorithms that could potentially differentiate between the two classes. This basically generates a huge matrix of x_features by y_entries. Feed the matrix into a machine learning algorithm (I'm sure this is vastly oversimplified). To give an example, say I have 10,000 protein sequences, and I believe 5000 are 'Class1' and 5000 are 'Class2', but I do not know how to differentiate them by eye. I need to generate x_features (in some informed way) for each sequence using my own custom algorithms, and feed the resulting 10000 entries into an algorithm. Is this the right approach? I'd be eternally grateful if someone could direct me towards a beginner's tutorial that revolves around analyzing biological sequence data. AI: I agree with most of the answer. However, I think you are missing some points, including the cross-validation step. Below I try to provide an overview of a common machine learning project. I assume a common project is a supervised machine learning problem (like the iris dataset). 1. Define the 'scope' or aim of the project: You have to define the purpose of the learning. When working with business in a company, it is a good idea to correctly express the need, the value of the project and its goal. You have to define the evaluation metrics (accuracy, recall, F1 score, AUC...). You can also define a minimum result you want to reach (say 80% accuracy for example). You can also ask yourself about the level of interpretability you need (do you care about model explanation? If not, maybe you could try more black-box algorithms such as boosting or neural networks). 2. Explore your data: Using statistics, visualization and intuition, try to learn your dataset and understand your features and labels. You can also search for missing data and outliers. Correcting these observations is referred to as the data cleaning process. Understanding your input variables will greatly help you to create and select relevant features. 3. Generate features/attributes from your own code/algorithm: This phase is referred to as feature engineering. It is about creating features relevant to the learning problem. In this phase, you can clean your missing data and outliers in order to help the learning. You can derive new features from input variables relevant to your learning problem (handle categorical variables, rescale your features, apply transformations on input variables). 4. Cross-validation: Cross-validation refers to your algorithm evaluation. In supervised machine learning, it is common, at least, to split the data into 3 sets (train, validation and test). The train dataset (about 60% of data) is used to train the algorithm. The validation dataset (20% of data) helps to find the best hyperparameters of your model (max depth for a tree, regularization for a linear/logistic regression...). 
Finally, the test set (20% of data) gives you the true result you can expect on unseen data. It is the final evaluation. 5. Machine learning, feed the matrix into an algorithm: In this part, you train machine learning algorithms with regard to the cross-validation process (part 4). You can test different models; they yield different performance results, and their interpretability is not the same either. To help the learning, you can diagnose how your algorithm performs on both the train and validation sets. This diagnostic is often done with learning curves, which can tell you how to improve your learning. The purpose of a learning curve is to help handle the underfitting/overfitting tradeoff. Underfitting is when you have a large bias error, meaning your algorithm is not complex enough, while overfitting means your algorithm is too complex: it learns the training data perfectly but is no longer able to generalize to new, unseen observations. You can also look at residuals (errors between predictions and real values) to improve your algorithm. Feature selection may also improve the learning. 6. Restitution: Interpret the model and the performance you get. Present the results to the business? Put the model into production? Improving your machine learning model and performance is mostly about improving the points introduced above. By making new explorations, creating new features, trying a more powerful algorithm and so on, you can reach better results. Machine learning is a whole pipeline you have to optimize. I also think machine learning projects fit well with agile project management approaches.
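To make steps 4 and 5 concrete, here is a hedged sklearn sketch of the split-and-tune workflow described above (the feature matrix X and labels y are random placeholders standing in for whatever features you engineer from your sequences):

import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X = np.random.rand(1000, 20)          # placeholder feature matrix (x_features by y_entries)
y = np.random.randint(0, 2, 1000)     # placeholder class labels

# Hold out a final test set; cross-validation then plays the role of the validation set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
                      cv=5)
search.fit(X_train, y_train)

# Final evaluation on unseen data only
print(accuracy_score(y_test, search.predict(X_test)))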
H: Neural network q learning for tic tac toe - how to use the threshold I am currently programming a Q learning neural network that does not work. I have previously asked a question about inputs and have sorted that out. My current idea as to why the program does not work has to do with the threshold value. This is a variable specific to Q learning with a neural network. Basically, the threshold is a value between 0 and 1; you then generate a random number between 0 and 1, and if this random number is larger than the threshold you pick a completely random choice, otherwise the neural network chooses by finding the largest Q value. My question is that, with this threshold value, I am currently implementing it as starting at almost 0, then increasing linearly until it reaches 1 by the time the program has reached the final iteration. Is this correct? The reason I suspect this is incorrect is that when plotting an error graph from training the neural network, the program does not learn at all, but when the threshold reaches almost 1, it starts to learn very fast, and if you run more iterations after it reaches 1, all the game sets in the replay memory become the same and the error is basically 0 from there on in. Any feedback is greatly appreciated and if this question is unclear in any way just let me know and I will try to fix it. Thank you to anyone who helps out. AI: You are effectively implementing $\epsilon$-greedy action selection. The usual way to represent this in RL, at least that I am familiar with, is not as a "threshold" for the probability of choosing the best estimated action, but as a small probability, $\epsilon$, of not choosing the best estimated action. For consistency with the RL literature that I know, I will use the $\epsilon$-greedy form, so instead of considering what happens as your threshold rises from 0 to 1, I will consider what happens when $\epsilon$ drops from 1 to 0. It is the same thing. I hope you can either adjust to using $\epsilon$ or mentally convert the rest of this answer so it is about your threshold... When monitoring Q-Learning, you have to be careful how you measure success. Monitoring the behaviour on the learning games will give you slightly off feedback. The agent will make exploratory moves (with probability $\epsilon$), and the results from a learning game might involve the agent losing even though it already has a policy good enough to not lose from the position where it started exploring. If you want to measure how well the agent has learned the game, you have to stop the training stage and play some games with $\epsilon$ set to $0$. I suspect this could be one problem - that you are measuring results from behaviour during training (note this would work with SARSA). In addition, choosing values that are too high or low for your problem will reduce the speed of learning. High values interfere with Q-learning because it has to reject some of the data from exploratory moves, and the agent will rarely see a full game played using its preferred policy. Low values stifle learning because the agent does not explore different options enough, just repeating the same game play when there might be better moves that it has not tried. For Tic Tac Toe and Q-learning I would suggest picking a value of $\epsilon$ between $0.01$ and $0.2$. In fact, with Q-learning there is no need to change the value of $\epsilon$. You should be able to pick a value, say $0.1$, and stick with it. The agent will still learn an optimal policy, because Q-learning is an off-policy algorithm.
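For reference, a minimal sketch of fixed-epsilon action selection as described above (q_values would come from your network's predictions over the legal actions):

import random

def epsilon_greedy(q_values, epsilon=0.1):
    """q_values: dict mapping each legal action to its predicted Q value."""
    if random.random() < epsilon:
        return random.choice(list(q_values))     # exploratory move
    return max(q_values, key=q_values.get)       # greedy move

action = epsilon_greedy({0: 0.2, 4: 0.7, 8: 0.1}, epsilon=0.1)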
H: Q learning Neural network Tic tac toe - When to train net This is another question I have on a Q learning neural network being used to win tic tac toe, which is that I'm not sure I understand when to actually backpropagate through the network. What I am currently doing is: as the program plays through the game, once the number of game sets recorded has reached the maximum amount, every time the program makes a move it will pick a random game state from its memory and backpropagate using that game state and reward. This will then continue every time the program makes a move, as the replay memory will always be full from then on. The association between rewards and game state and action from history is that when a game has been completed, and the rewards have been calculated for each step, meaning that the total reward per step has been calculated, the method I use to calculate the reward is: Q(s,a) += reward * gamma^(inverse position in game state) In this case, gamma is a value predetermined to reduce the amount that the reward is taken into account the further you go back, and the inverse position in game state means that if there have been 5 total moves in a game, then the inverse position in game state when changing the reward for the first move would be 5, then for the second 4, third 3 and so on. This just allows the reward to be taken less into account the earlier the move is. Should this allow the program to learn correctly? AI: This update scheme: Q(s,a) += reward * gamma^(inverse position in game state) has a couple of problems: You are - apparently - incrementing Q values rather than training them towards a reference target. As a result, the estimate for Q will likely diverge, predicting total rewards that are impossibly high or low. Although in your case, with a zero-sum game and initial random moves, it may just random walk around zero for a long time first. Ignoring the increment, the formula you are using is not from Q-learning, but is effectively on-policy Monte Carlo control, because you use the end-of-game sum of rewards as the Q value estimate. In theory, with a few tweaks this can be made to work, but it is a different algorithm than the one you say you want to learn. It is worth clarifying a few related terms (you clearly know these already, but I want to make sure you have them separated in your understanding of the rest of the answer): Reward. In RL, a reward (a real number) can be returned on every time step, after taking an action. The set of rewards is part of the problem definition. Often denoted $R$ or $r$. Return (aka Utility). The sum of all - maybe discounted - rewards from a specific point. Often denoted $G$ or $U$. Value, as in state value or action value. This is usually the expected return from a specific state or state, action pair. $Q(S_t, A_t)$ is the expected return when in state $S_t$ and taking action $A_t$. Note that using $Q$ does not make your algorithm Q-learning. The $Q$ action value is the basis for several RL algorithms. Your formula reward * gamma^(inverse position in game state) gives you the Return, $G$, seen in a sampled training game, $G_t = \gamma^{T-t} R_T$ where $T$ is the last time step in the game. That's provided the game only has a single non-zero reward at the end - in your case that is true. So you could use it as a training example, and train your network with input $S_t, A_t$ and desired output of $G_t$ calculated in this way. That should work. 
However, this will only find the optimal policy if you decay the exploration parameter $\epsilon$ and also remove older history from your experience table (because the older history will estimate returns based on imperfect play). Here is the usual way to use experience replay with Q learning: When saving experience, store $S_t, A_t, R_{t+1}, S_{t+1}$ - note that means storing the immediate Reward, not the Return (yes, you will store a lot of zeroes). Also note you need to store the next state. When you have enough experience to sample from, typically you do not learn from just one sample, but pick a minibatch size (e.g. 32) and train with that many each time. This helps with convergence. For Q-learning, your TD target is $R_{t+1} + \gamma \text{max}_{a'} Q(S_{t+1}, a')$ (for a terminal $S_{t+1}$, i.e. the end of the game, the target is just $R_{t+1}$), and you bootstrap from your current predictions for Q, which means: For each sample in the minibatch, you need to calculate the predicted Q value of all allowed actions from the next state $S_{t+1}$ - using the neural network. Then use the maximum value from each state to calculate $\text{max}_{a'} Q(S_{t+1}, a')$. Train your network on the minibatch for a single step of gradient descent, with NN inputs $[S_t, A_t]$ and a training label of the TD target from each example. Yes, that means you use the same network to first predict and then learn from a formula based on those predictions. This can be a source of problems, so you may need to maintain two networks, one to predict and one that learns. Every few hundred updates, refresh the prediction network as a copy of the current learning network. This is quite a common addition to experience replay (it is something that Deep Mind did for DQN), although it may not be necessary in your case for a game as simple as Tic Tac Toe. The TD target is a bootstrapped and biased estimate of the expected $G$. The bias is a potential source of problems (you may read that using NNs with Q-learning is not stable, this is one of the reasons why). However, with the right precautions, such as experience replay, the bias will reduce as the system learns. In case you are wondering, it is the use of both $S_t$ (as NN input) and $S_{t+1}$ (to calculate the TD target) in the Q-learning algorithm which effectively distributes the end-of-game reward back to the Q values at the start of the game. In your case (and in many episodic games), it should be fine to use no discount, i.e. $\gamma = 1$. From your previous question, you noted that you were training two competing agents. That does in fact cause a problem for experience replay. The trouble is that the next state you need to train against will be the state after the opponent has made a move. So the opponent is technically viewed as being part of the environment for each agent. The agent learns to beat the current opponent. However, if the opponent is also learning an improved strategy, then its behaviour will change, meaning your stored experiences are no longer valid (in technical terms, the environment is non-stationary, meaning a policy that is optimal at one time may become suboptimal later). Therefore, you will want to discard older experience relatively frequently, even using Q-learning, if you have two self-modifying agents.
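As an illustration of the minibatch update described above, here is a small NumPy sketch (a toy under several assumptions, not the asker's code): states and actions are assumed to be already encoded as vectors, the Q function is a single linear layer so the gradient step is explicit, and w_target plays the role of the periodically refreshed prediction network.

import numpy as np
import random

rng = np.random.RandomState(0)
FEAT = 18                                  # length of an encoded (state, action) vector
w_learn = rng.randn(FEAT) * 0.01           # "learning" network (a linear Q for brevity)
w_target = w_learn.copy()                  # frozen copy used to compute TD targets

replay = []                                # list of (s, a, r, s_next, legal_next, done)

def q(w, s, a):
    return float(np.dot(w, np.concatenate([s, a])))

def replay_update(batch_size=32, gamma=1.0, lr=0.01):
    global w_learn
    batch = random.sample(replay, min(batch_size, len(replay)))
    for s, a, r, s_next, legal_next, done in batch:
        if done:
            target = r                                          # terminal state: no bootstrap
        else:
            target = r + gamma * max(q(w_target, s_next, a2) for a2 in legal_next)
        x = np.concatenate([s, a])
        td_error = q(w_learn, s, a) - target
        w_learn -= lr * td_error * x                            # one SGD step on 1/2 (Q - target)^2

def refresh_target():
    global w_target
    w_target = w_learn.copy()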
H: Neural Network Performs Bad On MNIST I've been struggling with Neural Networks for a while now. I get the math behind backpropagation. Still as reference I'm using the formulas from here. The Network learns XOR: Prediction After Training: [0.0003508415406266712] Expected: [0.0] But basically doesn't learn anything on the MNIST dataset: Error after n trainings examples: - 0 Total Net Error: 4.3739634316135225 - 10000 Total Net Error: 0.4876292680858326 - 20000 Total Net Error: 0.39989816082272944 - 30000 Total Net Error: 0.49507443066631834 - 40000 Total Net Error: 0.5483594859079792 - 50000 Total Net Error: 0.5135921029479789 - 59000 Total Net Error: 0.4686434346776871 [Prediction] [Expected] - [0.047784337754445516] [0] - [0.09444684951344406] [0] - [0.0902378720783441] [0] - [0.09704810171673675] [0] - [0.02940947956812051] [0] - [0.12494839048272757] [1] - [0.1512762065177885] [0] - [0.055847446615593155] [0] - [0.22983410239796548] [0] - [0.09162426430286492] [0] The Old Code ( can be ignored ) I've broken the network down as much as possible. With no matrix or vector multiplication. Here the code for the different classes: Main File: # Load Trainigs Data rawImages, rawLabels, numImagePixels = get_data_and_labels("C:\\Users\\Robin\\Documents\\MNIST\\Images\\train-images.idx3-ubyte", "C:\\Users\\Robin\\Documents\\MNIST\\Labels\\train-labels.idx1-ubyte") # Prepare Data print("Start Preparing Data") images = [] labels = [] for i in rawImages: insert = [] for pixel in i: insert.append(map0to1(pixel, 255)) images.append(insert) for l in rawLabels: y = [0] * 10 y[l] = 1 labels.append(y) print("Finished Preparing Data") # Create Network mnistNet = Network((numImagePixels, 16, 16, 10)) # Train print("Start Training") for index in range(len(images)): netError = mnistNet.train(images[index], labels[index]) if index % 10000 == 0: print(index, " Total Net Error: ", netError) prediction = mnistNet.predict(images[0]) print("After Training The Network Predicted:", prediction, "Expected Was:", labels[0]) class Network: def __init__(self, topology): # Make Layer List self.layerList = [] # Make Input Layer self.layerList.append(Layer(0, topology[0], 0)) # Make All Other Layers for index in range(1, len(topology)): self.layerList.append(Layer(index, topology[index], topology[index-1])) def predict(self, x): # Set x As Value Of Input Layer self.layerList[0].setInput(x) # Feed Through Network for index in range(1, len(self.layerList)): self.layerList[index].feedForward(self.layerList[index-1].getA()) # Return The Output Of The Last Layer return self.layerList[-1].getA() def train(self, x, y): # Feed Through Network prediction = self.predict(x) # Container For The Calculated Layer Errors errorsPerLayer = [] # Calculate Error Of The Output Layer errorOutputLayer = listSubtract(prediction,y) # Add The Error To The Container errorsPerLayer.append(errorOutputLayer) # Calculate The Total Error Of The Network totalError = calcTotalError(errorOutputLayer) # Calculate The Error Of The Hidden Layers for layerNum in range(len(self.layerList)-2, 0, -1): # Get The Error Of The Next Layer errorOfNextLayer = errorsPerLayer[0] # Forward The Calculation To The Next Layer, Which Returns The Weighted Error, By Giving It It's Error weightedError = self.layerList[layerNum+1].calculateWeightedError(self.layerList[layerNum].getNeuronNum(), errorOfNextLayer) # Forward The Calculation To The Current Layer, Which Returns The Error Of The Layer, By Giving It The Number Of Neurons In The Current Layer And The Weighted Error Of The 
Next Layer currentLayerError = self.layerList[layerNum].calculateError(weightedError) # Add The Just Calculated Error To The List errorsPerLayer.insert(0, currentLayerError) # Insert 0 As Error For The Input Layer, It's Not Important But That Way It's Size Matches The One Of The Layer List errorsPerLayer.insert(0, 0) # Update Weights And Biases for layerNum in range(1, len(self.layerList)): # Get The Output Of The Previous Layer aOfPrevLayer = self.layerList[layerNum-1].getA() # Forward The Error Of The Current Layer And The Output Of The Previous Layer To The Current Layer For Calculating Delta W self.layerList[layerNum].updateWeightsAndBiases(errorsPerLayer[layerNum], aOfPrevLayer) #print("The Network Predicted: ", prediction, " Expected Was: ", y, " The Error Of The Output Layer Is: ", errorOutputLayer) # Return The Total Error Of The Network For Usage Outisde This Class return totalError def getNetworkInfo(self): for layer in self.layerList: print(layer.getLayerInfo()) ---------- class Layer: def __init__(self, layerNum, numNeurons, numNeuronsPrevLayer): self.neurons = [] # Set The Number Of The Layer self.layerNum = layerNum # Create The Neurons for index in range(numNeurons): self.neurons.append(Neuron(numNeuronsPrevLayer)) # Print Info print("Layer ", layerNum, " makes ", numNeurons, " Neurons", len(self.neurons)) def feedForward(self, aPrevLayer): # Give It To The Neurono For Processing for neuron in self.neurons: neuron.feedForward(aPrevLayer) def calculateWeightedError(self, numNeuronsCurrentLayer, errorOfNextLayer): # The Container For The Weighted Error Of The Next Layer weightedError = [] # The Calulation For Every Neuron Of The Current Layer One After Another for neuronNum in range(numNeuronsCurrentLayer): eSum = 0 # The Error Of The Neuron With The Neuron For Later Calculation for e, n in zip(errorOfNextLayer, self.neurons): # Forward The Calculation To The Current Neuron, By Giving It It's Error And The Connecting Neuron Num eSum += n.weightError(e, neuronNum) # Add The Summed And Weighted Error weightedError.append(eSum) # Return The Error Of The Current Layer return weightedError def calculateError(self, weightedError): # The Container For The Error Of The Current Layer errorOfCurrentLayer = [] # The Weighted Error For The Neuron With The Neruon for wE, n in zip(weightedError, self.neurons): # Add The Product Of The Weighted Error With The Z Of The Current Neuron Run To Sigmoid Prime errorOfCurrentLayer.append(wE * sigmoidPrime(n.getZ())) # Return The Error Of The Current Layer return errorOfCurrentLayer def updateWeightsAndBiases(self, errorOfCurrentLayer, aOfPrevLayer): # The Error For The Neuron With The Neuron for e, n in zip(errorOfCurrentLayer, self.neurons): # Error Of Current Layer Is Equal To The Delta Of The Bias So Apply That n.updateBias(e) # Forward The Error And All The Activity Of The Previous Layer To The Current Neuron To Update It's Weights n.updateWeights(e, aOfPrevLayer) def setInput(self, x): # Set It To Every Neuron for neuron, val in zip(self.neurons, x): neuron.setInput(val) def getA(self): aOfLayer = [] for neuron in self.neurons: aOfLayer.append(neuron.getA()) return aOfLayer def getNeuronNum(self): return len(self.neurons) def getLayerInfo(self): return "Layer( %i ), has %i Neurons" % (self.layerNum, len(self.neurons)) ---------- class Neuron: def __init__(self, numNeuronsPrevLayer): self.a = 0 self.z = 0 self.b = 0.5 if numNeuronsPrevLayer != 0: self.w = np.random.uniform(low = 0, high = 0.5, size=(numNeuronsPrevLayer,)) def feedForward(self, 
aPrevLayer): # Reset Z self.z = 0 # Calculate Z for w, a in zip(self.w, aPrevLayer): self.z += w*a # Add Bias self.z += self.b # Calculate A self.a = sigmoid(self.z) def weightError(self, e, neuronNum): # Weight Error With The Connecting Weight return e * self.w[neuronNum] def updateWeights(self, e, aOfPrevLayer): # The Weight With The Matching Activity Of The Previous Layer for index in range(len(self.w)): # The Delta Of The Weight Is The Error Of That Neuron Mutliplied With The Through The Weight Connected Activiy Of The Previous Layer self.w[index] -= e * aOfPrevLayer[index] def updateBias(self, e): # E Is The Delta Of The Bias1 self.b -= e def setInput(self, x): self.z = x self.a = x def getA(self): return self.a def getZ(self): return self.z def getB(self): return self.b def getW(self): return self.w ---------- **helper functions** def map0to1(val, valMax): return val/valMax def calcTotalError(errorOutputLayer): totalError = 0 for e in errorOutputLayer: totalError += e**2 totalError *= 0.5 return totalError def listSubtract(list1, list2): subbed = [] for l1, l2 in zip(list1, list2): subbed.append(l1-l2) return subbed The New Code Main File import random from network import * from mnistreader import * def map0to1(val, valMax): return val/valMax # THIS WORKS ''' trainX = [ [0.0,0.0], [1.0,0.0], [0.0,1.0], [1.0,1.0] ] trainY = [ [ 0.0], [ 1.0 ], [ 1.0 ], [ 0.0 ] ] # Create Network xorNet = Network((2,2,1)) # Train for index in range(100000): randIndex = random.randint(0, 3) xorNet.train(trainX[randIndex], trainY[randIndex]) print("Prediction After Training:", xorNet.predict(trainX[0]), "Expected:", trainY[0]) print("Prediction After Training:", xorNet.predict(trainX[1]), "Expected:", trainY[1]) print("Prediction After Training:", xorNet.predict(trainX[2]), "Expected:", trainY[2]) print("Prediction After Training:", xorNet.predict(trainX[3]), "Expected:", trainY[3]) ''' mnistNet = Network((784, 30, 10)) # Load Trainigs Data rawImages, rawLabels, numImagePixels = get_data_and_labels("C:\\Users\\Robin\\Documents\\MNIST\\Images\\train-images.idx3-ubyte", "C:\\Users\\Robin\\Documents\\MNIST\\Labels\\train-labels.idx1-ubyte") # Prepare Data print("Start Preparing Data") images = [] labels = [] for i in rawImages: insert = [] for pixel in i: insert.append(map0to1(pixel, 255)) images.append(insert) for l in rawLabels: y = [0] * 10 y[l] = 1 labels.append(y) print("Finished Preparing Data") # Define Variables learningRate = 0.0001 error = 10 # Training while error > 0.1: for tNum in range(len(images)): error = mnistNet.train(images[tNum], labels[tNum], learningRate) print("Error:", error, "\n Prediction:\n", mnistNet.predict(images[1]), "\nExpected:", rawLabels[1], "\n\n") # Test Prediction print("For", rawLabels[1], "Predicted\n", mnistNet.predict(images[1])) Network Class import numpy as np from layer import * from transferfunction import * class Network: def __init__(self, shape): # Save The Shape Of The Nework self.shape = shape # Create A List Of Layers self.layers = [] # Create Input Layer self.layers.append(Layer((shape[0],), layerType = 'Input')) # Create Hidden Layers for numNeurons, numNeuronsPrevLayer in zip(shape[1:], shape[:-2]): self.layers.append(Layer((numNeurons, numNeuronsPrevLayer), layerType = 'Hidden')) # Create Output Layer self.layers.append(Layer((shape[-1], shape[-2]), layerType = 'Output')) def predict(self, x): # X Is A Row So Shape It To Be A Column x = np.array(x).reshape(-1, 1) # Set X To Be The Ouput Of The Input Layer self.layers[0].setOutput(x) # Feed Through Other 
Layers for layerNum in range(1,len(self.layers)): self.layers[layerNum].feedForward(self.layers[layerNum-1].getOutput()) # Return The Output Of The Output Layer return self.layers[-1].getOutput() def train(self, x, y, learningRate): ''' 1. Feed Forward 2. Calculate Error 3. Calulate Deltas 4. Apply Deltas Error Output Layer = f'(z) * (prediction - target) Error Hidden Layer = f'(z) * ( transposed weights next layer DOT error next layer ) Delta Bias = learning rate * error Delta Weights = learning rate * ( error DOT transposed activity previous layer ) ''' # Feed Through Network prediction = self.predict(x) # Y Is A Row So Shape It To Be A Column y = np.array(y).reshape(-1, 1) # Calculate Error error = prediction - y # Calculate Total Error totalError = 0.5 * np.sum(error**2) # Create Container For The Deltas deltas = [] # Calculate Delta For Output Layer deltas.append( np.multiply(sigmoidPrime(self.layers[-1].getZ()), error) ) # Calculate Deltas For Every Hidden Layer for layerNum in range(len(self.layers)-2, 0, -1): # Compute The Weighted Error Of The Next Layer weightedErrorOfNextLayer = np.dot(self.layers[layerNum+1].getW().T, deltas[0]) # Compute The Delta And Add It In Front deltas.insert(0, np.multiply(sigmoidPrime(self.layers[layerNum].getZ()), weightedErrorOfNextLayer) ) # Insert Placeholder To Make Delta List As Big As The Layer List deltas.insert(0, 0) # For Numerical Grandient Checking Do numericalGradients = self.performNumericalGradientChecking(x,y) # Update Weights And Biases for layerNum in range(1, len(self.layers)): self.layers[layerNum].updateBias(deltas[layerNum], learningRate) self.layers[layerNum].updateWeight(deltas[layerNum], self.layers[layerNum-1].getOutput(), learningRate, numericalGradients[layerNum-1]) # Show Information #print('Network Predicted: \n', prediction, '\nTarget:\n', y, '\nError: ', totalError) return totalError def performNumericalGradientChecking(self, x, y): # Container Saving All Current Weight Values weightSave = [] # Save All The Weight Values for layerNum in range(1, len(self.layers)): weightSave.append(self.layers[layerNum].getW()) # Define Epsilon To Be A Small Number epsilon = 1e-4 # Gradient Container numericalGradients = [] #Perform The Check For Every Layer Therfore Every Set Of Weights for layerNum in range(1, len(self.layers)): # Feed Forward With Changed Weights(+epsilon), And Compute Cost self.layers[layerNum].setW(weightSave[layerNum-1] + epsilon) prediction = self.predict(x) loss2 = 0.5 * np.sum ((prediction - y)**2) # Feed Forward With Changed Weights(-epsilon), And Compute Cost self.layers[layerNum].setW(weightSave[layerNum-1] - epsilon) prediction = self.predict(x) loss1 = 0.5 * np.sum ((prediction - y)**2) # Reset Weight self.layers[layerNum].setW(weightSave[layerNum-1]) # Calculate Numerical Loss numericalGradient = (loss2 - loss1) / (2 * epsilon) # Add The Numerical Grandient numericalGradients.append(numericalGradient) return numericalGradients def __str__(self): strBuff = '' for layer in self.layers: strBuff += layer.getInfo() return strBuff Layer Class import numpy as np from transferfunction import * class Layer: def __init__(self, neurons, layerType = 'Hidden'): ''' Neurons Is A Tuple Consisting Of [0]=NumNeurons And [1]=numNeuronsPrevLayer ''' # Remember The Type Of The Layer self.layerType = layerType # Remember How Many Neurons This Layer Has self.neuronCount = neurons[0] # Create Layer Based On Type if layerType.lower() == 'input': # Create Container For Input Data self.a = [] elif layerType.lower() == 'hidden' or 
layerType.lower() == 'output': # Create Container For Activation self.z = [] # Create Container For Neurons Input self.a = [] # Create Weights self.w = np.random.uniform(low = 0.0, high = 0.4, size=(neurons[0], neurons[1])) # Create Prev Delta Weight For Momentum self.momentum = 0.3 self.prevDelta = np.full((neurons[0], neurons[1]), 0) # Create Bias self.b = np.full((neurons[0],1), 0, dtype=float) else: print('Wrong Type Of Layer Specified') def feedForward(self, aPrevLayer): self.z = np.dot(self.w, aPrevLayer) + self.b self.a = sigmoid(self.z) def updateBias(self, e, learningRate): self.b -= learningRate * e def updateWeight(self, e, aPrevLayer, learningRate, numericalGradient): # Calulate The Delta Of The Weights deltaW = np.dot(e, aPrevLayer.T) # Compare DeltaW With The Numerical Grandient check = np.linalg.norm(deltaW - numericalGradient) / np.linalg.norm(deltaW + numericalGradient) # DEBUG print(check) # The Weight Change Is The Delta With The Addition Of The Momentum self.w -= ( learningRate * deltaW ) + ( self.momentum * self.prevDelta) # Save The Current DeltaW self.prevDelta = deltaW def getW(self): return self.w def getZ(self): return self.z def getOutput(self): return self.a def setOutput(self, x): self.a = x def setW(self, w): self.w = w def getInfo(self): if self.layerType.lower() == 'input': return 'Input Layer With ' + str(self.neuronCount) + ' Neurons\n' else: return self.layerType + ' Layer With ' + str(self.neuronCount) + ' Neurons And Weights Of Shape: ' + str(self.w.shape) + ' With Biases Of Shape: ' + str(self.b.shape) + '\n' So my Question simply is: "What's wrong?" Problems: Error gets stuck at 0.45 When using a hidden layer with 800 neurons I get the warning division by zero and all becomes NaN's from sigmoid prime Numerical gradient checking for the hidden Layer is: 1.0 and for the output layer: 0.995784895209. I know that is supposed to be a very small number. But on the second trainings example it creates a overflow error and becomes NaN's Major Edit I'm truly grateful for all suggestions so far, I've updated the question using a vectorized form so it's easier to get an overview of what I'm doing here. I tried gradient checking now too, not sure if I implemented it right (used the tutorial by Welch Labs (https://youtu.be/pHMzNW8Agq4)) I hope the Code is readable AI: Your network has 28 x 28 = 784 (normal MNIST size) inputs, 16 + 16 hidden nodes and 10 outputs. This is not enough for an enough accurate model as a result. This question suggests to use 256 x 256 hidden nodes and Wikipedia page on MNIST gives for 2-layer reference the values: 784-800-10 meaning 800 x 10 nodes. Wikipedia gives error rate 0.016 for that 800 x 10 solution. Sidenote: For 6-layer Deep Neural Network the numbers are : 784-2500-2000-1500-1000-500-10 so the new numbers I gave aren't that big. Of course the error rate on DNN is 0.0035 so it needs those layers. edit: Do I have to change the learning rate which I now defined to be 0.3? Answer in the referred question says 0.0001 is more appropriate value. Edit of question author I marked this as solved because now it is working. I implemented it again and after tweaking the hyperparameters it worked. So this is, in a way the solution to my problem. Because I asked why this problem occurs in the first place, my implementation error aside.
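Following the suggestions above, and reusing the class and variable names from the question's own code (so this assumes the rest of the asker's Network implementation is correct), the two changes would look roughly like:

# Wider hidden layer, following the 784-800-10 reference architecture
mnistNet = Network((784, 800, 10))

# Much smaller learning rate than 0.3
learningRate = 0.0001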
H: How to define weights on Keras neural network I have a neural network model written in Keras with (8,5,5,5,32) neurons, as follows: # Sequential model = Sequential() # Neural network model.add(Dense(5, input_dim=len(X[0]), activation='sigmoid' )) model.add(Dense(5, activation='sigmoid' )) model.add(Dense(5, activation='sigmoid' )) model.add(Dense(len(y[0]), activation='sigmoid' )) # Compile model # sgd = optimizers.SGD(lr=0.01, decay=0.0, momentum=0.0, nesterov=False) model.compile(loss='mean_squared_error', optimizer='adam', metrics=['acc']) # Fit model history = model.fit(X, y, nb_epoch=200, validation_split=0.2, batch_size=30) When I access the weights after training, I get: model.get_weights() which returns a long array of arrays corresponding to the weights in between neurons of each layer. What I don't understand is a smaller array represents (same number of weights as number of neurons per layer) in between larger arrays (same number of sub-arrays as number of neurons per layer). Is this the biais that the neurons experience? Here are the results of 'model.get_weights()': [array([[ 0.31680015, 0.22357693, -0.63079047, -0.04600599, -0.26949674], [ 0.0525099 , -0.41120723, 0.28259486, -0.37071031, 0.42028651], [ 0.35435981, -0.35501873, -0.05099263, 0.3633016 , -0.64845532], [ 0.60027206, 0.40594664, 0.29894602, -0.13255124, 0.52797431], [-0.32299024, 0.54219592, 0.34114835, -0.59672344, -0.47126439], [-0.51338726, 0.64451784, -0.35283062, 0.47248691, -0.31077194], [-0.59289241, -0.207461 , 0.00371859, -0.52090681, 0.10946763], [-0.37216368, -0.23905358, -0.38580573, 0.0401655 , -0.34231418]], dtype=float32), array([-0.03448975, 0.04991768, 0.11635038, -0.04274927, 0.0325128 ], dtype=float32), # WHAT IS THIS ARRAY REPRESENTING? array([[ 0.10943446, -0.24749899, 0.58269709, 0.54208171, -0.05888808], [ 0.02320727, 0.08465887, -0.79114383, -0.19608408, 0.55898732], [-0.81141329, 0.19124934, -0.69268334, -0.44021448, 0.72605485], [ 0.32895803, 0.08196118, 0.53820646, 0.6348688 , -0.06715827], [-0.0850288 , 0.5077976 , 0.36972848, 0.44874495, 0.36402631]], dtype=float32), array([-0.30626452, 0.15301916, -0.17855364, 0.12410269, 0.2502442 ], dtype=float32), array([[-0.31872895, -0.46534333, -0.4664084 , -0.23720025, 0.30465502], [-0.37690881, -0.00396255, 0.38115206, 1.20845091, 0.69348788], [ 0.15064301, -0.29923961, 0.13108611, -0.29579154, -0.34181508], [ 0.62893951, 0.49498206, 0.02549251, 0.6561147 , -0.52280194], [ 1.07029617, 0.66126752, 0.50944209, 0.58811921, -0.04030331]], dtype=float32), array([ 0.57747394, 0.59574115, 0.68391681, 0.78335029, 0.26046163], dtype=float32), array([[ -1.09961331e+00, -7.06328213e-01, -7.93772519e-01, -5.50112486e-01, -5.05448937e-01, -4.78618711e-01, -3.32313687e-01, -5.46549559e-01, 3.88661295e-01, 3.64094585e-01, -1.93489313e-01, -7.61669278e-02, 1.23761639e-01, 4.93125141e-01, 4.78168607e-01, -7.07402304e-02, -2.54306406e-01, -2.37895012e-01, -1.36467636e-01, -7.16407061e-01, 1.32701367e-01, -4.23079096e-02, -4.71717492e-03, 2.56372184e-01, 1.89603701e-01, 2.20276624e-01, -2.55215704e-01, -1.04997739e-01, 2.81909049e-01, -7.00806752e-02, 2.99933195e-01, 3.84294897e-01], [ -9.60830688e-01, -6.18087530e-01, -8.46309483e-01, -6.05162561e-01, -2.39057451e-01, 3.03133931e-02, -2.07459703e-01, -6.84834659e-01, 3.47823203e-01, -6.44357502e-02, 2.55657077e-01, -2.37801671e-01, 2.57411227e-02, -1.01771923e-02, -6.76048512e-04, 3.06296106e-02, 2.05646217e-01, 8.02281871e-02, -4.01538044e-01, -5.49115181e-01, 3.00252885e-01, 3.31445992e-01, 
-1.75046876e-01, -3.36513370e-01, 1.65666446e-01, 1.74015135e-01, -2.15066984e-01, 3.79294544e-01, 1.67991996e-01, 2.39770293e-01, -5.49201332e-02, -1.67401493e-01], [ -1.24525476e+00, -5.97414970e-01, -1.36500984e-01, -5.60880482e-01, -4.26550537e-01, 1.46522000e-01, -6.26730978e-01, -8.33723724e-01, -2.20034972e-01, 3.54697943e-01, 2.86612272e-01, 2.60758907e-01, -5.10771237e-02, 1.91444799e-01, 3.32518548e-01, 1.51452944e-01, -2.18744278e-01, -2.07690187e-02, 9.60563496e-02, -2.26809219e-01, 8.80904198e-02, 2.33646557e-01, 2.45599806e-01, 2.53560930e-01, 1.55982673e-01, 6.49829209e-01, -1.26019821e-01, 5.47675073e-01, 3.28564644e-01, 8.67465809e-02, -1.40921310e-01, -2.35581279e-01], [ -1.51756608e+00, -1.11596704e+00, -4.97624874e-01, -4.95555460e-01, -4.83801186e-01, -2.23367065e-01, -1.08115244e+00, -9.68795598e-01, 4.32208836e-01, 1.16957083e-01, -1.02919623e-01, -8.19303747e-03, 4.21310306e-01, 1.09493546e-01, 1.54512182e-01, -1.46762207e-01, 1.58293337e-01, -3.95552874e-01, -2.07770184e-01, -1.90177709e-01, 2.07072627e-02, 6.61122620e-01, 5.44478893e-01, -1.46910429e-01, 4.22070086e-01, 2.49319345e-01, 6.19665794e-02, 9.74300727e-02, 3.37298632e-01, 2.90907085e-01, 8.78930092e-02, 1.25872776e-01], [ -7.40014732e-01, -7.02967405e-01, -1.42469540e-01, 1.66655079e-01, -1.59682855e-01, -2.07361296e-01, -9.04432237e-02, -1.22986667e-01, -3.28961462e-01, 9.21192244e-02, -2.13514805e-01, -1.59033865e-01, -2.79709876e-01, -1.64602488e-01, 1.96248814e-01, -1.98676869e-01, 2.80951142e-01, -4.50290412e-01, 2.34707281e-01, -3.13370705e-01, -2.24865358e-02, 2.63352484e-01, 4.90205318e-01, 1.96813330e-01, 4.13736820e-01, 7.11815134e-02, -1.92510381e-01, 5.77223562e-02, -4.25750390e-02, -3.79479416e-02, 6.17611647e-01, 3.11740283e-02]], dtype=float32), array([-1.66324592, -1.4556272 , -1.10267663, -0.52273899, -0.59953606, -0.52498704, -1.15079761, -1.19500864, 0.13858946, -0.01602219, 0.02289196, 0.2923668 , 0.14240141, 0.19787838, 0.15867165, 0.24876554, 0.21616565, -0.35774839, -0.40798596, -0.71499836, 0.36547631, 0.48771939, 0.40948448, 0.01851768, 0.56893039, 0.93250728, -0.32663229, 0.34419572, 0.15536141, 0.21145843, 0.53275597, 0.0364211 ], dtype=float32)] AI: You are correct! The array you see between those for your weights are the bias of the node in the starting layer. Notice how there are 5 elements in that array, this coincides with the 5 nodes in your first hidden layer.
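If it helps to see the pairing explicitly, Keras also lets you read the parameters layer by layer; each Dense layer returns its kernel (weight matrix) together with its bias vector (a short illustrative snippet using the model from the question):

for i, layer in enumerate(model.layers):
    kernel, bias = layer.get_weights()   # Dense layers return [weights, biases]
    print("layer", i, "kernel shape:", kernel.shape, "bias shape:", bias.shape)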
H: cosine_similarity returns matrix instead of single value I am using the code below to compute the cosine similarity between 2 vectors. It returns a matrix instead of a single value 0.8660254. [[ 1. 0.8660254] [ 0.8660254 1. ]] from sklearn.metrics.pairwise import cosine_similarity vec1 = [1,1,0,1,1] vec2 = [0,1,0,1,1] print(cosine_similarity([vec1, vec2])) AI: Based on the documentation, cosine_similarity(X, Y=None, dense_output=True) returns an array with shape (n_samples_X, n_samples_Y). Your mistake is that you are passing [vec1, vec2] as the first input to the method, so you get back the full 2 x 2 matrix of pairwise similarities. Instead, pass each vector as its own 2-D row array: from sklearn.metrics.pairwise import cosine_similarity import numpy as np vec1 = np.array([[1,1,0,1,1]]) vec2 = np.array([[0,1,0,1,1]]) #print(cosine_similarity([vec1, vec2])) print(cosine_similarity(vec1, vec2)) The documentation describes the expected input as X : ndarray or sparse array, shape: (n_samples_X, n_features). So you have to specify the sample dimension explicitly; note that np.array([1, 2]) has the one-dimensional shape (2,), which is why the vectors above are wrapped in double brackets.
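If you want the similarity as a plain scalar rather than a 1 x 1 array, just index into the result of the code above:

score = cosine_similarity(vec1, vec2)[0, 0]
print(score)   # 0.8660254...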
H: Can't interpret the text information and ratings matrix imported to NN I have a Recommender system which uses a Collaborative bayesian approach using pSDAE for recommending scientific articles from the Citeulike Dataset The text information (as input to pSDAE) is in the file mult.dat and the rating matrix (as input for the MF part) is in the file cf-train-1-users.dat and is loaded using the following code: def get_mult(): X = read_mult('mult.dat',8000).astype(np.float32) return X def read_user(f_in='cf-train-1-users.dat',num_u=5551,num_v=16980): fp = open(f_in) R = np.mat(np.zeros((num_u,num_v))) for i,line in enumerate(fp): segs = line.strip().split(' ')[1:] for seg in segs: R[i,int(seg)] = 1 return R The raw data is in proper Excel format with citations as doc-id, title, citeulike-id, raw-title, raw-abstract. The mult.dat file containing the text information looks like: 63 1:2 1666:1 132:1 901:1 1537:2 8:1 9:1 912:1 The trainusers.dat file looks like: 10 1631 3591 10272 14851 4662 13172 12684 5324 3595 3404 Here is the link to the ipynb for the whole Recommender system: https://github.com/js05212/MXNet-for-CDL/blob/master/collaborative-dl.ipynb AI: I am the author of the CDL paper. For the mult.dat file, in 63 1:2 1666:1 132:1 901:1 1537:2 8:1 9:1 912:1 63 is the number of words for this document, 1:2 means word 1 appears twice in the document, 1666:1 means word 1666 appears once in the document, etc. For the trainusers.dat file, in 10 1631 3591 10272 14851 4662 13172 12684 5324 3595 3404 10 is the number of positive samples for this user, and the rest is a list of 10 items that are related to (liked by) this user. You can check the README file in the Datasets collection for more details on the datasets.
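Based on that description, here is a small illustrative parser for a single mult.dat line (a hypothetical helper written for clarity, not part of the released code):

def parse_mult_line(line):
    """Parse '63 1:2 1666:1 ...' into (num_words, {word_id: count})."""
    parts = line.strip().split()
    num_words = int(parts[0])
    counts = {int(w): int(c) for w, c in (p.split(':') for p in parts[1:])}
    return num_words, counts

n, counts = parse_mult_line('63 1:2 1666:1 132:1 901:1 1537:2 8:1 9:1 912:1')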
H: What do you pass for the cv parameter in the sklearn method cross_val_score In sklearn, there is a method for cross validation called cross_val_score. One of the parameters of this method is 'cv'. I understand that in cross validation there is no single split of the data into training and validation (e.g. a 70-30 split). Instead, you split the data into 'k' subsamples, train on k-1 of them and validate on the k-th one, and repeat this for each of the 'k' subsamples. So is this cv = k, i.e. the number of subsamples into which you split the training data? AI: Yes. The cv parameter determines the splitting strategy used by sklearn: if you pass an integer, it is the number of folds k (you can also pass a CV splitter object or an iterable of train/test index splits). The default (cv=None) is 3-fold CV. See the cross_val_score documentation for details.
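For illustration (standard sklearn usage, nothing specific to your data), both of these are valid ways to set cv:

from sklearn.model_selection import cross_val_score, KFold
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
model = LogisticRegression()

scores_int = cross_val_score(model, X, y, cv=5)   # k = 5 folds
scores_obj = cross_val_score(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))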
H: How to check for overfitting with SVM and Iris Data? I am using machine learning predictions for the sample iris dataset. For instance, I am using the support vector machines (SVMs) from scikit-learn in order to predict the accuracy. However, it returns an accuracy of 1.0. Here is the code I am using: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=51) svm_model = svm.SVC(kernel='linear', C=1, gamma='auto') svm_model.fit(X_train,y_train) predictions = svm_model.predict(X_test) accuracy_score(predictions, y_test) How do I find out or measure whether this is over-fitting or whether the model is just that good? I assume that it's not over-fitting, but what are the best ways to validate this? AI: You check for hints of overfitting by using a training set and a test set (or a training, validation and test set). As others have mentioned, you can either split the data into training and test sets, or use cross-fold validation to get a more accurate assessment of your classifier's performance. Since your dataset is small, splitting your data into training and test sets isn't recommended. Use cross validation. This can be done using either the cross_validate or cross_val_score function; the former allows multiple metrics for evaluation and, in addition to test scores, also provides fit times and score times. Using your example; from sklearn.model_selection import train_test_split from sklearn.datasets import load_iris from sklearn import svm from sklearn.metrics import accuracy_score iris = load_iris() X = iris.data[:, :5] # take all four features y = iris.target X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=51) svm_model = svm.SVC(kernel='linear', C=1, gamma='auto') svm_model.fit(X_train,y_train) predictions = svm_model.predict(X_test) accuracy_score(predictions, y_test) raw accuracy: 0.96666666666666667 Using the cross_val_score function, and printing the mean score and 95% confidence interval of the score estimate: from sklearn.model_selection import cross_val_score scores = cross_val_score(svm_model, iris.data, iris.target, cv=5) print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2)) Accuracy: 0.98 (+/- 0.03) Of course the iris dataset is a toy example. On larger real-world datasets you are likely to see your test error be higher than your training error, with cross-validation providing a lower accuracy than the raw number. So I wouldn't use the iris dataset to showcase overfitting. Choose a larger, messier dataset, and then you can start working towards reducing the bias and variance of the model (the "causes" of overfitting). Then you can start exploring tell-tale signs of whether it's a bias problem or a variance problem. See here: https://www.quora.com/How-many-training-samples-are-needed-to-get-a-reliable-model-in-ML/answer/Sean-McClure-3?srid=zGgv
H: How does k fold cross validation work? You split the data in k subsamples. Train it on k-1 subsamples, test it on kth subsample, record the performance with some error merric. Do it k times for each of the k subsamples, record the error each time. Then choose the model with the lowest error? Is it the same as ensemble technique? AI: Imagine you have 1500 labeled data points, and you want to estimate how well some classifier will work on new data. One (naive) method would be to train a model with all 1500 of your data points, and then check how many of the 1500 data points were classified correctly. This is not likely to give a good estimate of the performance on new data, because new data was not used to test the model. Some models like decision trees and neural networks will often be able to get 100% accuracy on the training data, but perform much worse on new data. So you think to yourself that you will split the data into two sets - a training set which you will build a model with, and a testing set that you will use to evaluate the model. Lets say you decided to train the model with 1000 of your examples, and evaluate with 500. This should give a reasonable estimate of how well your model will perform on new data, but it seems a bit limited; after all, one third of your data has not been used for training at all! We only have predictions for the 500 test samples - if these ones randomly happened to be easier to classify correctly on average, then our performance estimate is overly optimistic. Cross validation is a way to address this. Lets set $k=3$, so the data is split into three sets of 500 points (A, B and C). Use A & B to train a model, and get predictions for C with this model. Use B & C to train a model, to get predictions for A. Finally, use A & C to train a model, and get predictions for B. Now we have a prediction for every point in our labeled data that came from a model trained on different data. By averaging the performance of each of these models, we can end up with a better estimate of how well the model will perform on new data. Note that you should then re-train your model using all 1500 labeled points if you want to apply it to new data. Cross validation is only for estimating the performance of this new model. Also if your data is large enough, cross validation is probably unnecessary and you could just make a single train/test or train/valid/test split.
H: Isolation Forest height limit absent in SkLearn implementation In the original publication of the Isolation Forest algorithm, the authors mention a height limit parameter to control the granularity of the algorithm. I did not find that explicit parameter in the sklearn implementation of the algorithm, and I was wondering whether it is possible to control granularity in some other way? AI: Unfortunately, it seems like there is no hlim parameter incorporated into sklearn.ensemble.IsolationForest. The calculation of the anomaly score is based only on the depth each point settles to and on the average path length. The only way to tune things a bit is the contamination parameter, which sets the threshold applied to the anomaly score. To achieve the granularity shown in the original paper, where hlim=6 is used to detect a small cluster of points, using a lot of estimators may solve the problem (it still depends heavily on how data from the smaller cluster gets sampled into the many estimators). But if that small cluster contains very few points, I don't think this idea works, and there is not much more we can do with the current implementation in sklearn. Hope this helps.
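A minimal sketch of the two knobs mentioned above, contamination and n_estimators (the parameter values are arbitrary placeholders):

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, size=(500, 2)),     # bulk of normal points
               rng.normal(6, 0.3, size=(15, 2))])   # small anomalous cluster

iso = IsolationForest(n_estimators=500,       # many estimators, as suggested above
                      max_samples=256,
                      contamination=0.03,     # threshold on the anomaly score
                      random_state=0)
labels = iso.fit_predict(X)                   # -1 = anomaly, 1 = normal
print("flagged as anomalies:", (labels == -1).sum())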
H: Image similarity without perspective I want to determine the similarity between images based on different features. The images show the same type of object (e.g. cars). I want to order images based on their similarity (e.g. through a feature vector). There are ways to solve this, for example a convolutional neural network. However, the images may be taken from slightly different perspectives: sometimes directly from the front, sometimes at a slight angle. What I DON'T want is images ordered by their perspective rather than by the actual object. My questions: What can I do to avoid the ordering being driven primarily by perspective? Do I have to expect the angle of the picture to be a primary factor in the sorting order, or will it more likely be negligible? Is there a way to extract/visualise the features the network uses (unsupervised) in order to manipulate the sorting order (e.g. by saying I want to increase the weight of THIS visual feature when ordering)? AI: Generative Adversarial Networks can be tolerant of this type of perturbation. In fact, if you look at figure 8 of the DCGAN paper, the authors demonstrate how the GAN learned to encode perspective directly, such that the authors are able to use the GAN to rotate the camera on an image by applying a "turn vector".
H: How to use different classes of words in CountVectorizer() Suppose I have a piece of writing and I want to assign probabilities to different genres (classes) based on its contents. For example Text #1 : Comedy 10%, Horror 50%, Romance 1% Text #2 : Comedy 40%, Horror 3%, Romance 30% We have given keywords in each class through which we make a comparison. Below is the code that explains this scenario better from sklearn.feature_extraction.text import CountVectorizer from sklearn.metrics.pairwise import cosine_similarity import numpy as np # Comedy keywords_1 = ['funny', 'amusing', 'humorous', 'hilarious', 'jolly'] # Horror keywords_2 = ['horror', 'fear', 'shock', 'panic', 'scream'] # Romance keywords_3 = ['romantic', 'intimate', 'passionate', 'love', 'fond'] text = ('funny hilarious fear passionate') cv1 = CountVectorizer(vocabulary = keywords_1) data = cv1.fit_transform([text]).toarray() vec1 = np.array(data) vec2 = np.array([[1, 1, 1, 1, 1]]) print(cosine_similarity(vec1, vec2)) cv2 = CountVectorizer(vocabulary = keywords_2) data = cv2.fit_transform([text]).toarray() vec1 = np.array(data) vec2 = np.array([[1, 1, 1, 1, 1]]) print(cosine_similarity(vec1, vec2)) cv3 = CountVectorizer(vocabulary = keywords_3) data = cv3.fit_transform([text]).toarray() vec1 = np.array(data) vec2 = np.array([[1, 1, 1, 1, 1]]) print(cosine_similarity(vec1, vec2)) The problem with this approach is that vocabulary in CountVectorizer() doesn't consider different word classes (Nouns, Verbs, Adjectives, Adverbs, plurals, etc.) of a word in a text. For example, let's say we have a keywords list as below keywords_1 = [(...), ('amusement', 'amusements', 'amuse', 'amuses', 'amused', 'amusing'), (...), ('hilarious', 'hilariously') (...)] and we want to compute similarity as follows cv1 = CountVectorizer(vocabulary = keywords_1) data = cv1.fit_transform([text]).toarray() vec1 = np.array(data) # [[f1, f2, f3, f4, f5]]) # fi is the count of number of keywords matched in a sublist vec2 = np.array([[n1, n2, n3, n4, n5]]) # ni is the size of sublist print(cosine_similarity(vec1, vec2)) How can we modify the above code to capture this scenario? Any advice is appreciated. AI: First of all, your question is about stemming words, which is available in any Python NLP library such as spaCy or NLTK, as the other answer mentions. The second point, contrary to the other answer, is that what libraries ship as stop-word lists are not necessarily stop words for your problem, so do not remove them blindly! In NLP, stop words should be extracted based on the working corpus, not based on a predefined list. In practice, removing this kind of stop word usually reduces performance on domain-specific corpora. The third point is that, depending on the classifier and loss function you use, TF-IDF might be better than CountVectorizer. I suppose it works better especially if log loss is the cost function, but I am not sure. Just give it a try.
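One concrete way to handle the inflected forms is to stem both the keyword lists and the input text with the same stemmer before counting. A rough sketch using NLTK's PorterStemmer (the keyword list is shortened; note the stemmed vocabulary must be de-duplicated because several surface forms collapse to one stem):

from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer

stemmer = PorterStemmer()

keywords_1 = ['funny', 'amusing', 'amusement', 'amused', 'humorous',
              'hilarious', 'hilariously', 'jolly']

# stem the vocabulary; the set() removes duplicates such as 'amus'
stemmed_vocab = sorted({stemmer.stem(w) for w in keywords_1})

def stem_text(doc):
    return ' '.join(stemmer.stem(tok) for tok in doc.split())

text = 'funny hilariously amusements fear passionate'

cv = CountVectorizer(vocabulary=stemmed_vocab)
counts = cv.fit_transform([stem_text(text)]).toarray()
print(dict(zip(stemmed_vocab, counts[0])))

Lemmatization (e.g. spaCy's token.lemma_) works the same way and usually produces cleaner forms than stemming.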
H: Is PCA considered a machine learning algorithm I've understood that principal component analysis is a dimensionality reduction technique, i.e. given 10 input features, it will produce a smaller number of independent features that are orthogonal and are linear transformations of the original features. Is PCA by itself considered a learning algorithm, or is it a data pre-processing step? AI: It's not uncommon for someone to label it as an unsupervised technique. You can do some analysis on the eigenvectors and that helps explain the behavior of the data. Naturally, if your transformation still has a lot of features, then this process can be pretty hard. Nevertheless it's possible, and thus I consider it machine learning. Edit: Since my answer was selected (no idea why) I figured I'll add more details. PCA does two things which are equivalent. First, and this is the way it is most commonly described, it maximizes the variance captured by the new features. Secondly, it minimizes the reconstruction error, which can also be viewed in terms of pairwise distances. By looking at the eigenvectors and eigenvalues, it becomes rather simple to deduce which variables and features are contributing to the variance and also how different variables move in conjunction with others. In the end, it really depends on how you define "learning". PCA learns a new feature space that captures the characteristics of the original space. I tend to think that can be meaningful. Is it complex? No, not really, but does that diminish it as an algorithm? No, I don't think so.
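For concreteness, here is a small sketch of the eigenvector/eigenvalue inspection described above, using scikit-learn on the iris data (any dataset would do):

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data                   # 150 samples x 4 features
pca = PCA(n_components=2).fit(X)

# eigenvalue side: how much variance each principal component explains
print(pca.explained_variance_ratio_)   # roughly [0.92, 0.05] for iris

# eigenvector side: how strongly each original feature loads on each component
print(pca.components_)

The loadings in components_ are what let you read off which original variables drive the variance and how they move together.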
H: How does KNN handle categorical features For a K nearest neighbors algorithm using a Euclidean distance metric, how does the algorithm compute euclidean distances when one(or all) of the features are categorical? Or does it just go by the most commonly occurring value among the neighbors? So e.g. if the 2 features of 3 neighbors are age and gender with values: age,gender=[ [20,M], [31,F], [23,M] ], and we need to pick the 2 nearest neighbors for a new observation [20,F], how do we convert the gender to a numeric scale to compute euclidean distances? AI: It doesn't handle categorical features. This is a fundamental weakness of kNN. kNN doesn't work great in general when features are on different scales. This is especially true when one of the 'scales' is a category label. You have to decide how to convert categorical features to a numeric scale, and somehow assign inter-category distances in a way that makes sense with other features (like, age-age distances...but what is an age-category distance?). If all features are categorical, and inter-category distances are all treated as somehow equal, your job actually gets a little easier since you aren't converting between categorical and scalar scales.
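A common workaround in practice is to one-hot encode the categories and rescale the numeric features so that no single feature dominates the distance; the choice of scaling is itself a judgment call, exactly as described above. A rough sketch with pandas and scikit-learn:

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

df = pd.DataFrame({'age': [20, 31, 23], 'gender': ['M', 'F', 'M']})
y = [0, 1, 0]                                   # made-up labels

scaler = StandardScaler().fit(df[['age']])
X = pd.get_dummies(df, columns=['gender'])      # gender -> gender_F, gender_M (0/1)
X[['age']] = scaler.transform(X[['age']])

knn = KNeighborsClassifier(n_neighbors=2).fit(X, y)

new = pd.DataFrame({'age': [20], 'gender': ['F']})
new = pd.get_dummies(new, columns=['gender']).reindex(columns=X.columns, fill_value=0)
new[['age']] = scaler.transform(new[['age']])
print(knn.predict(new))

Note that this silently decides that any two different categories are equally far apart (a fixed distance between one-hot vectors), which is exactly the kind of arbitrary choice the answer warns about.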
H: Convolutional Neural networks Hi all: I have a very fundamental question on how CNNs work. I understand fully the training process as to take a bunch of images, start with random filters, convolve, activate, calculate loss, back propagate and learn weights. Fully understood. But once the training is done, the last convolution layer has the most complex and complete features like faces, ears, wheels and such filters that can get activated by full features. If that is so, during testing, do we need to pass our images through all the layers again? Why don't we convolve our images against the last convolution layer and see how many of these complex feature filters get activated? And pass that on to the fully connected layer for classification? I understand there might be inconsistencies in the layers and the inputs, but apart from that, is there anything more important? AI: Why don't we convolve our images against the last convolution layer and see how many of these complex feature filters get activated? The answer is that all the layers are fully dependent on the exact features of the previous layer. The last layer is simply not capable of taking a raw image as input and outputting meaningful values. Most likely it is not even going to be able to fit to the shape of the output, it will require dozens of channels, where a colour image only has 3. If you somehow forced it to fit, then the colour image data is meaningless to it. It might still be made to output something, but it would essentially be gibberish, and unrelated to the desired outcome. Each layer is a function of the previous layer. Ignoring the details of convolution, a neural network is essentially a composition of multiple functions (let's call them $f, g, h, i, j$ for example) so that: $$y = j(i(h(g(f(x)))))$$ You are essentially asking here, can you just do $y = j(x)$ instead of running all those functions in sequence. And the answer is no.
H: Ways to convert textual data to numerical data I've been looking for ways to wrangle my data which contains both text and numerical attributes. There are of course several algorithms for numerical data, but I am looking for suggestions regarding how to deal with textual data, for instance: for sorting based on K-means clustering and predicting missing data using kNN. I would really appreciate any thoughts regarding that. I am using scikit-learn. AI: Do you really mean a textual attribute or just a categorical attribute? E.g. if an attribute has three values $a$, $b$ and $c$, it does not mean that you need to work with text; those are just categories. Here I assume you really have a textual attribute, e.g.
| attr1: Age | attr2: msg                        |
| ---------- | --------------------------------- |
| 45         | I do NLP                          |
| 21         | I do math                         |
| 34         | I am a mathematician who does NLP |
In this case you have 2 options: either go the classic way with a Bag of Words representation of the text data, counting the frequency of terms (words/bi-grams/tri-grams/etc.) and seeing how it works, or try to shortcut the process if you have a specific corpus with specific info to be extracted, e.g. in the example above a "Keyword Extraction" after a Stemming step will give you a vocabulary of fields, in each of which a person can get a value ($0,1$). For K-means, one should keep in mind that it works with numerical data. So if you want to use it, you first need to embed your text into an n-dimensional space using text embedding algorithms (word2vec, doc2vec, etc.) or even term frequencies (as they are numbers). The problem with text embeddings is that unless you want to train your own Neural Network (which needs a decent amount of data), you have to use pre-trained NNs, which might not work well if your corpus is about a very specific kind of text. The other point is that the kNN algorithm is supervised and k-means is unsupervised. So be careful that you understand the underlying concepts properly, otherwise you will end up with improper results.
H: Getting unexpected result while using CountVectorizer() I am trying to use CountVectorizer() in a loop, But I am getting an unexpected result. On the other hand, if I use it outside the loop then it works fine. I believe there is some small problem with the logic. from sklearn.feature_extraction.text import CountVectorizer keys_1 = ['funny', 'amusing', 'humorous', 'hilarious', 'jolly'] keys_2 = ['horror', 'fear', 'shock', 'panic', 'scream'] keys_3 = ['romantic', 'intimate', 'passionate', 'love', 'fond'] text = ('funny amusing fear passionate') for i in range(3): keys = 'keys_' + str(i+1) cv = CountVectorizer(vocabulary = keys) data = cv.fit_transform([text]).toarray() print(data) cv1 = CountVectorizer(vocabulary = keys_1) data = cv1.fit_transform([text]).toarray() print(data) cv2 = CountVectorizer(vocabulary = keys_2) data = cv2.fit_transform([text]).toarray() print(data) cv3 = CountVectorizer(vocabulary = keys_3) data = cv3.fit_transform([text]).toarray() print(data) Output [[0 0 0 0 0 0]] [[0 0 0 0 0 0]] [[0 0 0 0 0 0]] [[1 1 0 0 0]] [[0 1 0 0 0]] [[0 0 1 0 0]] AI: The problem lies in the line: keys = 'keys_' + str(i+1) Here, keys becomes a string variable while I guess you would expect it to take the value of the list you defined in the first lines.. Try instead with a dictionary: my_keys= { "keys_1" : ['funny', 'amusing', 'humorous', 'hilarious', 'jolly'], "keys_2" : ['horror', 'fear', 'shock', 'panic', 'scream'], "keys_3" : ['romantic', 'intimate', 'passionate', 'love', 'fond'] } text = ('funny amusing fear passionate') for i in range(3): keys = my_keys['keys_' + str(i+1)] cv = CountVectorizer(vocabulary = keys) data = cv.fit_transform([text]).toarray() print(data)
H: Does class_weight solve unbalanced input for Decision Tree? I've read in sklearn's documentation that we have to take special care in balancing the input for a decision tree, but it doesn't tell you what function to use. However, I've found the parameter class_weight. If I use class_weight='balanced' as a parameter, will that mean that I can omit balancing the input by hand? AI: Yes. With class_weight='balanced' the tree weights each class inversely proportional to its frequency in the training data, so you don't need to resample or balance your training data by hand. Your test data, however, can (and should) still be as truly unbalanced as it is in reality, so choose your evaluation metrics accordingly (e.g. precision/recall rather than plain accuracy).
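A minimal sketch of the parameter in use, on a toy dataset with an arbitrary 95/5 class ratio:

from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

tree = DecisionTreeClassifier(class_weight='balanced', random_state=0)
tree.fit(X_train, y_train)            # no manual resampling of the training data
print(classification_report(y_test, tree.predict(X_test)))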
H: CNN - How does backpropagation with weight-sharing work exactly? Consider a Convolutional Neural Network (CNN) for image classification. In order to detect local features, weight-sharing is used among units in the same convolutional layer. In such a network, the kernel weights are updated via the backpropagation algorithm. An update for the kernel weight $h_j$ in layer $l$ would be as follows: $h_j^l = h_j^l - \eta \cdot \frac{\partial R}{\partial h_j^l} = h_j^l - \eta \cdot \frac{\partial R}{\partial x_j^{L}} \cdot \frac{\partial x_j^{L}}{\partial x_j^{L - 1}} \cdot ... \cdot \frac{\partial x_j^{l}}{\partial h_j^l}$ How can the kernel weights be updated and still be the same (=shared)? I have 2 possible explanations: Weights of the same layer, which are initialized to the same value, will stay the same (independently of the input). This would imply that the expression $\frac{\partial R}{\partial h_j^l}$ is the same for all of these weights $h_1^l$ to $h_J^l$. This does not make sense, since $x_j^l$ is different for different j's. Or am I missing something here? There is a trick, e.g. after the back-propagation update, the shared weights are set to their mean. EDIT The confusion I had was that I didn't consider that if a weight is shared, its parameter $h_j^l$ appears several times in the loss function. When differentiating for $h_j^l$, several terms (each with its corresponding inputs) will "survive". Therefore the updates will be the same. AI: I think you're misunderstanding what "weight sharing" means here. A convolutional layer is generally comprised of many "filters", which are usually small, e.g. 2x2 or 3x3. These filters are applied in a "sliding window" across the entire layer's input. The "weight sharing" is using fixed weights for this filter across the entire input. It does not mean that all of the filters are equivalent. To be concrete, let's imagine a 2x2 filter $F$ striding a 3x3 input $X$ with no padding and stride 1, so the filter gets applied 4 times. Let's denote the unrolled filter $\beta$. $$X = \begin{bmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \end{bmatrix}$$ $$F = \begin{bmatrix} w_{11} & w_{12} \\ w_{21} & w_{22} \end{bmatrix}$$ $$\beta= [w_{11}, w_{12}, w_{21}, w_{22} ]$$ $$F*X = \begin{bmatrix} \beta \cdot [x_{11}, x_{12}, x_{21}, x_{22}] & \beta \cdot [x_{12}, x_{13}, x_{22}, x_{23}] \\ \beta \cdot [x_{21}, x_{22}, x_{31}, x_{32}] & \beta \cdot [x_{22}, x_{23}, x_{32}, x_{33}] \end{bmatrix} $$ "Weight sharing" means when we apply this 2x2 filter to our 3x3 input, we reuse the same four weights given by the filter across the entire input. The alternative would be each filter application having its own set of weights (which would really be a separate filter for each region of the image), giving a total of 16 weights, or a dense layer with 4 nodes giving 36 weights. Sharing weights in this way significantly reduces the number of weights we have to learn, making it easier to learn very deep architectures, and additionally allows us to learn features that are agnostic to what region of the input is being considered. EDIT: To further motivate this, here's an animation of a 3x3 filter applied to a 5x5 input
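The shared-weight arithmetic above is easy to verify directly in NumPy (the values are arbitrary):

import numpy as np

X = np.arange(1, 10).reshape(3, 3)        # 3x3 input
F = np.array([[1., 2.],
              [3., 4.]])                  # one 2x2 filter = 4 shared weights

out = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        # the SAME four weights in F are reused at every position
        out[i, j] = np.sum(F * X[i:i+2, j:j+2])

print(out)

Only 4 parameters produce all 4 outputs; the unshared alternative described above would need 16.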
H: Choosing an embedding feature dimension I'm trying to tackle a classification problem with a neural net using TensorFlow. I have some continuous variable features and some categorical features. The continuous features are normalized using sklearn's StandardScaler. For the categorical features I am using a series of embedding features that I'm concatenating together with my continuous features. The embedding features are created like so : airline = tf.feature_column.categorical_column_with_hash_bucket( 'AIRLINE', hash_bucket_size=10) then : tf.feature_column.embedding_column(airline, 8) However I am having trouble choosing my embedding feature output size. I understand this transforms my sparse one-hot encoded "airline" feature into a float vector of size 8. Is there a heuristic I can use to choose an embedding feature size? My neural net's accuracy remains stuck at 31%. It doesn't seem to be learning even after 100 epochs. Could the size of the embedding features be a cause for such behaviour? AI: I think the post below is a good resource. https://developers.googleblog.com/2017/11/introducing-tensorflow-feature-columns.html Basically, each categorical variable is initially converted to a one-hot encoding, then a layer whose size is defined by the dimension argument is stacked on top of the one-hot encoding; the network thus learns an optimal representation of the categorical variable with the specified dimension. A general rule of thumb from the blog post is to take the 4th root of the number of categories. Another approach is to perform MDS on your categorical variables to help decide the dimensions.
H: Clustering for multiple variable There are 50 students in total (John, Roy, ...) and each used some actions to do a job. My dataset looks something like this:

    John  Roy  Micheal  Ron  ...  Smith
     A     B     B       A         C
     A     A     C       B         B
     C     A     A       B         B
     .     .     .       .         .
     F     G     E       A         G

Here A, B, C, ..., G are strings. The final data is like this:

          A   B   C   D   E   F   G
    John  3  34  23  34   4   3   1
    Roy   5  23  12   3   5  39  46
    ...

This means John used "A" 3 times whereas Roy used "A" 5 times. So based on their actions I want to cluster them (i.e. students who used similar numbers of each activity will be in the same group). Which clustering method can be used, and how? AI: K-means: Your data has $7$ dimensions, so k-means is worth a try. Look at the PCA of your data and check whether any clusters are visible there, as K-means will have a tough time if the clusters are not Gaussian. The setup is: each person is a point in $7D$ space (a $50\times7$ matrix); apply PCA and inspect it. If different clusters are visible, then you will have a result. Fuzzy C-Means: I would suggest soft clustering algorithms. Soft clustering means that the output is not binary (each sample belonging to exactly one cluster and not to the others); instead it assigns a membership score for the belongingness of each sample to each cluster. It minimizes the same objective function as K-means but with a weight that is recalculated in each iteration. Libraries usually have this algorithm under the name FCM. K-means through GMM: Another soft version of K-means is Gaussian Mixture Models, in which you try to estimate Gaussian kernels whose superposition describes the data (as you see, the Gaussian distribution of the data is again crucial here). The setup is: choose a $k$ as the initial number of clusters and produce $k$ arbitrary Gaussian kernels (i.e. with arbitrary $\mu$ and $\sigma$), then use the Expectation-Maximization algorithm to update the clustering structure at each step. Spectral Clustering: Define a similarity matrix from the data by any means, for example by calculating the distances between points in $7D$ space and inverting them, or by applying an RBF kernel to determine the similarity between them. Build the graph Laplacian from that similarity matrix, take the eigenvectors belonging to its smallest eigenvalues (the second smallest gives the well-known Fiedler vector), and run K-means on the rows of those eigenvectors. If you have more detailed questions just drop a comment here. Good Luck!
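A compact sketch of the first and third suggestions (hard K-means plus a GMM for soft memberships) on a counts matrix shaped like the one above; the data here is randomly generated just to show the API:

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
counts = rng.integers(0, 50, size=(50, 7)).astype(float)   # 50 students x 7 actions

X = StandardScaler().fit_transform(counts)

# inspect the first two principal components for visible structure
X_2d = PCA(n_components=2).fit_transform(X)

hard_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
soft_memberships = gmm.predict_proba(X)     # one probability per student per cluster

print(hard_labels[:5])
print(soft_memberships[:2].round(2))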
H: Format for time series data with non-trivial sampled data I have a data stream that I would like to share with some data scientists. It is a regularly captured time series with some fields that are simple scalars and booleans. Each sample has a UTC time and fractional seconds since start of capture. Also captured is a 3d spline that varies in size. These splines are also regularly sampled. Additionally, there are some other multi dimensional fields as well ([x,y,z,pitch,yaw,roll]). For simple data sets I would normally use CSV. However due to the nature of the more complex data I need a more appropriate format. What are my options for formats that would allow easy loading into Matlab or other common data science tools? AI: Based on this article about data formats in data science, the short answer to your question is: use JSON. This may seem like just the natural next step after CSV, but it is actually quite interesting: JSON is widely supported in many programming languages, it is a valid MIME type in Internet standards (application/json, fitting the data stream mentioned in your question), and it has several standard and non-standard extensions (GeoJSON, binary JSON). If there is no standard structure for your data, JSON makes it easy to invent your own, because it is easily modifiable. According to the article, JSON also works as a value format in Hadoop; it is suitable there because JSON is basically plain text when you think of it as a data item. The article also mentions that JSON can be more efficient than CSV.
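As an illustration only (the field names are invented, not prescribed), one sample of the stream described in the question could be serialized like this; the resulting text loads into MATLAB via jsondecode or into Python via the json module:

import json

sample = {
    "utc_time": "2018-01-15T12:34:56Z",        # hypothetical field names throughout
    "t_since_start": 12.0625,                  # fractional seconds since capture start
    "valid": True,                             # a boolean scalar field
    "pose": [1.2, 0.4, -3.1, 0.01, 0.2, 1.57], # [x, y, z, pitch, yaw, roll]
    "spline": [[0.0, 0.1, 0.2],                # variable-length list of 3D points
               [0.5, 0.2, 0.3],
               [1.0, 0.4, 0.1]]
}
print(json.dumps(sample))

Writing one JSON object per line ("JSON Lines") keeps the file streamable, which suits a regularly captured series.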
H: Link prediction with input data I have a list of files, an I use the KNN algorithm to classify these files. dataset = pd.read_csv(file) training_samples = get_sample_number(dataset) X_train = dataset.iloc[:training_samples, 5:9] y_train = dataset.iloc[:training_samples, 9] X_test = dataset.iloc[training_samples:, 5:9] # Feature Scaling sc = StandardScaler() X_train = sc.fit_transform(X_train) X_test = sc.fit_transform(X_test) # Fitting classifier to the training set classifier = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2) classifier.fit(X_train, y_train) y_pred = classifier.predict(X_test) Now I have my categories in my y_pred array. But I want to save the result in the file where I read the dataset. How can I link a prediction to the right row in the file (or dataset)? AI: First as "timleathart" mentioned, you need to fix your code by changing this line : X_test = sc.fit_transform(X_test) to: X_test = sc.transform(X_test) For your question : you have already the number of samples (training_samples) used for training. so all you need is to iterate over the y_pred and save the values in new column in the dataset starting from "training_samples" as row index.
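A small pandas sketch of what that looks like: write the predictions back into the rows they came from and save the file (the column name 'prediction' is just a placeholder):

# the first `training_samples` rows were used for training and get no prediction
dataset['prediction'] = None
dataset.iloc[training_samples:, dataset.columns.get_loc('prediction')] = y_pred

dataset.to_csv(file, index=False)

Because y_pred was produced from dataset.iloc[training_samples:, 5:9], its order already matches those rows, so no extra bookkeeping is needed.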
H: Naïve Bayes and Training Data I am creating my own implementation of a Naïve Bayes classifier. While its behaviour and functionalities are clear to me, my concerns are about the nature of the training and testing data. I acquired several sets of product reviews from Amazon. The first thing I do is parse them, that is, take the rating (1 to 5 stars) and the text, which I parse with a regex to only contain alphabetical lowercase characters and spaces. Next, I convert ratings to polar values, so 1 and 2 stars become "-" and 4 and 5 stars become "+". I'm intentionally skipping reviews with 3 stars; could this be an issue? Here come my real concerns. When using a percentage split to generate training and testing sets, should both of them contain the same share of positive and negative reviews (such as 7 positive and 7 negative reviews for training and 3 positive and 3 negative reviews for testing)? Right now I'm acquiring as many positive as negative reviews from the chosen set, but I'm wondering if that should be the case. For instance, if a set contains 7 positive reviews and 4 negative ones, I discard 3 positive reviews to equate them. Furthermore, I observed that negative reviews tend to contain longer texts on average. So, if I'm using an equal number of positive and negative reviews, but they differ on average text length, would this have an impact on the way my classifier tries to predict? AI: I'm intentionally skipping reviews with 3 stars; could this be an issue? It's not a problem, it's just your definition of the question. It assumes rating 3 is neutral and anything below/above it is negative/positive. Someone else might say "I assume every rating is positive, so each number is a level of positiveness", and then rating 3 works for them. The main point is that you get the answer to the question you define, so be careful about what you want from your classifier and set up your question accordingly. When using a percentage split to generate training and testing sets, should both of them contain the same share of positive and negative reviews? Nice question. Before considering Naive Bayes specifically, this is a general machine learning problem that arises when the class populations are imbalanced. If that is the case, then it is better to balance them, not only in the train/test split but also during training itself, as the dominating class will bias your result. If the classes are not that imbalanced, then you can split things randomly and it's fine. Regarding the Bayes classifier itself, the balance inside the training set is what matters most, as NB learns from the statistics of your training set. On the test set it just uses the already-learned statistics, so the class proportions there should not impact the model. I observed that negative reviews tend to contain longer texts on average. Would this have an impact on the way my classifier tries to predict? Again, nice question. Yes it does, but that is not necessarily bad. Longer text means that the probability of seeing larger term-frequency counts is higher. This is a feature in itself! (Just think again: you already found a difference between the classes. Well ... you want NB to do the same, right?!) The only thing is that if the difference in the lengths of the comments is very large, it might affect the feature values in a way that calls for normalization (e.g. instead of raw term counts as features, you can use TF-IDF, which gives a normalized score). Hope it helps. Good Luck!
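If you use scikit-learn for the split, the stratify argument keeps the class shares equal in train and test without discarding any reviews (a sketch, with labels as '+'/'-' strings as in the question):

from sklearn.model_selection import train_test_split

texts  = ["great product", "love it", "works as expected",
          "broke after a day", "terrible support", "waste of money"]
labels = ["+", "+", "+", "-", "-", "-"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=1/3, stratify=labels, random_state=0)

Discarding data to force a perfect 50/50 balance is usually only worth it when the imbalance is severe; stratifying the split preserves whatever ratio you decide to train with.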
H: Should I prevent augmented data from leaking into the test/cross-validation sets? I have been working with the cats vs dogs dataset from kaggle, which consists of 25000 images of cats and dogs labelled accordingly (btw, great dataset, totally recommended!) One of the things I did was to augment the data by simply flipping the images 180 degrees, so an image like this will become this. So far so good, now I have 50000 images, but the question is: should I make SURE that an image that has been augmented remains entirely within the training set? I mean, with the previous two images, what would happen if I have one of them in the training set and the other in the test (or cross validation) set? Does it mean that I am leaking training data into test data? I understand that technically they are DIFFERENT images, but still my intuition seems to resist such an idea. Am I correct to assume that augmented images should not leak to the test set? AI: Yes! The accuracy of your algorithm should be tested on images that you expect to receive. This would not include data that has been rotated, shifted, blurred, etc. However, augmenting your data in such ways can and usually will lead to better results. Split your data first. Then augment only the training data while staying away from the validation data. Only use your testing data when you are confident that your solution leads to good results on your validation set without overfitting it.
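A short sketch of the split-first, augment-later order recommended above (images and labels here stand for your already-loaded NumPy arrays; a 180-degree turn is just a rotation by two quarter-turns within the image axes):

import numpy as np
from sklearn.model_selection import train_test_split

# images: (N, H, W, 3) array, labels: (N,) array
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.2, random_state=0)

# augment ONLY the training set; the test set never sees the rotated copies
X_train_aug = np.concatenate([X_train, np.rot90(X_train, k=2, axes=(1, 2))])
y_train_aug = np.concatenate([y_train, y_train])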
H: High-level features of a neural network I understand how to build and train a neural network like shown below, as well as those low-level features/filters. I wonder what are those high-level features: how exactly do you obtain them from a trained neural network? (Are those like the "eigenfaces"?) Note: the image is by NVIDIA, and I don't know the specifics of the classification problem here. If needed, suppose the network is trained to distinguish human from cat. AI: The image, and variants of it that are commonly used are for illustrative purposes only. They generally do not represent data that has been extracted from real CNNs. The first "Low-level features" part of the diagram is possibly from a real network (I am not sure in this case, it looks more like a constructed filter, e.g. Sobel, to me). That is because it is feasible and relatively easy to interpret the first layer's filter weights directly as images, and the filters do indeed look like the components that they detect. The "Mid-level features" and "High-level features" in your specific diagram have probably been constructed without using a neural network. They are likely to be an artists impression of what the high level features might be. They may have been sampled from real datasets, then just cropped and arranged into the image. Caveat: I cannot find absolute evidence for the specific image being constructed for illustration only, just I suspect this to be the case. It is possible to extract visualisations of features detected by deeper layers. The two common ways to do this are: Dataset matching. Finding examples in the dataset which trigger a specific neuron to output a high value. This can be isolated to a crop of the original image, because you know the combined sizes of all the filters and pools that occur before layer you are interested in. Optimising the input image. Using gradient ascent, but instead of changing the weights, make a cost function that scores the neuron you want to visualise and keep adjusting the input until You can get more information from resources such as this article on feature visualisation.
H: How do you avoid 'analysis paralysis' when choosing a method to implement? When you have multiple methods to accomplish a task, how do you choose which one to implement? AI: This is a very broad question but in general you can define some criteria that you want the method to meet (eg low error, efficiency, scalability) and then you score the methods, which one solves your problem best.
H: Keras - Computing cosine similarity matrix of two 3D tensors Using TF backend, I need to construct the similarity matrices of two 3D tensors, both with shape (batch_size, N, M), with N and M natural numbers. The function tf.losses.cosine_distance is only between 1D tensors. I need to build a tensor of shape batch_size x N x M such that matrix[k][i][j] will be the cosine similarity of Tensor1[k][i] and Tensor2[k][j]. How can I do this? AI: I know of no pairwise distance operations in Keras or tensorflow. But the matrix math can be implemented in TF/Keras backend code and then placed in a Keras layer. Here's the matrix representation of the cosine similarity of two vectors: $$ cos(\theta) = \frac{\mathbf{A}\cdot\mathbf{B}}{\|\mathbf{A}\|_2 \|\mathbf{B}\|_2} $$ I'll show the code and a test that confirms that it works. First, generate non-trivial test data. import numpy as np import keras import keras.backend as K from sklearn.metrics.pairwise import cosine_similarity # set up test data n_batch = 100 n = 400 # number of points in the first set m = 500 # number of points in the second set d = 200 # number of dimensions A = np.random.rand(n_batch, n, d) B = np.random.rand(n_batch, m, d) Define pairwise cosine similarity function. # convenience l2_norm function def l2_norm(x, axis=None): """ takes an input tensor and returns the l2 norm along specified axis """ square_sum = K.sum(K.square(x), axis=axis, keepdims=True) norm = K.sqrt(K.maximum(square_sum, K.epsilon())) return norm def pairwise_cosine_sim(A_B): """ A [batch x n x d] tensor of n rows with d dimensions B [batch x m x d] tensor of m rows with d dimensions returns: D [batch x n x m] tensor of cosine similarity scores between each point i<n, j<m """ A, B = A_B A_mag = l2_norm(A, axis=2) B_mag = l2_norm(B, axis=2) num = K.batch_dot(A, K.permute_dimensions(B, (0,2,1))) den = (A_mag * K.permute_dimensions(B_mag, (0,2,1))) dist_mat = num / den return dist_mat Build a shallow Keras model around the function. # build dummy model A_tensor = K.constant(A) B_tensor = K.constant(B) A_input = keras.Input(tensor=A_tensor) B_input = keras.Input(tensor=B_tensor) dist_output = keras.layers.Lambda(pairwise_cosine_sim)([A_input, B_input]) dist_model = keras.Model(inputs=[A_input, B_input], outputs=dist_output) dist_model.compile("sgd", "mse") Compare to sklearn implementation sk_dist = np.zeros( (n_batch, n, m) ) for i in range(n_batch): sk_dist[i,...] = cosine_similarity(A[i,...], B[i,...]) keras_dist = dist_model.predict(None, steps=1) np.allclose(sk_dist, keras_dist) > True
H: What are useful evaluation metrics used in machine learning I am using a CNN in order to predict codes after analyzing text. As an example, I will write "I am crazy" and the model will predict some code "X321". All this is based on a CNN. I want to evaluate my model. I used the F-score (recall and precision). Can you advise me on more metrics? AI: For model evaluation there are different metrics based on your model:
Confusion matrix
- Classification accuracy: (TP + TN) / (TP + TN + FP + FN)
- Error rate: (FP + FN) / (TP + TN + FP + FN)
Paired criteria
- Precision (or positive predictive value): proportion of predicted positives which are actual positives, TP / (TP + FP)
- Recall (also called sensitivity or true positive rate): proportion of actual positives which are predicted positive, TP / (TP + FN)
- Specificity (or true negative rate): proportion of actual negatives which are predicted negative, TN / (TN + FP)
- Positive likelihood: likelihood that a predicted positive is an actual positive, sensitivity / (1 - specificity)
- Negative likelihood: likelihood that a predicted negative is an actual negative, (1 - sensitivity) / specificity
Combined criteria
- BCR (Balanced Classification Rate): ½ (TP / (TP + FN) + TN / (TN + FP))
- BER (Balanced Error Rate, or HTER, Half Total Error Rate): 1 - BCR
- F-measure: harmonic mean between precision and recall, 2 (precision · recall) / (precision + recall)
- Fβ-measure: weighted harmonic mean between precision and recall, (1 + β²) TP / ((1 + β²) TP + β² FN + FP). The harmonic mean between specificity and sensitivity is also often used and sometimes referred to as F-measure.
- Youden's index: sensitivity + specificity - 1, i.e. sensitivity - (1 - specificity)
- Matthews correlation: correlation between the actual and predicted labels, (TP · TN - FP · FN) / ((TP+FP)(TP+FN)(TN+FP)(TN+FN))^(1/2), comprised between -1 and 1
- Discriminant power: normalized likelihood index, sqrt(3) / π · (log(sensitivity / (1 - specificity)) + log(specificity / (1 - sensitivity))); <1 = poor, >3 = good, fair otherwise
You can find much more here. There are also some explanations here, and you can find useful code snippets here.
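Since the prediction target here is one code out of many (a multi-class problem), the multi-class averages of these metrics are what you would report in practice. A short scikit-learn sketch with made-up labels:

from sklearn.metrics import (accuracy_score, confusion_matrix,
                             classification_report, f1_score)

y_true = ['X321', 'X100', 'X321', 'X205', 'X100', 'X321']
y_pred = ['X321', 'X321', 'X321', 'X205', 'X100', 'X100']

print(accuracy_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred))      # per-class precision/recall/F1
print(f1_score(y_true, y_pred, average='macro'))  # 'micro' and 'weighted' also useful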
H: CNNs are what? I have a very fundamental question on what CNNs actually are. I understand fully the training process as to take a bunch of images, start with random filters, convolve, activate, calculate loss, back propagate and learn weights. Fully understood.... But recently I came across this line on Slack: CNNs can act as a frequency filter as well; for example, a blur is a low-pass filter and it can be implemented as convolution with fixed weights. Please explain? (Can't understand this at all) AI: If I've got the point of your question, it is that convolutional nets can learn all kinds of filters. You may train a net with random initialization, and after training you may find learned filters that are familiar to you, or not. You may see that your model has learned a Sobel filter, which is a high-pass filter, or you may see that the learned weights are exactly those of a mean filter, which is a low-pass filter. That is what the quoted line refers to: averaging each pixel with its neighbours smooths away fine detail (the high frequencies), which is exactly what a blur does, and it is just a convolution with fixed weights (e.g. a 3x3 kernel with every entry equal to 1/9). In all cases the network tries to learn filters that help it find appropriate features of the inputs. The learned filters can be anything; they may be known to you or not. So, yes, they can learn all kinds of filters, including low-pass and high-pass ones. Also, for visualization purposes, to fully figure out what is going on inside them I recommend taking a look here.
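The claim in the quoted line can be checked directly: convolving with fixed averaging weights blurs an image, i.e. acts as a low-pass filter, while a Sobel kernel picks out edges. A small SciPy sketch:

import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
image = rng.random((8, 8))                # stand-in for a grayscale image

blur_kernel = np.full((3, 3), 1.0 / 9.0)  # fixed weights: 3x3 mean filter (low-pass)
blurred = convolve2d(image, blur_kernel, mode='same', boundary='symm')

sobel_x = np.array([[1, 0, -1],
                    [2, 0, -2],
                    [1, 0, -1]])          # high-pass: responds to vertical edges
edges = convolve2d(image, sobel_x, mode='same', boundary='symm')

A convolutional layer does exactly the same arithmetic; the only difference is that its kernel values are learned rather than fixed.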
H: What loss function should I use if I have been working on a classification problem which involves both multi-label and multi-class labels? For example I have apple and pear pictures. What I am trying to do is to predict if a picture is an apple or pear picture and AT THE SAME TIME predict whether the fruit is big and/or yellow. Thus predicting the fruit type (apple/pear) is a multi-class problem, one vs all. Predicting whether the fruit is big and/or yellow is a multi-label problem. If the label ordering is [apple, pear, big, yellow] then a "big yellow apple" picture should have the label [1,0,1,1]. The first two parts are mutually exclusive (one-hot); however, the last two are not. So, what loss function should I use for this problem? AI: The labels are not mutually exclusive, so you can't treat this as a one-vs.-all problem, because more than one entry may be one in the output vector. Moreover, if the scene must contain either an apple or a pear, that part can be considered exhaustive, meaning exactly one of the two occurs for each input. My opinion is that for this problem you don't have to build a new cost function. For the mutually exclusive part, as you have rightly stated, and for the second part of your vector, the well-known cross-entropy cost function will perform fine. I guess the real question is something else. For problems with mutually exclusive classes, we use a softmax layer as the last layer of a neural net, while for cases where classes are not mutually exclusive, you can use sigmoid as the activation function. Since your case is a combination of the two, I suggest an alternative approach: change the mutually exclusive part to a single binary output (if the corresponding entry is less than one half, read it as e.g. an apple, otherwise as the other class) and keep the rest of the output vector as it is. Finally, use the sigmoid activation function for the last layer if you are using neural nets.
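A small Keras sketch of the alternative suggested above: one sigmoid unit encoding apple-vs.-pear plus two sigmoid units for big and yellow, trained with binary cross-entropy (the convolutional base here is a throwaway placeholder):

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# label vector per image: [is_pear, is_big, is_yellow], each 0 or 1
model = Sequential([
    Conv2D(16, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(32, activation='relu'),
    Dense(3, activation='sigmoid'),    # independent probabilities, not softmax
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

An equally common alternative is a two-headed model: a softmax head with categorical cross-entropy for the fruit type and a sigmoid head with binary cross-entropy for the attributes, with the two losses summed.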
H: Can someone spot anything wrong with my LSTM forex model? The model below reads in data from a csv file (date, open, high, low, close, volume), arranges the data and then builds a LSTM model trying to predict next day's close based on a number of previous days close values. However, validation accuracy is about 53.8% no matter if i... - change hyperparameters - make it a deep model - uses many more features than just close To test if I made a simple mistake I generated another data source that was a signal I created from adding sin, cosine and a little noise so that i KNEW a model should be able to be trained and it was. The model below got about 94% validation without any tuning. With that in mind. How come when I try to use it on actual data (eurusd 1-min data) it doesn't seem to work..? Anyone that sees an error or can point me in the right direction? import pandas as pd import numpy as np fpath = 'Data/' fname = 'EURUSD_M1_1' df = pd.read_csv(fpath + fname + '_clean.csv') # file contains date, open, high, low, close, volume # "y" is whether the next period's Close value is higher or lower than current Close value outlook = 1 df['y'] = df['Close']<df['Close'].shift(-outlook) # Drop all NAN's df.dropna(how="any",inplace=True) # Get X and y. To keep it simple, just use Close X_df = df['Close'] # "y" is whether the next period's Close value is higher or lower than current Close value outlook = 1 y_df = df['Close']<df['Close'].shift(-outlook) # Train/test split def train_test_split(X_df,y_df,train_perc): idx = int(train_perc/100*X_df.shape[0]) X_train_df = X_df.iloc[0:idx] X_test_df = X_df.iloc[idx:] y_train_df = y_df.iloc[0:idx] y_test_df = y_df.iloc[idx:] return X_train_df.as_matrix(), X_test_df.as_matrix(), y_train_df.as_matrix(), y_test_df.as_matrix() X_train_df, X_test_df, y_train, y_test = train_test_split(X_df,y_df,90) # Scaling def scale(X): Xmax = max(X) Xmin = min(X) return (X-Xmin)/(Xmax - Xmin) X_train_scaled = scale(X_train_df) X_test_scaled = scale(X_test_df) # Build the model import tensorflow as tf from keras.models import Sequential from keras.layers import LSTM, Dense # ### Constants num_time_steps = 5 # Num of steps in batch (also used for prediction steps into the future) num_features = 1 # Number of features num_neurons = 97 num_outputs = 1 # Just one output (True/False), predicted time series learning_rate = 0.0001 # learning rate, 0.0001 default, but you can play with this nb_epochs = 10 # how many iterations to go through (training steps), you can play with this batch_size = 32 # Reshaping X_train_scaled = np.reshape(X_train_scaled,[-1,1]) nb_samples_train = X_train_scaled.shape[0] - num_time_steps X_train_scaled_reshaped = np.zeros((nb_samples_train, num_time_steps, num_features)) y_train_reshaped = np.zeros((nb_samples_train)) for i in range(nb_samples_train): y_position = i + num_time_steps X_train_scaled_reshaped[i] = X_train_scaled[i:y_position] y_train_reshaped[i] = y_train[y_position] model = Sequential() stacked = False if stacked == True: model.add(LSTM(num_neurons, return_sequences=True, input_shape=(num_time_steps,num_features), activation='relu', dropout=0.5)) model.add(LSTM(num_neurons, activation='relu', dropout=0.5)) else: model.add(LSTM(num_neurons, input_shape=(num_time_steps,num_features), activation='relu', dropout=0.5)) model.add(Dense(units=num_outputs, activation='sigmoid')) model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics=['accuracy']) history = model.fit(X_train_scaled_reshaped, y_train_reshaped, batch_size = 
batch_size, epochs = nb_epochs, validation_split=0.3) AI: What you are up against is a fundamental property of most tradable, liquid financial price series: they are Brownian motion. In discrete time, this is also known as a random walk. The most important property of Brownian motion is that it is memoryless, which can be written as $E[p_{t+1} \mid p_{-\infty : t}] = p_t$. A recurrent neural net, particularly the LSTM flavor, is very powerful at capturing and modeling a long-memory process. In fact, it was invented to deal with state that depends on itself many time steps back (that famous LSTM paper dates from 20 years ago [1]). What you've demonstrated using an RNN (assuming your code and training are free of bugs) is precisely this property. Put another way, there is no way to beat a fair coin in predicting tomorrow's FX price. Another perspective on your attempt: if such a naive model could accurately predict FX time series, it would have been exploited by the whales in the hedge fund industry long ago and the opportunity would cease to exist. [1] Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780.
H: Hidden Markov Model on R Studio Are there any detailed materials that can help explain how to set up an HMM in RStudio? AI: You can take a look here. As you can read from there: Download and unpack hiddenDomains. If you haven't already done this, download the latest version of hiddenDomains from the Sourceforge website. Now unpack it: shell$ tar -xzvf hiddenDomains.VERSION.NUMBER.tar.gz where VERSION and NUMBER represent the release that you downloaded. Install R and required R packages (if necessary). Download and install R if you don't already have it. If you don't already have them, you will need to install two hidden Markov model libraries: depmixS4 and HiddenMarkov. You can do this by starting R on the command line: shell$ R Or, if you prefer RStudio, you can use that. Regardless of how you started R, type the following commands to install the packages. > install.packages("depmixS4") > install.packages("HiddenMarkov") Here and here may also be helpful for learning how to use them.
H: LSTM Implementation using tensorflow (anaconda) I'm new to TensorFlow and currently I'm trying to implement an LSTM using jupyter notebook. But when I run the following code segment, I got some errors and couldn't find any solution. How can I work through this error? Code: lstmCell = tf.contrib.rnn.BasicLSTMCell(lstmUnits) lstmCell = tf.contrib.rnn.DropoutWrapper(cell=lstmCell, output_keep_prob=0.75) value, _ = tf.nn.dynamic_rnn(lstmCell, data, dtype=tf.float32) Error: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-29-db6a6fc2c55e> in <module>() ----> 1 lstmCell = tf.contrib.rnn.BasicLSTMCell(lstmUnits) 2 lstmCell = tf.contrib.rnn.DropoutWrapper(cell=lstmCell, output_keep_prob=0.75) 3 value, _ = tf.nn.dynamic_rnn(lstmCell, data, dtype=tf.float32) C:\Anaconda3\lib\site-packages\tensorflow\python\util\lazy_loader.py in __getattr__(self, item) 51 52 def __getattr__(self, item): ---> 53 module = self._load() 54 return getattr(module, item) 55 C:\Anaconda3\lib\site-packages\tensorflow\python\util\lazy_loader.py in _load(self) 40 def _load(self): 41 # Import the target module and insert it into the parent's namespace ---> 42 module = importlib.import_module(self.__name__) 43 self._parent_module_globals[self._local_name] = module 44 C:\Anaconda3\lib\importlib\__init__.py in import_module(name, package) 124 break 125 level += 1 --> 126 return _bootstrap._gcd_import(name[level:], package, level) 127 128 C:\Anaconda3\lib\importlib\_bootstrap.py in _gcd_import(name, package, level) C:\Anaconda3\lib\importlib\_bootstrap.py in _find_and_load(name, import_) C:\Anaconda3\lib\importlib\_bootstrap.py in _find_and_load_unlocked(name, import_) C:\Anaconda3\lib\importlib\_bootstrap.py in _load_unlocked(spec) C:\Anaconda3\lib\importlib\_bootstrap_external.py in exec_module(self, module) C:\Anaconda3\lib\importlib\_bootstrap.py in _call_with_frames_removed(f, *args, **kwds) C:\Anaconda3\lib\site-packages\tensorflow\contrib\__init__.py in <module>() 29 from tensorflow.contrib import data 30 from tensorflow.contrib import deprecated ---> 31 from tensorflow.contrib import distributions 32 from tensorflow.contrib import estimator 33 from tensorflow.contrib import factorization C:\Anaconda3\lib\site-packages\tensorflow\contrib\distributions\__init__.py in <module>() 31 from tensorflow.contrib.distributions.python.ops.distribution_util import matrix_diag_transform 32 from tensorflow.contrib.distributions.python.ops.distribution_util import softplus_inverse ---> 33 from tensorflow.contrib.distributions.python.ops.estimator import * 34 from tensorflow.contrib.distributions.python.ops.geometric import * 35 from tensorflow.contrib.distributions.python.ops.independent import * C:\Anaconda3\lib\site-packages\tensorflow\contrib\distributions\python\ops\estimator.py in <module>() 19 from __future__ import print_function 20 ---> 21 from tensorflow.contrib.learn.python.learn.estimators.head import _compute_weighted_loss 22 from tensorflow.contrib.learn.python.learn.estimators.head import _RegressionHead 23 from tensorflow.python.framework import ops C:\Anaconda3\lib\site-packages\tensorflow\contrib\learn\__init__.py in <module>() 90 91 # pylint: disable=wildcard-import ---> 92 from tensorflow.contrib.learn.python.learn import * 93 # pylint: enable=wildcard-import 94 C:\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\__init__.py in <module>() 21 22 # pylint: disable=wildcard-import ---> 23 from 
tensorflow.contrib.learn.python.learn import * 24 # pylint: enable=wildcard-import C:\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\__init__.py in <module>() 23 from tensorflow.contrib.learn.python.learn import basic_session_run_hooks 24 from tensorflow.contrib.learn.python.learn import datasets ---> 25 from tensorflow.contrib.learn.python.learn import estimators 26 from tensorflow.contrib.learn.python.learn import graph_actions 27 from tensorflow.contrib.learn.python.learn import learn_io as io C:\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\__init__.py in <module>() 295 from tensorflow.contrib.learn.python.learn.estimators._sklearn import NotFittedError 296 from tensorflow.contrib.learn.python.learn.estimators.constants import ProblemType --> 297 from tensorflow.contrib.learn.python.learn.estimators.dnn import DNNClassifier 298 from tensorflow.contrib.learn.python.learn.estimators.dnn import DNNEstimator 299 from tensorflow.contrib.learn.python.learn.estimators.dnn import DNNRegressor C:\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\dnn.py in <module>() 28 from tensorflow.contrib.layers.python.layers import optimizers 29 from tensorflow.contrib.learn.python.learn import metric_spec ---> 30 from tensorflow.contrib.learn.python.learn.estimators import dnn_linear_combined 31 from tensorflow.contrib.learn.python.learn.estimators import estimator 32 from tensorflow.contrib.learn.python.learn.estimators import head as head_lib C:\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\dnn_linear_combined.py in <module>() 29 from tensorflow.contrib.layers.python.layers import optimizers 30 from tensorflow.contrib.learn.python.learn import metric_spec ---> 31 from tensorflow.contrib.learn.python.learn.estimators import estimator 32 from tensorflow.contrib.learn.python.learn.estimators import head as head_lib 33 from tensorflow.contrib.learn.python.learn.estimators import model_fn C:\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py in <module>() 47 from tensorflow.contrib.learn.python.learn.estimators import tensor_signature 48 from tensorflow.contrib.learn.python.learn.estimators._sklearn import NotFittedError ---> 49 from tensorflow.contrib.learn.python.learn.learn_io import data_feeder 50 from tensorflow.contrib.learn.python.learn.utils import export 51 from tensorflow.contrib.learn.python.learn.utils import saved_model_export_utils C:\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\learn_io\__init__.py in <module>() 19 from __future__ import print_function 20 ---> 21 from tensorflow.contrib.learn.python.learn.learn_io.dask_io import extract_dask_data 22 from tensorflow.contrib.learn.python.learn.learn_io.dask_io import extract_dask_labels 23 from tensorflow.contrib.learn.python.learn.learn_io.dask_io import HAS_DASK C:\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\learn_io\dask_io.py in <module>() 24 try: 25 # pylint: disable=g-import-not-at-top ---> 26 import dask.dataframe as dd 27 allowed_classes = (dd.Series, dd.DataFrame) 28 HAS_DASK = True C:\Anaconda3\lib\site-packages\dask\dataframe\__init__.py in <module>() 1 from __future__ import print_function, division, absolute_import 2 ----> 3 from .core import (DataFrame, Series, Index, _Frame, map_partitions, 4 repartition, to_delayed, to_datetime, to_timedelta) 5 from .groupby import Aggregation C:\Anaconda3\lib\site-packages\dask\dataframe\core.py in <module>() 
38 if PANDAS_VERSION >= '0.20.0': 39 from pandas.util import cache_readonly ---> 40 pd.core.computation.expressions.set_use_numexpr(False) 41 else: 42 from pandas.util.decorators import cache_readonly AttributeError: module 'pandas.core.computation' has no attribute 'expressions' Tensorflow version - '1.4.0' Python version - 3.6.3:: Anaconda, Inc. AI: Based on the solution here you have different choices: The simplest solution is to revert back to Pandas 0.19.2. For this purpose use the following command in your command line or terminal. conda install pandas=0.19.2
H: To be useful, doesn't a test set often become a second dev set? I'm a little unclear about the expected use/value of a test set in machine learning. Here is a story that explains my confusion, assuming you're using a train/dev/test split: You use your dev set to choose the best hyperparameters and make various tweaks, and when you're finally "done" you evaluate it on your test set. Your test performance comes back much worse than your dev performance. So now you conclude, "My dev set must be too small, causing me to overfit my hyperparameters." So you make your dev set bigger, find new hyperparameters, and evaluate on your test set again. Now your dev and test performances are close to each other. But note that you used your test set twice in that case. So in some sense you were fitting your hyperparameters to your test set, and it became a second development set. To try to answer my own question: I guess you could say that the value of the test set is that without it, you would have never known you were overfitting your hyperparameters. And as long as we aren't using the test set "too much" in the above way (increasing the dev set size in a cycle again and again), it is still "mostly" unbiased. However we would have to concede that the test set is only truly unbiased if we only need it once. Do you think this is an accurate take? Incidentally, I'm unsure if there's anything else you can do (other than increasing dev set size) if your test performance comes back much worse. Well, I suppose you can cry ;). But are there any other options? AI: You're absolutely right and yes, it is actually possible to overfit to your validation data if you're not careful. Some researchers at google published an interesting article about this problem and a way to address it called the Reusable Holdout. The general idea is that you only access the test set through a special intermediary algorithm. Obviously this isn't how most people work though. In practice, I think a common approach is to use several holdouts: use one for most of your prototyping, and then once you're satisfied you can extend your evaluation to one or more of your additional holdouts.
H: Are linear regression models with non linear basis functions used in practice? I know that popular linear regression models such as Lasso or Logistic Regression are widely used in practice because they perform reasonably good, are efficient and interpretable. As far as I know, the only way for these models to learn non-linear relationships between $X$ and $y$ is by applying non-linear transformations to $X$ through basis functions $\phi(X)$ (as it is summarized in the scikit-learn documentation). I wonder if this approach is actually used in real world problems, where usually we don't have any prior information about how to design $\phi$. AI: Yep, that's a thing. It's called a "Generalized additive model (GAM)": https://en.wikipedia.org/wiki/Generalized_additive_model You may also be interested in "multivariate adaptive regression splines (MARS)": https://en.wikipedia.org/wiki/Multivariate_adaptive_regression_splines EDIT: Regarding demonstrating that these are actually used, I'm not sure what you're looking for. I've never seen a survey to try and gauge the popularity of specific models, and I'm not sure how useful such a thing would be. I could speak from my subjective experience, but I don't consider that particularly meaningful either. If you just want examples: Hits for GAM models on PubMed - 2114 hits Hits for MARS models on Pubmed - 190 hits For context: Hits for Random Forest models on PubMed - 5604 Hits for Deep Learning on PubMed - 4435 hits Hits for Neural Network on PubMed - 57170 hits Take these counts with a solid dose of salt: in my experience random forests are in much wider use in practice than neural networks. But like I said earlier, you shouldn't really trust my subjective experience either since that's just anecdotal.
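As a concrete illustration of the basis-function idea from the question (not of GAMs or MARS specifically), scikit-learn lets you expand $X$ with polynomial basis functions and fit a plain linear model on top:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)     # non-linear ground truth

model = make_pipeline(PolynomialFeatures(degree=5), Ridge(alpha=1.0))
model.fit(X, y)
print(model.score(X, y))    # captures the sine shape while staying linear in the basis

GAMs generalize this by learning a smooth transformation per feature (e.g. penalized splines) instead of fixing the basis up front, which is exactly what helps when there is no prior information about how to design phi.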
H: How exactly does matrix factorization help with collaborative filtering We start with a matrix of user ratings for different movies with some elements unknown, i.e. the rating of a yet-to-be-seen movie by a user. We need to fill in this gap. So how can you decompose or factorize a matrix where all elements are not known in advance? After you somehow decompose it, do you just multiply the matrices back to recreate the original one, but now with the missing items populated? How do you know which factorization method (non-negative, singular value, eigen) to choose, without going into too much math? AI: Answers to your question You factorize the matrix in order to approximate the original one as closely as possible. This is generally done by starting with randomized values, and updating based on the error (between the product of the factors and the original matrix). In other words, for a given matrix A, you are trying to find matrices C & D such that Error(A - (C x D)) is lowest. The algorithm is designed to find an approximation, which might result in the original missing entries being replaced by new values (recommendations or ratings). Do you just multiply? Yes. That is the essence of the calculation. For every user and product, the multiplication gives you a rating or score. Sorting by score and picking the top indices gives you recommendations for each user. It also allows you to now store much smaller matrices than the original one. The choice of factorization will also be dictated by the application. If your application has only positive ratings, then it is better to use non-negative matrix factorization. You may start using matrix factorization methods without knowing the implementation to start with, as long as you are aware of the overall idea (and the pitfalls in using it). Further Comments It is a little perplexing that you start with a matrix with many missing entries (unseen items), approximate the matrix via factorization and expect to get non-missing entries (which help in doing predictions). If the task is to approximate the original entries, then recommendations won't be good, since you can have missing/zero entries in the approximation and still get the lowest error on the approximation task. The idea: regularization imposed in the algorithm (point 1 above) ensures that noise in the original data is filtered and only patterns are detected. But this idea has its fair share of critics. The introduction section of this paper by Steffen Rendle gives a readable introduction to what is happening in simple matrix factorization methods and what can go wrong. The paper also re-formulates the task with better optimization criteria. You can also read this post by Simon Funk which explains the mechanics in readable language (and code), for matrix factorization applied to recommendations.
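A bare-bones NumPy sketch of the procedure from point 1: start the factors randomly and update them by gradient steps on the observed entries only (toy numbers, no tuning):

import numpy as np

R = np.array([[5, 3, 0, 1],      # 0 marks an unknown rating
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
mask = R > 0
n_users, n_items, k = R.shape[0], R.shape[1], 2

rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(n_users, k))   # user factors (matrix C)
Q = rng.normal(scale=0.1, size=(n_items, k))   # item factors (matrix D)

lr, reg = 0.01, 0.02                           # learning rate and regularization
for epoch in range(2000):
    for u in range(n_users):
        for i in range(n_items):
            if mask[u, i]:                     # update on observed entries only
                err = R[u, i] - P[u] @ Q[i]
                P[u] += lr * (err * Q[i] - reg * P[u])
                Q[i] += lr * (err * P[u] - reg * Q[i])

print(np.round(P @ Q.T, 2))   # known entries approximated, zeros filled with predictions

The values that appear at the previously unknown positions are the scores you would sort to produce recommendations.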
H: Is there a package for using SQL to manipulate Pandas dataframes in Python? Rather than learn a new package/language, I'd like to use my existing SQL skills to manipulate pandas dataframes in Python. Does anyone know of a way to do this, or perhaps a package that will allow me to do this? AI: I found a package called pandasql, which is based on sqldf for R. It seems quite a bit slower than doing the transformations with the pandas package, but it gets the job done. Just put the SQL query into a string like this: query_string = """ select * from df """ Then import pandasql and pass the string to the pandasql.sqldf function, as follows: import pandasql new_dataframe = pandasql.sqldf(query_string, globals()) Choose globals() or locals(), depending on the scope you want for your variables. As I mentioned, it seems a bit slow, but I couldn't find anything else. I may use this from time to time until I become better at Pandas. Sean
H: Compute backpropagation I have the question which is mentioned in the above picture. It is trying to find the derivative of f with respect to the weight matrix W1. Can anyone help me with how I can accomplish that? I tried to solve it this way: AI: You are missing one little step here. At the output node the backpropagation algorithm starts with the loss function $L$. You want to find out how much the weight $w_1$ impacts our loss function, thus $\frac{\partial L}{\partial w_1}$. We can break this down with the chain rule as $\frac{\partial L}{\partial w_1} = \frac{\partial L}{\partial f} \frac{\partial f}{\partial w_1} = \frac{\partial L}{\partial f} \frac{\partial f}{\partial \sigma} \frac{\partial \sigma}{\partial w_1}$. Why do we do this? We do this to simplify the math, because $L$ is a function of $f$, $f$ is a function of $\sigma$, and $\sigma$ is a function of the weights. But let's ignore the loss function for now. How do we calculate $\frac{\partial f}{\partial w^2_{i, j}}$? $\frac{\partial f}{\partial w^2_{i,j}} = \frac{\partial f}{\partial h^2_i} \frac{\partial h^2_i}{\partial w^2_{i,j}}$ $f = h_1^2 w_{1,1}^3 + h_2^2 w_{2,1}^3$ $\frac{\partial f}{\partial h^2_i} = w_{i, 1}^3$ $\frac{\partial h^2_i}{\partial w^2_{i,j}} = \sigma(W^2h^1)_i (1 - \sigma(W^2h^1)_i)\, h^1_j$ Backpropagation Based on this we can see that $h_i^2$ is a function of the inputs from the previous layer and the activation at the neuron. Thus, we can substitute this function into our equation and continue taking the derivatives, and we can see how the error propagates backwards through the network. $\frac{\partial f}{\partial w^1_{i,j}} = \frac{\partial f}{\partial h^2_i} \frac{\partial h^2_i}{\partial h^1_i} \frac{\partial h^1_i}{\partial w^1_{i,j}}$ (Update) The answer This may not be correct, the work needs to be checked. $f = \begin{bmatrix} w_{11}^3 & w_{21}^3 \end{bmatrix} \begin{bmatrix} h_{1}^2 \\ h_{2}^2 \end{bmatrix} = \textbf{W}^3\textbf{h}^2$ $\textbf{h}^2 = \begin{bmatrix} h_{1}^2 \\ h_{2}^2 \end{bmatrix} = \sigma(\begin{bmatrix} w_{11}^2 & w_{21}^2 \\ w_{12}^2 & w_{22}^2\end{bmatrix} \begin{bmatrix} h_{1}^1 \\ h_{2}^1 \end{bmatrix}) = \sigma(\textbf{W}^2\textbf{h}^1)$ $\textbf{h}^1 = \begin{bmatrix} h_{1}^1 \\ h_{2}^1 \end{bmatrix} = \sigma(\begin{bmatrix} w_{11}^1 & w_{21}^1 \\ w_{12}^1 & w_{22}^1\end{bmatrix} \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix}) = \sigma(\textbf{W}^1\textbf{X})$ Then using the chain rule $\frac{\partial f}{\partial \textbf{W}^1} = \frac{\partial f}{\partial \textbf{h}^2} \frac{\partial \textbf{h}^2}{\partial \textbf{h}^1} \frac{\partial \textbf{h}^1}{\partial \textbf{W}^1} = \textbf{W}^3 * \sigma(\textbf{W}^2\textbf{h}^1)(1-\sigma(\textbf{W}^2\textbf{h}^1)) * \sigma(\textbf{W}^1\textbf{X})(1-\sigma(\textbf{W}^1\textbf{X}))$
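As a sanity check on the derivation above, here is a small sketch (my own, not from the original post) that compares the analytic gradient $\frac{\partial f}{\partial \textbf{W}^1}$ with a finite-difference estimate for the same 2-2-1 architecture; note that the analytic version keeps the $\textbf{W}^2$ and input $\textbf{x}$ factors that the hedged matrix expression above drops:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, W2, W3):
    h1 = sigmoid(W1 @ x)              # first hidden layer, shape (2,)
    h2 = sigmoid(W2 @ h1)             # second hidden layer, shape (2,)
    return h1, h2, float(W3 @ h2)     # scalar output f (W3 has shape (1, 2))

def analytic_grad_W1(x, W1, W2, W3):
    h1, h2, _ = forward(x, W1, W2, W3)
    delta2 = W3.ravel() * h2 * (1 - h2)           # df / d(pre-activation of layer 2)
    delta1 = (W2.T @ delta2) * h1 * (1 - h1)      # df / d(pre-activation of layer 1)
    return np.outer(delta1, x)                    # df/dW1[i, j] = delta1[i] * x[j]

def numerical_grad_W1(x, W1, W2, W3, eps=1e-6):
    g = np.zeros_like(W1)
    for i in range(W1.shape[0]):
        for j in range(W1.shape[1]):
            Wp, Wm = W1.copy(), W1.copy()
            Wp[i, j] += eps
            Wm[i, j] -= eps
            g[i, j] = (forward(x, Wp, W2, W3)[2] - forward(x, Wm, W2, W3)[2]) / (2 * eps)
    return g

rng = np.random.default_rng(0)
x = rng.normal(size=2)
W1, W2, W3 = rng.normal(size=(2, 2)), rng.normal(size=(2, 2)), rng.normal(size=(1, 2))
print(analytic_grad_W1(x, W1, W2, W3))
print(numerical_grad_W1(x, W1, W2, W3))   # the two should match closely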
H: Gradient Descent in logistic regression Logistic and Linear Regression have different cost functions. But I don't get how the gradient descent in logistic regression is the same as Linear Regression. We get the Gradient Descent formula by deriving the Squared Error cost function. However, in Logistic Regression we use a Logarithmic Cost function instead. I think I am lost here. AI: Gradient Descent is a universal method: you can use it with basically every loss function you will find in common ML algorithms. In your case, you only have to differentiate the logarithmic cost function. You can find a detailed calculation at https://math.stackexchange.com/questions/477207/derivative-of-cost-function-for-logistic-regression
H: Reducing text input size into word2vec without affecting performance too badly? So I am implementing Word2Vec for the first time, and I have a set of training data that I would like to train a word2vec model on. Predictably, the problem is the dataset is rather large, and I have more limited computational power than I would like. This is a very common problem of course, but are there any ways to minimize the input text without too horribly affecting the performance? For example, if I had the example sentences: Mr. and Mrs. Dursley, of number four, Privet Drive, were proud to say that they were perfectly normal, thank you very much. They were the last people you’d expect to be involved in anything strange or mysterious, because they just didn’t hold with such nonsense. Mr. Dursley was the director of a firm called runnings, which made drills. He was a big, beefy man with hardly any neck, although he did have a very large mustache. I could do something like take out the stop words... but as I googled, that's a bad idea according to one link and a good idea according to another... so which is it? I also looked into text summarization but that seems harder than implementing word2vec. Another idea I naively had was to randomly take out X% of sentences from the text. But that would obviously be a performance hit in the model, and I'm not sure how big. So, are there any general methods for trying to do this, and is stop word removal a way to do it? AI: I don't think there really is a right or wrong answer to the "removing stopwords" question. Some people will argue that throwing away information will reduce model performance, while others argue that keeping them will increase noise. I personally follow a simple rule of thumb. If my model depends on sentence structure, then I keep stopwords. If I'm modeling topics and am more interested in important phrases, then I remove them. This seems to work well. If you're looking for ways to reduce your matrix, then yes, removing stopwords is a perfectly acceptable idea. Another thing you can do is apply common feature reduction techniques like LSA or chi2 to find the most important words and reduce your input space to the most meaningful words. However, doing this may dramatically affect the performance of your word2vec model. But if it is your only choice, then why not give it a go.
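A small sketch of the stopword-removal preprocessing discussed above, feeding the reduced corpus to gensim's Word2Vec; the example sentences are from the question, and note the vector-size argument is named size in gensim 3.x but vector_size in 4.x:

from nltk.corpus import stopwords          # may require nltk.download('stopwords') once
from gensim.models import Word2Vec

stop_words = set(stopwords.words('english'))

raw_sentences = [
    "Mr. and Mrs. Dursley, of number four, Privet Drive, were proud to say that they were perfectly normal, thank you very much.",
    "They were the last people you'd expect to be involved in anything strange or mysterious.",
]

# Lowercase, tokenize naively on whitespace, and drop stopwords to shrink the corpus
tokenized = [[w for w in s.lower().split() if w not in stop_words] for s in raw_sentences]

# Illustrative hyperparameters only; use vector_size= for gensim 4.x (size= for 3.x)
model = Word2Vec(tokenized, vector_size=50, window=5, min_count=1, workers=2)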
H: What exactly is bootstrapping in reinforcement learning? Apparently, in reinforcement learning, the temporal-difference (TD) method is a bootstrapping method. On the other hand, Monte Carlo methods are not bootstrapping methods. What exactly is bootstrapping in RL? What is a bootstrapping method in RL? AI: Bootstrapping in RL can be read as "using one or more estimated values in the update step for the same kind of estimated value". In most TD update rules, you will see something like this SARSA(0) update: $$Q(s,a) \leftarrow Q(s,a) + \alpha(R_{t+1} + \gamma Q(s',a') - Q(s,a))$$ The value $R_{t+1} + \gamma Q(s',a')$ is an estimate for the true value of $Q(s,a)$, and also called the TD target. It is a bootstrap method because we are in part using a Q value to update another Q value. There is a small amount of real observed data in the form of $R_{t+1}$, the immediate reward for the step, and also in the state transition $s \rightarrow s'$. Contrast with Monte Carlo where the equivalent update rule might be: $$Q(s,a) \leftarrow Q(s,a) + \alpha(G_{t} - Q(s,a))$$ where $G_{t}$ is the total discounted reward at time $t$, assuming in this update that it started in state $s$, taking action $a$, then followed the current policy until the end of the episode. Technically, $G_t = \sum_{k=0}^{T-t-1} \gamma^k R_{t+k+1}$ where $T$ is the time step for the terminal reward and state. Notably, this target value does not use any existing estimates (from other Q values) at all, it only uses a set of observations (i.e., rewards) from the environment. As such, it is guaranteed to be an unbiased estimate of the true value of $Q(s,a)$, as it is technically a sample of $Q(s,a)$. The main disadvantage of bootstrapping is that it is biased towards whatever your starting values of $Q(s',a')$ (or $V(s')$) are. Those are most likely wrong, and the update system can be unstable as a whole because of too much self-reference and not enough real data - this is a problem with off-policy learning (e.g. Q-learning) using neural networks. Without bootstrapping, using longer trajectories, there is often high variance instead, which, in practice, means you need more samples before the estimates converge. So, despite the problems with bootstrapping, if it can be made to work, it may learn significantly faster, and is often preferred over Monte Carlo approaches. You can compromise between Monte Carlo sample based methods and single-step TD methods that bootstrap by using a mix of results from different length trajectories. This is called TD($\lambda$) learning, and there are a variety of specific methods such as SARSA($\lambda$) or Q($\lambda$).
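A toy sketch (my own, not from the original answer) contrasting the two update rules on a tiny two-state chain; note how the TD(0) target reuses the current estimate of the next state's value, while the Monte Carlo target uses only the observed return:

import numpy as np

# Tabular value estimates for a tiny chain: s0 -> s1 -> terminal
V = np.zeros(2)          # V[s] for the two non-terminal states
alpha, gamma = 0.1, 0.9

def td0_update(s, r, v_next):
    # Bootstraps: the target r + gamma * v_next uses another estimated value
    V[s] += alpha * (r + gamma * v_next - V[s])

def mc_update(s, G):
    # No bootstrapping: the target G is the observed discounted return
    V[s] += alpha * (G - V[s])

# One episode: s0 --(r=1)--> s1 --(r=2)--> terminal
td0_update(0, 1.0, V[1])          # target uses the estimate V[1]
td0_update(1, 2.0, 0.0)           # terminal state has value 0
print("After TD(0):", V)

V[:] = 0.0
mc_update(1, 2.0)                 # G from s1 is just the final reward
mc_update(0, 1.0 + gamma * 2.0)   # G from s0 is the full discounted return
print("After MC:   ", V)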
H: "concat" mode can only merge layers with matching output shapes except for the concat axis I have a function I am trying to debug which is yielding the following error message: ValueError: "concat" mode can only merge layers with matching output shapes except for the concat axis. Layer shapes: [(None, 128, 80, 256), (None, 64, 80, 80)] I'm running a kernel from a Kaggle competition called Dstl Satellite Imagery Feature Detection (kernel available here) This is the function where I am having issues merging a list of tensors into a single tensor: def get_unet(): inputs = Input((8, ISZ, ISZ)) conv1 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', dim_ordering="th")(inputs) conv1 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', dim_ordering="th")(conv1) pool1 = MaxPooling2D(pool_size=(2, 2), dim_ordering="th")(conv1) conv2 = Convolution2D(64, 3, 3, activation='relu', border_mode='same', dim_ordering="th")(pool1) conv2 = Convolution2D(64, 3, 3, activation='relu', border_mode='same', dim_ordering="th")(conv2) pool2 = MaxPooling2D(pool_size=(2, 2), dim_ordering="th")(conv2) conv3 = Convolution2D(128, 3, 3, activation='relu', border_mode='same', dim_ordering="th")(pool2) conv3 = Convolution2D(128, 3, 3, activation='relu', border_mode='same', dim_ordering="th")(conv3) pool3 = MaxPooling2D(pool_size=(2, 2), dim_ordering="th")(conv3) conv4 = Convolution2D(256, 3, 3, activation='relu', border_mode='same', dim_ordering="th")(pool3) conv4 = Convolution2D(256, 3, 3, activation='relu', border_mode='same', dim_ordering="th")(conv4) pool4 = MaxPooling2D(pool_size=(2, 2), dim_ordering="th")(conv4) conv5 = Convolution2D(512, 3, 3, activation='relu', border_mode='same', dim_ordering="th")(pool4) conv5 = Convolution2D(512, 3, 3, activation='relu', border_mode='same', dim_ordering="th")(conv5) up6 = merge([UpSampling2D(size=(2, 2), dim_ordering="th")(conv5), conv4], mode='concat', concat_axis=1) conv6 = Convolution2D(256, 3, 3, activation='relu', border_mode='same', dim_ordering="th")(up6) conv6 = Convolution2D(256, 3, 3, activation='relu', border_mode='same', dim_ordering="th")(conv6) up7 = merge([UpSampling2D(size=(2, 2), dim_ordering="th")(conv6), conv3], mode='concat', concat_axis=1) conv7 = Convolution2D(128, 3, 3, activation='relu', border_mode='same', dim_ordering="th")(up7) conv7 = Convolution2D(128, 3, 3, activation='relu', border_mode='same')(conv7) up8 = merge([UpSampling2D(size=(2, 2), dim_ordering="th")(conv7), conv2], mode='concat', concat_axis=1) conv8 = Convolution2D(64, 3, 3, activation='relu', border_mode='same', dim_ordering="th")(up8) conv8 = Convolution2D(64, 3, 3, activation='relu', border_mode='same', dim_ordering="th")(conv8) up9 = merge([UpSampling2D(size=(2, 2), dim_ordering="th")(conv8), conv1], mode='concat', concat_axis=1) conv9 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', dim_ordering="th")(up9) conv9 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', dim_ordering="th")(conv9) conv10 = Convolution2D(N_Cls, 1, 1, activation='sigmoid', dim_ordering="th")(conv9) model = Model(input=inputs, output=conv10) model.compile(optimizer=Adam(), loss='binary_crossentropy', metrics=[jaccard_coef, jaccard_coef_int, 'accuracy']) return model I'm running keras with a TensorFlow backend. My thoughts are that there are some compatibility issues with versions of software (i.e. the original code is over a year old). Or, perhaps I need to reshape the data somehow. What might be causing this error? 
This is the full error: Traceback (most recent call last): File "<ipython-input-1-e8f13915ac9b>", line 1, in <module> runfile('/Users/aaron/temp/tmp/kaggle_dstl_v3.py', wdir='/Users/aaron/temp/tmp') File "/Users/aaron/anaconda3/envs/kaggle-dstl-env/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 705, in runfile execfile(filename, namespace) File "/Users/aaron/anaconda3/envs/kaggle-dstl-env/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "/Users/aaron/temp/tmp/kaggle_dstl_v3.py", line 513, in <module> model = train_net() File "/Users/aaron/temp/tmp/kaggle_dstl_v3.py", line 416, in train_net model = get_unet() File "/Users/aaron/temp/tmp/kaggle_dstl_v3.py", line 294, in get_unet up8 = merge([UpSampling2D(size=(2, 2), dim_ordering="th")(conv7), conv2], mode='concat', concat_axis=1) File "/Users/aaron/anaconda3/envs/kaggle-dstl-env/lib/python3.6/site-packages/keras/legacy/layers.py", line 458, in merge name=name) File "/Users/aaron/anaconda3/envs/kaggle-dstl-env/lib/python3.6/site-packages/keras/legacy/layers.py", line 111, in __init__ node_indices, tensor_indices) File "/Users/aaron/anaconda3/envs/kaggle-dstl-env/lib/python3.6/site-packages/keras/legacy/layers.py", line 191, in _arguments_validation 'Layer shapes: %s' % (input_shapes)) ValueError: "concat" mode can only merge layers with matching output shapes except for the concat axis. Layer shapes: [(None, 128, 80, 256), (None, 64, 80, 80)] AI: First of all, you are correct that your code is old as some functions being used are deprecated (e.g. Convolution2D is now Conv2D see here). However, the error clearly states that you are trying to concatenate two tensors that their dimensions do not match. When concatenating two tensors along a specific axis, all other dimensions except the one being concatenated must be the same. In your case, you are trying to concatenate along axis=1, but the last dimension is different (256 for the first tensor and 80 for the second). I would recommend basing your U-Net code on a newer implementation (link1, link2, link3). These implementations replace your merge([UpSampling2D(size=(2, 2), dim_ordering="th")(conv5), conv4], mode='concat', concat_axis=1) layers with up_conv5 = UpSampling2D(size=(2, 2), data_format="channels_last")(conv5) ch, cw = get_crop_shape(conv4, up_conv5) crop_conv4 = Cropping2D(cropping=(ch,cw), data_format="channels_last")(conv4) up6 = concatenate([up_conv5, crop_conv4], axis=concat_axis)` which also crops the height and the width of the previous layer to perform the concatenation correctly (the shape of the crop is determined through an auxiliary function).
H: validation_curve differs from cross_val_score? I'm trying to see how well a decision tree classifier performs on my input. For this I'm trying to use the validation and learning curves and SKLearn's cross-validation methods. However, they differ, and I don't know what to make of it. The validation curve shows up as follows: Based on varying the maximum depth parameter, I'm getting worse and worse cross-val scores. However, when I try the cross_val_score, I get ~72% accuracy reliably: While I was using the default tree depth for clf here, it still puzzles me how the validation curve never reaches even 0.6, but the cross-val scores are all above 0.7. What does this mean? Why is there a discrepancy? Code for reference below. For the Validation curve: import matplotlib.pyplot as plt import numpy as np from sklearn.datasets import load_digits from sklearn.svm import SVC from sklearn.model_selection import validation_curve X, y = prepareDataframeX.values, prepareDataframeY.values.ravel() param_range = np.arange(1, 50, 5) train_scores, test_scores = validation_curve( DecisionTreeClassifier(class_weight='balanced'), X, y, param_name="max_depth", param_range=param_range, cv=None, scoring="accuracy", n_jobs=1) train_scores_mean = np.mean(train_scores, axis=1) train_scores_std = np.std(train_scores, axis=1) test_scores_mean = np.mean(test_scores, axis=1) test_scores_std = np.std(test_scores, axis=1) plt.title("Validation Curve with Decision Tree Classifier") plt.xlabel("max_depth") #plt.xticks(param_range) plt.ylabel("Score") plt.ylim(0.0, 1.1) lw = 2 plt.plot(param_range, train_scores_mean, label="Training score", color="darkorange", lw=lw) plt.fill_between(param_range, train_scores_mean - train_scores_std, train_scores_mean + train_scores_std, alpha=0.2, color="darkorange", lw=lw) plt.plot(param_range, test_scores_mean, label="Cross-validation score", color="navy", lw=lw) plt.fill_between(param_range, test_scores_mean - test_scores_std, test_scores_mean + test_scores_std, alpha=0.2, color="navy", lw=lw) plt.legend(loc="best") plt.show() For the cross-val scores: clf = DecisionTreeClassifier(class_weight='balanced') X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.33, random_state=42) clf.fit(X_train, y_train) y_pred = clf.predict(X_test) clf.score(X_test, y_test) UPDATE A comment asked about shuffling. When I shuffle the data by X, y = prepareDataframeX.values, prepareDataframeY.values.ravel() indices = np.arange(y.shape[0]) np.random.shuffle(indices) X, y = X[indices], y[indices] I get: Which makes even less sense to me. What does this mean? AI: First of all, you have to shuffle your data, because it seems that the model has learned a particular pattern in the training data which does not occur as much in the test data. After that, suppose that you get a validation curve like the current one. As you can see, increasing the depth does not change the learning: the two lines are parallel. In cases where the two lines would eventually intersect (the upper one sloping downward and the lower one sloping upward), you might want to keep increasing the number of levels, but that is not the case here. Having roughly the same training and cross-validation error means that you are not over-fitting, but, as you can see, both scores are low, which means you have a high-bias problem: the model has not learned the problem well.
In this case, it means that your current feature space may have a high Bayes error, i.e. there are samples with essentially the same features but different labels: the distributions of the different classes overlap. There is one caveat for decision trees: if your features are numerical and continuous, you may not have exactly identical input patterns, but their ranges still overlap.
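A sketch of the shuffling fix applied to the validation curve from the question: pass a cross-validator that shuffles instead of cv=None (X and y are the arrays built in the question):

import numpy as np
from sklearn.model_selection import StratifiedKFold, validation_curve
from sklearn.tree import DecisionTreeClassifier

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)  # shuffles before splitting
param_range = np.arange(1, 50, 5)
train_scores, test_scores = validation_curve(
    DecisionTreeClassifier(class_weight='balanced'), X, y,
    param_name="max_depth", param_range=param_range,
    cv=cv, scoring="accuracy")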
H: Should the output of regression models, like SVR, be normalized? I have a regression problem which I solved using SVR. Accidentally, I normalized my output along with the inputs by removing the mean and dividing by the standard deviation of each feature. Surprisingly, the R-squared score increased by 10%. How can one explain the impact of output normalization for SVM regression? AI: In regression problems it is customary to normalize the output too, because the scale of the output and the input features may differ. After getting the result of the SVR model, you have to multiply it by the standard deviation and then add back the mean, to invert the normalization you applied. How can one explain the impact of output normalization for SVM regression? If you normalize your data, you will have a cost function which is better behaved, meaning that the optimizer can find a good minimum more easily. The reason is that in regression problems you have to construct the output from the input features. It is difficult for the model to produce large target values from small normalized features, but with a normalized output the mapping is easier and can be learned faster.
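A minimal sketch of normalizing both inputs and target around an SVR, assuming scikit-learn >= 0.20 (for TransformedTargetRegressor) and hypothetical X_train/y_train/X_test arrays; predictions are mapped back to the original units automatically:

from sklearn.svm import SVR
from sklearn.compose import TransformedTargetRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Scale X inside the pipeline and y via the transformer; predict() inverts the
# target scaling, so predictions come back in the original units.
model = TransformedTargetRegressor(
    regressor=make_pipeline(StandardScaler(), SVR(kernel='rbf')),
    transformer=StandardScaler())
model.fit(X_train, y_train)
y_pred = model.predict(X_test)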
H: Why won't my SVM learn a sequence of repeated elements I recently started playing with SVMs for one-class classification. I was able to get some reasonable classifications from real data, but was trying to optimize the nu and gamma parameters when I came across this example: In the code below, I train an SVM with an array of ones, then I present the same array of ones for classification and it classifies all ones as outliers. import pandas as pd from sklearn import svm import numpy as np nu = 0.01 gamma = 1 ones = pd.DataFrame(np.ones(100)) clf = svm.OneClassSVM(nu=nu, kernel="rbf", gamma=gamma) clf.fit(ones) ones["predicted"] = clf.predict(ones) #Returns -1 for all entries My question is: why does this happen? I thought this data would be trivial for any parameter configuration. AI: What you are facing is a small but crucial definition difference: novelty detection: The training data is not polluted by outliers, and we are interested in detecting anomalies in new observations. outlier detection: The training data contains outliers, and we need to fit the central mode of the training data, ignoring the deviant observations. OneClassSVM is an unsupervised outlier detection method. Therefore your data needs to have outliers in order for the algorithm to detect them. My best guess as to why it predicts every input as an outlier is that, if there are no real outliers, everything must be an outlier. Let me demonstrate this quickly; I adjusted the kernel to linear: import pandas as pd from sklearn import svm import numpy as np nu = 0.5 gamma = 1.0 ones = pd.DataFrame(np.ones(100)) clf = svm.OneClassSVM(nu=nu, kernel="linear", gamma=gamma) clf.fit(ones) clf.predict([[-1]]) # -1 clf.predict([[1]]) # -1 clf.predict([[1.00001]]) # 1 clf.predict([[2]]) # 1 clf.predict([[10]]) # 1
H: Understanding of naive bayes: computing the conditional probabilities For a task on sentiment analysis, suppose we have some classes represented by $c$ and features $i$. We can represent the conditional probability of each class as: $$P(c | w_i) = \frac{P(w_i|c) \cdot P(c)}{P(w_i)}$$ where $w_i$ represents each feature and $c$ is the class we have. Then empirically, we can represent $$P(w_i|c) = \frac{n_{ci}}{n_c}$$ $$P(w_i) = \frac{n_{i}}{n}$$ Our priors for each class are then given by: $$P(c) = \frac{n_c}{n}$$ where: $n$ is the total number of features in all classes. $n_{ci}$ represents the count of feature $i$ in class $c$. $n_c$ is the total number of features for the class, and $n_i$ is the total count of feature $i$ across all classes. Is my understanding of the above correct? So given these $P(c|w_i)$ probabilities for each word, the naive Bayes assumption is that the words are independent, so I simply multiply over each word in a document for a certain class, i.e. I compute $\prod_{i=1}^{N} P(c|w_i)$ where $N$ is the number of words in the document. Is this correct? To actually compute the conditional probability numerically, would it suffice to do the following: $$P(c | w_i) = \frac{P(w_i|c) \cdot P(c)}{P(w_i)} = \frac{n_{ci}}{n_c} \cdot \frac{n_c}{n}\cdot \frac{n}{n_i} = \frac{n_{ci}}{n_i}$$ The last part of the equation looks a bit suspicious to me as it seems way too simple to compute for a rather complex probability. AI: Your formula is correct for one $w_i$, but if you want to classify a document, you need to compute $P(c | w_1,\ldots,w_N)$. Then you have $$P(c | w_1,\ldots,w_N) = \frac{P(c)\cdot P(w_1,\ldots,w_N|c)}{P(w_1,\ldots,w_N)} = \frac{P(c) \cdot \prod_{i=1}^N P(w_i|c)}{P(w_1,\ldots,w_N)} \neq \prod_{i=1}^NP(c|w_i)$$ where the second equation holds because of the naïve Bayes assumption. For classification purposes you can ignore $P(w_1,\ldots,w_N)$ because it is constant (given the data). The formula is still simple ("naïve") but doesn't simplify quite as much. The last part of the equation looks a bit suspicious to me as it seems way too simple to compute for a rather complex probability. Keep in mind that while Naïve Bayes is a decent classifier for many applications, the generated probabilities are usually not very representative.
H: Cleaning input data with pd.get_dummies() What is the advantage of converting a series like >>> df Color 0 Red 1 Blue 2 Green 3 Red to multiple series like the below? >>> pd.get_dummies(df) Color_Blue Color_Green Color_Red 0 0 0 1 1 1 0 0 2 0 1 0 3 0 0 1 One could as well have integer (label) encoded values for the Color column, as below: >>> labels=list(set(df.Color)) >>> pd.DataFrame(df.Color.map({x:labels.index(x) for x in labels}).rename('Color_Code')) Color_Code 0 1 1 2 2 0 3 1 I know syntactically pd.get_dummies looks much simpler, but somehow I want to lean towards a smaller number of features rather than more... AI: Categorical features need to be converted to numerical values. There are various ways to do that. I would recommend reading this blog and this one to learn the advantages and disadvantages of each. The first method you showed is called one-hot encoding (OHE), which you can get easily with pd.get_dummies as you mentioned; it is frequently used with algorithms like XGBoost. One of the biggest downfalls of OHE is that it increases, often unnecessarily, the dimensionality of your feature space. Other cons include not always being a good representation of the feature space, as well as the problem of unseen categories in the test data. The second method is LabelEncoding, usually used for target variables or when you have very few categorical features. The problem surfaces when you start having a large number of categorical features (>50% of the feature space) with each feature having many sub-levels. Here you need a more clever way of feature encoding, like target-based mean encoding (with smoothing; you can find some kernels on Kaggle like this). If you are a Python user, I have found a fairly new package under scikit-learn-contrib offering a wide range of categorical encoding methods. This KDnuggets post also compares some of the aspects of using different methods.
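A small sketch of the three encodings discussed above on the question's toy column, with target (mean) encoding computed by hand via groupby; the Target column is a made-up label added purely for illustration:

import pandas as pd

df = pd.DataFrame({'Color': ['Red', 'Blue', 'Green', 'Red'],
                   'Target': [1, 0, 1, 1]})

# 1) One-hot encoding: one binary column per level
ohe = pd.get_dummies(df['Color'], prefix='Color')

# 2) Label encoding: one integer column (imposes an arbitrary ordering)
df['Color_Code'] = df['Color'].astype('category').cat.codes

# 3) Target (mean) encoding: replace each level with the mean target for that level
means = df.groupby('Color')['Target'].mean()
df['Color_TargetEnc'] = df['Color'].map(means)

print(pd.concat([df, ohe], axis=1))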
H: How to understand incremental stochastic gradient algorithm and its implementation in logistic regression [updated]? I have a difficulty where to start the implementation of incremental stochastic gradient descent algorithm and its respective implementation in logistic regression. I don't quite understand this algorithm and there are few sources to explain it with crystal-clear interpretation and possible demo code. I am quite new with ML algorithm and I have no idea which would be efficient workaround to solve this problem. In particular, the problem I am studying is to implement hogWild! algorithm for logistic regression, which asks me to program incremental SGD algorithm with a sequential order. Can anyone give me a general idea or possible pipeline to make this happen in python? logistic loss function and gradient Here is my implementation: import numpy as np import scipy as sp import sklearn as sl from scipy import special as ss from sklearn import datasets X_train, y_train=datasets.load_svmlight_file('/path/to/train_dataset') X_test,y_test=datasets.load_svmlight_file('/path/to/train_dataset.txt', n_features=X_train.shape[1]) class ISGD: def lossFunc(X,y,w): w.resize((w.shape[0],1)) y.resize((y.shape[0],1)) lossFnc=ss.log1p(1+np.nan_to_num(ss.expm1(-y* np.dot(X,w,)))) rslt=np.float(lossFnc) return rslt def gradFnc(X,y,w): w.resize((w.shape[0],1)) y.resize((y.shape[0],1)) gradF1=-y*np.nan_to_num(ss.expm1(-y)) gradF2=gradF1/(1+np.nan_to_num(ss.expm1(-y*np.dot(X,w)))) gradF3=gradF2.resize(gradF2.shape[0],) return gradF3 def _init_(self, learnRate=0.0001, num_iter=100, verbose=False): self.w=None self.learnRate=learnRate self.verbose=verbose self.num_iter=num_iter def fitt(self, X,y): n,d=X.shape self.w=np.zeros(shape=(d,)) for i in range(self.num_iter): print ("\n:", "Iteration:", i) grd=gradFnc(self.w, X,y) grd.resize((grd.shape[0],1)) self.w=self.w-grd print "Loss:", lossFunc(self.w,X,y) return self Seems my above implementation has some problems. Can anyone help me how to correct that? Plus, I don't have a solid idea how to implement incremental SGD sequentially. How can I make this happen? Any idea? AI: Gradient Descent The idea of gradient descent is to traverse a function $f_{LR}(\textbf{w})$ and find a local maximum or minimum for a set a values $\textbf{w}$. Gradient descent is an iterative process. At each iteration you will evaluate the function given your current set of values. Then you will take the derivative of that function with respect to those values to see how much each value contributes to the slope of the function. We can then change the values in a proportionate way to move towards this optimal point. The gradient descent equation is described as $\textbf{w}^{(k+1)} = \textbf{w}^{(k)} - \rho \frac{\partial f_{LR}(\textbf{w})}{\partial \textbf{w}^{(k)}}$ where $\rho$ is the learning rate, usually small number. A constant which determines the speed at which we want to change the values we are optimizing. Initializing the values There's many ways to initialize these values. We can either set them all to zero, or set them randomly. Then you can take a random instance in your dataset $x_i$ and $y_i$ and compute the derivative $\frac{\partial f_{LR}(\textbf{w})}{\partial \textbf{w}^{(k)}} = \textbf{x}_i (-y_i \frac{e^{-y_i \textbf{x}_i \textbf{w}}}{1 + e^{-y_i \textbf{x}_i \textbf{w}}})$. Once the compute this you can put the result into the gradient descent equation and update all your weights. 
You then continue to go through this process until your weights $\textbf{w}$ converge to a value, or some other condition is met. You might want to limit your algorithm with some iteration counter to avoid infinite loops caused by weights that do not converge. Gradient Descent Code Here is an example using gradient descent to train weights along the logistic regression loss function. import numpy as np def update_weights(x_i, y_i, w): p = 0.8 # The learning rate yhat = predict(x_i, w) # The predicted target error = y_i - yhat return w + p * (y_i - yhat) * x_i # Update the weights def predict(x_i, w): return 1/(1+np.exp(-1 * np.dot(w.T, x_i))) def train_weights(x, y, verbose = 0): w = np.zeros((x.shape[1],)) w_temp = np.zeros((x.shape[1],)) epoch = 0 while epoch <= 1000: for i, x_i in enumerate(x): w = update_weights(x_i, y[i], w) pred = 1/(1+np.exp(-1 * np.dot(x, w))) error = np.sum(pred - y) pred[pred < 0.5] = 0 pred[pred >= 0.5] = 1 epoch += 1 if verbose == 1: print('------------------------------------------------') print('Targets: ', y) print('Predictions: ', pred) print('Predictions: ', pred) print('------------------------------------------------') # Check if we have reach convergence if np.sum(np.abs(w_temp - w)) < 0.001: print(epoch) print(error) return w else: w_temp = w return w # Create some artificial database x = np.zeros((4,3)) y = np.zeros((4,)) x[0] = [1, 1, 1] x[1] = [1, 2, 2] x[2] = [1, 10, 10] x[3] = [1, 11, 11] y[0] = 0 y[1] = 0 y[2] = 1 y[3] = 1 print(x) print(y) # Train the weights w = train_weights(x, y, 0) print('Trained weights: ', w) # Get predicitons pred = 1/(1+np.exp(-1 * np.dot(x, w))) error = np.sum(pred - y) pred[pred < 0.5] = 0 pred[pred >= 0.5] = 1 print('Predictions: ', pred)
H: How to get spike values from a value sequence? I have a pile of vectors where the values could be plotted like this: Now I want to extract the "spike values" (over a certain threshold, say 15,000). In this case there are fifteen. How could this be done with Python? (There is no predefined number of spikes but the threshold is a reliable filter value.) AI: This is very simple. Let's say your data is in pandas format (named data_df); then extracting peaks/spikes over a certain threshold (e.g. 15000 here) is simply: data_df[data_df > 15000] If this data is sitting in a particular column, you can use this instead: data_df[data_df['column_name'] > 15000] These will return the peak values. Updated Answer: If you want local extreme points (e.g. maximum or minimum) around each peak, check scipy.signal.argrelextrema in Scipy. A concrete example: Let's make some artificial random data with random spikes: import numpy as np import matplotlib.pyplot as plt from scipy.signal import argrelextrema random_number1 = np.random.randint(0,200,20) random_number2 = np.random.randint(0,20,100) random_number = np.concatenate((random_number1,random_number2)) np.random.shuffle(random_number) plt.plot(random_number) Now, using the argrelextrema function, you will find the indices of the relative extreme values (either minimum or maximum): c_max_index = argrelextrema(random_number, np.greater, order=5) Please make sure you understand the "order" option. It basically looks at the 5 neighbouring points on each side and, in this case, returns points that are the maximum within that window. And you can see how it works by pinpointing the found points on the actual graph like this: plt.plot(random_number) plt.scatter(c_max_index[0],random_number[c_max_index[0]],linewidth=0.3, s=50, c='r') Note that you can retrieve the peak points via random_number[c_max_index[0]], and c_max_index are just the indices of the extreme points.
H: GloVe vector representation homomorphism question In the paper GloVe: Global Vectors for Word Representation, there is this part (bottom of third page) I don't understand: I understand what groups and homomorphisms are. What I don't understand is what requiring $ F $ to be a homomorphism between $ (\mathbb{R},+) $ and $ (\mathbb{R}_{>0},\times) $ has to do with making $ F $ symmetrical in $ w $ and $ \tilde{w}_k $. Am I misunderstanding something? We want $ F $ to be unchanged if we either interchange $ w_i $ and $ \tilde{w}_k $ OR interchange $ w_j $ and $ \tilde{w}_k $, right? Is this the only way to achieve the symmetry between $ w $ and $ \tilde{w}_k $? AI: If you're asking if the group homomorphism makes the process symmetric then no, it doesn't directly. However, they use the fact that they require a group homomorphism to show that $w_{i}^{T} \tilde{w}_k = \log(P_{ik})=\log(X_{ik}) - \log(X_{i})$ This nearly gives us symmetry. Finally, by adding $\tilde{b}_{k}$ into the equation you restore symmetry. So in short $w_{i}^{T} \tilde{w}_k + b_{i} + \tilde{b}_{k} = \log(X_{ik})$ is what ensures symmetry, and the group homomorphism is a tool to get there. Update: Some more details Essentially, what we want is the ability to perform a label switch. The group homomorphism helps with this process because it preserves a mapping between $(\mathbb{R}, +)$ and $(\mathbb{R}_{>0}, \times)$. $F((w_{i}^{T} - w_{j}^{T})w_{k}^{'})=F(w_{i}^{T}w_{k}^{'}+( - w_{j}^{T}w_{k}^{'})) = F(w_{i}^{T}w_{k}^{'}) \times F(-w_{j}^{T}w_{k}^{'} )= F(w_{i}^{T}w_{k}^{'}) \times F(w_{j}^{T}w_{k}^{'})^{-1} = \frac{F(w_{i}^{T}w_{k}^{'})}{F(w_{j}^{T}w_{k}^{'})}$ The group homomorphism here allows for that to occur. Therefore we can see that by setting $F(w_{i}^{T}w_{k}^{'}) = \frac{X_{ik}}{X_{i}}$ we can finally say that $w_{i}^{T} {w}_k^{'} = \log(P_{ik})=\log(X_{ik}) - \log(X_{i}).$ So as far as your comment goes, it is the most sensible choice for their method, and it is what they build the core mathematics of GloVe on. Changing it, I imagine, wouldn't be a trivial thing. I imagine if you did, much of what is derived, including the loss function, would change. But with that said, I imagine there are other ways to achieve label switching.
H: Autoencoder for anomaly detection from feature vectors I am trying to use an autoencoder (as described here https://blog.keras.io/building-autoencoders-in-keras.html#) for anomaly detection. I am using a ~1700 feature vector (rather than images, which were used in the example) with each vector describing a different protein interaction. I have a "normal" category of interactions on which I train the AE, then I feed it new vectors and use reconstruction error to detect anomalous interactions. Adjusting my threshold so I get a true positive rate of 0.95, I get a false positive rate of 0.15, which is rather high. When I trained xgboost on the normal and anomalous vectors (using both types of interactions in training and testing) I was able to get precision of 0.98 **. Does that mean that my model (or indeed my approach of using an AE) is ineffective, or maybe this is the best I could hope for when training an anomaly detector rather than a 2 category classifier (that is, xgboost in my case)? How should I proceed? ** Of course, this is merely a sanity check, and cannot be used as the solution. I need the model to detect anomalies that can be very different from those I currently have - thus I need to train it on the normal interaction set, and leave anomalies for testing alone. AI: Does that mean that my model (or indeed my approach of using an AE) is ineffective Well, it depends. Autoencoders are quite a broad field; there are many hyperparameters to tune: width, depth, loss function, optimizer, epochs. How should I proceed? From my gut feeling I would say that you don't have enough data to train the AE properly. Keep in mind that the MNIST database contains 50,000 training images, and you need enough variance in order to not overfit your training data. Tree-based approaches are, at least in my experience, easier to train. If you would like to stick with the anomaly detection approach, which I recommend since you don't know what anomalies you will face, try the Isolation Forest algorithm. But for a solid recommendation I would need to know how your data looks. By the way, a good metric to use in such a case is the ROC score, which basically tells you how likely it is that your model will classify new data points correctly. Check out this link for a visual explanation: ROC explained. So the bottom line is: try less complex approaches until you are certain that they are not sufficient.
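A minimal Isolation Forest sketch in the spirit of the recommendation above, assuming hypothetical arrays X_normal (the "normal" training interactions) and X_new (new interactions to score):

from sklearn.ensemble import IsolationForest

# Fit only on the normal interactions, mirroring the AE setup
iso = IsolationForest(n_estimators=200, contamination='auto', random_state=0)
iso.fit(X_normal)

scores = iso.decision_function(X_new)   # lower score = more anomalous
labels = iso.predict(X_new)             # -1 = anomaly, 1 = normal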
H: Shape classification algorithm I have a dataset of shape edges that I am trying to make a model for with sklearn. I'm new to the machine learning world, so I am struggling to create a good model. Using SVM, I was able to get a supposed 81% precision, but when I feed it an image outside the training or test set, it consistently returns the wrong prediction, almost every time. Question: Is there a better way of doing this than using SVM? Or are these shapes too similar? I have 90 images in the training set. Here is a link to my ML code. AI: That sounds like you're suffering from overfitting, probably due to the "curse of dimensionality", which means two things. When trying to measure the distance between two points in high-dimensional space and interpret this data, the difference between the longest and the shortest distance gets less meaningful. Trying to train an algorithm with far fewer samples than features, like in your case, will cause the algorithm to memorize your samples. I see 3 options. Get more data, reduce your feature set, or tune the C parameter, which controls the regularization of the decision boundary (in scikit-learn, smaller C means stronger regularization). More to read: Curse of dimensionality n >> p
H: How to validate different methods of clustering I have applied K-means and Hierarchical Agglomerative clustering methods on some data and clustered them into 5 groups. For validation (agreement) purposes I used this formula: c=(# of clusters shared / total clusters)*100 % I feel it's a wrong validation technique. Can I use entropy, Rand index, Dunn index etc. to measure the validation? AI: You could always try a silhouette score, although it may not perform well on high-dimensional data. Silhouette scores compare a datapoint's placement relative to its own cluster to its placement relative to the other cluster/clusters. The values returned range from -1 (indicating a poor clustering) to +1 (indicating "good" clustering). Using Wikipedia's example, use 2 clusters and assume there is an observation i that has been placed in Cluster 1. The silhouette score uses distance measures (Euclidean, for example, but others can also be used) to determine the average distance of i to every other point in Cluster 1. Then it determines the average distance of i to every point in Cluster 2. The silhouette formula for point i would be: silhouette(i) = $\frac{b(i)-a(i)}{Max\left \{ a(i), b(i) \right \}}$ Remembering for this example that i is in Cluster 1, then: If an object is closer, on average, to the points in Cluster 1 than Cluster 2, then a(i) < b(i) and the silhouette score is positive for that point. If an object is closer, on average, to the points in Cluster 2 than Cluster 1, then a(i) > b(i) and the silhouette score is negative for that point. Run this comparison across all points and all clusters, then average, and you will obtain a global silhouette score. You will see most people compute the silhouette score in a for loop where they vary the number of clusters; this allows quick comparison across a range of cluster counts.
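A short sketch of comparing the two clusterings with scikit-learn's silhouette_score, assuming X is your (hypothetical) feature matrix:

from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import silhouette_score

km_labels = KMeans(n_clusters=5, random_state=0).fit_predict(X)
hc_labels = AgglomerativeClustering(n_clusters=5).fit_predict(X)

# Higher is better; compare the two clusterings on the same data
print("k-means silhouette:      ", silhouette_score(X, km_labels))
print("agglomerative silhouette:", silhouette_score(X, hc_labels))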
H: ValueError while using linear regression I have loaded a dataset and converted it into a data frame, and while I am using linear regression I am receiving the following value error, as shown in my code below. I am at the moment doing a tutorial and could not figure out how come the arrays are 1-D as the error shows. from sklearn.datasets import load_boston boston_dataset=load_boston() #create a pandas dataframe and store the data df_boston=pd.DataFrame(boston_dataset.data) df_boston.columns=boston_dataset.feature_names #append price, target as a new column in the dataset df_boston['Price']=boston_dataset.target #print first five observations df_boston.head() CRIM ZN INDUS CHAS NOX RM AGE DIS RAD TAX PTRATIO B LSTAT Price 0 0.00632 18.0 2.31 0.0 0.538 6.575 65.2 4.0900 1.0 296.0 15.3 396.90 4.98 24.0 1 0.02731 0.0 7.07 0.0 0.469 6.421 78.9 4.9671 2.0 242.0 17.8 396.90 9.14 21.6 2 0.02729 0.0 7.07 0.0 0.469 7.185 61.1 4.9671 2.0 242.0 17.8 392.83 4.03 34.7 3 0.03237 0.0 2.18 0.0 0.458 6.998 45.8 6.0622 3.0 222.0 18.7 394.63 2.94 33.4 4 0.06905 0.0 2.18 0.0 0.458 7.147 54.2 6.0622 3.0 222.0 18.7 396.90 5.33 36.2 #assign features on x-axis X_features=boston_dataset.data #assign target on y-axis Y_target=boston_dataset.target #import linear model-the estimator from sklearn.linear_model import LinearRegression lineReg=LinearRegression() #fit data into the estimator lineReg.fit(X_features,Y_target) ValueError Traceback (most recent call last) <ipython-input-16-4b5b068e587b> in <module>() 1 #fit data into the estimator ----> 2 lineReg.fit(X_features,Y_target) /usr/local/lib/python3.4/dist-packages/sklearn/linear_model/base.py in fit(self, X, y, sample_weight) 480 n_jobs_ = self.n_jobs 481 X, y = check_X_y(X, y, accept_sparse=['csr', 'csc', 'coo'], --> 482 y_numeric=True, multi_output=True) 483 484 if sample_weight is not None and np.atleast_1d(sample_weight).ndim > 1: AI: It seems in your code you build a pandas dataframe but you do not use it. I recreated your code line by line and was unable to get the same error. Try the following from sklearn.datasets import load_boston from sklearn.linear_model import LinearRegression boston = load_boston() X = boston.data Y = boston.target lineReg = LinearRegression() lineReg.fit(X, Y) lineReg.score(X, Y) This results in an $R^2$ score of $0.7406$ (the score method of LinearRegression returns $R^2$). Of course, this result is kind of meaningless because you should split your data into a training and testing set in order to accurately test your results. You should do the following from sklearn.datasets import load_boston from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split boston = load_boston() X = boston.data Y = boston.target X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, shuffle=True) lineReg = LinearRegression() lineReg.fit(X_train, y_train) lineReg.score(X_test, y_test) This will always give a different score because the data is shuffled before being split into your two sets. You MUST separate your training data at the start of your algorithm and not pollute your training set with the testing set, otherwise you risk biasing your model! This is very important.
H: What algorithm can predict structured outputs of arbitrary size? I have a collection of graph objects of variable size (input) which are each paired to another graph of variable size (output). The task is, given an input graph, produce the most likely output graph. I have been looking at 'structured output' techniques such as SSVM but as far as I can tell, they all act on outputs of fixed size (or at least a size that matches the input size, i.e. sequence tagging). Are there any tools that can map an input to a structured object of arbitrary size? AI: You should try graph embedding, and for that I propose going through Horst Bunke from the University of Bern, who has been doing it for years. Just search him on Google Scholar and go through his publication network (co-authors, cited papers, citing papers, etc.), for example this and this as more classic papers, or this as an exact answer to your question. I am pretty confident you will find your solution as it's exactly his research direction. I am on mobile so referencing is difficult now, but I can update my answer later. The other point that I would like to mention is that graph embedding comes in two kinds: Embedding the vertices (nodes) of one graph into an n-dimensional space, i.e. each vector will be a node of the graph. (see this) Embedding a dataset of graphs into an n-dimensional space, in which each vector will be an entire graph. You need the second one. The other thing worth mentioning is looking at the problem from a Network Science point of view, in which the statistical and structural properties of a graph are widely studied. There are plenty of graph/network measures which are more or less tolerant to the number of nodes, e.g. the Clustering Coefficient. They can be used as features and fed to a classifier to see how it works. Last but not least is what you actually asked! What you are looking for is called a Graph Kernel. It is nothing but the definition of the well-known ML concept of a kernel, but on graphs. So kernel methods can benefit from that by skipping the feature extraction step (like what I suggested above) and working directly with structured inputs. For this purpose I would suggest that you equip yourself with graph kernels, e.g. this paper. Good luck!
H: Softmax: Different output scikit-learn and TensorFlow I'm trying to learn a simple linear softmax model on some data. The LogisticRegression in scikit-learn seems to work fine, and now I am trying to port the code to TensorFlow, but I'm not getting the same performance, but quite a bit worse. I understand that the results will not be exactly equal (scikit learn has regularization params etc), but it's too far off. total = pd.read_feather('testfile.feather') labels = total['labels'] features = total[['f1', 'f2']] print(labels.shape) print(features.shape) classifier = linear_model.LogisticRegression(C=1e5, solver='newton-cg', multi_class='multinomial') classifier.fit(features, labels) pred_labels = classifier.predict(features) print("SCI-KITLEARN RESULTS: ") print('\tAccuracy:', classifier.score(features, labels)) print('\tPrecision:', precision_score(labels, pred_labels, average='macro')) print('\tRecall:', recall_score(labels, pred_labels, average='macro')) print('\tF1:', f1_score(labels, pred_labels, average='macro')) # now try softmax regression with tensorflow print("\n\nTENSORFLOW RESULTS: ") ## By default, the OneHotEncoder class will return a more efficient sparse encoding. ## This may not be suitable for some applications, such as use with the Keras deep learning library. ## In this case, we disabled the sparse return type by setting the sparse=False argument. enc = OneHotEncoder(sparse=False) enc.fit(labels.values.reshape(len(labels), 1)) # Reshape is required as Encoder expect 2D data as input labels_one_hot = enc.transform(labels.values.reshape(len(labels), 1)) # tf Graph Input x = tf.placeholder(tf.float32, [None, 2]) # 2 input features y = tf.placeholder(tf.float32, [None, 5]) # 5 output classes W = tf.Variable(tf.zeros([2, 5])) b = tf.Variable(tf.zeros([5])) # Construct model pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax clas = tf.argmax(pred, axis=1) # Minimize error using cross entropy cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1)) # Gradient Descent optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(cost) # Initialize the variables (i.e. assign their default value) init = tf.global_variables_initializer() # Start training with tf.Session() as sess: # Run the initializer sess.run(init) # Training cycle for epoch in range(1000): # Run optimization op (backprop) and cost op (to get loss value) _, c = sess.run([optimizer, cost], feed_dict={x: features, y: labels_one_hot}) # Test model correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)) class_out = clas.eval({x: features}) # Calculate accuracy accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) print("\tAccuracy:", accuracy.eval({x: features, y: labels_one_hot})) print('\tPrecision:', precision_score(labels, class_out, average='macro')) print('\tRecall:', recall_score(labels, class_out, average='macro')) print('\tF1:', f1_score(labels, class_out, average='macro')) The output of this code is (1681,) (1681, 2) SCI-KITLEARN RESULTS: Accuracy: 0.822129684711 Precision: 0.837883361162 Recall: 0.784522522208 F1: 0.806251963817 TENSORFLOW RESULTS: Accuracy: 0.694825 Precision: 0.735883666192 Recall: 0.649145125846 F1: 0.678045562185 I inspected the result of the one-hot-encoding, and the data, but I have no idea why the result in TensorFlow is much worse. AI: The problem turned out to be silly, I just needed more epochs, a smaller learning rate (and for efficiency I turned to AdamOptimizer, results are now equal. 
(1681,) (1681, 2) SCI-KITLEARN RESULTS: Accuracy: 0.822129684711 Precision: 0.837883361162 Recall: 0.784522522208 F1: 0.806251963817 TENSORFLOW RESULTS: Accuracy: 0.82213 Precision: 0.837883361162 Recall: 0.784522522208 F1: 0.806251963817
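For reference, a minimal sketch of the kind of change described above, using the same TF1-style graph as in the question; the exact learning rate and epoch count here are illustrative assumptions, not the poster's values:

# Replace the plain gradient-descent optimizer with Adam and a smaller learning rate
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(20000):                     # many more epochs than before
        _, c = sess.run([optimizer, cost],
                        feed_dict={x: features, y: labels_one_hot})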
H: How to cross-validate a deep learning model for highly imbalanced datasets? I am working with a multi-modality classification problem (with Keras). I have 1000, 5000 and 10000 samples for three different classes. I would like to do a five fold cross-validation to select the best pre-trained deep learning model for deployment. I'm including class weights during model training to give more weightage for less-pronounced classes. For a given fold, I would be validating with 200, 1000 and 2000 samples from these three classes. Is accuracy a good prediction measure to be used in this case? Or do I have to measure the F1-score and Matthews correlation coefficients as well? Am I doing it right? AI: Accuracy is not a good indicator of success with imbalanced data. The accepted answer is correct: F1 score is commonly used. Other options include roc_auc_score (see here) and average_precision_score (see here), both defined through scikit-learn. If you're using Keras, I would recommend using class_weights (note this will not work well if you have a multi-label problem, although there are some workarounds, for example here).
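A sketch of plugging class weights into Keras training, assuming hypothetical arrays y_train (integer labels), X_train, and one-hot targets, and a hypothetical compiled model; the compute_class_weight helper comes from scikit-learn:

import numpy as np
from sklearn.utils.class_weight import compute_class_weight

classes = np.unique(y_train)                      # e.g. array([0, 1, 2])
weights = compute_class_weight(class_weight='balanced', classes=classes, y=y_train)
class_weight = dict(zip(classes, weights))        # rarer classes get larger weights

model.fit(X_train, y_onehot_train,
          epochs=20, batch_size=64,
          class_weight=class_weight,
          validation_data=(X_val, y_onehot_val))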
H: What are some methodologies for performing feature selection for simple feed-forward neural networks? In multiple linear regression there is an F-test which can be used to evaluate whether or not a covariate has a meaningful impact on a model. This is typically done through either a forward selection or backwards selection algorithm. Does such a meaningful process exist for neural networks as well? The only reason I ask is because working with neural nets is an inherently stochastic process, so I do not know how I should try and get accurate bounds for the F-statistic. AI: I usually follow the process below; I don't know whether it has a name or not: Try to find features from which a domain expert could tell what the label is without hesitating. Plot the correlation matrix of your sample to investigate whether each feature is correlated with the others. I do this whenever there are many input features. Then try to remove the correlated features or apply PCA. Keep in mind that PCA alone does not use the labels, so the principal components it finds are not necessarily the directions along which your classes are easiest to separate.
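A small sketch of the correlation-matrix and PCA steps described above, assuming a hypothetical feature matrix X with feature_names; the 95% variance threshold is an arbitrary illustrative choice:

import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame(X, columns=feature_names)   # hypothetical feature matrix
corr = df.corr()                              # look for pairs with |corr| close to 1
print(corr.round(2))

# PCA on standardized features; note it ignores the labels entirely
X_std = StandardScaler().fit_transform(df)
pca = PCA(n_components=0.95)                  # keep 95% of the variance
X_reduced = pca.fit_transform(X_std)
print(X_reduced.shape, pca.explained_variance_ratio_.sum())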
H: Preparing custom dataset for object detection using ML Seeking clarity on a single-class object detection model using ML. I have prepared a custom database for this purpose of up to 400 images, which is split 80%-20% into training and testing datasets. These are top-view-only images. The data collection followed the basic guidelines provided here. The objective now is to detect the zebra crossing in the below contexts. The model is failing in terms of accuracy. Although it identifies the class, it fails to localize correctly in some contexts (to a greater extent). The desirable result is in blue whereas the model throws up red. What changes are required to the training dataset to rectify this? EDIT1:- The TensorFlow object detection API is used for this task. Detection accuracy attained is above 90%. Looking forward to suggestions to fix the localisation issue. EDIT2:- The illustrations here are only for outlining the issue. All my images are real-world pics. AI: I have the following suggestions: The size of your dataset is small; you should increase the sample size. You should also use data augmentation methods to improve performance; translation, rotation and blurring are highly recommended. For your task, I guess the most popular approach is the one discussed in the YOLO paper, and it is highly recommended if you want full object detection, not just localization and recognition, with good performance. I also suggest providing real data for your task. The images in the first row are artificially made; you have to provide real data of the kind your classifier is going to face. The train and validation distributions should match the test-time distribution if you want good performance.
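A sketch of simple augmentation with Keras' ImageDataGenerator, assuming a hypothetical array x_train of shape (n, height, width, 3) with labels y_train; the specific ranges are illustrative, and note this generator does not blur images, so blurring would need a custom preprocessing function:

from keras.preprocessing.image import ImageDataGenerator

# Small geometric and photometric perturbations suited to top-view road images
datagen = ImageDataGenerator(
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
    brightness_range=(0.7, 1.3),
    horizontal_flip=True)

augmented_batches = datagen.flow(x_train, y_train, batch_size=32)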
H: What do mean and variance mean for high dimensional data? I'm using Scikit Learn to guess the tag of Stack Overflow posts given the title and body. I represent the title and body as two 300-dimensional vectors of floats. The documentation for Scikit learn's SGDClassifier states: For best results using the default learning rate schedule, the data should have zero mean and unit variance. With one-dimensional data, I understand what mean and variance are, but when we go into higher dimensions how would one calculate the mean and variance for a dataset? AI: It's just the extension of the one-dimensional mean and standard deviation, computed per feature. Suppose that you are trying to estimate the weight of a person and you have two inputs, salary and height. For finding the weight, you have two inputs which are on different scales, so you try to find the mean and variance of each feature, salary and height, separately using the data samples. Suppose you have two data samples in a tuple like (salary, height). They are as follows: (10000, 180) (5000, 175) Although the first guy may seem rich, the real point about the first and second data samples is that the two features are not on the same scale. The approach is like this: Find the mean of the first feature (salary) over the data samples: (10000 + 5000) / 2 = 7500 Find the mean of the second feature (height) over the data samples: (180 + 175) / 2 = 177.5 Find the standard deviation of the first feature over the data samples, which is 2500 for salary. Find the standard deviation of the second feature over the data samples, which is 2.5 for height. Subtract the mean of the corresponding feature from each data sample and divide by the standard deviation of that feature: (10000 - 7500) / 2500 = 1 (5000 - 7500) / 2500 = -1 (180 - 177.5) / 2.5 = 1 (175 - 177.5) / 2.5 = -1 The normalized data samples are now as follows: (1, 1) (-1, -1) Whenever you normalize your data, your cost function becomes much easier to optimize. The weights don't have to struggle to reach very different magnitudes, as they would when your features are not in the same range.
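The same computation in code, first with plain NumPy and then with scikit-learn's StandardScaler, using the two samples from the example above:

import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[10000., 180.],
              [ 5000., 175.]])          # rows = samples, columns = features (salary, height)

mean = X.mean(axis=0)                   # per-feature mean: [7500., 177.5]
std = X.std(axis=0)                     # per-feature std:  [2500., 2.5]
print((X - mean) / std)                 # [[ 1.  1.] [-1. -1.]]

print(StandardScaler().fit_transform(X))   # same result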
H: Decision Tree used for Calculating Precision, Accuracy, and Recall, class breakdown question I am creating decision trees modeling data that looks like this. pelvic_radius degree_spondylolisthesis class 82.45603817 41.6854736 Abnormal 114.365845 -0.421010392 Normal When finished, I run my test data through my tree and compare the outputs from the run to the ones I am given. This will allow me to check my accuracy, precision, and recall values. TP = 0; % True Positives TN = 0; % True Negatives FP = 0; % False Positives FN = 0; % False Negatives And then once those values are calculated, I can calculate the following. precision = TP/(TP+FP); accuracy = (TP+TN)/(TP+TN+FP+FN); recall = TP/(TP+FN); However, this can be done in two ways. One considering the 'Normal' class as Positive and one Considering the 'Abnormal' class as positive. Here is the sudo code to further explain what I mean. for k=1:length(resultsOfTestSet) if(strcmp(resultsOfTestSet{k},'Normal')) if (strcmp(testSet{k}, 'Normal')) % TRUE POSITIVE TP = TP + 1; else % FALSE POSITIVE FP = FP + 1; end elseif(strcmp(resultsOfTestSet{k},'Abnormal')) if(strcmp(testSet{k},'Abnormal')) % TRUE NEGATIVE TN = TN + 1; else % FALSE NEGATIVE FN = FN + 1; end end end The above case assumes Normal as the 'Positive' resultant class. However by just flipping the compare statements, I can get alternate values. for k=1:length(resultsOfTestSet) if(strcmp(resultsOfTestSet{k},'Abnormal')) if (strcmp(testSet{k}, 'Abnormal')) % TRUE POSITIVE TP = TP + 1; else % FALSE POSITIVE FP = FP + 1; end elseif(strcmp(resultsOfTestSet{k},'Normal')) if(strcmp(testSet{k},'Normal')) % TRUE NEGATIVE TN = TN + 1; else % FALSE NEGATIVE FN = FN + 1; end end end So after running it for both cases, I get the following values. For Abnormal being my Positive case precision = 96.5517 accur = 95 recall = 87.5000 For Normal = Positive case precision = 94.3662 accur = 95 recall = 98.5294 So how do I calculate the combined result, prec, accu, and recall? Or, am I just missing the point and you just calculate it for one class at a time, like the one you are focusing on. The reason I am asking is because now lets say I have a set with multiple class outcomes in my decision tree. This is where I realized I have to pick a class to determine as my positive, or just look at classes individually. Here is a similar set with 3 class possibilities. Again, how do I calculate for the whole data set? Or is an individual class thing? Or do you calculate the individuals and then come together with a total for the whole decision tree. pelvic_radius degree_spondylolisthesis class 82.45603817 41.6854736 Abnormal 114.365845 -0.421010392 Normal 95 25 Perfect AI: The metrics you calculate are of two types, metrics that depict the entire prediction model you have built like accuracy which will be same in both the cases of your pseudo code. While the others like precision says how precise are you in explaining particular class of interest (accuracy can also be expressed this way in multi-class classification, see the diagram). This score depends on which class you had selected as a positive one. If you put positive class as face of your model, then it is called ppv or precision and npv, if vice versa. Coming to the multi-class classification, the core definition holds the same. Now the matrix will be n x n(n being number of classes). The sample matrix looks likes this. The diagonal elements explains the number of 1 class's predicted as 1. Now there are n precision values for each class. 
Precision for class 1 is the number of samples truly predicted as 1 divided by the total number predicted as 1 (false positives included), which is the sum of the first column. Last but not least, if you really want a single precision-like metric for the entire model, there are micro- and macro-averaging methods, which are helpful for giving a combined metric (see the example below). This blog post explains it pretty well. Hope this clears things up.
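A short sketch of these per-class and averaged metrics with scikit-learn, assuming hypothetical label lists y_true and y_pred containing the class names from the question:

from sklearn.metrics import accuracy_score, precision_score, recall_score

# y_true, y_pred: e.g. lists of 'Normal' / 'Abnormal' / 'Perfect' labels
print(accuracy_score(y_true, y_pred))                     # one number for the whole model
print(precision_score(y_true, y_pred, average=None))      # one precision per class
print(precision_score(y_true, y_pred, average='macro'))   # unweighted mean over classes
print(precision_score(y_true, y_pred, average='micro'))   # global TP / (TP + FP)
print(recall_score(y_true, y_pred, average='macro'))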
H: How to correctly resize input images in a CNN? I have a training set composed of images having different widths and heights. I need to resize them to a fixed dimension n x n or n x m before passing them as the input of a CNN. I would like to know which parameters I need to take into account to correctly choose the new scaled dimensions n and m. AI: I have a suggestion for you; it may not be a complete solution, but it is exactly what I've experienced. Suppose that you have writing on a piece of paper and the paper is part of a scene, or you have people with mustaches somewhere in a wide scene. In such cases, where you have specific details that are significant for classifying (the first may help the classifier recognize books and written things, the second may help the classifier determine the gender of people in the scene), try to resize the images in a way that such significant details remain recognizable to a human. If humans can recognize them, you can hope that your classifier will be able to as well. Also consider that you usually resize images down so that your network does not end up with too many parameters, but if you resize them to be too small, you may lose important information.
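A small sketch of two common resizing strategies with Pillow; the 224 x 224 target and the file name are illustrative assumptions:

from PIL import Image

TARGET = (224, 224)                      # n x m chosen so key details stay recognizable

img = Image.open("example.jpg")          # hypothetical input image of arbitrary size

# Option 1: direct resize (distorts the aspect ratio)
resized = img.resize(TARGET, Image.BILINEAR)

# Option 2: shrink while keeping the aspect ratio, then pad to the target size
img.thumbnail(TARGET, Image.BILINEAR)    # in-place; only shrinks, preserves aspect ratio
padded = Image.new("RGB", TARGET, (0, 0, 0))
padded.paste(img, ((TARGET[0] - img.width) // 2, (TARGET[1] - img.height) // 2))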