H: How to deal with Nominal categorical with label encoding? So if my dataset looks like this: names life_style instrument times 0 sid creative piano 1.5 1 aadi artistic guitar 1.4 2 aman traveller drum 1.1 3 sid artistic guitar 1.5 4 aadi creative drum 1.4 Now i want to deal with those Nominal categorical variables , Easy and go to approach is use Label encoding , But suppose if i am using sklearn label encoder then: from sklearn.preprocessing import LabelEncoder big_data = dataset_pd.apply(LabelEncoder().fit_transform) which will output: names life_style instrument times 0 2 1 2 2 1 0 0 1 1 2 1 2 0 0 3 2 0 1 2 4 0 1 0 1 Now it is converting each column but each column have same numeric values range from 0 to 5. The instrument variable is now similar to 'names' variable since both will have similar data points, which is certainly not a right approach. I have few questions : How should i treat those values without loosing information ? better approach for this type of data points ? I am thinking to use random forest for this type of data , Any suggestion on model also will be helpful for me. If some variable are appearing only once in huge dataset should i remove those variables ? Thank you in advance ! AI: Label encoder is better suited for output classes. You are looking to handle categorical variables. The term is "one hot encoding" which means creating a binary column for each unique value of categorical data. Check get_dummies function in Pandas It is hard to give any suggestion on model to use without knowing the objective, amount of data available, accuracy requirements. You are better off trying multiple models and checking test set errors to see which one works better. Variables with high degree of missing values or low fill rate can be removed from the dataset.
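As a minimal sketch of the get_dummies approach suggested above (the frame below recreates the example from the question by hand; it is not the original dataset):

import pandas as pd

df = pd.DataFrame({
    'names': ['sid', 'aadi', 'aman', 'sid', 'aadi'],
    'life_style': ['creative', 'artistic', 'traveller', 'artistic', 'creative'],
    'instrument': ['piano', 'guitar', 'drum', 'guitar', 'drum'],
    'times': [1.5, 1.4, 1.1, 1.5, 1.4],
})

# One binary column per unique category; numeric columns pass through untouched
encoded = pd.get_dummies(df, columns=['names', 'life_style', 'instrument'])
print(encoded.head())

Whether 'names' should be encoded at all depends on your problem; if it is only an identifier, it is usually dropped rather than encoded.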
H: LSTM Model for predicting the minutely seasonal data of the day I am making a model for predicting the network traffic volume for our data center. Let me describe my dataset first. At this time, we have the model of 90 days, on each day, we record the network traffic volume every minute. And this data also has the seasonality pattern: the network traffic volume fluctuates and remains at night, decreases in the morning and increases in the afternoon (as I could observe, this pattern happens on all the data in my dataset) What I want at this time is a model for predicting the network traffic volume in some next minutes(25 minutes, for example), given that the data is given for all the previous minutes. The new predicted value will be contributed for the next predicting. For example, the value at minute i will be added to a window of data with a specific length to predict the value at minute i+1. At this time, I have tried LSTM-RNN and the feature I use is the minute of the day and the network traffic volume in that minute (with normalization before inputting to the LSTM network). However, my problem is: my model could catch the wrong pattern of the data: when the network traffic volume should increase in the afternoon, my model predicts it decreases. I have tried with different structures of LSTM network (increases the LSTMs layers, change the number of nodes in the fully-connected layers after the LSTM layers...); and also the length of the window, but that problem still remains. So I want to ask if there is any problem with LSTM model for predicting the data with seasonal trend like my data? If no, could anybody suggest me an LSTM model or any other suitable model for my data? Thank you in advance :-) AI: When represented at a minute level, the pattern is going to recur after almost 1440 minutes. So your LSTM needs to learn a really long term dependency here. And LSTMs are not great at such long term dependencies. I'll have a contrary opinion here. Drop neural networks, fit a good linear model to get a benchmark. If the linear model (example outlined below) beats your LSTM error rate currently, invest in building a better (generalised) linear regression model instead. Also, start with a more coarse definition of output (predicting total for next 15 minutes / 1 hour). Create hand-rolled features for your data. Start with traffic at the same hour of the day yesterday, traffic in same hour of day for past 7 days, traffic on the same hour of the day on same weekday last week. Add more advanced features by defining better aggregation window than hour. Something like 6 - 9 PM, 12 midnight - 6 am. Add features related to recent data (ratio of cumulative traffic today compared to cumulative traffic till same time yesterday). Build a suitable linear model on this data, benchmark the error rate. If the traffic is being measured in number of hits, use poisson regression instead of simple linear regression. Complex neural net models like LSTM are best used when hand-rolled features are extremely difficult to think and implement. For many time series applications, generating features yourself is intuitive, simple to achieve (most libraries like Python pandas have great date-time functionality) and good first step. It is much better than teaching LSTM to differentiate between morning and afternoon. François Chollet's book on deep learning has a chapter on similar application. He tackles the problem of predicting weather in particular time window based on past data. 
He shows that many complex neural net architectures find it hard to beat a simple baseline such as averaging over the same time period in the past.
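To make the hand-rolled feature idea concrete, here is a minimal pandas sketch; it assumes a DataFrame named traffic with a DatetimeIndex and a 'volume' column (both names are hypothetical, not from the question):

import pandas as pd

# Coarsen the 1-minute data to hourly totals
hourly = traffic['volume'].resample('H').sum().to_frame('volume')

# Lag features: traffic at the same hour yesterday and at the same hour last week
hourly['same_hour_yesterday'] = hourly['volume'].shift(24)
hourly['same_hour_last_week'] = hourly['volume'].shift(24 * 7)

# Recent-history feature: rolling mean over the previous 7 days
hourly['rolling_mean_7d'] = hourly['volume'].shift(1).rolling(window=24 * 7).mean()

# Drop the initial rows where the lags are undefined before fitting a model
features = hourly.dropna()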
H: How to read html tables under multiple headers and combine them in a single pandas dataframe? I have an html file like this: <h1>Group 1</h1> <table> <tr> <td>Col1</td> <td>Col2</td> <td>Col3</td> </tr> <tr> <td>ValA</td> <td>ValB</td> <td>ValC</td> </tr> </table> <h1>Group 2</h1> <table> <tr> <td>Col1</td> <td>Col2</td> <td>Col3</td> </tr> <tr> <td>ValP</td> <td>ValQ</td> <td>ValR</td> </tr> </table> I want to read it into Pandas as if it had this structure: <table> <tr> <td>Caption</td> <td>Col1</td> <td>Col2</td> <td>Col3</td> </tr> <tr> <td>Group 1</td> <td>ValA</td> <td>ValB</td> <td>ValC</td> </tr> <tr> <td>Group 2</td> <td>ValP</td> <td>ValQ</td> <td>ValR</td> </tr> </table> I can do it easily with PowerQuery the language for PowerBI: let Source = Web.Page(File.Contents("multiple_tables.html")), #"Expanded Data" = Table.ExpandTableColumn(Source, "Data", {"Column1", "Column2", "Column3"}, {"Col1", "Col2", "Col3"}), #"Filtered Rows" = Table.SelectRows(#"Expanded Data", each ([Caption] <> "Document") and ([Col1] <> "Col1")) in #"Filtered Rows" Is there a way to achieve this effect in less than 10 lines of code using Python/Pandas plus some html parser opensource library? Or should I resign myself to writing lower level code to handle this? AI: With a bit of parsing via BeautifulSoup, we can get a pandas.Dataframe using pandas.read_html() like: Code: def get_tables(source): elems = iter(BeautifulSoup(source, 'lxml').find_all(['table', 'h1'])) df = pd.DataFrame( pd.read_html(str(next(elems)), header=0)[0].iloc[0].rename(h1.text) for h1 in elems) df.index.names = ['Caption'] return df Test Code: import pandas as pd from bs4 import BeautifulSoup with open('test.html', 'r') as f: print(get_tables(f)) Results: Col1 Col2 Col3 Caption Group 1 ValA ValB ValC Group 2 ValP ValQ ValR
H: the error occurred while selecting feature using recursive feature elimination in sklearn I tried to rank the feature using recursive feature elimination in sklearn. However, I got this error when using RFE. here are the error and code information. from sklearn import svm x_vals = data['all_data'][:,0:320] y_vals_new = np.array([0 if each=='Neg' else 1 if each =='Neu' else 2 for each in data['all_data'][:,320]]) clf = svm.SVC(decision_function_shape='ovo',kernel='rbf') rfe = RFE(clf, 200) rfe = rfe.fit(x_vals,y_vals_new) print(rfe.support_) print(rfe.ranking_) clf.fit(DEAP_x_train, DEAP_y_train) print("###Valence###") print("when the kernel function is rbf") print('The mean square error %10.9f ' % np.mean((clf.predict(DEAP_x_test)-DEAP_y_test)**2)) # The mean square error print('the mean accuracy on the given test data and labels %10.9f'% clf.score(DEAP_x_test, DEAP_y_test)) Did anyone know the reason of this error? some opinion showed that RFE(recursive feature elimination) only works with SVC when the kernel is chosen to be linear. Is that correct? Thanks a lot! AI: Recursive feature elimination works by considering co-efficient value of feature column (in case of linear models) or variable importance exposed, by say, random forest classifier. For example, in case of a linear model, it will eliminate feature with smallest (absolute) value of co-efficient (i.e. less impact on outcome). In case of SVMs with non-linear kernels, the classification is happening in a transformed space of features and not original features. So it is not possible to assign importance measure to original features. So RFE doesn't work with RBF kernel here. Related question on variable weight for non-linear kernel
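For completeness, a sketch of how the same selection could be run with a linear kernel (reusing x_vals and y_vals_new from the question); treat it as a starting point rather than a drop-in fix:

from sklearn.svm import SVC
from sklearn.feature_selection import RFE

# A linear kernel exposes coef_, which RFE needs in order to rank features
clf = SVC(kernel='linear', decision_function_shape='ovo')
rfe = RFE(clf, n_features_to_select=200)
rfe = rfe.fit(x_vals, y_vals_new)
print(rfe.support_)
print(rfe.ranking_)

Alternatively, an estimator that exposes feature_importances_ (e.g. a random forest) can be used as the base estimator for RFE, and the selected features can then be fed to the RBF-kernel SVM.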
H: What is the best performance metric used in balancing dataset using SMOTE technique I used smote technique to oversample my dataset and now I have a balanced dataset. The problem I faced is that the performance metrics; precision, recall, f1 measure, accuracy in the imbalanced dataset are better performed than with balanced dataset. Which measurement can I use to show that balancing dataset may improve performance of the model? NB: roc_auc_score is better in balanced datset than roc_auc_score with imbalanced dataset Can it be considered as a good performance measurement? after the explanation i implemented code and i got this results import pandas as pd import numpy as np from sklearn import preprocessing import matplotlib.pyplot as plt plt.rc("font", size=14) from sklearn.svm import LinearSVC from sklearn.svm import SVC from sklearn.cross_validation import train_test_split,StratifiedShuffleSplit,cross_val_score import seaborn as sns from scipy import interp from time import * from sklearn import metrics X=dataCAD.iloc[:,0:71] y= dataCAD['Cardio1'] # Split the dataset in two equal parts X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.3, random_state=0) print(y_test.value_counts()) model=SVC(C=0.001, kernel="rbf",gamma=0.01, probability=True) t0 = time() clf = model.fit(X_train,y_train) y_pred = clf.predict(X_test) t = time() - t0 print("=" * 52) print("time cost: {}".format(t)) print() print("confusion matrix\n", metrics.confusion_matrix( y_test, y_pred)) cf=metrics.confusion_matrix(y_test, y_pred) accuracy=(cf.item((0,0))/50)+(cf.item((1,1))/14) print("model accuracy \n",accuracy/2) print() print("\t\tprecision_score: {}".format(metrics.precision_score( y_test, y_pred, average='macro'))) print() print("\t\trecall_score: {}".format(metrics.recall_score(y_test, y_pred, average='macro'))) print() print("\t\tf1_score: {}".format(metrics.f1_score(y_test, y_pred, average='macro'))) print() print("\t\troc_auc_score: {}".format(metrics.roc_auc_score( y_test, y_pred, average='macro'))) Results: Name: Cardio1, dtype: int64 ==================================================== time cost: 0.012008905410766602 confusion matrix [[50 0] [14 0]] model accuracy 0.5 precision_score: 0.390625 recall_score: 0.5 f1_score: 0.43859649122807015 roc_auc_score: 0.5 For balanced dataset X_train1,y_train1 = sm.fit_sample(X_train, y_train.ravel()) df= pd.DataFrame({'Cardio1': y_train1}) df.groupby('Cardio1').Cardio1.count().plot.bar(ylim=0) plt.show() print(X_train1.shape) print(y_train1.shape) #model=SVC(C=0.001, kernel="rbf",gamma=0.01, probability=True) model=SVC(C=10, kernel="sigmoid",gamma=0.001, probability=True) t0 = time() clf = model.fit(X_train1,y_train1) y_pred = clf.predict(X_test) t = time() - t0 print("=" * 52) print("time cost: {}".format(t)) print() print("confusion matrix\n", metrics.confusion_matrix(y_test, y_pred)) cf=metrics.confusion_matrix(y_test, y_pred) accuracy=(cf.item((0,0))/50)+(cf.item((1,1))/14) print("model accuracy \n",accuracy/2) print() #print("\t\taccuracy: {}".format(metrics.accuracy_score( y_test, y_pred))) print() print("\t\tprecision_score: {}".format(metrics.precision_score( y_test, y_pred, average='macro'))) print() print("\t\trecall_score: {}".format(metrics.recall_score(y_test, y_pred, average='macro'))) print() print("\t\tf1_score: {}".format(metrics.f1_score(y_test, y_pred, average='macro'))) print() print("\t\troc_auc_score: {}".format(metrics.roc_auc_score( y_test, y_pred, average='macro'))) Results: (246, 71) (246,) 
==================================================== time cost: 0.05353999137878418 confusion matrix [[ 0 50] [ 0 14]] model accuracy 0.5 precision_score: 0.109375 recall_score: 0.5 f1_score: 0.1794871794871795 roc_auc_score: 0.5 I found no efficient results. Should i implement the model using cross validation? AI: First of all, just to be clear, you shouldn't evaluate the performance of your models on the balanced data set. What you should do is to split your dataset into a train and a test set with ideally the same degree of imbalance. The evaluation should be performed exclusively on the test set, while the balancing on the training set. As for your question, any macro averaged metric should do just fine for proving that your balancing technique is effective. To calculate such a metric (let's say accuracy for simplicity), you just need to compute the accuracies of each class individually and then average them. Example: We trained two models m1 and m2, the first without balancing the dataset and the second after using SMOTE to balance the dataset. Actual values : 0, 0, 0, 0, 0, 0, 0, 0, 1, 1 Predicted m1: 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 <-- only predicts majority class Predicted m2: 1, 0, 0, 1, 0, 1, 0, 0, 1, 1 How would we normally calculate accuracy? $acc = \frac{correct \, predictions}{total \, predictions}$ How do our two models perform on this metric? $acc_1 = \frac{8}{10} = 80\%$ $acc_2 = \frac{7}{10} = 70\%$ According to this performance metric, m2 is better than m1. However, this isn't necessarily the case as m1 just predicts the majority class! In order to show how m2 is better than m1, we need a metric that treats the two clases as equals. We' ll now try to calculate a macro-averaged accuracy. How? First we'll calculate the accuracy for each class separately, and then we'll average them: For m1: $acc_1^0 = \frac{8}{8} = 100\%$ <-- m1's accuracy on class 0 $acc_1^1 = \frac{0}{2} = 0\%$ <-- m1's accuracy on class 1 $macro\_acc_1 = \frac{acc_1^0 + acc_1^1}{2} = \frac{100\% + 0\%}{2} = 50\%$ For m2: $acc_2^0 = \frac{5}{8} = 62.5\%$ <-- m2's accuracy on class 0 $acc_2^1 = \frac{2}{2} = 100\%$ <-- m2's accuracy on class 1 $macro\_acc_2 = \frac{acc_2^0 + acc_2^1}{2} = \frac{62.5\% + 100\%}{2} = 81.25\%$ Notes: Macro averaging can be applied to any metric you want, however it is most common in confusion matrix metrics (e.g precision, recall, f1). You don't need to implement this by yourself, many libraries already have it (e.g. sklearn's f1_score has a parameter called average, which can be set to "macro")
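The worked example above can be reproduced directly in scikit-learn; balanced_accuracy_score (available in recent scikit-learn versions) is exactly the macro-averaged per-class accuracy computed by hand above:

from sklearn.metrics import accuracy_score, balanced_accuracy_score, f1_score

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_m1   = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]   # only predicts the majority class
y_m2   = [1, 0, 0, 1, 0, 1, 0, 0, 1, 1]

print(accuracy_score(y_true, y_m1), accuracy_score(y_true, y_m2))                    # 0.8  0.7
print(balanced_accuracy_score(y_true, y_m1), balanced_accuracy_score(y_true, y_m2))  # 0.5  0.8125
print(f1_score(y_true, y_m1, average='macro'), f1_score(y_true, y_m2, average='macro'))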
H: What is the difference between the normal equation and gradient descent for polynomial regression I'm new to machine learning and willing to study and work with machine learning. It's just that I still don't understand the benefits of using the normal equation on some occasions in comparison with gradient descent. I use Andrew Ng's course on Coursera, but the notation really gives me a hard time. I want to know more about the derivation of the cost function $J(\theta)$ for polynomial regression and the reason why he uses the transpose of the vector $x(i)$. AI: You want to solve $X \theta = Y$, i.e. you have to find the parameters $\theta$. To find them, you would like to multiply both sides by $X^{-1}$, but $X$ may not have an inverse (it is generally not even square). Consequently, you multiply each side by $X^t$, because $X^tX$ can be inverted, and then by $(X^tX)^{-1}$, which leads to a closed-form equation for $\theta$: $\theta = (X^tX)^{-1}X^tY$. It should be mentioned that $X^tX$ does not always have an inverse. To be sure it does, you need more data samples than features, and the features should not be linearly dependent; otherwise the constructed matrix will not be invertible and you will not be able to use this approach.
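A small numerical sketch of the closed-form (normal equation) solution on made-up data may help connect the formula to code:

import numpy as np

rng = np.random.RandomState(0)
X = np.hstack([np.ones((100, 1)), rng.rand(100, 2)])   # column of ones = intercept term
theta_true = np.array([1.0, 2.0, -3.0])
y = X @ theta_true + 0.01 * rng.randn(100)

# Normal equation: theta = (X^T X)^{-1} X^T y
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
print(theta_hat)   # close to [1, 2, -3]

# In practice np.linalg.lstsq or np.linalg.solve is preferred over an explicit inverse
theta_hat2, *_ = np.linalg.lstsq(X, y, rcond=None)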
H: Where can we find the application of bayes's theorem in Bayesian optimiation with gaussian processing I am trying to learn bayesian optimisation by following this tutorial. However, until now I don't get the relation between bayes's theorem to the gaussian process formalism. Any ideas? AI: It is a 49 page long paper, so following observations are based only on a cursory reading. The optimisation is for finding best value of parameters for cost function of machine learning models. Rather than finding a fixed value of the parameters, it is assumed that the parameters come from statistical distribution and the task is to find the nature/shape of this statistical distribution. Bayes theorem tells you that if you have prior beliefs and evidence (data), you can go to posterior. They start with the assumption that prior distribution of the parameters is Gaussian. The task is to then find the posterior. Since there are multiple parameters and not single variable, Gaussian process comes into picture rather than single Gaussian distributed random variable. Bayes theorem comes into picture while going from priors to posteriors. The optimisation is solved using sampling technique like Monte Carlo sampling. Reading up about MCMC sampling method will help you see the connection.
H: How to interpret Hashingvectorizer representation? I cannot really understand the logic behind Hashingvectorizer for text feature extraction. I can follow the logic of Bag of Word or TFiDF where the features are values for all/certain words/N-grams per document and as such one can compute (dis)similarity between the representation vector. But how could one imagine the arbitrary number of features in case of Hashing? Are these features purely synthetic features with no correlation of the original words? Is it able to learn from some training set (by "fit") and apply this learned features to new sentences (by "transform") like other vectorizers? Does Hashing represent any semantic distance like word2vec for words? How? AI: Basic Background Imagine the process of count vectorizer: you first create a vocabulary which maps each word (or n-gram) to an integer index (index in the document term matrix). Then, for each document, you count number of times a word appears and set that value at appropriate index to build vector representation for the document. This can potentially create a very large number of features since each n-gram/token is one feature. Even if you want to limit the total number of features by using some trick like top-N words by occurrence, you still need to calculate and hold in memory the map of all word-counts. This can be potentially prohibitive in some applications. Similar problem happens for TfIDf, where you additionally store the mapping of word to document occurance for calculating the IDf part. Either way, you are doing multiple passes over the data and/or potentially large amount of memory consumption. The problem is also with bounds or predictability: you do not know the potential memory usage upfront in first phase. Hashing vectorizer can build document representation for all documents in one single pass over the data and still keep memory bounded (not necessarily small, the size depends on size of hash-table). In a single pass, you calculate hash of a token. Based on the hash value, you increment the count of particular index in the hash-table (the array underlying the hash table implementation). You get representation of current document without looking at every other document in the corpus. This gives rise to problem with representation accuracy. Two different tokens may have hash collision. So you are in effect trading [representation accuracy and explanatory power] Vs. [space (bounded predictable memory usage) and time (no multiple passes on the data)]. Answers to your specific questions There is a (sort) correlation between input words and features: through the hash function. But this correlation is potentially defective (hash-collisions) and there's no inverse transformation (you can't say what word is represented by feature number 207). There's no fit and transform. For a fixed hash-function, no dataset specific learning is happening (ala word2vec). There's no semantic interpretation of the distance. Two words semantically similar words may not be close to each other in the representation. As long as two almost (syntactically, based on tokens) similar documents are close enough, it will work on text classification. Why Would It Work? Given these information, you are right in being skeptical: why on earth this should work? The answer is empirical: a randomized representation like hashing works reasonably well in practice (the benefits from exact count based representation are not that great). 
There might be some theoretical explanation too, but I don't know it well enough. If you are curious, you can read up on this paper.
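A minimal usage sketch (the toy documents are made up here) showing that there is no learned vocabulary, only a fixed-size hashed representation:

from sklearn.feature_extraction.text import HashingVectorizer

docs = ["the cat sat on the mat", "the dog chased the cat"]

vec = HashingVectorizer(n_features=2**10)   # n_features bounds memory; collisions are possible
X = vec.transform(docs)                     # no fit step is needed: the hash function is fixed
print(X.shape)                              # (2, 1024), a sparse matrix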
H: Understanding the filter function in Convolution Neural Networks I am trying to follow the following tutorial accessible with this link. Under the 3rd Heading, "3. Visualize the Activation Maps for Each Filter", we can see the following function: def apply_filter(img, index, filter_list, ax): # set the weights of the filter in the convolutional layer to filter_list[i] model.layers[0].set_weights([np.reshape(filter_list[i], (4,4,1,1)), np.array([0])]) # plot the corresponding activation map ax.imshow(np.squeeze(model.predict(np.reshape(img, (1, img.shape[0], img.shape[1], 1)))), cmap='gray') I understood what they are trying to do. They are applying the filters and trying to show the output after that. But, what I don't understand is the following line: model.layers[0].set_weights([np.reshape(filter_list[i], (4,4,1,1)), np.array([0])]) What does it mean to assign weights here and also, why are they reshaping the filter which is of 4*4 to 4*4*1*1? AI: The function set_weights on a Keras layer requires the shape of the inputs to match the shape of the weights which you are replacing. You can find out which dimensions these are by calling the get_weights method on the layer in which you are interested. In that tutorial, it would look something like the following. We only need the first element returned by get_weights(), hence the [0]. We then see the shape of it: In [7]: model.layers[0].get_weights[0].shape (4, 4, 1, 1) So the person who wrote that tutorial needed to match that shape and, as it is only considering one layer of the defined model, it is able to be hard-coded in their example. Here are the docstrings for the two main functions I mentioned above: set_weights() In [8]: l1.set_weights? Signature: l1.set_weights(weights) Docstring: Sets the weights of the layer, from Numpy arrays. # Arguments weights: a list of Numpy arrays. The number of arrays and their shape must match number of the dimensions of the weights of the layer (i.e. it should match the output of `get_weights`). # Raises ValueError: If the provided weights list does not match the layer's specifications. get_weights() In [9]: l1.get_weights? Signature: l1.get_weights() Docstring: Returns the current weights of the layer. # Returns Weights values as a list of numpy arrays
H: difference in between CNN and Inception v3 What is the difference in between the inception v3 and Convolutional neural network? AI: The Inception models are types on Convolutional Neural Networks designed by google mainly for image classification. Each new version (v1, v2, v3, etc.) marks improvements they make upon the previous architecture. The main difference between the Inception models and regular CNNs are the inception blocks. These involve convolving the same input tensor with multiple filters and concatenating their results. Such a block is depicted in the image below. On the contrast, regular CNNs performs a single convolution operation on each tensor. Inception-v3 is Deep Neural Network architecture that uses inception blocks like the one I described above. It's architecture is illustrated in the figure below. The parts where the layers "branch off" and then are merged together again are the inception blocks described previously. You can read more about Inception-v3 here.
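To make the "branch and concatenate" idea concrete, here is a simplified Keras sketch of an inception-style block; it is not the exact Inception-v3 block (which adds 1x1 "bottleneck" convolutions and factorised filters), just the basic pattern:

from keras.layers import Conv2D, MaxPooling2D, concatenate

def inception_block(x, filters=64):
    # Several filter sizes applied to the same input, then merged along the channel axis
    branch1 = Conv2D(filters, (1, 1), padding='same', activation='relu')(x)
    branch2 = Conv2D(filters, (3, 3), padding='same', activation='relu')(x)
    branch3 = Conv2D(filters, (5, 5), padding='same', activation='relu')(x)
    branch4 = MaxPooling2D((3, 3), strides=(1, 1), padding='same')(x)
    return concatenate([branch1, branch2, branch3, branch4], axis=-1)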
H: Isn't the depth of a convolutional layer, the number of colors (or colorspace size)? I have been going through a CNN tutorial and noticed that depth of a convolutional layer is equal to the number of filters. But, shouldn't the depth be the number of colors in the image? I mean, if it's RGB then, depth is 3 right? Am I missing something here? AI: Yes, the depth of an image is equal to the color channels (1 for gray-scale images, 3 for RGB). However, that is only the case for the input layer of the CNN. During the first convolution layer, the image can be passed through as many filters as we select. This number becomes the depth of the first layer. The subsequent layers can have any depth we want. For example in the CNN depicted below: The image input to the CNN initially has a depth of 1 (because it is gray-scale). The first convolutional layer passes it through 6 filters, so the depth of the first layer becomes 6. The second layer passes the output of the first through 16 filters, meaning that the second layer has a depth of 16, etc.
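A short Keras sketch of the gray-scale example described above (6 filters, then 16), where model.summary() shows the depth of each layer's output:

from keras.models import Sequential
from keras.layers import Conv2D

model = Sequential()
model.add(Conv2D(6, (5, 5), activation='relu', input_shape=(28, 28, 1)))   # input depth 1
model.add(Conv2D(16, (5, 5), activation='relu'))                           # output depth 16
model.summary()   # first conv output has depth 6, second has depth 16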
H: When to remove correlated variables Can somebody please suggest what is the correct stage to remove correlated variables before feature engineering or after feature engineering ? AI: You do not want to remove all correlated variables. It is only when the correlation is so strong that they do not convey extra information. This is both a function of the strength of correlation, how much data you have and whether any small difference between correlated variables tell you something about the outcome, after all. The first two you can tell before you do any model, the final one not. So, it may be very reasonable to remove variables based on the combination of the first two considerations (i.e. even if the extra variables may in principle contain some useful information, you would not be able to tell given the strength of correlation and how much data you have) before you do any modelling/feature engineering. The final point can really only be assessed after doing some modelling.
H: why this naming convention for padding as "Same" and "Valid" in keras I was going through CNN's and found that padding argument should be set to "Valid" if i need no padding and "Same" if i need padding. But, it doesn't make any sense to me. Why can't keras development team just put "Padding = True", is there a specific reason to choose this convention? AI: Using valid will essentially use as much of your input as possible, such that the dimensions continue to work. This means there is a chance some input will be trimmed (removed). same on the other hand, will add padding to allow e.g. the number of filters/convolutions you specify to be applied. The reason not to simply have padding=True as an option, is that there are more than two options (see snippet from the docs below). Also, that would not be very explicit: using same or valid makes it crystal clear, what the result should be using the given approach. This is what the documentation says: padding: One of "valid", "causal" or "same" (case-insensitive). "valid" means "no padding". "same" results in padding the input such that the output has the same length as the original input Take a look at this nice answer with a simple example.
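A quick way to see the difference is to compare output shapes in Keras (a minimal sketch):

from keras.models import Sequential
from keras.layers import Conv2D

for pad in ('valid', 'same'):
    model = Sequential([Conv2D(1, (3, 3), padding=pad, input_shape=(28, 28, 1))])
    print(pad, model.output_shape)
# valid -> (None, 26, 26, 1): no padding, the border rows/columns are lost
# same  -> (None, 28, 28, 1): zero-padded so the spatial size is preserved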
H: How to deal with Optional Input I'm from the vision world and only worked with pixels from 0-255, ignoring any side effects. My current problem is different, in the way that I cannot rely on the input data. What my problem is: I have a number of inputs. Each input is categorical (for now) and optional. For example I have a number of user features, {male, female, [..not given]}, {single, relationship, … , [..not given]}, .. What I want: X optional Input-Features mapped to Y Output-Features, with uniform output vales across the features. I tried kernel pca, different kinds of matric factorization for guessing missing inputs and simple autoencoder networks. By sight, the last two yield "ok" results. Any advice here? AI: It sounds like you essentially have a sparse input problem, similar to doing something like a recommendation system. Imagine trying to recommend a film to somebody based on films they have already watched and rated. There would be many many films they have not seen, so you (in a sense) have missing data. This was the case in the Netflix Prize challenge. Facorisation methods such Single Value Decomposition are common approaches in these cases. There is a nice summary of such methods used in the Netflix Prize. Using a neural network Auto-Enconder will be reliant on relatively large numbers of samples, which is something to bear in mind when assessing performance. Just one other idea that might be worth trying out, depending on how many features you have and the general sparsity, could be to replace the missing values with fixed values on a feature-level. This could be as simple as replacing the missing values e.g. in "relationship status" to <RELATIONSHIP>, and then in general "feature X" to just be <X>. This is a kind of trick that is often used in NLP (natural language processing) to allow filler-words to encode some useful information. In your case, this might at least help any model distinguish between missing values across features. I have not tested something like this myself, so unfortunately I cannot cite references nor point to results.
H: 4 Class Classification - Machine Learning Model I have a data set which contains nearly 150 features and 60k data. And my target feature is continuous variable represents hours. I divided this period into 4 categories of user engagement (4 ranges of hours). Implement GA with SVM, GA with logistic regression, Random forest, GA with KNN with suitable normalization of data wherever required. Used GA for best feature subset selection. All the algorithms gave similar results of around 46% accuracy ( for nearly balanced test set). Note: Training is also done on a balanced data set. I am wondering where am I gone wrong? I believe I went wrong somewhere in input to target mapping. Could anyone please confirm that categorization of the continuous target variable (hrs) into 4 sets are reasonable? AI: Yes, binning a continuous variable so that in can take discrete values is reasonable, as long as you are OK with transforming this regression problem into a classification one. Just note that when dealing with a balanced 4-class classification problem, if an algorithm hadn't learnt anything and was predicting randomly it would achieve a 25% accuracy. Your results indicate that the algorithms are at least learning something. It might just be a very difficult task to solve, in that the features might have little to no correlation with the output variable. I'd suggest trying out more algorithms with and without GA.
H: Aggregating small values in a frequency bar plot I have a pandas Series of sorted percentage values like this : A -> 0.001 B -> 0.0012 C -> 0.0015 ... H -> 0.02 I -> 0.03 J -> 0.041 ... X -> 0.12 Y -> 0.31 Z -> 0.4 (I typed those by hand, I am not familiar with how to type those python outputs in DSSE, sorry! If anyone has a hint on that, it would be nice. If this pandas series would be named "content_distribution", content_distribution.X would return the float 0.12 .) I want to obtain a Series which looks like this : Other -> 0.07 I -> 0.03 J -> 0.041 ... X -> 0.12 Y -> 0.31 Z -> 0.4 because those small values are irrelevant to display in a plot that I want to make, and I want to keep the big values on display. Is there any pandas-ic or pythonic way to do that? I can write my own script, but anything I try out looks non-pandas-ic (even though it sort of works). (I also don't know if "aggregating" is the correct pandas term for this, if anyone has a better one, he or she is welcome to edit my title.) AI: Here is an example where I create a new row called Other, which contains the sum of all values below a given threshold. I then remove the rows that were below that threshold (and so included in the sum), so the final Pandas Series only has values above the threshold, plus the new Other row. This is all performed on a randomly generated Pandas Series, as follows: In [1]: import pandas as pd, numpy as np In [2]: X = pd.Series(np.random.random(20)) In [3]: X Out[3]: 0 0.507151 1 0.366259 2 0.444196 3 0.027280 this will be in the sum 4 0.132785 this will be in the sum 5 0.993170 6 0.614712 7 0.942894 8 0.516255 9 0.107436 10 0.710416 11 0.512221 12 0.502685 13 0.753515 14 0.894735 15 0.780213 16 0.998315 17 0.971558 18 0.504525 19 0.224767 dtype: float64 In [4]: threshold = 0.2 In [5]: X.loc['Other'] = X[X < threshold].sum() In [6]: X.drop(X[X < threshold].index, inplace=True) In [7]: X Out[7]: 0 0.507151 1 0.366259 2 0.444196 5 0.993170 6 0.614712 7 0.942894 8 0.516255 10 0.710416 11 0.512221 12 0.502685 13 0.753515 14 0.894735 15 0.780213 16 0.998315 17 0.971558 18 0.504525 19 0.224767 Other 0.267501 dtype: float64 Notice that the indices still match the original values, i.e. they are not continguously numbered - indeices 3 and 4 have been dropped/removed. If you want to get continuously numbered indices back, you can do the following: X.reset_index() This creates a new index and puts the original one shown above in a new column called index, which can then itself be dropped if desired: X.drop('index', axis=1) That will remove the label Other, which was in the original index.
H: What's the difference of stateless LSTM and a normal feed-forward NN? From what I understand, the whole point of LSTM is for the network to establish long-term dependencies in the data, i.e. an event happening now may be in some way determined by something that happened sometime in the past, and that may not be in the batch of data currently being presented, but many sequences or batches previously. So I understand that a stateful LSTM does this, and the network will retain the state until reset_states() is called. But with stateless LSTM, where the state isn't retained between batches, then how does that differ from a normal feed-forward perceptron network? I'm assuming that even though in stateless LSTM you don't control the states as you would in a stateful model, that some state is still retained at passed on between sequences? Is it passed on between batches? I can understand why someone would use stateful LSTM, but why would someone use stateless LSTM instead of a feed-forward perceptron network? Or vice-versa, why would someone use a regular FF instead of a stateless LSTM? AI: Theoretically a stateless LSTM gives the same result as a statefull LSTM, but there are few pros and cons between them. A stateless LSTM requires you to structure your data in a particular way, in turn it is vastly more performant, while a statefull LSTM you can have varying timesteps, but at a performance penalty. The stateless LSTM does have state, it's just implemented differently. Instead of managing it yourself by constantly calling reset_states() your data is structure in such a way that it is automatically reset when the end of one series of timesteps is done. Let's create an explicit example. Say you have some label data y[i] where i is the index, and input data x[i, t] where i corresponds to your label's index, and t is at what timestep this was observed. (The x and ys can be vectors, matrices, or any type of tensor really.) Say you have 4 output labels (not 4 classes), and 2 to 4 timesteps for each of them. Your data might look like this: x[0, 0], x[0, 1] -> y[0] x[1, 0], x[1, 1], x[1, 2] -> y[1] x[2, 0], x[2, 1], x[2, 2], x[2, 3] -> y[2] x[3, 0], x[3, 1] -> y[3] This is the kind of data a statefull LSTM can handle. But at the end of each label, you would have to call reset_states() to indicate that your no longer dependent on the previous series of xs. This is slow. With a stateless LSTM you would pad the x so that you have a rectangular matrix: 0 0 x[0, 0], x[0, 1] -> y[0] 0 x[1, 0], x[1, 1], x[1, 2] -> y[1] x[2, 0], x[2, 1], x[2, 2], x[2, 3] -> y[2] 0 0 x[3, 0], x[3, 1] -> y[3] In this case, there is an implicit reset of state for each label. This way of doing it is vastly more performant. But there might occur issues with bogging down your data with empty data. Personally I tend to add a binary flag to each x vector to indicate to the network in the LSTM that it should probably ignore the data. I'm not sure about the specifics of how stateless LSTMs are implemented, but I think it might be about unrolling the LSTM units. Ultimately, both stateless and statefull LSTM's do the same thing, they are just implemented differently. why would someone use stateless LSTM instead of a feed-forward perceptron network? Well, ironically it's because stateless LSTM have state, which means it can take into account the previous timestep input values when doing predictions.
H: How can I use a string as input in a neural network? I'm making a program that can determine whether a user will like a car from different auctions based on the cars that he/she has bought in the past. Therefore, I want to use the make and model (which are represented as strings) as well as the year, mileage etc., but I'm having trouble deciding how to pass them to the neural network. I've been thinking of using a vector of different makes as such: car_make = [[10000] # Nissan [01000] # Toyota [00100] # Ford [00010] # Dodge [00001]] # Mazda I could pass those numerical values as input, but this approach is limited because there are tons of different makes, since I'm looking for cars in many different auctions. I also have no idea how to represent the models, as they too change with every auction. Do you have any ideas on how I can pass the strings to the neural network? P.S. I'm using Python to build the neural network. AI: Since make and model are categorical variables, one-hot encoding them is the standard approach: create one binary column per unique make (and per unique model), exactly as in the vector you sketched, and leave the numeric columns as they are. A minimal sketch follows below.
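A minimal sketch with pandas (the column names and values below are made up for illustration):

import pandas as pd

cars = pd.DataFrame({
    'make': ['Nissan', 'Toyota', 'Ford', 'Dodge', 'Mazda'],
    'model': ['Leaf', 'Corolla', 'Focus', 'Charger', 'MX-5'],
    'year': [2012, 2015, 2013, 2016, 2011],
    'mileage': [60000, 35000, 48000, 22000, 71000],
})

encoded = pd.get_dummies(cars, columns=['make', 'model'])
X = encoded.values   # purely numeric matrix that can be fed to the network

Since new makes and models keep appearing across auctions, one practical option is to build the encoding over all categories you expect and map rare or unseen ones to a single "other" bucket.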
H: Resume Parsing - extracting skills from resume using Machine Learning I am trying to extract a skill set of an employee from his/her resume. I have resumes stored as plain text in Database. I do not have predefined skills in this case. How should I approach this problem? I can think of two ways: Using unsupervised approach as I do not have predefined skillset with me. I will extract the skills from the resume using topic modelling but if I'm not wrong Topic Modelling uses BOW approach which may not be useful in this case as those skills will appear hardly one or two times. (I would appreciate if you enlighten me more about topic modelling). Another approach is manually labeling the skills for resume and making it supervised learning problem. But I have around 500 resumes, manual labeling will be very tedious and very time consuming. Any suggestions are welcome. Thanks. AI: I'm not sure Topic Modelling will help you here, as it tries to extract abstract topics from text. I'm afraid resumes might be too 'dry' for it to work nicely. Here are a few sources I found that might be helpful: A resume parser The reply to this post, that gives you some text mining basics (how to deal with text data, what operations to perform on it, etc, as you said you had no prior experience with that) This paper on skills extraction, I haven't read it, but it could give you some ideas For some personal suggestions : I'm going to make the assumption you don't want to label those resumes by hand, and will try to explain how I would proceed first. It might not be the best solution, as there's a lot of heuristics in it, but it's a beginning that's mostly aimed at studying your data and gain some insight on it. You could see if the structure of the document helps: the skills you're looking for might often be appearing in a section with a specific title. You could also work with gazeteers, which are lists of key words of interest, and combine these with colloquations (groups of words frequently appearing together). If, for example, you have a keyword appearing in your gazeteer, you could use ngrams to see what words appear around it the most, and turn your single word match into a multiple word skill (e.g : machine learning (2grams), natural language processing (3grams), etc) Once you have an interesting list of keywords, you could move to ML and try word vectors, maybe use word2vec, and build vectors around your word of interests. If new skills come in, their vectors might be similar to the ones you have defined (sorry for being a bit unclear here, I only have a vague knowledge of how word2vecs work). This is no silver bullet solution, but this should get you started. Good luck !
H: getting error while installing install_tensorflow() i am trying to install tensorflow in r library. when i try to install using > library(tensorflow) > install_tensorflow() Preparing for installation (updating pip if necessary) Requirement already up-to-date: pip in c:\users\ideapad\appdata\local\programs\python\python35\lib\site-packages (18.0) Installing TensorFlow... Collecting tensorflow Using cached https://files.pythonhosted.org/packages/55/02/e07e6ff277147aa52816215cc1b96d1acd1166411fc71356c6c3160760ca/tensorflow-1.9.0-cp35-cp35m-win_amd64.whl Collecting h5py Downloading https://files.pythonhosted.org/packages/d0/2c/4572e2e495341e667c89b490ad18ea71a5f9e9fafca06109a9c7db22848b/h5py-2.8.0-cp35-cp35m-win_amd64.whl (2.3MB) Collecting pyyaml Downloading https://files.pythonhosted.org/packages/ad/d4/d895fb7ac1b0828151b829a32cefc8a8b58b4499570520b91af20982b880/PyYAML-3.13-cp35-cp35m-win_amd64.whl (205kB) Collecting requests Downloading https://files.pythonhosted.org/packages/65/47/7e02164a2a3db50ed6d8a6ab1d6d60b69c4c3fdf57a284257925dfc12bda/requests-2.19.1-py2.py3-none-any.whl (91kB) Collecting Pillow Downloading https://files.pythonhosted.org/packages/2e/5f/2829276d720513a434f5bcbf61316d98369a5707f6128b34c03f2213feb1/Pillow-5.2.0-cp35-cp35m-win_amd64.whl (1.6MB) Collecting termcolor>=1.1.0 (from tensorflow) Collecting astor>=0.6.0 (from tensorflow) Using cached https://files.pythonhosted.org/packages/35/6b/11530768cac581a12952a2aad00e1526b89d242d0b9f59534ef6e6a1752f/astor-0.7.1-py2.py3-none-any.whl Collecting tensorboard<1.10.0,>=1.9.0 (from tensorflow) Using cached https://files.pythonhosted.org/packages/9e/1f/3da43860db614e294a034e42d4be5c8f7f0d2c75dc1c428c541116d8cdab/tensorboard-1.9.0-py3-none-any.whl Collecting wheel>=0.26 (from tensorflow) Using cached https://files.pythonhosted.org/packages/81/30/e935244ca6165187ae8be876b6316ae201b71485538ffac1d718843025a9/wheel-0.31.1-py2.py3-none-any.whl Collecting six>=1.10.0 (from tensorflow) Using cached https://files.pythonhosted.org/packages/67/4b/141a581104b1f6397bfa78ac9d43d8ad29a7ca43ea90a2d863fe3056e86a/six-1.11.0-py2.py3-none-any.whl Collecting grpcio>=1.8.6 (from tensorflow) Using cached https://files.pythonhosted.org/packages/2e/1c/7451288f896ce972a2d2112ed1d8106d3950a02afc5bddf307a1a5de3d73/grpcio-1.14.0-cp35-cp35m-win_amd64.whl Collecting numpy>=1.13.3 (from tensorflow) Using cached https://files.pythonhosted.org/packages/62/47/54baeff52b37be258dd97442f52d8a2a9c27c4af8fcbc5467827c5ae5eed/numpy-1.15.0-cp35-none-win_amd64.whl Collecting absl-py>=0.1.6 (from tensorflow) Collecting protobuf>=3.4.0 (from tensorflow) Using cached https://files.pythonhosted.org/packages/f0/7d/1145805ef3ac475074f8d14d1c0512a79ef709ddfd35ca89c5fa4fc94065/protobuf-3.6.0-cp35-cp35m-win_amd64.whl Collecting setuptools<=39.1.0 (from tensorflow) Using cached https://files.pythonhosted.org/packages/8c/10/79282747f9169f21c053c562a0baa21815a8c7879be97abd930dbcf862e8/setuptools-39.1.0-py2.py3-none-any.whl Collecting gast>=0.2.0 (from tensorflow) Collecting idna<2.8,>=2.5 (from requests) Downloading https://files.pythonhosted.org/packages/4b/2a/0276479a4b3caeb8a8c1af2f8e4355746a97fab05a372e4a2c6a6b876165/idna-2.7-py2.py3-none-any.whl (58kB) Collecting urllib3<1.24,>=1.21.1 (from requests) Downloading https://files.pythonhosted.org/packages/bd/c9/6fdd990019071a4a32a5e7cb78a1d92c53851ef4f56f62a3486e6a7d8ffb/urllib3-1.23-py2.py3-none-any.whl (133kB) Collecting chardet<3.1.0,>=3.0.2 (from requests) Downloading 
https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl (133kB) Collecting certifi>=2017.4.17 (from requests) Downloading https://files.pythonhosted.org/packages/7c/e6/92ad559b7192d846975fc916b65f667c7b8c3a32bea7372340bfe9a15fa5/certifi-2018.4.16-py2.py3-none-any.whl (150kB) Collecting werkzeug>=0.11.10 (from tensorboard<1.10.0,>=1.9.0->tensorflow) Using cached https://files.pythonhosted.org/packages/20/c4/12e3e56473e52375aa29c4764e70d1b8f3efa6682bef8d0aae04fe335243/Werkzeug-0.14.1-py2.py3-none-any.whl Collecting markdown>=2.6.8 (from tensorboard<1.10.0,>=1.9.0->tensorflow) Using cached https://files.pythonhosted.org/packages/6d/7d/488b90f470b96531a3f5788cf12a93332f543dbab13c423a5e7ce96a0493/Markdown-2.6.11-py2.py3-none-any.whl Installing collected packages: termcolor, astor, werkzeug, numpy, markdown, six, setuptools, protobuf, wheel, tensorboard, grpcio, absl-py, gast, tensorflow, h5py, pyyaml, idna, urllib3, chardet, certifi, requests, Pillow Could not install packages due to an EnvironmentError: [WinError 5] Access is denied: 'c:\\users\\ideapad\\appdata\\local\\programs\\python\\python35\\Lib\\site-packages\\numpy\\.libs\\libopenblas.CSRRD7HKRKC3T3YXA7VY7TAZGLSWDKW6.gfortran-win_amd64.dll' Consider using the `--user` option or check the permissions. then i got the following error Error: Error 1 occurred installing tensorflow package In addition: Warning message: running command '"C:\Users\Ideapad\AppData\Local\Programs\Python\Python35\/Scripts/pip.exe" install --upgrade --ignore-installed "tensorflow" "h5py" "pyyaml" "requests" "Pillow"' had status 1 Can someone help me in this issue? AI: its just because you don't have permission to save the module as it is in C drive. So, just do the following 1.Open the app as an administrator and then install it or 2.follow This link. this can help
H: Gradient descent multidimensional linear regression - does learning rate affects concurrency? I wonder if gradient descent for multidimensional regression always finds the right result? I feel like this doesn't always have to be true. I have done some calculations and actually got correct results but not for all learning rates I tried to specify. It's not the case that it is too big, cause sometimes with even having too small learning rates I get wrong results. Please tell my if I make mistake in the calculations, or I'm wrong with divergence of the hypothesis. AI: If you use a convex loss function you always have one optimum point, and you will always be able to find it. I have done some calculations and I get correct results but not for all learning rates I tried to specify. The reason is that you may have selected a large learning rate which may lead to overshoot the optimum. It also may be so small that you don't reach to the desired point and it all depends on your data. If you use the simple gradient descent algorithm, it is an incident that can be seen a lot. You can use other optimisation techniques like Adam which is much more better and you rarely see these problems.
H: Differentiating roadmap of a loss function Let's say I'm performing Stochastic Gradient Descent (SGD) on the binary cross entropy error while optimizing weight $w_{1}$. Binary cross entropy error: $$L(y|p(x_{i}))=-y_{i}\ln(p(x_{i}))-(1-y_{i})\ln(1-p(x_{i}))$$ The derivative along this roadmap would be: $$\frac{\partial L(y|p(x_{i}))}{\partial w_{1}}$$ We can use the chain rule to solve this: $$\frac{\partial L(y|p(x_{i}))}{\partial w_{1}}=\frac{\partial L(y|p(x_{i}))}{\partial p_{i}}\cdot\frac{\partial p_i}{\partial w_1}$$ The first term is easily understandable; it's just the differentiation of the loss function with respect to the estimated probability: $$\frac{y_i -p_i}{p_i(1 - p_i)}$$ But the second term seems a little weird. From my understanding, $p_i$ is just an estimate of a Bernoulli distribution (therefore it should be an empirical distribution). Somehow its derivative is: $$p_{i}(1-p_i)x_i$$ This is definitely somehow associated with the Bernoulli distribution, but how exactly? What is the definition of $p_i$ in this particular case? Thank you! AI: Here the function used for $p_{i}$ is the sigmoid function, so $$p_{i} = \frac{1}{1+e^{-\sum_j w_{j} x_{i}^j}}$$ Differentiating with respect to $w_j$: $$\frac{\partial p_{i}}{\partial w_{j}} = \frac{-1}{\left(1+e^{-\sum_j w_{j} x_{i}^j}\right)^2} \cdot e^{-\sum_j w_{j} x_{i}^j} \cdot (-x_{i}^j) = \frac{1}{1+e^{-\sum_j w_{j} x_{i}^j}} \cdot \frac{e^{-\sum_j w_{j} x_{i}^j}}{1+e^{-\sum_j w_{j} x_{i}^j}} \cdot x_{i}^j$$ Since $$1-p_{i} = \frac{e^{-\sum_j w_{j} x_{i}^j}}{1+e^{-\sum_j w_{j} x_{i}^j}}$$ this simplifies to $$\frac{\partial p_{i}}{\partial w_{j}} = p_{i}\,(1-p_{i})\,x_{i}^j$$
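A quick numerical check of that result (a sketch with random numbers, not part of the original derivation):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.RandomState(0)
w, x = rng.randn(3), rng.randn(3)

p = sigmoid(w @ x)
analytic = p * (1 - p) * x   # dp/dw from the derivation above

# Central finite differences for each component of w
eps = 1e-6
numeric = np.array([
    (sigmoid((w + eps * np.eye(3)[j]) @ x) - sigmoid((w - eps * np.eye(3)[j]) @ x)) / (2 * eps)
    for j in range(3)
])
print(np.allclose(analytic, numeric))   # True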
H: Effect of NOT changing filter weights of CNN during backprop What is the effect of NOT changing filter weights of a CNN during backpropagation? I changed only the fully connected layer weights while training on the MNIST dataset and still achieved almost 99 percent accuracy. AI: By not changing the weights of the convolutional layers of a CNN, you are essentially feeding your classifier (the fully connected layer) random features (i.e. not the optimal features for the classification task at hand). MNIST is an easy enough image classification task that you can pretty much feed the input pixels to a classifier without any feature extraction and it will still score in the high 90s. Besides that, perhaps the pooling layers help a bit... Try training an MLP (without the conv/pool layers) on the input image and see how it ranks. Here is an example where an MLP (1 hidden & 1 output layer) reached 98+% without any preprocessing/feature extraction. Edit: I'd also like to point out to another answer I wrote, which goes into more detail on why MNIST is so easy as an image classification task.
H: Is there any logic to adding a threshold to see if two variables are related? I have two variables $X$ and $Y$ given as tuples of $(x, y)$, and I want to see if there is a relationship between the two variables. I can do so by finding the correlation coefficient. However, I found that by selecting an arbitrary subset of the data (e.g. $(x, y) | x > k$ ), I can get a higher correlation coefficient and a stronger result. Is doing so mathematically sound? I have no a priori reason to believe that certain data points are "more important" than others, to put it simply. AI: No, it's not sound. You're doing data dredging. Try this thought experiment. Suppose you generate random points (no actual relationship). Then you compute the correlation coefficient of all the points; the correlation coefficient of all points where $x>k$; and the correlation coefficient of all points where $x<k$. Realistically, one of the two latter values will be larger than the correlation coefficient of all the points. So you'll always be in a position where if you pick a threshold you can increase the correlation coefficient. This is true even if the points are randomly generated. Thus, your procedure introduces bias. This is similar to the problem of p-hacking.
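The thought experiment is easy to simulate (a small sketch with purely random, unrelated data):

import numpy as np

rng = np.random.RandomState(42)
x = rng.randn(200)
y = rng.randn(200)           # independent of x by construction

k = np.median(x)
r_all = np.corrcoef(x, y)[0, 1]
r_above = np.corrcoef(x[x > k], y[x > k])[0, 1]
r_below = np.corrcoef(x[x <= k], y[x <= k])[0, 1]
print(r_all, r_above, r_below)
# Typically one of the two subset correlations is larger in magnitude than r_all,
# even though there is no real relationship in the data.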
H: Missing value in continuous variable: Indicator variable vs. Indicator value Most data has missing values, and as far as I'm aware, these are the options: Imputation (mean, hot-deck, etc.) Indicator variable. A categorical variable that tells what type the primary variable is. For the missing value case, this is binary. Something still has to be imputed, though. Indicator value. If the model is powerful enough, it can learn to associate a specific imputed value to certain types of predictions. In my case, a missing value reveals important information, and thus is Missing Not At Random. From what I've read, most imputation methods don't cover this scenario. Thus, I've opted for the indicator value approach. My question is: Is there any point adding an additional indicator variable, since I'm already using an indicator value? Am I completely misguided and should I be looking into some other approach? Example: | Primary variable (-50 to 50) | Indicator | |------------------------------------- |----------- | | 20.5 | 0 | | -14.2 | 0 | | 0.1 | 0 | | 500 (out of the usual distribution) | 1 | I can provide more information of my problem, if it's required to answer the question. AI: As you realise, you are introducing some form of redundancy by using both indicator values and variables; however, it might be a good starting point. Many models such as neural networks and boosted regression will be able to pick up on this, and will likely end up making use of one of the two primarily. This is because, in your particular case, the variable will be 100% correlated with the imputed indicator value. Using both will give your model as much help as possible to learn about the missing values and their relation to the rest of the data. My advice would be to as many combinations for which you have time. So try: indicator value and indicator variable indicator value imputation via one or more methods You can then compare the results of each of these and (assuming you have enough data to make things statistically trustworthy), you will be able to assess, which method is indeed the best in your case. You could try thinking about how to encode perhaps a little more information into either the value or variable, in order to distinguish them from one another. E.g. you could vary the value (don't always make it 500, as per your example), depending on a metric e.g. a function of how long it has been since the previous missing value.
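For reference, both the indicator variable and the indicator value from the example table can be built in a couple of lines of pandas (made-up data below):

import numpy as np
import pandas as pd

df = pd.DataFrame({'primary': [20.5, -14.2, 0.1, np.nan]})

df['indicator'] = df['primary'].isna().astype(int)   # indicator variable (1 = was missing)
df['primary'] = df['primary'].fillna(500)            # indicator value, outside the usual -50..50 range
print(df)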
H: converting column from a factor to date format YYYY-MM-DDThh:mm:ss.SSSZ I want to convert the whole column from a factor to a date. The str of the dataset I am using is given below: > str(dataset) 'data.frame': 2538 obs. of 5 variables: $ X : int 1 2 3 4 5 6 7 8 9 10 ... $ SessionID: int 13307 21076 27813 8398 23118 12256 28799 11457 7542 19261 ... $ Timestamp: Factor w/ 2532 levels "2014-04-01T03:02:33.088Z",..: 2064 905 1086 1027 2419 1327 2035 1206 481 1354 ... $ ItemID : int 214684513 214718203 214716928 214826900 214838180 214717318 214821307 214537967 214835775 214706432 ... $ Price : int 0 0 0 0 0 0 0 0 0 0 ... How can I convert the Timestamp variable from a factor to the specific date format YYYY-MM-DDThh:mm:ss.SSSZ (2014-04-06T18:42:05.823Z)? AI: One way to do this is to use the PIPE operator to pass the column of the dataframe into your function, then assign it back to the dataframe. I created a dataframe with one column holding your example date string five times: > x <- "2014-04-06T18:42:05.823Z" > DF <- as.data.frame(c(x, x, x, x, x), colnames="original") > colnames(DF) <- "original" > str(DF) 'data.frame': 5 obs. of 1 variable: $ original: chr "2014-04-06T18:42:05.823Z" "2014-04-06T18:42:05.823Z" I use the PIPE symbol %>%, which allows you to chain methods together, passing the output from one into the input of the following. Using a . symbolises the output from the previous method. To make the PIPE operator available, load in the magrittr package (I believe it is also included in the tidyverse, so perhaps via the dplyr package): library(magrittr) Now I apply the as.POSIXct function to the original column, and save it back to the dataframe in a column called new: DF$new <- DF$original %>% as.POSIXct(., tz = "UTC", "%Y-%m-%dT%H:%M:%OS") > DF original new 1 2014-04-06T18:42:05.823Z 2014-04-06 18:42:05.822 2 2014-04-06T18:42:05.823Z 2014-04-06 18:42:05.822 3 2014-04-06T18:42:05.823Z 2014-04-06 18:42:05.822 4 2014-04-06T18:42:05.823Z 2014-04-06 18:42:05.822 5 2014-04-06T18:42:05.823Z 2014-04-06 18:42:05.822 > str(DF) 'data.frame': 5 obs. of 2 variables: $ original: chr "2014-04-06T18:42:05.823Z" "2014-04-06T18:42:05.823Z" ... $ new : POSIXct, format: "2014-04-06 18:42:05.822" "2014-04-06 18:42:05.822" ... You can remove the original column if you like, using: DF$original <- NULL
H: How can I compare classes from clusterings performed on two different data sets? I have two data sets defined by real valued vectors, and I have performed clustering on both of them. Now I want to compare the classes to see how they map to each other. If I put the data sets together into a single, bigger data set, and cluster that, I see that most of, say, class 1 of the first data set and class 2 of the second (numbers are arbitrary) fall into the same class of the total clustering, which makes sense, meaning they're probably equivalent. What would be a good rigorous way of performing this kind of analysis and giving a quantitative measure of this overlap? Thanks. AI: I suggest using Kullback–Leibler divergence (KLD) to compare the classes. The procedure is as follows: 1- Find the mean vector and covariance matrix of each class of each dataset. 2- Calculate the KLD between each class of the first dataset and all classes of the second dataset. So, for example if you have two dataset and each of them has 5 classes, you should calculate KLD, 25 times. 3- For each class of the first dataset, find the class from the second dataset with the lowest KLD between them. Doing this, you will find the most similar classes of the two datasets. Note that KLD measures the similarity between two probability distributions and by calculating the mean vector and covariance matrix of each class, you are fitting a normal distribution to it. To find the formula for calculation of KLD between two multivariate normal distribution, refer to here: https://en.m.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence
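A sketch of steps 1-3 with NumPy, using the closed-form KL divergence between two multivariate normals; classes_1 and classes_2 are hypothetical lists of per-class sample arrays (rows = samples, columns = features):

import numpy as np

def fit_gaussian(X):
    # Mean vector and covariance matrix of one class
    return X.mean(axis=0), np.cov(X, rowvar=False)

def gaussian_kld(mu0, cov0, mu1, cov1):
    # KL( N(mu0, cov0) || N(mu1, cov1) ), closed form for multivariate normals
    k = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

params_1 = [fit_gaussian(c) for c in classes_1]
params_2 = [fit_gaussian(c) for c in classes_2]
for i, (m0, c0) in enumerate(params_1):
    klds = [gaussian_kld(m0, c0, m1, c1) for (m1, c1) in params_2]
    print(i, int(np.argmin(klds)))   # most similar class of the second dataset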
H: Activation function vs Squashing function This may seem like a very simple and obvious question, but I haven't actually been able to find a direct answer. Today, in a video explaining deep neural networks, I came across the term Squashing function. This is a term that I have never heard or used. Our professor always used the term Activation function instead. Given the definitions I've been able to find, the two seem to be interchangeable terms. Are they really synonymous or is there a difference? AI: An activation function This the name given to a function, which is applied to a neuron that just had a weight update as a result of new information. It can refer to any of the well known activation funtions, such as the Rectified Linear Unit (ReLU), the hyperbolic tangent function (tanh) or even the identity function! Have a look at somewhere like the Keras documentation for a nice little list of examples. We usually define the activation function as being a non-linear function, as it is that property, which gives a neural network its ability to approximate any equation (given a few constraints). However, an activation function can also be linear e.g. the identity function. A squashing function This can mean one of two things, as far as I know, in the context of a neural network - the tag you added to the question - and they are close, just differently applied. The first and most commonplace example, is when people refer to the softmax function, which squashes the final layer's activations/logits into the range [0, 1]. This has the effect of allowing final outputs to be directly interpreted as probabilities (i.e. they must sum to 1). The second and newest usage of these words in the context of neural networks is from the relatively recent papers (one and two) from Sara Sabour, Geoffrey Hinton, and Nicholas Frosst, which presented the idea of Capsule Networks. What these are and how they work is beyond the scope of this question; however, the term "squashing function" deserves special mention. Paper number one introduces it followingly: We want the length of the output vector of a capsule to represent the probability that the entity represented by the capsule is present in the current input. We therefore use a non-linear "squashing" function to ensure that short vectors get shrunk to almost zero length and long vectors get shrunk to a length slightly below 1. That description makes it sound very similar indeed to the softmax! This squashing function is defined as follows: $$ v_j = \frac{||s_j||^2}{1 + ||s_j||^2} \cdot \frac{s_j}{||s_j||} $$ where $v_j$ is the vector output of capsule $j$ and $s_j$ is its total input. If this is all new to you and you'd like to learn more, I'd recommend having a read of those two papers, as well as perhaps a nice overview blog, like this one.
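For concreteness, here is a tiny NumPy sketch of that capsule squashing function (just an illustration of the formula above, not code taken from the papers):

import numpy as np

def squash(s, eps=1e-8):
    # squash a capsule's total input vector s to length in [0, 1)
    norm_sq = np.sum(s ** 2)
    norm = np.sqrt(norm_sq + eps)
    return (norm_sq / (1.0 + norm_sq)) * (s / norm)

print(np.linalg.norm(squash(np.array([0.1, 0.05]))))   # short vector -> length near 0
print(np.linalg.norm(squash(np.array([10.0, 5.0]))))   # long vector  -> length just below 1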
H: Implementation of a distributed data mining paper I have a project about distributed data mining and I need to do some implementations, so I've searched and found this paper. The address of the dataset is mentioned in the paper and I've downloaded it. For the process I should split the dataset into 10 smaller datasets. The other task is using Weka4WS (Weka for Web Services) for the clustering part. So my questions: 1. How can I split the dataset using Python code? 2. What is Weka4WS and how should I use it, I mean the implementation (because it does not seem to be an application like Weka itself)? AI: Splitting your dataset First you can load your dataset into memory: # Load the data to memory with open('file.txt', 'r', encoding='utf-8') as file: header = file.readline() data = [] for line in file.readlines(): data.append(line) Next, you use KFold to split your data into $10$ splits and save them to $10$ different files. # Split the data and save the data from sklearn.model_selection import KFold kf = KFold(n_splits=10) split = 1 for _, split_index in kf.split(data): # Iterate over the splits with open('file_{}.txt'.format(split), 'w+', encoding='utf-8') as file: # Open a file with name file_{split number} file.write(header) # write the header of the file for index in split_index: # Write each point file.write(data[index]) split += 1 # Increase split number Weka4WS Weka4WS is an extension to Weka itself. Yes, it is an application like Weka, with a graphical user interface. From the docs: Thus, Weka4WS also extends the Weka GUI to enable the invocation of the data mining algorithms that are exposed as Web Services on remote Grid nodes. However, it is not as easy to run as the normal Weka; you need some additional steps to use it. The official installation guide is here: http://scalab.dimes.unical.it/weka4ws/howtos/installation/ Here is a tutorial to run the application on Windows: http://scalab.dimes.unical.it/weka4ws/howtos/run-the-client-on-windows/ This is how the app looks, for example on the KnowledgeFlow screen.
H: Understanding the Shuffle and Split Process in a Neural Network Code I need to understand how the splitting of labels in the following code happened : import keras import librosa import librosa.feature import librosa.display import glob import numpy as np from keras.models import Sequential from keras.layers import Dense , Activation from keras.utils.np_utils import to_categorical def extract_features_song(f): y, _ = librosa.load(f) # get mfcc mfcc = librosa.feature.mfcc(y) # make value between 1 -1 mfcc /= np.amax(np.absolute(mfcc)) return np.ndarray.flatten(mfcc)[:25000] def extrac_features_and_labels(): all_features = [] all_labels = [] genres = ['blues' , 'classical', 'country' , 'disco' , 'hiphop', 'jazz', 'metal' , 'pop', 'reggae', 'rock'] for genre in genres: sound_files = glob.glob('genres/'+genre+'/*.au') print ('prcoessing %d songs in %s genre'% (len(sound_files), genre)) for f in sound_files: features =extract_features_song(f) all_features.append(features) all_labels.append(genre) # one hot encoding label_uniq_ids , label_row_ids = np.unique(all_labels, return_inverse= True) label_row_ids = label_row_ids.astype(np.int32, copy= False) onehot_labels = to_categorical(label_row_ids, len(label_uniq_ids)) return np.stack(all_features), onehot_labels features , labels = extrac_features_and_labels() print (np.shape(features)) print (np.shape(labels)) training_split = 0.8 alldata = np.column_stack(features , labels) np.random.shuffle(alldata) splitidx = int(len(alldata))*training_split train , test = alldata[:splitidx,:], alldata[splitidx:, :] print (np.shape(train)) print (np.shape(test)) # the concerned part: begin train_input = train [:,:-10] train_labels = train [:,-10:] test_input = test [:,:-10] test_labels = test [:,-10:] #the concerned part: end print (np.shape(train_input)) print (np.shape(train_labels)) the output is as follows: (1000, 25000) (1000, 10) (800, 25010) (200, 25010) (800, 25000) (800, 10) Now, when he - code instructor - stacked both arrays together and shuffled them, he can't be sure that the last ten elements are the labels, am I right? If so how did he do it like this, this would cause an error. AI: Nope, the last 10 elements are definitely the labels. np.column_stack appends the elements of the second array to the corresponding row of the first array. A quick example (Python 3.5), random one-hot encoding snippet was copied from this answer on SO >>> import numpy as np >>> np.__version__ '1.15.0' >>> rand_enc = np.random.randint(0,3,size=(10)) >>> labels = np.zeros((10,3)) >>> labels[np.arange(10), rand_enc] = 1 >>> labels array([[0., 0., 1.], [0., 1., 0.], [1., 0., 0.], [1., 0., 0.], [1., 0., 0.], [1., 0., 0.], [0., 0., 1.], [0., 0., 1.], [0., 1., 0.], [1., 0., 0.]]) >>> features = np.random.rand(10,4) >>> features array([[0.51599703, 0.24758895, 0.70951355, 0.52074341], [0.64851074, 0.14481191, 0.88837244, 0.86901571], [0.36303914, 0.93464881, 0.21238598, 0.24998789], [0.74290604, 0.96704141, 0.16971294, 0.99628383], [0.20225753, 0.78450864, 0.42730556, 0.84727098], [0.95372573, 0.38092036, 0.73628447, 0.22120431], [0.10821134, 0.54467407, 0.00452555, 0.15295404], [0.7353348 , 0.26629375, 0.96241551, 0.84573258], [0.80659848, 0.34873381, 0.12219345, 0.46671669], [0.07837654, 0.82128673, 0.79523531, 0.10652154]]) >>> np.column_stack((features, labels)) array([[0.51599703, 0.24758895, 0.70951355, 0.52074341, 0. , 0. , 1. ], [0.64851074, 0.14481191, 0.88837244, 0.86901571, 0. , 1. , 0. ], [0.36303914, 0.93464881, 0.21238598, 0.24998789, 1. , 0. , 0. 
], [0.74290604, 0.96704141, 0.16971294, 0.99628383, 1. , 0. , 0. ], [0.20225753, 0.78450864, 0.42730556, 0.84727098, 1. , 0. , 0. ], [0.95372573, 0.38092036, 0.73628447, 0.22120431, 1. , 0. , 0. ], [0.10821134, 0.54467407, 0.00452555, 0.15295404, 0. , 0. , 1. ], [0.7353348 , 0.26629375, 0.96241551, 0.84573258, 0. , 0. , 1. ], [0.80659848, 0.34873381, 0.12219345, 0.46671669, 0. , 1. , 0. ], [0.07837654, 0.82128673, 0.79523531, 0.10652154, 1. , 0. , 0. ]]) As you can see, the labels get appended at the end! The shuffling code only manipulates the rows, so the column ordering is preserved. You do need to pass np.column_stack a tuple of your NumPy arrays, but otherwise this gets the job done. Hope that clears things up! Edit: I don't know how the instructor got a size mismatch between the train_input and train_labels, from what I understand from the code you shared they should be (800,25000) and (800,10)
H: Gumbel Softmax vs Vanilla Softmax for GAN training When training a GAN for text generation, i have seen many people feeding the gumbel-softmax from the generator output and feed into the discriminator. This is to bypass the problem of having to sample from the softmax which is a non-differentiable process and hence prevents training. My question is though, why not just feed the regular softmax (no argmax!) from the generator directly into the discriminator? What is the benefit of using the gumbel-softmax? Thanks. AI: Passing directly the output of the softmax is also common (among the few textual GANs out there), e.g. see the improved Wasserstein GANs (WGAN-GP). With hard Gumbel-softmax (+ straight-through estimator), you pass one-hot encoded vectors, which is the same as what you have with real data. If you pass the output of the softmax, the discriminator should be able to more easily tell apart real data (one hot) from fake data (non-one hot). That being said, in my opinion neither of the two approaches seems very promising nowadays. There seems to be far more REINFORCE-based textual GANs.
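As a rough illustration of the mechanical difference, here is a hedged NumPy sketch of Gumbel-softmax sampling versus a plain softmax (the temperature and logit values are arbitrary; real implementations work on batches of tensors inside the computation graph):

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gumbel_softmax(logits, tau=0.5):
    # add Gumbel(0, 1) noise to the logits, then apply a temperature-scaled softmax
    u = np.random.uniform(1e-10, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))
    return softmax((logits + g) / tau)

logits = np.array([2.0, 1.0, 0.1])
print(softmax(logits))          # smooth, dense distribution, deterministic
print(gumbel_softmax(logits))   # stochastic and, for small tau, close to one-hot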
H: Mixture Density Network: determine the parameters of each Gaussian component I am reading Bishop's Mixture Density Network paper at: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/bishop-ncrg-94-004.pdf This is a good paper, but I am still confused about a few small details. I am wondering if anyone could give me some help: Basically the mixing coefficient alpha_i can be computed through a softmax function in eq (25). However, in eq (25), what is the superscript alpha in $z_i^{\alpha}$? Is it a free parameter to be fitted? Similarly, in eq (26), what is the superscript sigma in $z_i^{\sigma}$? Is it a free parameter to be fitted as well? Thanks! AI: Actually, the superscript alpha and the superscript sigma are not free parameters to be fitted; they are just notation for the output activations corresponding to the mixture coefficients and the variances, respectively. They are used to distinguish derivatives taken with respect to the alpha outputs from those taken with respect to the sigma outputs. This is stated on page 275 of the book “Pattern Recognition and Machine Learning” by Christopher Bishop.
H: Orange wont allow me to set target for corpus I would like to do a linear regression on an excel file. I have descriptions in one column and category in the other. I need to be able to set the target to do a linear regression. However, Orange does not allow me to set target variable the way it would when I use the file module. The problem with opening the data as file is that I will not be able to use bag of words (since thats only for corpus). When I do use Corpus, it automatically detects all columns as meta. Am I doing something wrong? Is there a work around? AI: Can't you do something like this? (or am I missing something?)
H: Is there an implementation of pocket algorithm in Keras? As explained in Wikipedia, the pocket algorithm is a very simple variant/addition of/to ANN which keeps a copy of the best model seen so far and returns that one as the trained model (instead of the actual final state of the model). Implementing it is very simple and straightforward. I was wondering if this algorithm is implemented in Keras. AI: Yes this is usually part of the early stopping algorithm, where you supply a cross-validation data set, and a limit on number of epochs since best result so far. In Keras, you can use an instance of the EarlyStopping class, choosing the metric that you want the best model for, and setting the patience parameter to limit the number of epochs to test after any best so far result. The instance is supplied to the fit method as a callback. See http://parneetk.github.io/blog/neural-networks-in-keras/ for an example (last section)
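As a minimal sketch (model definition and data omitted, parameter values arbitrary), something like the following keeps the best weights seen on the validation set; note that the restore_best_weights flag requires a reasonably recent Keras version:

from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss',          # metric to track for "best so far"
                           patience=10,                 # epochs to wait after the best result
                           restore_best_weights=True)   # return the "pocket" model at the end

model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=200,
          callbacks=[early_stop])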
H: Semi Supervised Learning without label propagation I am trying to cluster some words by affinity. Using Word2Vec I obtained vector representation of every word that I can cluster with a normal unsupervised method. Of these words, though, I know the classification of some of them, for example I know: Colors group ==> red, blue, yellow Gender group ==> man, woman UNKNOWN ==> shoes, brown, beautiful. As the example shows, brown should be categorized as a Color, but shoes and beautiful should be different clusters. How could I use this information to create a semi-supervised model in order to cluster every word? While the question is generic, I actually tried to create a program in Python, I tried the scikit methods sklearn.semi_supervised.LabelSpreading and sklearn.semi_supervised.LabelSpreading. These, though, are not what I need since they only assign the known labels to the remaining objects, meaning that I would end up only with my Colors and Gender group. AI: One strategy that seems good here is instance-level constrained clustering. These methods are semi-supervised algorithm that have "must-link" and "cannot-link" constraints between instances of known labels. So in your example, you would bind the 4 pairs (red, blue), (red, yellow), (blue, yellow), and (man, woman) as "must-link", and the 6 pairs (red, man), (red, woman), ..., (yellow, woman) as "cannot-link". The results are similar to unsupervised clustering. For example, if you were to use DBSCAN (ignoring the labels/constraints), you would not need to specify the number of clusters/groups you're trying to achieve, and the algorithm would even find "outliers". In fact, there is a version of DBSCAN that supports instance-level constraints, called C-DBSCAN. It is described in the work "Density-based semi-supervised clustering" by Ruiz et al (2010). I do not know of any out-of-the-box implementations available, but I have a working version of C-DBSCAN I implemented for an experiment. However it is not documented nor is it performant/production level. You can find it at my lab's repository if you're interested (also contains the C-DenStream, which is the data streams version of it, but it does not seem to fit your problem).
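If you go down this route, generating the constraint pairs from your partially labelled words is straightforward; a small sketch (the word lists come from your example, the variable names are mine):

from itertools import combinations

groups = {'colors': ['red', 'blue', 'yellow'], 'gender': ['man', 'woman']}

must_link, cannot_link = [], []
for words in groups.values():
    must_link.extend(combinations(words, 2))              # same group  -> must-link
for g1, g2 in combinations(groups.values(), 2):
    cannot_link.extend((a, b) for a in g1 for b in g2)    # different groups -> cannot-link

print(must_link)     # 4 pairs
print(cannot_link)   # 6 pairs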
H: Are view() in Pytorch and reshape() in Numpy similar? Are view() in torch and reshape() in Numpy similar? view() is applied on torch tensors to change their shape and reshape() is a numpy function to change shape of ndarrays. AI: Yes, for most intents and purposes, they can do the same job. From this link, an example: >>> import torch >>> t = torch.ones((2, 3, 4)) >>> t.size() torch.Size([2, 3, 4]) >>> t.view(-1, 12).size() torch.Size([2, 12]) If you are concerned with memory allocation, here is another answer on StackOverflow with a little more information. PyTorch's view function actually does what the name suggests - returns a view to the data. The data is not altered in memory as far as I can see. In numpy, the reshape function does not guarantee that a copy of the data is made or not. It will depend on the original shape of the array and the target shape. Have a look here for further information.
H: What is max aggregation on a set of word embeddings? In a paper I see: $\mathcal{Q}$ is a set of words. $\psi_{G^w}$ are word embeddings. so, $\{\psi_{G^w}(w_t), \forall w_t \in \mathcal{Q}\}$ gives me a set of embeddings for all words in $\mathcal{Q}$. For example, if I have: Q = {'a', 'b', 'c'} embedding_gw.shape = (1000, 8) # 1000 words in vocab, embedding size is 8 I will get: { [1,2,3,4,5,6,7,8], #embedding of 'a' [8,7,6,5,4,3,2,1], #embedding of 'b' [4,5,3,6,7,8,1,2] #embedding of 'c' } The result of $\max\{\psi_{G^w}(w_t), \forall w_t \in \mathcal{Q}\}$ is supposed to be a single vector. My question is, how do I get this single $\max$ vector? Do I sum all values in each embedding and pick the largest one? Do I pick the $\max$ value of each $i^{th}$ position, creating a completely new vector? Do I do something else? AI: The second option is the right one. From the paper: In this case, we perform a component-wise max over the start embeddings of all instances of w to obtain the best features across all occurrences of w. The previous quote can be found a few paragraphs before equation 17, second column, top of the page.
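In NumPy terms, the component-wise max over the set of embeddings is simply a max along the first axis; a small sketch using your example values:

import numpy as np

embeddings = np.array([[1, 2, 3, 4, 5, 6, 7, 8],   # embedding of 'a'
                       [8, 7, 6, 5, 4, 3, 2, 1],   # embedding of 'b'
                       [4, 5, 3, 6, 7, 8, 1, 2]])  # embedding of 'c'

max_vector = embeddings.max(axis=0)
print(max_vector)   # [8 7 6 6 7 8 7 8]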
H: (Newbie) Decision Tree Randomness I'm starting at Data Science and, to get something going, I just ran the code from Siraj Raval's Intro to Data Science video. He implements a simple Decision Tree Classifier but I couldn't help but notice that, given the same training set, the classifier doesn't always yield the same prediction (nor the same fit apparently); which I happen to find terribly weird, since, from what I've learned, a Decision Tree is supposed to be deterministic. The only thing I can think of that could be causing the randomness would be that the branches are being chosen at random at some point because 2 options might be identically valued. I would say this could be corrected with a little bit more training data, but even if I add 5 more people, nothing changes. Does anybody have an explanation for what's going on? Following is the code (in Python) from the video in a for loop to count how many predictions for male and female the Decision Tree has yielded. from sklearn import tree from sklearn.svm import SVC n_male_pred_tree = 0 n_female_pred_tree = 0 n_male_pred_svm = 0 n_female_pred_svm = 0 for i in range (1,1000): # This loop tests the consistency of the CLF # The Decision Tree is not very consistent (It's 50-50) X = [[181,80,44],[177,70,43],[160,60,38],[154,54,37], [166,65,40],[190,90,47],[175,64,39],[177,70,40], [159,55,37],[171,75,42],[181,85,43]] Y = ['male','female','female','female', 'male','male','male','female', 'male','female','male'] tree_clf = tree.DecisionTreeClassifier() svm_clf = SVC() tree_clf.fit(X,Y) svm_clf.fit(X,Y) tree_prediction = tree_clf.predict([[190,70,43]]) svm_prediction = svm_clf.predict([[190,70,43]]) if tree_prediction == 'male': n_male_pred_tree += 1 else: n_female_pred_tree += 1 if svm_prediction == 'male': n_male_pred_svm += 1 else: n_female_pred_svm += 1 print(f"MALE pred Tree: {n_male_pred_tree}") print(f"FEMALE pred for Tree: {n_female_pred_tree}") print("\n") print(f"MALE pred for SVM: {n_male_pred_svm}") print(f"FEMALE pred for SVM: {n_female_pred_svm}") AI: From sklearn: The features are always randomly permuted at each split. Therefore, the best found split may vary, even with the same training data and max_features=n_features, if the improvement of the criterion is identical for several splits enumerated during the search of the best split. To obtain a deterministic behaviour during fitting, random_state has to be fixed. If you manually set the random_state variable when you create your tree object, you'll find that it does become deterministic. In simpler terms, the data you are feeding it is a little small, and there are several splits that have the same information gain, so the split that is chosen is subject to random factors.
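Concretely, fixing the seed when constructing the tree makes the prediction deterministic (reusing X, Y and the tree import from your snippet; the seed value 0 is arbitrary):

tree_clf = tree.DecisionTreeClassifier(random_state=0)
tree_clf.fit(X, Y)
print(tree_clf.predict([[190, 70, 43]]))  # same result on every run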
H: Why this model does not converge in keras? This case has an underlying story but I have essentially boiled it down to the simplest possible re-producible example I could. Essentially let us think that I have up to 1000 nodes and each node represented by a small (this case is a 3-cell vector) vector and I concatenate and represent these nodes as a padded 3*1000 input vector and need to find out which one is more suitable. So the model is trying to predict 1000 float values, one for each node. Let's imagine the function to score nodes is this arbitrary code: def score_vector(v): a, b, c = tuple(v) if a == 0 or b == 0 or a - c < 2: return float(Defs.INVALID_SCORE) return float(a * math.sqrt(a - c) / math.log(b + 2, 5)) And essentially my model is supposed to learn this function plus an argmax to find the node that has the highest score. This looks to me like a pretty simple problem compared to the problems I have solved so far (but it is different too). So my question is why doesn't this model converge? I am thinking it could be due to its differentiability but really kinda lost and started to doubt everything I know about NN (which is not a lot). Here is the repro code: import numpy as np import math from keras import Sequential, Input from keras.layers import Flatten, Activation, Dense from keras.optimizers import Adam class Defs: VECTOR_SIZE=3 NODE_COUNT=1000 MAX_REAL_NODE_COUNT=400 MIN_REAL_NODE_COUNT=20 INVALID_SCORE=0 def score_vector(v): a, b, c = tuple(v) if a == 0 or b == 0 or a - c < 2: return float(Defs.INVALID_SCORE) return float(a * math.sqrt(a - c) / math.log(b + 2, 5)) def build_vector(): a = np.random.randint(1, 100) c = np.random.randint(1, 50) if np.random.choice([False, True, True]) else 0 b = 0 if c == 0 else np.random.randint(c, c*3) return [float(a), float(b), float(c)] def build_vectorset_score(): n = np.random.randint(Defs.MIN_REAL_NODE_COUNT, Defs.MAX_REAL_NODE_COUNT) vectorset = [] for i in range(0, n): vectorset += build_vector() # pad it vectorset += [0. for i in range((Defs.NODE_COUNT-n) * Defs.VECTOR_SIZE)] scores = [score_vector(vectorset[i*Defs.VECTOR_SIZE:(i+1)*Defs.VECTOR_SIZE]) for i in range(0, Defs.NODE_COUNT)] index = np.argmax(scores) scores = [1. if index == i else 0. for i in range(0, len(scores))] return vectorset, scores def build_model(): model = Sequential() model.add(Dense(Defs.VECTOR_SIZE * Defs.NODE_COUNT, input_dim=Defs.VECTOR_SIZE * Defs.NODE_COUNT, activation='relu')) model.add(Dense(Defs.NODE_COUNT, activation='relu')) model.add(Dense(Defs.NODE_COUNT)) model.add(Activation('softmax')) print(model.summary()) model.compile(loss="categorical_crossentropy", optimizer=Adam(lr=0.001), metrics=['categorical_accuracy']) return model if __name__ == '__main__': SAMPLE_SIZE = 1 * 1000 X = [] Y = [] for i in range(0, SAMPLE_SIZE): x, y = build_vectorset_score() X.append(np.array(x)) Y.append(np.array(y)) model = build_model() model.fit(np.array(X), np.array(Y), batch_size=100, epochs=200, verbose=1) AI: I'm not totally sure exactly what you're doing with your scoring equation, but the first thing you need to look at is your loss function. Categorical Crossentropy is for multilabel classification, and you're trying to predict a float value. So, you should have your network output be a single value (and don't squash it through a sigmoid unless the range of your function is (0,1)). You should be using a regression loss function - I'd definitely start with mean squared error. 
Check out the regression example here (under Regression Predictions) for some sample code. Edit: Further to discussion below, the network cannot predict the index of the highest-valued node because each node's score has no dependency on the node's position among the 1000 nodes being scored. As the probability of the best node being in each position is the same, there is no optimum prediction to be learned.
H: lengthy criteria in dataframe selector I'd like to get records with country codes not in a long list , something along the lines of country_list = ['BR','MX','CO',...] df = df[df['country_code'] not in country_list] ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). what's an elegant way to do it? AI: df = df[~df['country_code'].isin(country_list)] https://pandas.pydata.org/pandas-docs/stable/gotchas.html#using-the-in-operator
H: Multi target classification I am working on a traffic violation data set which contains 36 columns (variables). I have two dependent variables out of this. Example: id, description, age, sex alcohol vehicle violation level Accident 120 speed 28 M Yes car severe yes 122 win-tint 35 F No truck Medium no 123 left-turn 26 M No car light no 124 failure-veh 28 M No car severe yes The dependent variables are "violation level" and "accident". Violation level has 3 classes -- severe, medium, light. Accident has 2 levels -- Yes, No. I want to predict both violation level and accident. I think this can be multi-target prediction. Can someone tell me which algorithm is good for this? I have seen some articles suggesting scikit-learn multi-output classification and neural networks with multiple output layers. Or can I go ahead with two models? Predicting the traffic violation level, then predicting the accidents using the violation levels. Basically I am trying to predict violation levels and then predict how these violations contribute to accidents. Any help would be appreciated. AI: At first, the way you explained it, this sounded like a multilabel classification problem (like e.g. CelebA), and the approaches I have used there came to mind. I also thought it could be hierarchical classification, because you have subcategories in your dependent variable. But at the end you mention that you want to predict the accidents using the violation levels. Aren't your dependent variables (violation level and accident) then already correlated/related to each other? It seems to me that you do not have two independent dependent variables. I would try to reduce the problem to a simple multiclass classification, in your case with 6 classes, as follows (assuming you have these explicit labels for each data point): Severe-Yes, Severe-No, Medium-Yes, Medium-No, Light-Yes, Light-No. Then you have 6 classes/labels with which to build a model, e.g. a Gradient Boosting Tree or a Neural Network.
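If you have the raw labels in a pandas dataframe, building that combined 6-class target is a one-liner; a small sketch (column names follow your example, the toy values are made up):

import pandas as pd

df = pd.DataFrame({'violation_level': ['severe', 'medium', 'light', 'severe'],
                   'accident': ['yes', 'no', 'no', 'yes']})

# combine the two dependent variables into a single 6-class target
df['target'] = df['violation_level'].str.capitalize() + '-' + df['accident'].str.capitalize()
print(df['target'].tolist())  # ['Severe-Yes', 'Medium-No', 'Light-No', 'Severe-Yes']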
H: Does MLPClassifier (sklearn) support different activations for different layers? According to the documentation, it says the 'activation' argument specifies: "Activation function for the hidden layer" Does that mean that you cannot use a different activation function in different layers? AI: One can see from the code (look at uses of self.activation) that the same function is used for all the hidden layers. You might want to consider the comments to this question for alternative approaches, generally being a move away from sklearn and towards a deep learning framework.
H: How does ML algorithms treat unseen data, a conceptual discussion I want to predict the occurrence of certain events, but these events only occur say 5% of the time in my data, hence in 95% of the data there is nothing to learn. In order to teach the ML algo something I have learned to single out the 5% and drop the rest of the data. Let us say that I want to predict if a picture is of a dog or a cat. In my data 2.5% of the pictures are of dogs and 2.5% of cats, the rest are just random pictures. So, I single out the cat and dog pictures and label them so that the ML algo can learn from that. Am I broadly right so far? So, if I train my algo on only cat and dog pictures and get a satisfactory accuracy, what will then happen in live usage when 95% of the pictures are not of cats or dogs? I.e. I show my model a picture of a house, what does it predict? Will my algo always predict either cat or dog, or will it somehow tell me that it has no clue what this picture is? Any thoughts? AI: Define two flag variables: flag_is_cat and flag_is_dog, which take on values of 1 if the picture shows a cat or dog, respectively, and 0 otherwise. Define another flag that takes on the value of 1 if the picture contains either a cat or a dog. In a word, label the data. If you train the model using all of the pictures, even those with neither a cat nor a dog, then the model outputs a probability that the picture contains a cat, a probability that it contains a dog, and another that it contains either. This is the approach mentioned by @marco_gorelli . Dividing the probability that the picture has a cat by the probability that it has either a cat or a dog gives the probability that the picture has a cat conditional on the picture having at least one of them. Alternatively, if you train a model using only those pictures that contain either a cat or a dog, then the model would output the probability that a cat is contained in the picture and that a dog is contained in the picture conditional on at least one of them being in the picture.
H: How to make it possible for a neural network to tune its own hyper parameters? I am curious about what would happen to hyperparameters when they would be set by a neural network itself or by creating a neural network that encapsulates and influences the hyperparameters of the network it encapsulates. The goal for me here is to experiment and get some more in-depth knowledge about neural networks. But I had a hard time finding information to be able to execute such an experiment. Which point in a direction that either it has never been done before or the idea is just really dumb. Now what I would like to know is, does anyone of you know where I would be able to find information (e.g. books, web-articles, papers, et cetera) to conduct such an experiment? AI: I am curious about what would happen to hyperparameters when they would be set by a neural network itself In general this is not possible as many hyper-parameters are discrete, so they are not differentiable with respect to any objective. For example, this applies to layer sizes, number of layers, choices of transfer functions. This prevents using any form of gradient descent to tune them directly as learnable parameters. In fact the separation between parameters and hyperparameters is exactly that hyperparameters are not learnable by the model type. This applies to other ML models, not just neural networks. or by creating a neural network that encapsulates and influences the hyperparameters of the network it encapsulates. This is more feasible. You could use one neural network to try and predict the results from another. Then prefer to run tests on target networks that look like they will do well. However, using a "meta" neural network like this has some major drawbacks: Neural networks require a lot of training data. Getting enough samples to make good predictions would require that you train your primary neural network (a time-consuming process) many times Neural networks are bad at extrapolating to data outside of areas already experienced, so not so great at making creative predictions of new parameters to try Neural networks have a lot of hyper-parameters to tune. Would you need a "meta meta" neural network to predict the performance of your "meta" network? either it has never been done before or the idea is just really dumb This is a real issue that comes up repeatedly. In general the search for best hyper-parameters is a chore. It is an active area of research and experimentation to find efficient ways of automating it, or avoiding it by making some hyperparameters less important or not necessary. The reason you are not finding neural networks that tune neural networks is due to the issues listed above. So the main areas of research focus on different approaches, that can work with limited data and don't have so many hyperparameters themselves. Or models that are robust to large ranges of hyperparameters, so precise tuning is not a big deal. Here are a few pointers to help with automated searches: You could use a variety of hyperparameter optimisation schemes, including random search, grid search, genetic algorithms, simple gradient methods etc. Random searches, perhaps constrained by previous experience or second-hand knowledge from similar problems, can be reasonably effective. The quality of any search is limited by the quality and amount of cross-validation data. There is not much point tuning the cv loss value to the point that you care about changes that are much less than the standard error in its estimate. 
Response to hyperparameters is typically non-linear over the search space, which makes things harder. Outside of automation, expert analysis is often a good starting point, especially if you want to assess success of regularisation. Typically you can look at learning curves for training and cross-validation data, and based on that you can make a reasonable guess as to whether to increase or reduce regularisation hyperparameters and/or learning rate, even from observing results from a single training run. There have likely been attempts to automate some parts of reading learning curves, since sometimes it is relatively easy to detect over-fitting and under-fitting scenarios. However, I could not find any examples when searching just now.
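To make the automated-search idea above concrete, here is a very small random-search sketch; train_and_score is a hypothetical placeholder standing in for "train the target network with these hyperparameters and return its cross-validation score", and the sampling ranges are arbitrary:

import random
import math

def train_and_score(learning_rate, hidden_units):
    # hypothetical placeholder: replace with real training + cross-validation evaluation
    return random.random()

best_score, best_params = -math.inf, None
for _ in range(50):
    params = {
        'learning_rate': 10 ** random.uniform(-5, -1),        # log-uniform sampling
        'hidden_units': random.choice([32, 64, 128, 256]),
    }
    score = train_and_score(**params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params, best_score)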
H: Under what circumstance is lemmatization not an advisble step when working with text data? Disregarding possible computational restraints, are there general applications where lemmatization would be a counterproductive step when analyzing text data? For example, would lemmatization be something that is not done when building a context-aware model? For reference, lemmatization per dictinory.com is the act of grouping together the inflected forms of (a word) for analysis as a single item. For example, the word 'cook' is the lemma of the word 'cooking'. The act of lemmatization is, for example, replacing the word cooking with cook after you have tokenized your text data. Additionally, the word 'worse' has 'bad' as its lemma, and as the previous example replacing the word 'worse' with 'bad' is the action of lemmatization. AI: NLP tasks that would be harmed by lemmatization: 1) Tense classification sentence | tense ------------------------------------ He cooked a nice meal | past He cooks a nice meal | present The sequence of characters at the end of verbs can help in this task. The verbs cooked and cooks differ at the last characters ed and s repectively. With lemmatization, this information is lost. Both verbs become cook, making both sentences seem (in this case) in the present tense. 2) Author identification Given a set of documents $\mathcal{P}$ written by author $a$, a set of documents $\mathcal{Q}$ written by author $b$, a set of documents $\mathcal{S}$ written by either author $a$ or $b$, classify if a document $s\in\mathcal{S}$ is written by author $a$ or $b$. One way to achieve this is by looking at the histogram of words present in $s$ and compare it to documents from $\mathcal{P}$ and $\mathcal{Q}$ and select the most similar one. This works because different authors use certain words with different frequencies. However, by using lemmatization, you distort these frequencies impairing the performance of your model.
H: How to utilize user survey answers and the actual usage in forecasting power usage using LSTM? I have the pre-trial survey and post-trial survey conducted of around 5000 users for Smart Meter installation. With this I have power usage reading recorded every 30 min for around one and a half years. survey csv is like this User Question 1 Question 2 ....... 1000 a a ....... 1001 b a ....... . . . ....... . . . ....... . . . ....... 5000 b a ....... power usage csv is like this User date usage 1000 20001 0.003 1000 20002 0.025 .... ..... ..... .... ..... ..... .... ..... ..... .... ..... ..... .... ..... ..... 1000 65047 0.52 1000 65048 0.14 I want to forecast power usage of the user based on the past power usage and the survey using LSTM. How to start with this? AI: Check out the example "Multi-input and multi-output models" here in the documentation. It gives an example how to combine an LSTM time series model with additional input. You would use separate input layers for the power usage and for the survey data. After running the power usage data through the LSTM you combine the results using "concatenate". (You can ignore the Embedding layer in the example since your power usage data isn't text data.)
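A rough sketch of that pattern with the Keras functional API, adapted to your two inputs (the layer sizes, the window of 48 half-hour readings, and the number of encoded survey answers are all placeholder choices, not recommendations):

from keras.layers import Input, LSTM, Dense, concatenate
from keras.models import Model

usage_input = Input(shape=(48, 1), name='usage')     # last 48 half-hour readings
survey_input = Input(shape=(20,), name='survey')     # 20 encoded survey answers

x = LSTM(32)(usage_input)                            # summarise the usage sequence
x = concatenate([x, survey_input])                   # append the static survey features
x = Dense(32, activation='relu')(x)
output = Dense(1, name='next_usage')(x)              # forecast the next reading

model = Model(inputs=[usage_input, survey_input], outputs=output)
model.compile(optimizer='adam', loss='mse')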
H: How to do face recognition without using any kind of Neural Networks like CNN? Is/was there any way to perform face recognition other than using a Convolutional Neural Network, which maps (encodes) the face to a 128-D vector and then uses a classifier (like KNN/SVM) on it? Before the invention of the CNN approach, what did we use for face recognition? From a comment by @rahulreddy: Custom object detection is possible using the Viola-Jones Haar cascade or the histogram of oriented gradients (HOG) technique, or now by CNN. AI: As it stands, NN-based approaches are the current state of the art. They outperform the method described below, but in the spirit of answering the question, here it goes... The trick for not using CNNs is to find a good representation of images with faces on them. CNNs are great because they learn good features. If you don't want to use CNNs, you can use Eigenfaces as your features. The main assumption is that most face images lie on a low-dimensional subspace determined by the first k (k<d) directions of maximum variance. You can use PCA to determine the vectors or “eigenfaces” u1, ..., uk that span that subspace. Then you can represent all face images in the dataset as linear combinations of eigenfaces. For example, given a set of training face images, you can learn a corresponding set of eigenfaces. These eigenfaces highlight different features in a person's face. If you average all of them you find a mean face, which captures the most common features among all people in your training set. A new face can then be reconstructed as a linear combination of the mean face and some of the eigenfaces. Now, the weights of each component can be used as features for your classification algorithm, which at this point can be anything, like KNN or SVM.
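If you want to try this in practice, scikit-learn makes the eigenface pipeline quite short; a hedged sketch (the number of components and the SVM settings are arbitrary, and fetch_lfw_people downloads a public face dataset the first time it runs):

from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

faces = fetch_lfw_people(min_faces_per_person=50)
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, random_state=0)

pca = PCA(n_components=100, whiten=True).fit(X_train)   # learn the eigenfaces
clf = SVC(kernel='rbf', class_weight='balanced')
clf.fit(pca.transform(X_train), y_train)                # classify in eigenface space
print(clf.score(pca.transform(X_test), y_test))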
H: May the training set and validation set overlap? May the training set and validation set overlap? Similarly, may the testing set and validation set overlap? AI: Definitions, so we are on the same page: Training set: the data points used to train the model. Validation set: the data points used to keep checking the performance of your model in order to know when to stop training. Testing set: the data points used to check the performance once training is finished. May training and validation sets overlap? They should not. The validation set is used to know when to stop training your model. The idea is that you check your model performance often and, when there seems to be no more improvement, you stop. Think of a typical plot of the model's training and validation loss over epochs. If the loss is still decreasing, it means that you can keep on improving your model, but if the loss stops decreasing you stop training. Typically the validation loss stops decreasing before the training loss. That is because the model could keep improving its performance on the training set, but if you let it, you will get overfitting. So, by having an unseen validation set, you will stop training earlier and not overfit your model; otherwise, if you keep on training, the accuracy on the validation set will start to decrease. That means that if the two sets overlap, the validation loss will become more similar to the training loss, so your model will keep on training and you will overfit it. May validation and testing sets overlap? They should not. You already used your validation set to stop training your model. That means that you already know the performance of your model against the validation set. Now your model is trained and you want to test it with unseen data points, i.e. with the testing set. If your sets overlap, you are biasing your test results towards the performance which you already know your model has.
H: sklearn nmf - question about its use This is a complete noob question, I'm new to Python, and I understand the basics of NMF, but when implementing it in sklearn it seems a bit convoluted and I wanted to ask whether anyone knows how to exploit the results. I've imported data from a file, and created and shaped an array, and now have implemented some initial NMF code I took from the sklearn guide online. nmf = NMF(init='nndsvd', max_iter=2000, random_state=0, shuffle=False, solver='cd',verbose=0) W = nmf.fit_transform(myfile); H = nmf.components_; #X_new = np.array(?) #W_new = model.transform(X_new) But here's the bit I'm getting lost - and why I commented out the last two lines - where does X_new come from, am I meant to be imported another matrix for comparison - I thought the point of NMF was to identify key features of the matrix under analysis? Any help greatly appreciated. AI: The API allows two types of operations: Learn and transform a matrix under analysis. Or alternatively, learn the factorization on one matrix (myfile / X) and then apply the same transformation on a new matrix (X_new), having the same number of columns (but potentially different number of rows). Think of it as learning the basis transformation on one set of input data (.fit method) and then apply the same basis transformation on new set of data (.transform method). If you are interested in transforming the same matrix (myfile in your case), use only the fit_transform method instead of transform.
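A small sketch of the two usages (random matrices stand in for your data; X_new must have the same number of columns as X):

import numpy as np
from sklearn.decomposition import NMF

X = np.random.rand(100, 20)      # matrix you learn the factorization on
X_new = np.random.rand(10, 20)   # new rows, same number of columns

nmf = NMF(n_components=5, init='nndsvd', random_state=0)
W = nmf.fit_transform(X)         # usage 1: learn the factorization and transform X
H = nmf.components_
W_new = nmf.transform(X_new)     # usage 2: apply the learned basis to new data

print(W.shape, H.shape, W_new.shape)   # (100, 5) (5, 20) (10, 5)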
H: Why are optimization algorithms slower at critical points? I just found the animation below from Alec Radford's presentation: As visible, all algorithms are considerably slowed down at saddle point (where derivative is 0) and quicken up once they get out of it. Regular SGD itself is simply stuck at the saddle point. Why is this happening? Isn't the "movement speed" constant value that is dependent on the learning rate? For example, weight for each point on regular SGD algorithm would be: $$w_{t+1}=w_t-v*\frac{\partial L}{\partial w}$$ where $v$ is a learning rate and $L$ is a loss function. In short, why are all optimization algorithms slowed down by the saddle point even though step size is constant value? Shouldn't a movement speed be constantly same? AI: In that simulation, the movement speed is a proxy for step size. The step size is a function of learning rate (v) and the approximate gradient of the function at that point ($\frac{\partial L}{\partial w}$). The learning rate can be constant. However, the approximate gradient is not constant. The approximate gradient is typically smaller closer to a critical point (i.e., the valley starts to gradually level out). Thus, the computed update value is smaller and the movement speed slows down.
H: is numpy isclose function returning bad answer? Using numpy isclose in the following example: import numpy as np np.isclose(1533761040,1533748023.0, atol=1) This returns True, which seems wrong given atol=1. AI: No, the answer is not wrong: isclose uses both an absolute and a relative tolerance, and the default rtol is nonzero, which adds extra tolerance on top of atol. See https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.isclose.html
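You can see the effect by switching the relative tolerance off, at which point only atol=1 applies and the comparison fails as you expected:

import numpy as np

print(np.isclose(1533761040, 1533748023.0, atol=1))           # True: default rtol=1e-05 adds ~15337 of slack
print(np.isclose(1533761040, 1533748023.0, atol=1, rtol=0))   # False: only the absolute tolerance applies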
H: Why use two LSTM layers one after another? In the example on the Keras site, seq2seq_translate.py on line 189, there is one LSTM layer after another (the first with return_sequences=True), but another example, lstm_seq2seq.py, which does the same thing letter-by-letter, uses only one LSTM in the encoder. My code looks like: encoder = LSTM(latent_dim, return_sequences=True)(encoder_inputs) encoder_outputs, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder) My question is why does the word-by-word version use two LSTM layers? And why is return_sequences used? AI: About your first question: it is because a word-by-word NLP model is more complicated than a letter-by-letter one, so it needs a more complex network (more hidden units) to model it suitably. About your second question: when you stack two LSTMs, the hidden sequence of the first LSTM must be used as the input of the second LSTM, and the return_sequences option is what makes the first LSTM output that full sequence.
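As a small illustrative sketch (the feature dimension and layer sizes are arbitrary), the first LSTM emits its full hidden sequence so the second LSTM has a sequence to consume:

from keras.layers import Input, LSTM
from keras.models import Model

encoder_inputs = Input(shape=(None, 71))                       # (timesteps, input features)
x = LSTM(256, return_sequences=True)(encoder_inputs)           # outputs (timesteps, 256)
encoder_outputs, state_h, state_c = LSTM(256, return_state=True)(x)  # consumes that sequence

model = Model(encoder_inputs, encoder_outputs)
model.summary()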
H: In RNNs why do networks always use the last output vs the last input? All the descriptions of RNNs introduce some equation like: $h_t = f_W(h_{t-1}, x_t)$ and I'm wondering why we don't just go straight to the "source", i.e. the last input, like: $h_t = f_W(x_{t-1}, x_t)$ The only thing I can maybe think of is that the RNN can store some sort of "state" in $h_{t-1}$. But then, along the same vein, what would happen in an LSTM architecture if we used previous inputs vs previous outputs? It seems to me that the presence of the memory functionality should be able to avoid this potential problem (if it even is one). AI: That would work if the Markov Assumption were in place here (i.e. the last state was enough to determine the new state). However, that is exactly what RNNs try to break. RNNs try to encode the whole history of inputs and predictions to determine the next output. So in the formula, $h_{t-1}$ is the last output, but it is a hidden state which encodes all the previous inputs, outputs, and calculations. So when you have new information coming in (i.e. $x_t$), your model can make use of all the history and the new input to determine the new output. The same applies to LSTMs: you are working with a sequence in which the new output depends not only on the last input but also on things further in the past, and this information is encoded in $h_{t-1}$.
H: Is regularization included in loss history Keras returns? I'm getting to know Keras. Right now, I'm experimenting with regularization and how to use it. Comparing the loss histories for a training session with and without regularization, it seems to me that the loss history reported by Keras has the regularization term added to it, is that right? When my model has no regularization term, the loss value starts from something less than 1, but when I retrain the model with regularization (L1L2), the same problem's loss value starts from 500. The only logical explanation I've got for it is that Keras is reporting the loss value after the regularization term is added to it. And I believe the loss value without regularization is as valuable as the one with it, if not more valuable. Don't you think? AI: Yes, the regularisation values are by default included in the reported loss, and so you cannot see the regularisation values separately (as far as I know). For a brief explanation of an example using the cross-entropy loss with regularisation, have a look at the example in Michael Nielsen's book. You can see what is returned and available by saving the results from calling model.fit(). You can train a model and then also check out: print(history.history.keys()) which will show you all available metrics for analysis after training the model. You could perhaps define a custom function that computes the regularisation terms you are using, and execute that as either your own metric or as a callback function in a Keras model.
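One way to do that custom tracking is a small callback that recomputes the penalty from the layer weights at the end of every epoch; this sketch assumes plain L2 regularisation with a known factor lam on the kernels only, so adapt it to whatever regulariser you actually use:

import numpy as np
from keras.callbacks import Callback

class L2PenaltyLogger(Callback):
    def __init__(self, lam):
        super().__init__()
        self.lam = lam

    def on_epoch_end(self, epoch, logs=None):
        penalty = 0.0
        for layer in self.model.layers:
            weights = layer.get_weights()
            if weights:                                        # skip layers without weights
                penalty += self.lam * np.sum(weights[0] ** 2)  # kernel only, not bias
        print('epoch {}: approx. regularisation term = {:.4f}'.format(epoch, penalty))

# usage (hypothetical model and data): model.fit(X, y, epochs=10, callbacks=[L2PenaltyLogger(lam=0.01)])

Subtracting this value from the reported loss gives you a rough idea of the data-fit term on its own.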
H: Macro- or micro-average for imbalanced class problems The question of whether to use macro- or micro-averages when the data is imbalanced comes up all the time. Some googling shows that many bloggers tend to say that micro-average is the preferred way to go, e.g.: Micro-average is preferable if there is a class imbalance problem. On the other hand, micro-average can be a useful measure when your dataset varies in size. A similar question in this forum suggests a similar answer. However, this seems quite counter-intuitive. For example if we have a data set with 90%-10% class distribution then a baseline classifier can achieve 90% mico-averaged accuracy by assigning the majority class label. This is corroborated by books, e.g. An Introduction to Information Retrieval says (page 282) "Microaveraged results are therefore really a measure of effectiveness on the large classes in a test collection. To get a sense of effectiveness on small classes, you should compute macroaveraged results." In the end the real decision about which measure to use should be based on the relative mis-classification costs for the classes. But a quick look at the internet seems to suggest use of micro-averaging. Is this correct or misleading? AI: The choice of a metric depends on how you rank the importance of your classes and what you value from a classifier. Let's look at your example: For example if we have a data set with 90%-10% class distribution then a baseline classifier can achieve 90% accuracy by assigning the majority class label. One minor correction is that this way you can achieve a 90% micro-averaged accuracy. If your goal is for your classifier simply to maximize its hits and minimize its misses, this would be the way to go. However, if you valued the minority class the most, you should switch to a macro-averaged accuracy, where you would only get a 50% score. This metric is insensitive to the imbalance of the classes and treats them all as equal. In many applications the latter is preferable. Imagine a classification problem aiming at diagnosing a disease that appears in 1% of the population. What good is a classifier that would always predict that the patient was healthy, even if it could achieve a 99% micro-averaged accuracy on the task? The reason why micro-averaging is prevalent is because in most tasks, we would be interested in simply maximizing the number of correct predictions the classifier makes. In these tasks no class is more important than the others.
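A quick scikit-learn illustration of the 90%-10% example with a majority-class baseline (recall is used here because micro-averaged recall equals plain accuracy):

from sklearn.metrics import recall_score

y_true = [0] * 90 + [1] * 10   # 90%-10% class distribution
y_pred = [0] * 100             # baseline: always predict the majority class

print(recall_score(y_true, y_pred, average='micro'))  # 0.9 -> dominated by the large class
print(recall_score(y_true, y_pred, average='macro'))  # 0.5 -> each class weighted equally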
H: How to add confidence to model's prediction? I am a newbie in ML working on a time series prediction project. The objective is to predict the future outcome of a time series (int valued, with different upper bounds, think of it as availability prediction for different sized parking lots) based on its historical values. I'm currently using a regression approach with a sliding window algorithm. I tried different ML models and they seem to be working okay (better than my baseline at least). Now I'm trying to add confidence to my prediction, something like "I have 95% confidence that the outcome would be 2". I'm thinking about using the prediction mean squared error as a metric. The problem is: Is it feasible to assume the prediction error follows a Gaussian distribution and add confidence based on that? What distribution should I use for a highly discrete state space? For example, when there are only 4 possible states {0,1,2,3}, and I predicted 2.5 with MSE 1, how can I distribute the probability over those states? Any advice on the general model architecture and confidence setup will be appreciated! AI: It is common to say the error term follows a standard Gaussian distribution. If you assume that to be true, then your squared errors follow a chi-squared distribution: In probability theory and statistics, the chi-squared distribution (also chi-square or χ2-distribution) with k degrees of freedom is the distribution of a sum of the squares of k independent standard normal random variables. Have a look here for some ideas about how to implement a quasi-confidence metric, based on your (mean squared) errors. It assumes the errors follow a chi-squared distribution and then uses the normalised RMSE to define a set of confidence boundaries for a given confidence level, $\alpha$, as follows: $$ \left[\sqrt{\frac{n}{\chi_{1-\frac{\alpha}{2},n}^{2}}}RMSE,\sqrt{\frac{n}{\chi_{\frac{\alpha}{2},n}^{2}}}RMSE\right] $$ See the link for the steps involved. Here is the coded simulation taken from that post, with some added comments (requires Python 3): from scipy import stats import numpy as np s = 3 # a constant to scale the random distribution n = 4 # number of samples/states per prediction alpha = 0.05 # confidence level # chi-squared critical values for the confidence interval c1, c2 = stats.chi2.ppf([alpha/2, 1-alpha/2], n) # we will take this many samples (this pre-allocates the y-vector) y = np.zeros(50000) # Loop over each sample and record the resulting mean sample # This would be your prediction vector - here it is random noise for i in range(len(y)): y[i] = np.sqrt(np.mean((np.random.randn(n)*s)**2)) # Use the chi-squared distributed confidence intervals to see where predictions fall # finds the percentage of samples that are inside the confidence interval conf = np.mean((np.sqrt(n/c2)*y < s) & (np.sqrt(n/c1)*y > s)) print("1-alpha={:.2f}".format(conf)) Here is another answer on CrossValidated, which gives more information around the area. Additionally, if you assume your predictions lie within a Gaussian distribution, you could use the variance of your predictions as the confidence (welcome to Bayesian learning!). There are packages that will help you do this, such as BayesOptimization. There are lots of examples on that webpage. Essentially, you will be able to make predictions and automatically get robust estimates of confidence, along with some nice plots showing where your model is quite sure and where it isn't.
H: How to display the value of activation? I have built my network and would like to see how the activations of a particular layer change after each epoch of training. For example, in the code shown below, I want to see the activation values of "act_layer_1". What should I do so that I can see the activation values? Thanks in advance! W1 = tf.get_variable('W1', [n_input,n_hidden_1], initializer = tf.contrib.layers.xavier_initializer(seed = 1)) b1 = tf.Variable(tf.constant(0.0, shape=[n_hidden_1], dtype=tf.float32), trainable=True, name='b1') act_layer_1 = tf.nn.relu(hidden_layer(x_image, W1) + b1) drop_layer_1 = tf.nn.dropout(act_layer_1, keep_prob) AI: You have not made it clear in your code snippet whether the activation tensor is built inside or outside a function: Outside a function: just run a = sess.run(act_layer_1) (passing the same feed_dict you use during training if your inputs are placeholders) and then print(a). Inside a function: you have to return this activation tensor bundled with the other values you need, then again use sess.run() to fetch all the values and print the one that contains your activation. A longer route, but better for debugging, is to use tf.InteractiveSession(). More details can be found here: How to print the value of a Tensor object in TensorFlow?
H: What does it mean by “t-SNE retains the structure of the data”? I was learning about t-SNE when I was told that t-SNE retains the structure of the data in the embeddings. What exactly does this mean? How does the algorithm achieve this? So far I have understood this -> t-SNE is an unsupervised learning algorithm that is used for dimensionality reduction and visualization of high dimensional data. The algorithm works by measuring the similarity between one point and all the other points using the t-curve. The width of the curve is dependent on the density of the cluster the point belongs to. t-SNE retains the structure of the initial data. My question is what does it mean by retaining the structure of the data? Shouldn't there be some loss in the structure of the data seeing as it is transformed into a lower dimensional space? Also, what does "structure of the data" mean? Please ask for any further details that are required. AI: You should break this down one step further: retaining local structure and retaining global structure. Other well-understood methods, such as Principal Component Analysis, are great at retaining global structure, because they look at ways in which a dataset's variance is retained, globally, across the entire dataset. t-SNE works differently, by looking at locally appearing datapoints. It does this by computing a metric between each datapoint and a given number of neighbours - modelling them as being within a t-distribution (hence the name: t-distributed Stochastic Neighbour Embedding). It then tries to find an embedding, such that neighbours in the original n-dimensional space are also found close together in the reduced (embedded) dimensional space. It does this by minimising the KL-divergence between the before and after datapoint distributions, $\mathbb{P}$ and $\mathbb{Q}$ respectively. This method has the benefit of retaining local structure - so clusters in the low dimensional space should be interpretable as datapoints that were also very similar in the high dimensional space. t-SNE works remarkably well on many problems, however there are a few things to watch out for: Because we now have some useful local structure retained, we essentially trade that off against the ability to retain global structure. This equates to you not being able to really compare e.g. 3 clusters in the final embedding, where 2 are close together and 1 is far away. This does not mean they were also far away from each other in the original space. t-SNE can be very sensitive to its perplexity parameter. In fact, you might get different results with the three-cluster example in point 1, using only a slightly different perplexity value. This value can indeed be roughly equated to "how many points shall we include in the t-distribution to find neighbours of a datapoint" - it essentially gives the area which is encompassed in the t-distribution. I would recommend watching this lecture by the author of t-SNE, Laurens van der Maaten, as well as getting some intuition for t-SNE and its parameters using this great visual explanation. There are also some good answers here on CrossValidated with a little more technical information.
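If you want to see the perplexity sensitivity for yourself, a minimal scikit-learn sketch (the perplexity values are just examples, and the random matrix stands in for your high-dimensional data):

import numpy as np
from sklearn.manifold import TSNE

X = np.random.rand(200, 50)                      # stand-in for your high-dimensional data

for perplexity in (5, 30, 50):
    emb = TSNE(n_components=2, perplexity=perplexity, random_state=0).fit_transform(X)
    print(perplexity, emb.shape)                 # compare the resulting 2-D embeddings visually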
H: Dataframe looks the same but the structure is different when looping I am generating a dataframe from a JSON file. This JSON file can come from 2 different sources, so the internal structure is slightly different; what I am doing is first detecting the source and from there performing a set of operations that gives me a dataframe.
Everything is good until here (I thought), as when I print it in Jupyter it shows me what I wanted and the two look the same (structure). The problem comes when I loop through them: I get completely different results (each of these dataframes has the same number of columns, 7 columns).
When I loop: in one I have only 2 columns, in the other one I get all the columns. I am looping:
for i, (index, row) in enumerate(df_trans.iterrows()):
    print(row)
Is there a way to see how it is structured? I am quite confused about why printing the df looks the same but looping does not.
EDIT I noticed that when I print the dataframe after a grouping I get the following:
df_summary_trans_cs.groupby(['Date'])['sale', 'refund', 'Balance'].agg('sum')
I get all the columns, but when I add the extra column
df_summary_trans_cs.groupby(['Date'])['sale', 'refund', 'Balance', 'Trans'].agg('sum')
I only get that column; the other 3 disappear.
AI: I finally found what was wrong: it has to do with the dtype of the series. When I printed the whole dataframe the values were there, but the grouping was not being applied because the values were not numeric. I applied
df.MissingColumn = df.MissingColumn.astype(np.float64)
and this fixed the issue.
H: Hindsight Experience Replay: what the reward w.r.t. the sampled goal means Referring to the paper on Hindsight Experience Replay: is it right that sampled goals which are visited states should be followed by a positive (or non-negative) reward in order to allow an agent to learn? On page 5 of the paper, the "Algorithm 1 Hindsight Experience Replay (HER)" scheme reads in particular:
for t = 0, T-1 do
    r_t := r(s_t, a_t, g)
    Store the transition (s_t || g, a_t, r_t, s_(t+1) || g) in R
    Sample a set of additional goals for replay G := S(current episode)
    for g' ∈ G do
        r' := r(s_t, a_t, g')
        Store the transition (s_t || g', a_t, r', s_(t+1) || g') in R
    end for
end for
where:
g : current goal
R : replay buffer
All other symbols with a prime indicate that they were sampled in addition to the actual current goal within the current episode.
It means (as far as I understand) that for the sampled goals (g') the reward is now a function of the action taken in a state given the sampled goal. It is not very clear whether the agent will learn the task in case the reward is still the old function (which is non-positive for all states that are different from the final goal).
As an example, in a grid-world an agent gets -1 reward while not in the final cell of its destination, but with the new goals introduced, the agent's reward with respect to its current state is not r, but r' (reward after reaching goal).
Illustrating example (grid-world):
AI: ... the reward is now a function of action taken in state given the sampled goal.
I believe the action taken is that from the original goal, not from the newly sampled goal (as you say you understand). Otherwise I think you have everything more or less correct.
We see in the first block of the algorithm that each action $a_t$, given the current goal, g, results in the reward, $r_t$ (as usual). This is stored, along with the new state $s_{t+1}$ concatenated with the current goal (shown by the || symbol). This is highlighted as being standard experience replay.
In the second block, using the sampled (virtual) goals $g'$, we receive a virtual reward for our performance using the same action as previously, $a_t$. This is repeated for some number of simulated goals, selected by a sampling strategy, of which several are discussed in Section 4.5.
I myself was wondering how many replays are sampled, as it seems that the key there is to sample enough, so that the buffer itself sees the right balance of additional goals (to reduce the sparsity of the reward), but not so much that the virtual HER recordings from the second for-loop outnumber the real performed goal-action pairs from the first loop. In the paper (Section 4.5), this seems to be around the $k=8$ mark, where $k$ is the ratio of sampled/virtual goals to the original goals.
So I believe the sampled goals that are indeed visited states from the original goal would indeed receive a non-negative reward. I think the following is a key statement to help explain the intuition:
Notice that the goal being pursued influences the agent's actions but not the environment dynamics and therefore we can replay each trajectory with an arbitrary goal assuming we have an off-policy RL algorithm like DQN ...
This is very true in life. Imagine you try to throw a frisbee straight across a field to a friend. It doesn't make it, instead flying off to the right. Although you failed, you could learn that the wind is perhaps blowing left to right. If that had just so happened to be the task at hand, you would've received some positive reward for it!
The authors sample many additional goals, which in my analogy may be the flight dynamics of that particular frisbee, the air density/humidity etc. The main contribution of this paper is a method to increase the density of the reward function, i.e. to reduce how sparse the reward is for the model while training. Sampling these additional goals after each attempt (failed or otherwise) gives the framework the opportunity to teach the model something in each episode.
In the grid-based example, if for example the agent doesn't reach the final goal (its original goal), it records -1 to the replay buffer. Then other goals are sampled from the possible next steps according to a sampling strategy, $\mathbb{S}$. If you were close to the goal, it would make sense that, by sampling from future states selected at random from the same episode - after the transition - you would likely end up at the goal. It is important here to realise that the goal has changed, which allows reward to be received. I point this out because the goal usually doesn't change in grid-based games; however, the experiments in the paper were performed on a robotic arm with 7-DOF in continuous space (only the reward was discrete).
EDIT
Below is a sketch of an example path, where we reach the final goal after 10 transitions (blue arrows). I set $k = 4$, so in each of the states $s_t$, we also have 4 randomly selected goals. We then take the corresponding action $a_t$ for the current state, which is the blue arrow. If the randomly sampled goal, $g'$, happens to be the same as $s_{t+1}$, we get a non-negative reward - these are the orange arrows. Otherwise, a negative reward is returned: the green arrows. This is an example of the random sampling strategy, as my sampled goals $G$ are states that have been encountered in the whole training procedure (not just the current episode), even though you cannot see it in my sketch.
So here we see there are 4 sampled goals which do indeed return non-negative reward. That was chance. The authors do say that:
In the simplest version of our algorithm we replay each trajectory with the goal $m(s_T)$, i.e. the goal which is achieved in the final state of the episode.
In that case, it would mean $k=1$, and the goal is always simply where the episode ended. This would mean negative rewards in the HER portion of the algorithm for all time steps excluding the final one, $t=T$, where we would reach the sampled goal. That would indeed equate to the model having learned from an otherwise failed episode. In every single episode!
H: Decision tree where identical set of features results in different outcomes I am following the example described in this page to test my decision tree program. The initial data set is
age, astigmatism, tp-rate, contact-lenses
---------------------------------------------------
young, no, normal, soft
young, yes, reduced, none
young, yes, normal, hard
pre-presbyopic, no, reduced, none
pre-presbyopic, no, normal, soft
pre-presbyopic, yes, normal, hard
pre-presbyopic, yes, normal, none
pre-presbyopic, yes, normal, none
presbyopic, no, reduced, none
presbyopic, no, normal, none
presbyopic, yes, reduced, none
presbyopic, yes, normal, hard
age, astigmatism and tp-rate are features and the type of contact lenses is the outcome. When I trace the example (where features with lowest entropies are used as the decision nodes), we reach the following situation a little further down the page:
The blue circles are obviously mine, to make tracing easier. As we see in the constructed tree up to this point, the path [Astigmatism = yes] -> [TP-Rate = normal] -> [Age = PP (pre-presbyopic)] will lead to the outcome of [Contact-lenses = None]. But when you look at the data, we see that this path can lead to either a hard or a none type of contact lens.
So has the example made the right decision? And if not, what should we do in situations where the split data-set ends up, under an identical set of features, with more than one outcome?
AI: The task in hand can be modeled as a classification problem, meaning that based on some given features we aim to predict the 'right' class of the outcome. Once trained, most classification models, decision trees included, will predict the most probable class given a set of input features. Learning which class is the most probable is usually done in the 'fitting' step of model training.
In your example the set of features results in 2 possible classes (hard, none), meaning that there is a 50% chance of either. The classifier is just as wrong as it is right in predicting the 'none' label.
** Some decision tree implementations include an option to predict the probability for each class, and predict a class only if a specific confidence threshold is reached.
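In scikit-learn, for instance, you could inspect those per-class probabilities directly (a sketch, assuming X is a one-hot encoded feature matrix and y holds the lens labels):
from sklearn.tree import DecisionTreeClassifier

clf = DecisionTreeClassifier()
clf.fit(X, y)

# A leaf containing both 'hard' and 'none' examples yields split probabilities
# (e.g. 0.5 / 0.5); predict() then simply returns one majority class.
print(clf.predict_proba(X[:1]))
print(clf.predict(X[:1]))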
H: Please explain input shape in Keras My data consists of 2-dimensional arrays with shape (2,3). The whole dataset (emp) consists of 12 items (I know that it is too small a number for NN training, but it is just a test), so it has shape (12,2,3); truevals has shape (12,).
When I try to add a Keras layer:
model = Sequential()
model.add(Dense(4, input_shape=(2,3), activation='tanh'))
model.add(Dense(1, activation='sigmoid'))
model.compile(SGD(lr=0.5), 'binary_crossentropy', metrics=['accuracy'])
model.fit(emp, truevals, epochs=200)
and then fit the model, I get the exception:
Error when checking target: expected dense_30 to have 3 dimensions, but got array with shape (12, 1).
What am I doing wrong?
AI: Dense layers don't reduce the dimensionality of their inputs, so if you provide a (12,2,3) input, the model expects a (12,2,1) target (in your case). If you want to make it work, you should flatten your input (or the output of the first layer) so the model produces a (12,1) output.
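For example, a minimal sketch with a Flatten layer in front, keeping the rest of your code unchanged:
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.optimizers import SGD

model = Sequential()
model.add(Flatten(input_shape=(2, 3)))      # each (2, 3) sample becomes 6 values
model.add(Dense(4, activation='tanh'))
model.add(Dense(1, activation='sigmoid'))   # one value per sample -> output shape (12, 1)
model.compile(SGD(lr=0.5), 'binary_crossentropy', metrics=['accuracy'])
model.fit(emp, truevals, epochs=200)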
H: how to save deep learning model and test it after training? I have a CNN model written using TensorFlow in Python. The model classifies lung CT images (cancer/no-cancer). After training the model with training and validation data I get a reasonable accuracy; after all that, I need to test the model with test data, but I don't know how to do that. How do I save the model and use it for testing?
AI: You can find the details in this tutorial: Save CNN model
To summarize: TensorFlow variables are only alive inside a session. So, you have to save the model inside a session by calling the save method on a saver object.
import tensorflow as tf
saver = tf.train.Saver()
sess = tf.Session()
sess.run(tf.global_variables_initializer())
saver.save(sess, 'my_test_model')
For saving the model after 1000 iterations, call save by passing the step count:
saver.save(sess, 'my_test_model', global_step=1000)
To use a pre-trained model for fine-tuning:
with tf.Session() as sess:
    saver = tf.train.import_meta_graph('my-model-1000.meta')
    saver.restore(sess, tf.train.latest_checkpoint('./'))
    print(sess.run('w1:0'))
    # Model has been restored. The statement above will print the saved value of w1.
To add more operations to the graph (e.g. more layers) and then train it:
sess = tf.Session()
# First let's load the meta graph and restore weights
saver = tf.train.import_meta_graph('my_test_model-1000.meta')
saver.restore(sess, tf.train.latest_checkpoint('./'))
# Now, let's access the placeholder variables and
# create a feed-dict to feed new data
graph = tf.get_default_graph()
w1 = graph.get_tensor_by_name("w1:0")
w2 = graph.get_tensor_by_name("w2:0")
feed_dict = {w1: 13.0, w2: 17.0}
# Now, access the op that you want to run.
op_to_restore = graph.get_tensor_by_name("op_to_restore:0")
# Add more to the current graph
add_on_op = tf.multiply(op_to_restore, 2)
print(sess.run(add_on_op, feed_dict))  # This will print 120.
H: Interpreting the Root Mean Squared Error (RMSE)! I read all about the pros and cons of RMSE vs. other absolute errors, namely mean absolute error (MAE). See the following references:
MAE and RMSE — Which Metric is Better?
What's the bottom line? How to compare models
Or this nice blogpost, or this question in stats.stackexchange containing interesting responses, and this one in datascience.stackexchange
Still I can not get my head around something about RMSE:
Scenario: Let's say we have a regressor for predicting house prices with a MAE of 20.5\$ and a RMSE of 24.5\$. Based on MAE, I can certainly interpret that the average difference between the predicted and the actual price is 20.5\$. How can I interpret RMSE? Can we still safely say the predicted and the actual price are off by 24.5\$ at the same time based on RMSE (upper bound of prediction error)?
In the first medium post, it says: RMSE does not describe average error alone and has other implications that are more difficult to tease out and understand.
It confuses me a little, and I could not find any reliable reference to clearly state that one can safely interpret RMSE as one does MAE. Is RMSE simply mathematically more convenient for optimization etc., and are we better off with MAE for interpretation? Any detailed explanation is highly appreciated.
AI: How can I interpret RMSE?
RMSE is exactly what its definition says: $24.5 is the square root of the average of the squared differences between your predictions and your actual observations. Taking squared differences is more common than absolute differences in statistics, as you might have learnt from classical linear regression.
It confuses me a little. And I could not find any reliable reference to also clearly state that one can safely interpret RMSE as one does MAE. Is RMSE simply mathematically more convenient for optimization etc., and are we better off with MAE for interpretation?
I think this post should help you. I'll answer your questions directly:
RMSE is easier mathematically and also practically. Have you heard of derivatives? The derivative for MAE is undefined when your prediction is equal to the observation, but it is well defined everywhere for RMSE. In machine learning, a well defined gradient function is generally better.
Both RMSE and MAE are useful, but they are two very different metrics. In regression, it's generally about choosing between linear regression and quantile regression. They are two very different models! As stated in the link, if you don't want large residuals to affect your model too much, MAE could be better. Otherwise, if your data set is well behaved (not many large residuals), RMSE could be better. There is no right or wrong on which one is better. Think of them as two different approaches to modelling.
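A tiny numerical illustration of how the two metrics react to one large error (these residuals are just made-up dollar values):
import numpy as np

errors = np.array([10, 20, 20, 30, 30])
print(np.mean(np.abs(errors)))                # MAE  = 22.0
print(np.sqrt(np.mean(errors ** 2)))          # RMSE ~ 23.2

errors_outlier = np.array([10, 20, 20, 30, 120])
print(np.mean(np.abs(errors_outlier)))        # MAE  = 40.0
print(np.sqrt(np.mean(errors_outlier ** 2)))  # RMSE ~ 56.9, one big error dominates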
H: Knowledge of statistics To be a data scientist I am an undergraduate student pursuing my BSc in Maths. I have taken statistics classes in first year. I want to know how much statistics I need to know to get a decent place in the data science world. Also, please provide me with suggestions on where to begin if I start my preparation now.
AI: Very good question! Statistics is the main core. Not only a theoretical understanding but also the practical use of statistical concepts is very important (Applied Statistics). It also helps to work with statistical software to get some idea of how they work and what the usual inputs, outputs and visualizations are.
Even more important are the main fields of Data Science, namely Data Mining, Machine Learning and Pattern Recognition. They are all built on statistics (try to get a course on Statistical Learning Theory to get some ideas).
After statistics, computer knowledge in general is important. Get to know the concepts of programming (what a good program is and why, which language is proper for which task, what an algorithm is and how you should evaluate, produce or read one, etc.).
In Data Science, reporting the results is important. Get to know some visualization techniques/libraries which are handy and commonly used.
Then you come to the concept of databases and, even more generally, "What is data in the real world?" Try to get your hands dirty with some SQL-like query languages on toy examples to learn what the data is and how you load, transform or extract it.
Last but not least are Machine Learning/Data Mining/Pattern Recognition skills. I recommend an MSc in this field (it's not necessary, but my personal experience as an interviewer shows that people with this background usually stand higher).
These are the main things that came to my mind. Please note that this is just my opinion and based on my experience. There are more details that I skipped to keep the answer as generally valid as possible. Good luck!
H: Adding the input layer - units with a decimal I took the course Machine Learning A-Z from Udemy and am trying to apply what I learned in the tutorials. They taught us in the "Adding the input layer" portion of an ANN that the number of units is based on the input_dim. Normally "units = (input_dim + 1) / 2". In the dataset that I am working with, my input_dim=754. (754 + 1) / 2 = 377.5. Should I use 377.5 or should I round up or down to a whole number?
#Adding the input layer and the first hidden layer
classifier.add(Dense(units=377.5, kernel_initializer='uniform', activation='relu', input_dim=754))
AI: Round up or down to a whole number. Keras documentation specifies that units should be a positive integer, and I'm not sure what a fractional unit would even mean. Does this work when you try to run it?
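For example, just rounding the rule-of-thumb (either 377 or 378 is fine):
input_dim = 754
units = (input_dim + 1) // 2   # 377; int(round(...)) or math.ceil(...) work too

classifier.add(Dense(units=units, kernel_initializer='uniform',
                     activation='relu', input_dim=input_dim))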
H: CNN to learn and visualize 2d features In the paper Large-Margin Softmax Loss for Convolutional Neural Networks the author has a figure as below: He is claiming that he is using only 2d features to classify MNIST with a CNN. How is he doing that? Is he reducing the dimension of MNIST images to 2 and how? How do you visualize such figures in 2D if the classification is for 10 classes? AI: I think I got my answer from the following post: http://unixnme.blogspot.com/2018/04/dimensionality-reduction-and-scattered.html
H: Why does adding a dropout layer improve deep/machine learning performance, given that dropout suppresses some neurons from the model? If removing some neurons results in a better performing model, why not use a simpler neural network with fewer layers and fewer neurons in the first place? Why build a bigger, more complicated model in the beginning and suppress parts of it later? AI: The function of dropout is to increase the robustness of the model and also to remove any simple dependencies between the neurons. Neurons are only removed for a single pass forward and backward through the network - meaning their weights are synthetically set to zero for that pass, and so their errors are as well, meaning that the weights are not updated. Dropout also works as a form of regularisation, as it is penalising the model for its complexity, somewhat. I would recommend having a read of the Dropout section in Michael Nielsen's Deep Learning book (freely available), which gives nice intuition and also has very helpful interactive diagrams/explanations. He explains that: Dropout is a radically different technique for regularization. Unlike L1 and L2 regularization, dropout doesn't rely on modifying the cost function. Instead, in dropout we modify the network itself. Here is a nice summary article. From that article: Some Observations: Dropout forces a neural network to learn more robust features that are useful in conjunction with many different random subsets of the other neurons. Dropout roughly doubles the number of iterations required to converge. However, training time for each epoch is less. With H hidden units, each of which can be dropped, we have 2^H possible models. In testing phase, the entire network is considered and each activation is reduced by a factor p. Example Imagine I ask you to make me a cup of tea - you might always use your right hand to pour the water, your left eye to measure the level of water and then your right hand again to stir the tea with a spoon. This would mean your left hand and right eye serve little purpose. Using dropout would e.g. tie your right hand behind your back - forcing you to use your left hand. Now after making me 20 cups of tea, with either one eye or one hand taken out of action, you are better trained at using everything available. Maybe you will later be forced to make tea in a tiny kitchen, where it is only possible to use the kettle with your left arm... and after using dropout, you have experience doing that! You have become more robust to unseen data.
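In Keras, for instance, adding dropout is a one-line change per layer (a generic sketch, not tied to any particular dataset):
from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(128, activation='relu', input_dim=20))
model.add(Dropout(0.5))      # on each training pass, roughly half the activations are zeroed
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
Dropout is only active during training; at test time the full network is used automatically.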
H: NLP - How to perform semantic analysis? I'd like to perform a textual/sentiment analysis. I was able to analyse samples with 3 labels (positive, neutral, negative) and I used algorithms such as SVM, Random Forest, Logistic Regression and Gradient Boosting. My script works correctly and with cross validation I can take the best algorithm among the 4. I use supervised algorithms with the Python class "CountVectorizer".
But my boss typed "NLP" on the internet and looked at some articles. He told me: "These 3 outputs are not enough, I want a complete semantic analysis that can explain the global meaning of the sentence". He didn't seem to have a preference between supervised and unsupervised algorithms. He told me that he wanted an algorithm able to tell that "The company president is behind bars" is equivalent to "the CEO is in jail".
So do you have any idea how one could perform that? And how to implement it in Python? I guess we need a great database full of words. I know this is not a very specific question but I'd like to present him all the solutions. What scares me is that he doesn't seem to know a lot about it; for example he told me "you have to reduce the high dimension of your dataset", while my dataset is just 2000 text fields. Thank you very much for your answers :)
AI: With your three labels: positive, neutral or negative - it seems you are talking more about sentiment analysis. This answers the question: what are the emotions of the person who wrote this piece of text?
Semantic analysis is a larger term, meaning to analyse the meaning contained within text, not just the sentiment. It looks for relationships among the words, how they are combined and how often certain words appear together. To gain a deeper insight into your text, you could read about topics such as:
Semantic Analysis in general might refer to your starting point, where you parse a sentence to understand and label the various parts of speech (POS). A tool for this in Python is spaCy, which works very nicely and also provides visualisations to show to your boss.
Named Entity Recognition (NER) - finding parts of speech (POS) that refer to an entity and linking them to pronouns appearing later in the text. An example is to distinguish between Apple the company, and apple the fruit.
Embeddings - finding latent representations of individual words e.g. using Word2Vec. Text is processed to produce a single embedding for individual words in the form of an n-dimensional vector. You can then compute similarity measures (e.g. cosine similarity) between the vectors for certain words to analyse how they are related.
Lemmatisation - this method reduces many forms of words to their base forms, which means they appear more regularly and we don't consider e.g. verb conjugations as separate words. As an example, tracking, tracked and tracker might all be reduced to the base form: track.
Your next step could be to search for blogs and introductions to any of those terms I mentioned. Here is an example parse-tree from spaCy:
Reducing dimensions
This is something that would then refer to the vectors which describe each of your words. Generally, the Word2Vec vectors are something like 300-dimensional. You might want to visualise the words, plotting them in 2d space. You can try a method like t-SNE, which will map the 300d vectors to 2d space, allowing nice plots showing relationships, while retaining as much as possible of the original relationships described in the 300d space.
There will, of course, be some information loss, but you could not have visualised the 300d vectors in the first place! Using the vectors for your words, you can compute similarities (on a scale between 0 and 1); the similarity between president and CEO comes out at something like 0.92 - meaning they are almost synonyms!
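As a small illustration with spaCy (this needs a model that ships with word vectors, e.g. en_core_web_md; the exact similarity values will depend on the model you load):
import spacy

nlp = spacy.load('en_core_web_md')   # python -m spacy download en_core_web_md

doc1 = nlp("The company president is behind bars")
doc2 = nlp("The CEO is in jail")

print(doc1.similarity(doc2))                           # document-level similarity
print(nlp("president")[0].similarity(nlp("CEO")[0]))   # word-level similarity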
H: finding start and end time of each session id in r Suppose I have a dataframe
> data
session id   timestamp                 item id
1            2014-04-06 18:42:05.822   1
1            2014-04-06 18:42:06.800   1
1            2014-04-06 18:42:06.820   1
2            2014-04-03 15:27:48.118   1
2            2014-04-03 15:27:49.440   2
3            2014-04-03 15:27:49.550   1
3            2014-04-03 15:27:50.240   0
3            2014-04-03 15:27:50.540   3
3            2014-04-03 15:27:51.530   2
and I want to find each session's start and end time and the unique items in each session, meaning I want output like this
> result
session id   session start and end time                          distinct items in each session
1            2014-04-06 18:42:05.822, 2014-04-06 18:42:06.820    1
2            2014-04-03 15:27:48.118, 2014-04-03 15:27:49.440    2
3            2014-04-03 15:27:49.550, 2014-04-03 15:27:51.530    4
How can I do this?
AI: Assuming you have access to the dplyr package, you can do the following.
data %>%
  group_by(session_id) %>%
  summarise(start_time = min(timestamp),
            end_time = max(timestamp),
            unique_items = length(unique(item_id)))
H: CNN, which layer to choose for a similarity measure I built a model (InceptionResnet v2) to classify images and I would like to use it to measure similarity between objects. One way to measure that similarity is to grab an intermediate layer's output for each of two images and look at the distance between them. I don't know how to choose the layer; my last layers are:
Conv -> Flatten(98304,) -> Dense(256,) -> Dense (softmax)
I hesitate between Flatten and the first Dense - how do I choose?
AI: For similarity computations, you should generally prefer the last dense layer before the softmax. This layer will generally capture the highest-level representation of the images. It is also preferable, for efficiency reasons, to keep the size of the vector as small as possible when similarities are to be computed.
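A minimal Keras sketch of extracting that layer as an embedding (assuming your trained model is called base and the 256-unit Dense layer happens to be named 'dense_256' - check base.summary() for the real name):
from keras.models import Model
from scipy.spatial.distance import cosine

feature_extractor = Model(inputs=base.input,
                          outputs=base.get_layer('dense_256').output)

emb_a = feature_extractor.predict(img_a)   # shape (1, 256)
emb_b = feature_extractor.predict(img_b)

similarity = 1 - cosine(emb_a.ravel(), emb_b.ravel())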
H: Keras backend function equivalent for str.format How to perform the following function using Keras backend? "{0:b}".format(37) AI: Why do you need that? You shouldn't need to do anything like that using the backend, as Keras will take strings as arguments, or you can use a regular print function. There are certain special formatting options in Keras, for example, saving model weights during training, inserting the current epoch and validation loss into the name to help give them meaning. This is shown in the documentation for the ModelCheckpoint callback If you really want to do something special, the Tensorflow backend does have a tf.Print function. I am not sure if you can call this directly from keras.backend.Print or whether you need to directly use tensorflow, with tf.Print - either way it would be the same operation being performed.
H: Topic Segmentation - should it be done in Raw, TfIdf or Semantic Space? Let's assume we have a collection of documents and wish to perform some unsupervised topic segmentation. As always, we will perform some preprocessing (including tokenization, accent-removal, lowercasing, lemmatizing and such) and transform the lists of tokens into either raw counts or tfidf vectors. We'll call this matrix M. Now we have several possible approaches to perform some simple bag-of-words topic segmentation:
Apply a matrix decomposition method (LSI, LDA, NMF) directly to M and use the resulting components as the topics.
Embed each vector of M into a semantic space (LSI, word2vec) and then apply a matrix decomposition method on the semantic space.
Apply a clustering method (kM, DBSCAN, MSC, GMM) directly to M.
Embed each vector of M into a semantic space and then apply a clustering method on the semantic space.
I have two questions:
Are there any other alternatives to bag-of-words topic segmentation that I have not considered yet?
What are the conceptual differences between the methods described above and which one(s) are recommended?
Thanks in advance!
AI: A clustering method can be applied directly to the TfIdf matrix (which will generally be sparse) or to the documents embedded in a derived semantic space like LSI. Since the LSI embedding is a dimensionality/noise reduction step, it is a good idea to cluster documents embedded in the semantic space. Clustering methods will generally use Euclidean distance and will probably perform better on a dense matrix. If the clustering algorithm implementation that you use doesn't support sparse data out of the box (most implementations don't), then definitely use a transformation like LSI first and then apply clustering.
The LSI method itself is a matrix decomposition operation on the original TfIdf matrix, so applying matrix decomposition on top of LSI is not needed.
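For reference, a minimal scikit-learn sketch of the "TfIdf -> LSI -> clustering" route (the document list and cluster count are placeholders):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import Normalizer
from sklearn.pipeline import make_pipeline
from sklearn.cluster import KMeans

docs = [...]                                     # your preprocessed documents
M = TfidfVectorizer().fit_transform(docs)        # sparse TfIdf matrix

lsi = make_pipeline(TruncatedSVD(n_components=100),  # LSI: dense semantic space
                    Normalizer(copy=False))
X = lsi.fit_transform(M)

labels = KMeans(n_clusters=10, random_state=0).fit_predict(X)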
H: Is it a good idea to normalize the outputs of a Neural Network for Regression, when the different outputs vary in magnitude? I understand that it is not necessary to scale the output of a neural network when I predict a single value via regression. Is it necessary to normalize the outputs of my neural network if I have multiple outputs that vary in magnitude between 10^-2 and 10^4? My intuition tells me that the loss function might ignore the smaller values and only focus on the values of a bigger magnitude.
AI: Yes. If proper weights are not introduced in the cost function, or the target variables are not normalized, the optimization process will be driven by the target variable with the largest scale. It is therefore good practice to normalize the outputs if they have huge differences in scale.
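A small sketch of scaling the targets with scikit-learn and undoing the scaling for the final predictions (model stands for whatever regressor or Keras model you already have):
from sklearn.preprocessing import StandardScaler

y_scaler = StandardScaler()
y_train_scaled = y_scaler.fit_transform(y_train)   # y_train shape: (n_samples, n_outputs)

model.fit(X_train, y_train_scaled)

# back to the original units for evaluation
y_pred = y_scaler.inverse_transform(model.predict(X_test))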
H: Logistic Regression or regression SVM for probability of outcome I am working on a prediction question: what's the probability of Y = 1 given a number of features? The output Y values I have for training are binary. In this case, should the prediction be treated as classification or regression? Would logistic regression that returns the probability be suited, or the probability output of an SVM, e.g., this?
AI: Applying regression methods would be of little use here, as we usually feed binary (or multi-class) categories as the target variable in classification problems. However, if you had somehow obtained probability values for each sample, you could perform regression as well - I can't think of such a rare case, though.
If you are looking for prediction probabilities, Logistic Regression is the best call. Having some experience with sklearn, I can tell that SVM, by default, does not compute probabilities. If you wish to obtain those prediction probabilities, SVM uses another set of algorithms, which causes a serious slowdown in the process and extra computational cost.
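A small sklearn sketch of both options (X_train, y_train and X_test are your own data):
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

logreg = LogisticRegression().fit(X_train, y_train)
p_logreg = logreg.predict_proba(X_test)[:, 1]      # P(Y = 1) directly

# SVC only returns probabilities if asked; it fits an extra calibration
# (Platt scaling) step internally, which slows training down.
svm = SVC(probability=True).fit(X_train, y_train)
p_svm = svm.predict_proba(X_test)[:, 1]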
H: calculating average of each session by unique itemID i have a data frame > data session id item id 1 1 1 1 1 1 2 1 2 2 3 1 3 0 3 3 3 2 and i want to calculate avg of each session id like this uniqueitemsinsession/totalitems= ans (1/3=0.333)and get result like this > result session id avg 1 0.333 2 1.000 3 1.000 how can i do this?? AI: Assuming you can use dplyr package data %>% group_by(session_id) %>% summarise(avg = length(unique(item_id))/length(item_id)) You have asked many questions on basic grouping and summarise operations in R. It would be better to go through tutorials like this to learn all these tricks in one go.
H: How can ReLU ever fit the curve of x²? As far as I understand (pardon me if I am wrong) the activation functions in a neural network go through the following transformations: multiplication of x by constants (weights) ($f(ax)$, with $f(x)$ being the activation function), and recursive substitution $f(f(x))$. Now with the above transformations a ReLU activation function should never be able to fit an x² curve exactly. It can approximate, but as the input grows the error of that approximated function will also grow exponentially, right? Now x² is a simple curve. How can ReLU perform better for real data which will be way more complicated than x²? I am new to machine learning. So please pardon me if there are any blunders in anything I am assuming.
AI: It can approximate, but as the input grows the error of that approximated function will also grow exponentially, right?
You are right, the neural network uses ReLU functions to approximate the output $f(x) = x^2$ and there is some error incurred, but you are forgetting that:
Non-linear activation functions also only approximate, since we use only some subset of activation functions and incur some error, which can be just as large as with ReLUs.
There is also the number of neurons to consider. The more neurons/layers you have, the better the approximation can be (since more small-interval ReLUs are fit to better approach the shape of $x^2$). If the range of your input grows, you should also consider increasing the number of neurons/layers; in this way the approximation can always stay close enough.
Now x² is a simple curve. How can ReLU perform better for real data which will be way more complicated than x²?
The main reason for this is that even though other activation functions may approximate $x^2$ or something else better, they are not optimised as fast. Generally speaking, the loss surface created by non-linear activation functions is more complex than the loss created by ReLUs and can contain a better global minimum, but this minimum is harder to find. Thus, with the less complicated loss surface created by ReLUs, we can find a better optimum using some gradient descent procedure, and it will be much faster.
H: What is wrong with this reinforcement learning environment ? I'm working on below reinforcement learning problem: I have bottle of fix capacity (say 5 liters). At the bottom of bottle there is cock to remove water. The distribution of removal of water is not fixed. we can remove any amount of water from bottle, i.e. any continuous value between [0, 5]. At the top of the bottle one tap is mounted to fill water in the bottle. RL agent can fill [0, 1, 2, 3, 4] liters in the bottle. Initial bottle level is any value between [0, 5]. I want to train the agent in this environment to get optimal sequence of actions such that bottle will not get empty and overflow which implies continuous supply of water demand. Action space = [0, 1, 2, 3, 4] Discrete Space Observation Space = [0, Capacity of Bottle] i.e. [0, 5] Continuous Space Reward logic = if bottle empty due to action give negative rewards; if bottle overflow due to action give negative rewards I have decided to use python to create an environment. from gym import spaces import numpy as np class WaterEnv(): def __init__(self, BottleCapacity = 5): ## CONSTANTS self.MinLevel = 0 # minimum water level self.BottleCapacity = BottleCapacity # bottle capacity # action space self.action_space = spaces.Discrete(self.BottleCapacity) # observation space self.observation_space = spaces.Box(low=self.MinLevel, high=self.BottleCapacity, shape=(1,)) # initial bottle level self.initBlevel = self.observation_space.sample() def step(self, action): # water qty to remove WaterRemoveQty = np.random.uniform(self.MinLevel, self.BottleCapacity, 1) # updated water level after removal of water UpdatedWaterLevel = (self.initBlevel - WaterRemoveQty) # add water - action taken UpdatedWaterLevel_ = UpdatedWaterLevel + action if UpdatedWaterLevel_ <= self.MinLevel: reward = -1 done = True elif UpdatedWaterLevel_ > self.BottleCapacity: reward = -1 done = True else: reward = 0.5 done = False return UpdatedWaterLevel_, reward, done def reset(self): """ Reset the initial bottle value """ self.initBlevel = self.observation_space.sample() return self.initBlevel import random from collections import deque from keras.models import Sequential from keras.layers import Dense from keras.optimizers import sgd class DQNAgent: def __init__(self, state_size, action_size): self.state_size = state_size self.action_size = action_size self.memory = deque(maxlen=2000) # memory size self.gamma = 0.99 # discount rate self.epsilon = 1.0 # exploration rate self.epsilon_min = 0.01 # minmun exploration rate self.epsilon_decay = 0.99 # exploration decay self.learning_rate = 0.001 # learning rate self.model = self._build_model() def _build_model(self): # Neural Net for Deep-Q learning Model model = Sequential() model.add(Dense(256, input_dim=self.state_size, activation='relu')) model.add(Dense(256, activation='relu')) model.add(Dense(self.action_size, activation='linear')) model.compile(loss='mse', optimizer=sgd(lr=self.learning_rate)) return model def remember(self, state, action, reward, next_state, done): self.memory.append((state, action, reward, next_state, done)) def act(self, state): if np.random.rand() <= self.epsilon: return random.randrange(self.action_size) act_values = self.model.predict(state) return np.argmax(act_values[0]) # returns action def replay(self, batch_size): minibatch = random.sample(self.memory, batch_size) for state, action, reward, next_state, done in minibatch: target = reward if not done: target = (reward + self.gamma * np.amax(self.model.predict(next_state)[0])) 
target_f = self.model.predict(state) target_f[0][action] = target self.model.fit(state, target_f, epochs=1, verbose=0) if self.epsilon > self.epsilon_min: self.epsilon *= self.epsilon_decay # create iSilo enviroment object env = WaterEnv() state_size = env.observation_space.shape[0] action_size = env.action_space.n minibatch = 32 # Initialize agent agent = DQNAgent(state_size, action_size) done = False lReward = [] # carry the reward upto end of simulation rewardAll = 0 XArray = [] # carry the actions upto end of simulation EPOCHS = 1000 for e in range(EPOCHS): #state = np.reshape(state, [1, 1]) # reset state in the beginning of each epoch state = env.reset() time_t = 0 rewardAll = 0 while True: # Decide action #state = np.reshape(state, [1, 1]) action = agent.act(state) next_state,reward, done = env.step(action) #reward = reward if not done else -10 # Remember the previous state, action, reward, and done #next_state = np.reshape(next_state, [1, state_size]) agent.remember(state, action, reward, next_state, done) # remembering the action for perfrormace check XArray.append(action) # Assign next_state the new current state for the next frame. state = next_state if done: print(" episode: {}/{}, score: {}, e: {:.2}" .format(e, EPOCHS, time_t, agent.epsilon)) break rewardAll += reward # experience and reply if len(agent.memory) > minibatch: agent.replay(minibatch) lReward.append(rewardAll) # append the rewards After running the 1000 epoch, I observed that agent has not learned anything. Unable to find out whats going wrong. AI: I can see two issues: Your environment is not tracking changes to state, just random success/fail based on self.initBlevel which is never modified to reflect changes. Although you calculate and return the new state (as variable UpdatedWaterLevel_), this is not fed back into the system. You store it as the next "state" in the DQN replay table, but don't actually store it in the environment as the current state. You should do that - without it the replay table will be filled with incorrect values. There should be a variable that the environment has access to which represents the current state. You are running the system as an episodic problem, but do not re-set the environment for the start of a new episode. This is a "hidden" bug due to the issue above, but would immediately become a problem for you if you let the state go outside the bounds of the problem you have defined. Given the problem setup, I would expect the agent to learn to always fill the container to the maximum possible capacity (and it would then get drained by the amount of the random request). That would lead to infinitely long episodes, so you do still need discounting. Possibly your NN is over-complex for this simple task, which could make learning slower. But that's harder to tell. The relationship to expected future discounted reward based on current state and action is complex, so you might need a moderate size of network to capture that.
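To address the first point, here is a minimal sketch of how the environment could track its own state (keeping the rest of your class unchanged; self.WaterLevel is just an attribute name chosen here for clarity):
def reset(self):
    # keep the current level as the environment's state
    self.WaterLevel = self.observation_space.sample()
    return self.WaterLevel

def step(self, action):
    # the random demand drains the *current* level, not the initial one
    WaterRemoveQty = np.random.uniform(self.MinLevel, self.BottleCapacity, 1)
    self.WaterLevel = self.WaterLevel - WaterRemoveQty + action
    if self.WaterLevel <= self.MinLevel or self.WaterLevel > self.BottleCapacity:
        reward, done = -1, True
    else:
        reward, done = 0.5, False
    return self.WaterLevel, reward, done
With this change, the state stored in the replay buffer actually corresponds to the transitions the agent experiences, and calling env.reset() at the start of each episode gives a fresh starting level.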
H: How to auto tag texts Suppose we have a predefined list of tags Tag #1, Tag #2, ..., Tag #N and we want to assign tags to sample texts based on keywords or semantic analysis. A text can have multiple tags. I don't want to treat it as a classification problem with each tag as a separate class. What algorithm can we use here? Any suggestions would be of great help.
AI: This is a multi-label classification task: several labels can be assigned to each text. You can do this using an LSTM with sigmoid functions in the output layer, where the number of output units is equal to the number of your tags.
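A minimal Keras sketch of the output side of such a model (a dense network on a fixed-size text representation is used here instead of an LSTM, just to show the multi-label part; X is e.g. a dense feature matrix and y an (n_texts, n_tags) indicator matrix):
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(128, activation='relu', input_dim=X.shape[1]))
model.add(Dense(n_tags, activation='sigmoid'))   # one independent probability per tag
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(X, y, epochs=10)

# assign every tag whose predicted probability exceeds a threshold, e.g. 0.5
predicted_tags = (model.predict(X_new) > 0.5).astype(int)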
H: Predicting Composition of Chemical Compounds I have a dataset which has names of compounds and their compositions. Like below Sulphuric Acid=>[H,S,O] (Hydrogen, sulphur, oxygen) Oxalic Acid=>[H,C,O] Sodium Oxalate=>[Na,C,O] Potassium Sulphate=>[K,S,O] ... Now I would need to train a model which can tell me Sodium Sulphate as [Na,S,O]. Note that sodium sulphate represents something not in training dataset. I have tried searching for possible ideas but nothing came up. Then I thought it could be hierarchical clustering like Sodium-Oxalate | | Oxalate Sodium | | | C O Na But then in hierarchical clustering the base/leafs are different. But here they are shared. Its like a graph. So What machine learning algorithm can help? Any other clustering? NLP/word clustering (if yes How)? Another approach I could think of is like Word2Vec where I generate embeddings for each word (C,H,Sodium), all will have embeddings. And based on what is closer to the word I am asking I will give output. But this needs huge amount of data. I only have around 1k common compounds. And the approach won't generalize to any problem like this with less data. AI: Try a one-hot encoding of the elements in your training set (H, C, I etc.) and the same for the chemical descriptors ('acid', 'oxalic' etc.). Then try feeding the data to a simple feed-forward neural network, mapping the one-hot encodings of the descriptors as your x_train and x_val to the chemical elements, which are your y_train and y_val. For the encodings, take a look at the Scikit learn encoders. I think such a simple approach may work, as the chemical names are logically named, so any experienced chemist knows exactly which elements to expect in a compound, given its name. Your problem doesn't require further information in the predictions, e.g. the actual chemical makeup, like $CH_3CH_2OH$ (ethanol). Regarding a small amount of data: look into something like K-fold cross validation. Using this, you select some portion of the data to be your validation data, and train the model. You then repeat this process, selection a different portion of the data. This will help make the most of a limited dataset, although it may introduce overfitting because your model will eventually have seen all the data! Here is a schematic of this method: Scikit Learn has a class that implements it for you.
H: K-fold crossvalidation: how do MSE average and variance vary with K? I'd like to get an intuition about how varying k impacts k-fold validation. Is the following right?
Average of the OOS MSEs should generally decrease with k, because a bigger "k" means the training sets are larger, so we have more data to fit the model (assuming we are using the "right" model).
Variance of the OOS MSEs should generally increase with k. A bigger "k" means having more validation sets, so we have more individual MSEs to average out. Since the MSEs of many small folds will be more spread out than the MSEs of few large folds, variance will be higher.
AI: Average of the OOS MSEs should generally decrease as k increases.
This is right, but the difference is much less than on your chart. Suppose we have a dataset where the error halves if we increase the data 10 times (approximately true for the paper Scaling to Very Very Large Corpora for Natural Language Disambiguation). Then the difference between 5-fold and 20-fold validation will be about 5% (1/(2^log10(0.95/0.8))), not halving like on your graph. And the difference between 20-fold and infinity-fold will be only about 1.5% (1/(2^log10(1/0.95))).
For the chart you could use the formula: Average OOS MSE = 1/(2^log10(1-1/k))*MSE_inf. This assumes that you have MSE = MSE_inf at infinity.
Variance of the OOS MSEs should generally increase as k increases.
MSE is an average, and according to the Central Limit Theorem (if squared errors (SE) are independent and identically distributed, which, in my opinion, is assumed for most machine learning algorithms) the variance should equal Var(SE)/N, where N is the number of data points used to calculate MSE. So for 5-fold you will have variance Var(SE)/(Npop/5), where Npop is the total number of points that you have. For the average MSE over all k folds the variance will be the same and equal to Var(SE)/Npop.
So the answer is that the variance of the individual MSE numbers of each k-fold increases with k, but the variance of the final average MSE does not depend on the number of folds.
To calculate the variance of the final MSE based on the MSE of the folds: Var(MSE_final) = Mean(Var(MSE_folds))/k = Sum((MSE_folds - Mean(MSE_folds))^2)/k^2
MSE change based on number of folds (here MSE at infinity is assumed 0.05).
Variance of individual k-fold MSEs based on number of folds (here Var(squared error for one point) is assumed 10 and the number of observations is 1000).
H: How to use Cross Entropy loss in pytorch for binary prediction? In the pytorch docs, it says for cross entropy loss: input has to be a Tensor of size (minibatch, C) Does this mean that for binary (0,1) prediction, the input must be converted into an (N,2) tensor where the second dimension is equal to (1-p)? So for instance if I predict 0.75 for a class with target 1 (true), would I have to stack two values (0.75; 0.25) on top of each other as input? AI: Actually there is no need for that. PyTorch has BCELoss which stands for Binary Cross Entropy Loss. Please check out original documentation here. Here is a quick example: m = nn.Sigmoid() # initialize sigmoid layer loss = nn.BCELoss() # initialize loss function input = torch.randn(3, requires_grad=True) # give some random input target = torch.empty(3).random_(2) # create some ground truth values output = loss(m(input), target) # forward pass output.backward() # backward pass
H: When memory errors occur with model.fit(), is it due to GPU RAM or normal RAM? With respect to this question, https://stackoverflow.com/questions/51895278/how-to-know-when-to-use-fit-generator-in-keras-when-training-data-gets-too-big when memory errors are reported due to excessively large training data, are the memory errors caused by lack of normal PC RAM or lack of GPU RAM? I would like to know which one so that I can buy the right kind of RAM. Keras was configured to use GPU for training. AI: Yes. The error is for GPU memory. You should look at training in batches options if you haven't already. This thread has short explanation on possible benefits and implications. Quick batch size tutorial
H: High RMSE and MAE and low MAPE I have used a few regression models on the same dataset and obtained error metrics for them as shown below, The RMSE(Root Mean Squared Error) and MAE(Mean Absolute Error) for model A is lower than that of model B where the R2 score is higher in model A. According to my knowledge this means that model A provides better predictions than model B. But when considering the MAPE (Mean Absolute Percentage Error) model B seems to have a lower value than model A. I would really appreciate it if someone could explain why it is so. Thanks in advance. AI: The reason is the wider range of your output variable. Consider the following two cases, Real value was 99, prediction is 101 Real value was 5520, prediction is 5522 In both cases, the absolute error is 2, but relative error in first case is much larger (2% - 2/101) than second case (0.035% 2/5520). Absolute and relative metrics are measuring different aspects of the prediction. So one model is not better than the other in absolute sense (pun intended). Which metric to value depends on your application. When the outcome range is wide (probably your case) and skewed, relative error measurements are better than absolute error measurements.
H: splitting of training examples into the mini batch: what to do with the rest tiny mini-batch? Lets assume I have 103 training examples. I want a mini-batch to be of the size 16. That means that there will be 6 mini-batches of the size 16 and one mini-batch of the size 7. In the tensor flow one needs to specify the shape of the input: x = tf.placeholder(tf.int32, shape=[batch_size], name='x') which of course led to the following error: Cannot feed value of shape (7,) for Tensor 'x_10:0', which has shape '(16,)' So, what do I do with the mini-batch of size 7? Should I find the corresponding mini-batch size, that will create equal mini-batches? If so, how can I follow then the advice to create mini-batches of the power of two? or I should disregard the last mini batch? AI: A few basic approaches come to mind. It depends on your context and intention which might work best for you. Here are some ideas assuming that working with a partial batch is not an option: Drop the remainder. This might make more sense when you have many complete batches, and you consider the set of examples to be enough. Complement the final batch with duplicated samples from the other batches. E.g. randomly get one example from each batch. If something like that is feasible at all, generate additional (possibly synthetic) examples to complete the final batch.
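For illustration, a small NumPy sketch of the first two ideas above (data is assumed to be an array whose first axis indexes training examples):
import numpy as np

def iterate_minibatches(data, batch_size=16, drop_remainder=True):
    np.random.shuffle(data)
    n_full = len(data) // batch_size              # 103 // 16 = 6 full batches
    for i in range(n_full):
        yield data[i * batch_size:(i + 1) * batch_size]
    rest = data[n_full * batch_size:]             # the 7 leftover examples
    if len(rest) > 0 and not drop_remainder:
        # pad the last batch with examples re-sampled from the full batches
        pad_idx = np.random.choice(n_full * batch_size, batch_size - len(rest))
        yield np.concatenate([rest, data[pad_idx]])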
H: How does action get selected in a Policy Gradient Method? As I understood, in Reinforcement Learning a big difference between a value-based method and a policy-gradient method is how the next action is selected.
In Q-learning (a value-based method), each possible action gets a score. We then select the next action that has the highest score, with a slight possibility of selecting any other action at random.
How does a policy gradient method decide upon an action? We are no longer allowed to select the highest-scored action, as this will leave us with a non-differentiable policy that can't be improved. The actions still have scores - do we select them at random ...using the "cumulative distribution function"? Also, am I using the right term? In other words, do actions that have a higher score occupy more space and are more likely to be selected?
Edit: there is a really awesome lecture about policy grads. Unfortunately the video is deliberately unlisted, so that normal people can't really get to it - but I am for free education, so here it is: CS294-112 9/6/17
AI: You are on the right track. We no longer select an action that we think maximizes the score. Rather we predict what the best action to take is. This can be very effective in large or continuous state spaces where taking an argmax of the score over all possible actions would be prohibitively expensive.
Our policy network computes the probability of taking an action, and during training we sample actions from the network. This continues until we receive a reward signal, which we multiply by the predicted log probabilities to propagate the error back through the network over past actions leading up to that reward. This will tend to adjust the past actions' predicted probabilities down for more negative rewards, and up for more positive rewards. Of course not all actions leading up to the reward necessarily lead to that good or bad reward, but over many training iterations good actions will tend to lead to more positive reward and bad actions will tend to lead to more negative reward.
For more discussion see Andrej Karpathy's blog post Deep Reinforcement Learning: Pong from Pixels and RL Course by David Silver - Lecture 7: Policy Gradient Methods
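Concretely, the sampling step often looks like this (probs being the softmax output of the policy network for the current state):
import numpy as np

probs = np.array([0.1, 0.7, 0.2])                # example action probabilities
action = np.random.choice(len(probs), p=probs)   # higher-scored actions are picked more often

# the log-probability of the chosen action is what gets scaled by the reward
# signal in the policy-gradient update
log_prob = np.log(probs[action])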
H: Poker tournament winner prediction I am trying to solve poker tournament winner prediction problem. I’ve millions of historical records in this format: Players ==> Winner P1,P2,P4,P8 ==> P2 P4,P7,P6 ==> P4 P6,P3,P2,P1 ==> P1 I want to find the best algorithm to predict winner from set of known players. So far I have tried decision trees, XGboost without much success. I’ve done my research and could not find answer anywhere else.My apologies in advance if same problem is answered in different terms on stack-overflow. AI: As oW_ mentioned, this can be done with the Elo rating system. More specifically, given a record of wins and losses for some player pool, you can fit ratings for each player such the expected result of player $A$ rated $r_A$ vs player $B$ rated $r_B$ is given by: $$ E_{A\ vs\ B} = \frac{1}{1 + 10^{(r_B-r_A)/400}} $$ When the only possible outcomes are win or lose then this is also the probability of $A$ beating $B$. If we define the quantity $\gamma_i = 10^{r_i/400}$ then the above can be expressed as: $$ E_{A\ vs\ B} = \frac{\gamma_A}{\gamma_A+\gamma_B} $$ Furthermore, Elo ratings generalize to multiplayer competitions, such that the probability of player $A$ beating players $B$, $C$, and $D$, for example, is given by: $$ E_{A\ vs\ B,C,D} = \frac{\gamma_A}{\gamma_A+\gamma_B+\gamma_C+\gamma_D} $$ The simplest method for computing ratings of a pool of (previously unrated) players given their game histories is to give them all the same initial rating, say $0$. The number doesn't matter because the relative rating differences (which are used to estimate win probabilities) will be the same. Next, for each result, $S$, of player $i$ vs player $j$ in the data set make the following update: $$ r_i' \leftarrow r_i + K \times (S - E_{i,j}) $$ where $S = 1$ if $i$ beat $j$, otherwise $0$, and $K$ is a factor chosen to determine how quickly ratings change. For simplicity you can use $K=32$. Remember to perform the update in both directions, so that if $i$ beats $j$ then $i$'s rating increases while $j$'s decreases. Use the pre-updated rating for player $i$ when computing player $j$'s rating change, so that the result is symmetric. It's also typical to use the pre-updated ratings for all rating update calculations in a single tournament. So you would keep track of all deltas for each player and apply them at the end. To handle multi-player tournaments, simply log one win for the winner against every other player. In your case you only have the winner, so this is the best you can do. (If you had a complete ranking you could try Tom Kerrigan's Multiplayer Elo method or Microsoft's TrueSkill.) You also need to divide $K$ by the number of players minus $1$ in your update rule: $$ r_i' \leftarrow r_i + \frac{K}{n-1} \times (S - E_{i,j}) $$ Here is a Python library that performs the calculation I have described. An alternative approach is the TrueSkill rating system, which handles multiplayer games nicely. It expects a ranking of everyone in the tournament, however you could try it with the winner ranked first and everyone else tied for second. Here is a Python TrueSkill library.
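Here is a small sketch of the update described above, applied to one tournament record at a time (player names are taken from your example data):
def update_elo(ratings, winner, losers, K=32):
    # one tournament: `winner` is logged as beating every player in `losers`
    players = [winner] + losers
    gamma = {p: 10 ** (ratings[p] / 400) for p in players}
    k = K / len(losers)                         # K / (n - 1)
    deltas = {p: 0.0 for p in players}
    for loser in losers:
        expected = gamma[winner] / (gamma[winner] + gamma[loser])
        deltas[winner] += k * (1 - expected)    # S = 1 for the winner
        deltas[loser]  -= k * (1 - expected)    # symmetric update for the loser
    for p in players:
        ratings[p] += deltas[p]                 # apply all deltas at the end
    return ratings

ratings = {p: 0.0 for p in ['P1', 'P2', 'P3', 'P4', 'P6', 'P7', 'P8']}
ratings = update_elo(ratings, winner='P2', losers=['P1', 'P4', 'P8'])
Once every historical tournament has been processed, the predicted probability of a player winning a new tournament is their gamma divided by the sum of the gammas of everyone at the table, as in the formulas above.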
H: Understanding data normalisation So I know that when we have different parameters with different value ranges we have to standardise these values. Also, I read that when a parameter does in fact require higher values then we should not normalise it. However, isn't this always the case? For example, if we have age and salary, isn't salary always going to be greater than age? Wouldn't normalising these values lose possibly significant information? Also, should you always normalise the entire dataset, or can one simply normalise a number of specific columns?
AI: Normalisation is performed to balance weights and make parameters comparable. In the cases you mention this is interval normalisation (thus numbers, thus a set that has a standard order and distance metric). Just for contrast, string normalisation might involve stemming, converting to uppercase or a number of other techniques, depending on the context.
Since scaling (multiplying by the same factor) is a linear operation, it preserves order and magnitude relations, among other properties. Thus, in the general case, normalisation does not lose information about relationships within a parameter/dimension.
If we use a simple clustering algorithm for the sake of the example, consider that age might have a range of roughly (0-100) and salaries could be something like the interval (1-10^6). A naive approach would be not to perform normalisation and to use the default Euclidean distance metric. However, variances of salaries of up to $100 are negligible, so your salary information would dominate your age information, resulting in distances between data points (and thus clusters) being mostly determined by salary only. The secondary role of age would become negligible for the clustering. However, if we normalise the two to the range of (0-1), we get comparable scales and the dominance of the salary factor becomes incomparably smaller (while still to be considered).
Due to the last remark in brackets, most advanced machine learning algorithms (including neural networks and support vector machines) can handle different scales. When you use these, you don't need to normalise the data, because the algorithm learns the scaling along with other properties of the data.
The decision of when to normalise your data depends on your problem context and on the approach you are planning to take. Normalising data also makes it easier, at first glance, to tell whether a value is low or high.
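A quick illustration of column-wise scaling (you can pass only the specific columns you want to scale):
from sklearn.preprocessing import MinMaxScaler
import pandas as pd

df = pd.DataFrame({'age':    [23, 45, 31, 60],
                   'salary': [30000, 90000, 55000, 120000]})

scaled = pd.DataFrame(MinMaxScaler().fit_transform(df), columns=df.columns)
print(scaled)   # both columns now lie in [0, 1]; the order within each column is preserved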
H: How to remove rows from a dataframe that are identical to another dataframe? I have two data frames df1 and df2. For my analysis, I need to remove rows from df1 that have identical column values (Email) in df2? >>df1 First Last Email 0 Adam Smith [email protected] 1 John Brown [email protected] 2 Joe Max [email protected] 3 Will Bill [email protected] >>df2 First Last Email 0 Adam Smith [email protected] 1 John Brown [email protected] AI: You can try this: cond = df1['Email'].isin(df2['Email']) df1.drop(df1[cond].index, inplace = True) >>df1 First Last Email 2 Joe Max [email protected] 3 Will Bill [email protected]
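An equivalent, arguably more idiomatic alternative (a sketch, assuming the same df1/df2 as above) keeps df1 untouched and returns a new frame instead of dropping in place:
    # keep only the rows of df1 whose Email does not appear in df2
    df1_filtered = df1[~df1['Email'].isin(df2['Email'])]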
H: Why does convolution over volume sum across channels? A simple question about convolution over volume. Say we have an image with dimensions $(n, n, 3)$ and we apply a filter of dimension $(k, k, 3)$; this outputs a matrix of dimension $(n-k+1, n-k+1)$. Why do we sum across channels in this case? Don't we lose information by mixing different channels? In the case of images, this implies mixing information in the R, G, B channels. For example, when trying to detect traffic signal lights, such mixing could be fatal. AI: CNN filters in the early layers mostly act as simple pattern detectors (edges and blobs of colour). These patterns are detected by learned mathematical functions and, as the functions cascade through deeper layers, the detectors become more and more complex, enabling the network to detect complex features. Regarding your question, two points need to be noted: Traffic lights are not pure red, green or yellow. They have intensity differences which get reflected in the image matrix, and a CNN can capitalise on those differences. The weights of CNN filters are randomly initialised, so given a large enough data set a CNN will eventually learn which of the R, G, B channels contributes to a specific colour. A convolutional layer also has many filters, so different filters are free to weight the channels differently rather than all mixing them in the same way. Also, intuitively, a traffic signal is not just a light; it is a device with three lights. Consider being shown a black-and-white image of a traffic light in which a particular colour is on and the colour label is known. You would associate the colour label with the position of the bulb, even though you cannot see the colour. The same happens in a CNN: even if the channels carried identical values for every colour, the network could learn to recognise the state from the position of the lit bulb.
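A tiny sketch of the second point (all values here are made up): a filter whose weights are non-zero only on the R channel still responds to red pixels after the sum over channels, so colour information is not necessarily lost:
    import numpy as np

    img = np.zeros((5, 5, 3)); img[2, 2, 0] = 1.0         # one pure-red pixel
    red_filter = np.zeros((1, 1, 3)); red_filter[0, 0, 0] = 1.0
    response = (img * red_filter).sum(axis=-1)            # 1x1 convolution over volume
    print(response[2, 2])                                 # 1.0 -> the red pixel is detected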
H: Orange Data Mining load saved models I am planning to use the Orange Data Mining tool for easy data exploration and model generation. What is still unclear to me is: after finding a good model, what can I do with it, and how can I use or deploy it in production? I already found out that there is no Orange server which can run an Orange workflow. But is it at least possible to load a model that was generated and saved via the Orange UI in Python and use it there? AI: Yes, this is what the Save Model widget does under the Model tab. Create your model, then click Save Model, and your model will be saved to a pickle file. Then just load your pickle file in Python. Here is a similar question on Stack Overflow.
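A hedged sketch of the loading step (the file name is an assumption; Orange's Save Model widget typically writes a pickle file with a .pkcls extension):
    import pickle

    with open("my_model.pkcls", "rb") as f:
        model = pickle.load(f)
    # the loaded model can then be applied to new data from Python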
H: Combining outputs of ridge regression models? I am facing an issue where I have 7 sets of different variables/columns/predictors. I am trying to predict the same target variable and I want to observe the importance/effect of all the sets in an ordered manner. (I am trying to use a ridge regression model for each of the 7 individual sets, as I want to keep all the variables, and I want to combine the output of these 7 models; each set has more than 20 variables.) AI: If you mean predictors by "variables", you may combine the outputs of your 7 models by using rbind or cbind and saving the result into a data frame or data table. But remember to first save the predictions of each of the 7 models into a data table or data frame. About ranking, I agree with @Valentin: you may want to make the predictions first, then rank them. However, if you want to take all the predictors into account, there is no point in applying the ranking to the sets separately; you will see their importance once you are done predicting. Additional note: glmnet is a rather sensitive library/method when performing predictions; make sure all the columns are in the same order and have the same names both when creating a model and when doing a prediction. This caused me much trouble while predicting. Hopefully, this might help you.
H: Why would a validation set wear out slower than a test set? On this page of Google's Machine Learning Crash Course, we find the following statement: "Test sets and validation sets "wear out" with repeated use. That is, the more you use the same data to make decisions about hyperparameter settings or other model improvements, the less confidence you'll have that these results actually generalize to new, unseen data. Note that validation sets typically wear out more slowly than test sets." I don't get the last sentence. Why would a validation set wear out slower than a test set? To me it would wear out faster as the validation set is used a lot more frequently than the test set. What am I missing? AI: It is difficult to say without access to the original author. However, I expect this refers to the ability of using each set to realise its purpose. A validation set's purpose is to select hyperparameters that perform the best according to some metric. The best measurement on the validation set should always have the highest expectation of being the best in reality. If you make very many measurements, then the absolute probability of the best measurement being the real best could be low, but the chances of a generally poorly performing set of hyperparameters winning overall do not increase as fast. You can be reasonably certain that you have picked "one of the best" plus "the one with highest probability of being the best" even though that might be e.g. just a 10% chance if you have run 100s of validations. A test set's purpose is to measure a metric without bias. If you use this for model comparison or selection, then this can be affected by maximisation bias - because there is uncertainty in the measurement, focusing on the relative values and picking a "best" almost certainly over-estimates the true value. This effect happens very quickly. If you measure metrics for two sets of hyperparameters and pick the best one, you should already expect that the value you got for the metric is an over-estimate. Note you still expect on average that you have picked the better option, but you cannot trust the measurement as much.
H: Which type of clustering algorithm to use to identify the "same" item in different data sets? I'm trying to find a solution for a data quality problem - specifically, identifying which items in different data sets are used to represent the same things. As an example, assume that we're a retailer and we buy out a couple of other retailers. In the process, we also get their systems and databases. This might lead to some overlap - different systems can represent the same items, customers, etc. in different ways, but with no single unique identifier. What would the best approach be to determine which rows represent the same thing across data sets in order to come up with a single view of 'unique' entities? I've done a machine learning course, and I understand the bare minimum. I believe that the solution to this problem requires a clustering algorithm, but what type? I may be dealing with a multitude of features in the data - dimensions, names, dates, contact details - and some of those would probably require a higher 'weight' in matching. Examples (Items): A: ABC Notebook, Large, Released: 2018-02-20, 150mm x 100mm. B: Notebook (ABC), L, Date: 18/02/2018, 150mm x 100mm I'd expect these to be treated as the same item. Examples (Customer): A: Doe, Jane, DOB 1970-06-23, 123 ML Ave, F B: John Doe, DOB 1971-04-33, 123 ML Avenue, M C: J. Doe, Born '71 I'd expect B and C to be identified as the same person, but not A. For this scenario, I don't expect to come up with something with 100% accuracy, but I would like to be able to come up with a (narrow) list of possible matches that someone can check. Can someone please point me in the right direction? Any case studies that I should look at? AI: You should focus on methods that fall under the scope of record linkage rather than clustering, as record linkage methods are specifically targeted at solving this type of problem. The manuscript 'Automatic Record Linkage Using Seeded Nearest Neighbour and Support Vector Machine Classification', written by Peter Christen, provides a good introduction to using machine learning for record linkage (link to the PDF). There is also a RecordLinkage package available in R, which provides a good starting point to practice with techniques from stochastic record linkage and supervised and unsupervised machine learning (link to the manuscript which describes the package).
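For a quick feel of the idea before reaching for a dedicated record-linkage package, a minimal sketch (the records and threshold are made up) that scores candidate pairs with a simple string similarity and keeps the likely matches for manual review:
    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    rec_a = "ABC Notebook, Large, 150mm x 100mm"
    rec_b = "Notebook (ABC), L, 150mm x 100mm"
    if similarity(rec_a, rec_b) > 0.6:      # threshold chosen by eye
        print("candidate match for manual review")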
H: What is the meaning of the term Variance in a Machine Learning Model? I am familiar with the terms high bias and high variance and their effect on the model. Basically, your model has high variance when it is too complex and too sensitive, even to outliers. But recently I was asked the meaning of the term variance in a machine learning model in one of my interviews. I would like to know what exactly variance means in an ML model and how it gets introduced into your model. I would really appreciate it if someone could explain this with an example. AI: It is pretty much what you said. Formally you can say: Variance, in the context of Machine Learning, is a type of error that occurs due to a model's sensitivity to small fluctuations in the training set. High variance would cause an algorithm to model the noise in the training set. This is most commonly referred to as overfitting. When discussing variance in Machine Learning, we also refer to bias. Bias, in the context of Machine Learning, is a type of error that occurs due to erroneous assumptions in the learning algorithm. High bias would cause an algorithm to miss relevant relations between the input features and the target outputs. This is sometimes referred to as underfitting. These terms can be decomposed from the expected error of the trained model, given different samples drawn from a training distribution. See here for a brief mathematical explanation of where the terms come from, and how to formally measure variance in the model. Relationship between bias and variance: In most cases, attempting to minimize one of these two errors would lead to increasing the other. Thus the two are usually seen as a trade-off. Cause of high bias/variance in ML: The most common factor that determines the bias/variance of a model is its capacity (think of this as how complex the model is). Low capacity models (e.g. linear regression) might miss relevant relations between the features and targets, causing them to have high bias. This is evident in the left figure above. On the other hand, high capacity models (e.g. high-degree polynomial regression, neural networks with many parameters) might model some of the noise, along with any relevant relations in the training set, causing them to have high variance, as seen in the right figure above. How to reduce the variance in a model? The easiest and most common way of reducing the variance in an ML model is by applying techniques that limit its effective capacity, i.e. regularization. The most common forms of regularization are parameter norm penalties, which penalize large parameter values during the training phase; early stopping, which cuts the training short; pruning for tree-based algorithms; dropout for neural networks, etc. Can a model have both low bias and low variance? Yes. Likewise a model can have both high bias and high variance, as is illustrated in the figure below. How can we achieve both low bias and low variance? In practice the most common methodology is: Select an algorithm with a high enough capacity to sufficiently model the problem. In this stage we want to minimize the bias, so we aren't concerned about the variance yet. Regularize the model above, to minimize its variance.
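A small sketch of the capacity effect (the data and polynomial degrees are made up): fitting the same noisy points with a low-degree and a high-degree polynomial illustrates under- and over-fitting respectively when you compare train and test scores:
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.RandomState(0)
    X = np.sort(rng.uniform(0, 1, 40))[:, None]
    y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 40)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for degree in (1, 15):                     # high bias vs high variance
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        model.fit(X_tr, y_tr)
        # degree 1 fits poorly everywhere; degree 15 fits train well, test poorly
        print(degree, model.score(X_tr, y_tr), model.score(X_te, y_te))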
H: Unbalanced classes: classifier only predicts one class I am trying to do multiclass classification using Python. For that I used a few algorithms like Random Forest, XGBoost and logistic regression. My problem is simple: I have users, images, and people's ratings of those images. I divided the ratings into 3 classes: class 1: good marks; class 2: bad marks; class 3: medium evaluations. At first I had these proportions: 70% of bad marks (class 2), 18% of good marks (class 1), 10% of medium marks (class 3). So at first all was good: I did the classification and I got a good accuracy (75%). I then collected more data (more good marks) and the accuracy kept decreasing. So I understood that the accuracy was good only because the algorithms were predicting that almost all the marks are bad, so basically only one class - class 2 - and when I got more data in class 1 and class 3 the accuracy decreased. This is an example of the confusion matrix I was getting: Predicted Marks 1 2 3 Actual Marks 1 48 85 3 2 17 250 4 3 10 89 1 I understood that I was getting this problem because I have unbalanced data, so I was basically predicting the highest-frequency class. I did some research and I found that there is an option called class_weight='balanced'. So I used it in the classification algorithms, and I got this result: Predicted Marks 1 2 3 Actual Marks 1 53 61 29 2 66 161 53 3 24 40 17 So my questions are: First, I know that the balanced option tries to re-balance the data, but I don't understand how. I found in the explanations that it "considers each class as important as the other" but still don't understand how. Does it duplicate rows of the minor categories? When I used that option, it was obvious that the classifier was predicting the other mark classes more frequently, but more of these new predictions are wrong than right (for example, in the second confusion matrix class 2 was predicted as class 1 66 times, but in confusion matrix 1 only 17 times). Am I analysing the problem right? And what are your suggestions for such a problem? Sorry for making this long, any help will be appreciated! AI: I found in the explanations that it "considers each class as important as the other" but still don't understand how. Does it duplicate rows of the minor categories? No, it adds a weight to each example, depending on its class. The majority class will be assigned a small weight while the minority ones will be assigned larger weights. These weights are considered during the training phase of your model, so that each example of a minority class impacts the parameter updates more than one from the majority class. For example, suppose class 1 has a weight of $0.5$, class 2 has a weight of $1.2$ and class 3 has a weight of $1.5$. Each example from class 3 will impact the parameter updates 3 times more than an example from class 1, etc. You can see what weights scikit-learn has selected through this function. You can also select your own weights (if you feel that some class should be more important than what scikit-learn has selected) by modifying the dictionary. When I used that option, it was obvious that the classifier was predicting the other mark classes more frequently, but more of these new predictions are wrong than right. I'm not sure what the question is, but if you are asking "why do I have more misses now than before?": because you used that option, your model has learned to classify examples while treating the classes as equally important. But the classes don't have the same frequencies.
For example, if you now misclassify 1 out of every 4 examples regardless of class, you will miss more predictions from the majority class (in absolute numbers) than from the minority ones, simply because it has more examples. Is this good or bad? That depends on your goal. If you want to simply maximize your algorithm's correct predictions, then you might get a better result without balancing your classes. In this case you should use a metric like accuracy. If your goal is something else, for instance to have the same accuracy in each class, you should select a more appropriate metric (e.g. a macro-averaged one). Is there an alternative to class_weight? Yes, in most cases it is preferable to balance the number of samples in your classes. This is done by either over-sampling the minority classes, under-sampling the majority ones, or a combination of both. If you want to try out this approach, I'd suggest using imbalanced-learn.
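To see the weights scikit-learn picks for class_weight='balanced' (the class labels and the 70/18/10 split below come from the question, but the exact counts are assumptions):
    import numpy as np
    from sklearn.utils.class_weight import compute_class_weight

    y = np.array([2] * 700 + [1] * 180 + [3] * 100)    # roughly 70% / 18% / 10%
    weights = compute_class_weight(class_weight="balanced",
                                   classes=np.array([1, 2, 3]), y=y)
    print(dict(zip([1, 2, 3], np.round(weights, 2))))
    # minority classes get the larger weights, e.g. {1: 1.81, 2: 0.47, 3: 3.27}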
H: Why is the softmax function often used as the activation function of the output layer in classification neural networks? What special characteristics of the softmax function make it a favourite choice as the activation function in the output layer of classification neural networks? AI: The softmax function is a generalisation of the logistic function, which squashes values into a given range. At the final layer of a neural network, the model produces its final activations (a.k.a. logits), which we would like to be able to interpret as probabilities, as that would allow us to e.g. create a classification result. The reason for using the softmax is to transform these logits so that they all lie between 0 and 1 and sum up to 1, thereby fulfilling the constraints of a probability distribution. If we try to predict whether a medical image contains cancer or not (simply yes or no), then the probability of a positive result (yes) and a negative result (no) must sum up to one. So the model produces a probability vector over the outcomes, in pseudo-code: p = [yes, no]. If the final logits in this binary classification example were p = [0.03, 1.92], we can see that they sum to 1.95. This is not interpretable as a probability distribution, although we can see the value is much higher for no. In other examples, where there might be thousands of categories rather than just two, we can no longer judge so easily how dominant the largest logit is. The softmax gives us some perspective and (quasi-)interpretable probabilities for the categories. EDIT As described by @Neil Slater in his comment below, using the softmax followed by a log-loss function does indeed lead to a model that predicts values in the range of true probabilities, thus making them interpretable, as well as providing some nice statistical properties (see the Properties section of Maximum Likelihood Estimation). Also note: minimising the log-loss is equivalent to maximum likelihood estimation (maximising the likelihood). See a mathematical explanation here. Implementations of the log-loss include: log_loss from Scikit-Learn, binary_crossentropy in Keras (for binary problems) and categorical_crossentropy in Keras (for cases with > 2 classes). These method names also highlight that log-loss is the same as cross-entropy (in the general machine learning context of computing error rates between 0 and 1).
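A minimal, numerically stable softmax sketch using the example logits above:
    import numpy as np

    def softmax(logits):
        z = logits - np.max(logits)      # subtract the max for numerical stability
        e = np.exp(z)
        return e / e.sum()

    print(softmax(np.array([0.03, 1.92])))   # ~[0.13, 0.87], which sums to 1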
H: When should one use L1/L2 regularization instead of a dropout layer, given that both serve the same purpose of reducing overfitting? In Keras, there are two common ways to reduce over-fitting: L1/L2 regularization and dropout layers. What are some situations in which to use L1/L2 regularization instead of a dropout layer? What are some situations when a dropout layer is better? AI: I am unsure there is a formal way to show which is best in which situations - simply trying out different combinations is likely best! It is worth noting that dropout actually does a little bit more than just provide a form of regularisation, in that it adds robustness to the network, allowing it to try out many different sub-networks. This is true because the randomly deactivated neurons are essentially removed for that forward/backward pass, giving much the same effect as if you had trained a slightly different network each time! Have a look at this post for a few more pointers regarding the beauty of dropout layers. $L_1$ versus $L_2$ is easier to explain: the $L_2$ penalty grows quadratically with the weights, so it punishes large weights much more heavily and tends to shrink all weights smoothly, whereas the $L_1$ penalty grows linearly and can drive some weights exactly to zero, yielding sparser models. Have a look here for more detailed comparisons.
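A hedged Keras sketch showing where each option plugs in (the layer sizes, penalty strength and dropout rate are assumptions):
    from tensorflow.keras import layers, models, regularizers

    model = models.Sequential([
        layers.Dense(64, activation="relu", input_shape=(20,),
                     kernel_regularizer=regularizers.l2(1e-4)),   # L2 weight penalty
        layers.Dropout(0.5),                                      # dropout layer
        layers.Dense(1, activation="sigmoid"),
    ])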
H: Character Level Embeddings I am working on a problem that currently depends on word-level embeddings created using Word2Vec. I am researching new methods to apply to this model, and one was character-level embeddings. I have not found much information on it, and I don't imagine Word2Vec applied at the character level would be effective. Is there any insight on giving vector representations to characters for an overall classification model? AI: It completely depends on what you're classifying. Using character embeddings for semantic classification of sentences introduces unnecessary complexity, making the data harder to fit, although using character n-grams would help the model deal with word derivatives. Classifying words based on their derivation would be a task that requires character-level information. If you're asking whether it would be useful to train a model to embed characters the way you would with word2vec, then no, and in fact it would probably yield bad results. We use embeddings to implicitly encode that two data points are close together and therefore should be treated more similarly by the model. The letter 'd' shouldn't be semantically closer to 'e' than to 'q'.
H: What are kernel initializers and what is their significance? I was looking at code and found this: model.add(Dense(13, input_dim=13, kernel_initializer='normal', activation='relu')) I was keen to know about kernel_initializer but wasn't able to understand its significance. AI: The neural network needs to start with some weights and then iteratively update them to better values. The term kernel_initializer is a fancy term for which statistical distribution or function to use for initialising the weights. In the case of a statistical distribution, the library will generate numbers from that distribution and use them as the starting weights. For example, in the above code, a normal distribution will be used to initialise the weights. You can use other functions (constants like 1s or 0s) and distributions (e.g. uniform) too. All possible options are documented here. Additional explanation: The term kernel is a carryover from other classical methods like SVMs. The idea is to transform data in a given input space to another space, where the transformation is achieved using kernel functions. We can think of neural network layers as non-linear maps doing these transformations, so the term kernel is used.
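A short sketch of a few ways to specify the initializer in Keras (the layer sizes are arbitrary):
    from tensorflow.keras import layers, initializers

    layers.Dense(13, activation="relu", kernel_initializer="normal")                      # by name
    layers.Dense(13, activation="relu", kernel_initializer=initializers.RandomUniform())  # by object
    layers.Dense(13, activation="relu", kernel_initializer=initializers.Constant(0.1))    # constant start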