H: Can a neural network represent a relation that is a derivative? Let's say I measure people hiking uphill, and it happens that their hiking speed is related to the slope so they hike slower where the hill is steeper. My input variable is the hill's elevation and my output variable is their hiking speed. I fit a neural net to this data, so that I can later predict the hiking speed given new elevation measurements. Can a neural net represent the input's first derivative and so do a good job capturing this relation? What about representing a second derivative, i.e. if hiking speed was related to the curvature of the hill? AI: Interesting question. What you are asking is whether a neural network is able to represent and/or approximate a function together with its derivatives. The answer is yes: neural networks can approximate functions and their derivatives. This paper explains the possibility: Approximation of functions and their derivatives: A neural network implementation with applications. Other authors reach the same conclusion: Multilayer feedforward networks are universal approximators Generalization and Approximation Capabilities of Multilayer Networks Approximation Capability of Layered Neural Networks with Sigmoid Units on Two Layers
H: Should I use coordinates as features? I am building a regression model. Each sample/object in my dataset has some numerical and categorical features. Let's call those $f_1, f_2, f_3$ for now. The output that I'm trying to learn is a $2D$ grid of numerical values. Kind of like an $3\times 3$ image with values in it. The output variables are correlated. For my first attempt at the problem, I thought of using $f_1, f_2, f_3$ as input features and then flatten my image into 9 output features $out_1,...,out_9$. There are multiple techniques that can optimize for those variables independantly/jointly so this is fine. The other option that I thought of was to use the output grid coordinates as input features. So I would have $5$ input features: $f_1, f_2, f_3$ (as before) and $f_4, f_5$ are my $x, y$ coordinates for the target variable. In this case I would have more samples and my output would consist of only $1$ output target variable. My problem is that I'm not sure whether any of the options is fundamentally wrong or just bad practice. Is there anything I should watch out for in training and testing if I proceed with either approach? AI: There is a couple of separate questions I can identify here: one is whether it is appropriate to include coordinate features and the other is whether it is appropriate to convert a multivariate regression (more than one outcome variable - 9 in your example) into a regression with single dependent variable. Coordinate features If you think that the dependent variable value in a particular cell of your grid is somehow affected by the location of the cell within the grid (in other words, that values in the same cell are correlated), then you should probably include coordinate features (which can be simple cell identifiers or something more complex, like a more specific identification of where the cell is located within the grid relative to a particular point). It is possible to even use target-based encoding (i.e., use the mean of dependent variable values in the same cell as a new feature), though that gets into issues like data leakage and overfitting (which can be addressed separately). You may also possibly want to create interaction terms, allowing for the effects of non-coordinate features to vary by level of a coordinate feature. This is different than modeling correlation between values across different cells. If you expect that there is some spatial autocorrelation between cells in this 2-d grid, e.g., a value in cell (1,1) is correlated with values in cells near it: (1,2), (2,1), (2,2), you should probably model that relationship by including some set of spatial features (not just coordinate identifiers), like spatially clustered covariates, or by using a spatially lagged dependent variable model. If you are not sure if spatial correlation is present, you can estimate a model without any spatial features and then test for spatial autocorrelation (spatial dependence between residuals of regression) using Moran's I (a positive Moran’s I hints at data is clustered, a negative Moran’s I implies data is dispersed.) Multiple vs single dependent variable If you go with option 2 as you described it, with one output variable, you will be modeling a completely different variable than if you were modeling 9 (in your example) separate variables. Each of the original 9 variables likely has its own statistical properties, but if you combine them into one, you will have a new variable that has its own properties (some combination of the 9). 
So if you consider the grid to be 9 separate variables, you should stick with modeling 9 separate variables.
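A small sketch of the Moran's I check described above, assuming the libpysal and esda packages and a hypothetical length-9 array resid of regression residuals, ordered row by row over the 3x3 grid:

import numpy as np
from libpysal.weights import lat2W
from esda.moran import Moran

w = lat2W(3, 3)            # contiguity weights for a 3x3 lattice
mi = Moran(resid, w)       # permutation-based test of spatial autocorrelation
print(mi.I, mi.p_sim)      # positive I with a small p-value suggests spatial clustering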
H: Trying to predict extreme values corresponding to rare events I need some advice on methodology. I need to predict a numeric value (claim amount) and to be as accurate as possible on the high values corresponding to rare events (bodily injury, technological disaster...). Any suggestion on methodology would be appreciated. Thanks. AI: My suggestion: there is a technique called Quantile Regression, which models the upper (or lower) conditional quantiles of a variable instead of its mean, so the fit concentrates on the extreme values you care about. This is valid only if you have regressors (exogenous variables).
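As a sketch of how this looks in practice, scikit-learn's gradient boosting supports a quantile loss directly; X, y and X_new are hypothetical feature/claim-amount arrays and the 0.95 quantile is just an illustrative choice:

from sklearn.ensemble import GradientBoostingRegressor

upper = GradientBoostingRegressor(loss='quantile', alpha=0.95).fit(X, y)    # model of the 95th percentile
median = GradientBoostingRegressor(loss='quantile', alpha=0.5).fit(X, y)    # model of the median, for comparison
high_claim_pred = upper.predict(X_new)    # conditional upper-tail prediction for new claims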
H: How to split a large set of coordinates into smaller sets using something like nearby selection? I have a dataset filled with 3817 coordinates (latitude, longitude). What I want to do is create groups of ~500 coordinates so that each smaller group can be used to solve a Vehicle Routing Problem (VRP). The reason I want to do this is that the original dataset is so large that solving the VRP takes too long. I've tried splitting the map into grids and simply grouping the coordinates based on which grid they are in. But this is not the most efficient method. I've read one of OptaPlanner's blog posts where they write about Nearby Selection and would like to do this with my dataset. Any ideas on how to accomplish this? AI: Your best choice could be clustering. Hierarchical clustering may help you get your solution, as it is based on which other individuals are close in distance and you can select your desired number of groups. K-means clustering may help you achieve this kind of solution too, with k chosen so that each cluster holds roughly 500 points (about 3817/500 ≈ 8 clusters here).
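A minimal K-means sketch with scikit-learn, assuming coords is the (3817, 2) array of latitude/longitude pairs; note that plain K-means does not guarantee balanced cluster sizes, so some groups will end up larger or smaller than 500:

import numpy as np
from sklearn.cluster import KMeans

n_clusters = int(np.ceil(len(coords) / 500))                     # about 8 groups for 3817 points
labels = KMeans(n_clusters=n_clusters, random_state=0).fit_predict(coords)
groups = [coords[labels == k] for k in range(n_clusters)]        # one VRP sub-problem per group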
H: Why does it not need to set test group when using 'rank:pairwise' in xgboost? I'm new for learning-to-rank. I'm trying to learn the Learning to rank example provided by xgboost. I found that the core code is as follows in rank.py. train_dmatrix = DMatrix(x_train, y_train) valid_dmatrix = DMatrix(x_valid, y_valid) test_dmatrix = DMatrix(x_test) train_dmatrix.set_group(group_train) valid_dmatrix.set_group(group_valid) params = {'objective': 'rank:pairwise', 'eta': 0.1, 'gamma': 1.0, 'min_child_weight': 0.1, 'max_depth': 6} xgb_model = xgb.train(params, train_dmatrix, num_boost_round=4, evals=[(valid_dmatrix, 'validation')]) pred = xgb_model.predict(test_dmatrix) Group data is used in both training and validation sets. But test set prediction does not use group data. I also looked at some explanations to introduce model output such as What is the output of XGboost using 'rank:pairwise'?. Actually, in Learning to Rank field, we are trying to predict the relative score for each document to a specific query. My understanding is that if the test set does not have group data, no query is specified. How does the model output the relative score to the specified query? And I've tried adding test_dmatrix.set_group(group_test). The output results of the two methods are in good agreement like: [ 1.3535978 -2.9462705 0.86084974 ... -0.23594362 0.712791 -1.633297 ] So my question as follows: Why does it not need to set test group when using 'rank:pairwise' in xgboost? How can I get label to the specified group query based on the forecasting score results? Can anybody explain it to me? Thanks in advance. AI: The output is a score that can be used to rank the samples, and the point in this sort of ranking problem is that you'll only care about ranking samples within the same group (which you think of as being results from a given query). But that can be safely left to you on the testing set. (Indeed, you might as well only run the prediction for each group separately. You might think about the output in your case as assuming that the test set is all from a single query.) For scoring on the test set, it might matter what the specified groups are, but not for just making predictions. For training, the group data is needed so the algorithm knows not to calibrate the rankings for intergroup comparisons. See also: How fit pairwise ranking models in xgBoost? https://github.com/dmlc/xgboost/blob/master/doc/tutorials/input_format.rst#group-input-format
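As a concrete illustration of ranking within each group yourself, here is a sketch reusing pred from your code and assuming group_test is the list of documents-per-query counts in test-set order:

import numpy as np

offsets = np.cumsum([0] + list(group_test))
for q in range(len(group_test)):
    scores = pred[offsets[q]:offsets[q + 1]]    # scores for the documents of query q
    ranking = np.argsort(-scores)               # document indices within query q, best first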
H: Why are AUC or other metrics not used for splitting nodes in decision trees? There are common ways to split a tree in decision trees and all their variants: Gini Index Entropy Misclassification Why is there no method which directly uses AUC or accuracy (or whatever the modeler needs) to split the nodes? Is it just convention, or is there a mathematical explanation for it? AI: On accuracy: Why we use information gain over accuracy as splitting criterion in decision tree? AUC has been explored; it seems to work well, but is slower: https://www.semanticscholar.org/paper/Learning-Decision-Trees-Using-the-Area-Under-the-Ferri-Flach/46e40f487e555277033f188778d6c5c05df8daa4 http://proceedings.mlr.press/v7/doetsch09.html
H: How does DQN solve Open AI Cartpole - v0? Context I am confused about how a DQN is supposed to solve the cart pole problem since the rewards are so dense. I have been using pytorch example. I am aware of some solutions, but I have issue with the basic principle of the env. Unlike the tutorials, I converted the state space representation into just the 1x4 returned state as opposed to an image. Also, I converted the action output to be a binned output. So the action 1x2 becomes 3x2 when binning is set to 3. So instead of getting the max action row-wise, I get a max action column-wise. I am using fixed targeting (training a primary and target DQN). Question / Concern My main issue with the env in general is that keeping the pole vertical is no different to the DQN as holding the pole to near-failure. How does the DQN get better if it is getting +1 reward regardless? My hypothesis is that keeping the pole tilted creates more samples in the memory. Then after, when optimizing the model, the tilted pole states get higher reward than the vertical pole states since there is a higher distribution of tilted pole states. How are we expected to expect a DQN to do well with this kind of reward set up? Wouldn't it be better to have Cart pole produce +1 reward only for if the pole is near vertical? Extra Information The goal here is to use the cart-pole for debugging a RL model, then shift it to multi-joint robot control. The state is normalized to the expected min/maxes: Current State [[0.45564668 0.51196048 0.53126856 0.52450375]] The action input is Raw Action tensor([[0.9477, 0.9471]]) Bin Action [1.0, 0.0]. I am just using simplest action rep for testing below. I have also tested double dqn's, dueling dqns, and the use of PER. I have also been testing dropping the state space down to 1x1 via just inputting the angle of the pole. X axis is the number of steps during an episode. Y axis is the number of episodes. AI: You are mixing up two concepts from reinforcement learning, reward and return (aka utility) Rewards are used to identify or specify goals of the agent. Whilst you can change them to help an agent focus on useful heuristics of the problem, it is more usual, especially in test/toy problems to have them very simple. In the case of CartPole, there is a positive reward for "not falling over" which importantly ends when the episode ends. Returns (or utility) are what the agent learns to maximise. A return is typically the sum of all rewards, and might be discounted to prevent infinite results. In the case of CartPole, this means that the longer the agent can balance the pole into the future, the larger the return is. With Q-learning, the action values predict the expected future return. So it doesn't matter that the rewards are dense. It matters how long the agent can keep balance going into the long term, the longer the better, because the return will be higher. A combination of state and actions that the agent associates with longer-lasting not failing will predict a larger return and be chosen in preference to shorter-term success. This is how the Q-learning agent handles a situation with dense positive rewards that may end on a mistake. In practice, the OpenAI Gym CartPoleV0 environment does take a small liberty. Episodes will end at a fixed step in future. This is not available in the state information, and technically makes the problem non-Markov. However, it is possible to get away with this provided the timespan for maintaining stability is shorter than the maximum possible episode.
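To make the reward/return distinction concrete, the (discounted) return that Q-learning bootstraps its action values towards is $$G_t = \sum_{k=0}^{T-t-1} \gamma^{k}\, r_{t+k+1},$$ so with $r=+1$ per step, an episode that survives longer from state $s_t$ onwards produces a strictly larger target for $Q(s_t, a_t)$. That difference in return, not the per-step reward, is what separates "barely not failing" states from robustly balanced ones.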
H: How to choose an optimal threshold for binary discretization We know that we usually discretize continuous features to remove extra information and unwanted irregularities, which makes the model more robust and better at prediction. But I am wondering, apart from the context of the feature, how to choose an optimal threshold for data discretization, binary or multi-level. AI: Inspecting the histogram of that feature with different numbers of bins can be helpful. The maximum bin width at which the distribution of the data does not change noticeably can be a suitable choice.
H: How to perform polynomial landmark detection with deep learning I am trying to build a system to segment vehicles using a deep convolutional neural network. I am familiar with predicting a set amount of points (i.e. ending a neural architecture with a Dense layer with 4 neurons to predict 2 points(x,y) coords for both). However, vehicles come in many different shapes and sizes and one vehicle may require more segmentation points than another. How can I create a neural network that can have different amounts of output values? I imagine I could use a RNN of some sort but would like a little guidance. Thank you For example, in the following image the two vehicles have a different number of labeled keypoints. AI: Landmarks are nice when you have a fixed amount for every image but I don't think it is the right approach for your problem. Instead I think you should look into models that segment images by applying a mask. A place to start could be to look into Mask R-CNN. Here is the paper: Mask R-CNN Here is how to train a ready made Mask R-CNN implementation with your own dataset using Keras: Tutorial
H: Detect a pattern in time series data I have the following time series data and I need to detect the following pattern: - a 1st spike above a given value - a 2nd spike close to the given value What defines "a spike" is that its value is x% above the surrounding spikes. I would like to use python (numpy and pandas modules) to detect these. As a background, the goal is to trigger an action when the second spike is detected (outlined in red in the attached image). AI: scipy.signal's find_peaks should have most of what you need. With the height argument you can set a minimum threshold to get the first peak. With the prominence argument you should be able to approximate the second peak. Internally this relies on peak_prominences. The prominence of a peak measures how much a peak stands out from the surrounding baseline of the signal and is defined as the vertical distance between the peak and its lowest contour line.
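A small sketch combining those two arguments, where y is your series as a NumPy array; the given_value threshold and the 20% prominence figure are assumptions you would adapt to your own "x% above the surrounding spikes" rule:

import numpy as np
from scipy.signal import find_peaks

peaks, props = find_peaks(y, height=given_value, prominence=0.2 * given_value)
# props['peak_heights'] and props['prominences'] let you separate the first spike
# (above the given value) from the second one (close to it) and trigger your action there.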
H: Unskewing the Data with the PCA's Help I'm making some RFM Analyses (Customer Segmentation) and, in order to feed the RFM data to K-Means, I need to unskew the data, as K-Means works best when dealing with symmetrical distributions. One of the best practices I've found so far is to use log transforms in order to unskew the data. However, after a bit of experimenting (and, hopefully, correct intuition), I've also found that PCA is working really well at unskewing the data, without the need of performing a log transform. Is this supposed to work or am I in danger of making incorrect assumptions? On a (cough) safer bet, I was planning on using PCA after a log transform and the scaling of the data, since it would help to orthogonalize it and, I suppose, improve K-Means's performance. Is this better or worse than simply applying PCA without the log transform? AI: On a (cough) safer bet, I was planning on using PCA after a log transform and the scaling of the data, since it would help to orthogonalize it and, I suppose, improve K-Means's performance. Is this better or worse than simply applying PCA without the log transform? This is your solution: a log transform, then scaling, then PCA is the normal, most widely used practice for skewed data before K-Means, and it is preferable to applying PCA alone without the log transform (PCA is a linear rotation, so any unskewing you observe from it alone is incidental rather than guaranteed).
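A compact way to chain those steps, as a sketch only: rfm stands for your RFM feature table, log1p is used so zero values do not blow up, and the 2 components / 4 clusters are purely illustrative choices:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

pipe = make_pipeline(
    FunctionTransformer(np.log1p),          # unskew
    StandardScaler(),                       # put features on the same scale
    PCA(n_components=2),                    # decorrelate / orthogonalize
    KMeans(n_clusters=4, random_state=0),   # cluster the transformed data
)
labels = pipe.fit_predict(rfm)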
H: How should I treat these non-English documents in the NLP task? So I have a small corpus of about 30k documents and about 50 documents in this corpus are in other languages (Persian, Chinese, Arabic, German, Spanish etc). I will be using this corpus for training a machine learning model. Now the question is: How should these non-English documents be treated? Should I exclude them from the final corpus and from training the model? or should I manually translate them (Requesting natives from each language to translate it for me) and include them in the final corpus? or should I use Google translate/DeepL to translate these non-English documents into English and then include them in the final corpus? Each document in the corpus under question is not larger than 500 words each. Any help or hint will be appreciated. AI: You can use these tips: Should I exclude them from the corpus and from training the model? You can do this if you are not short of data. Excluding 50 docs from 30K docs won't make a big difference in training, and the model's generalisation power won't be compromised. Should I manually translate them (requesting natives of each language to translate them for me) and include them in the final corpus? You should do this only if you really need those 50 docs because they hold significant information. I would not recommend this method personally. Should I use Google Translate/DeepL to translate these non-English documents into English and then include them in the final corpus? That is something you can do. Google Translate can help if the structure of the sentences in the docs is simple and plain; you can get good translations without needing native speakers. If you keep the documents, this is the method I would opt for. Conclusion: Translate the docs which hold importance using Google Translate. Otherwise, omit them: losing 50 out of 30k docs will not affect the model's performance significantly. Tip: Pasting 50 docs into the Google Translate console one by one is time-consuming, especially when the docs are in different languages, so unless they are important it is simpler to omit them; otherwise you would want a small application that calls a translation API to convert the docs.
H: Question about sklearn's StratifiedShuffleSplit I'm reading through the book Hands-On Machine Learning with Scikit-Learn and Tensorflow by Aurélien Géron. In a regression project on California Housing Prices, he goes over the concept of stratified sampling. I think I understand the concept as his explanation "the population is divided into homogeneous subgroups called strata, and the right number of instances is sampled from each stratum to guarantee that the test set is representative of the overall population." So in my own words, simply splitting the dataset with sklearn's train_test_split leaves the train and test set vulnerable to misrepresenting the ratios of categorical variables (i.e. population has 40% one category, 60% another, but the train/test set are totally different ratios of these categories), so stratifying ensures the sample is 'random', but still maintaining proper ratios within test and train splits. Please correct me if I'm wrong. Here's the code to his stratified sampling based on income categories (housing is the main dataframe): split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42) for train_index, test_index in split.split(housing, housing['income_cat']): strat_train_set = housing.loc[train_index] strat_test_set = housing.loc[test_index] I'm pretty confused with this code: 1) What does the variable 'split' represent? Does it comprise both the train and test split...? 2) In the 2nd line of code, what does split.split mean? I guess I'm confused with most of how StratifiedShuffleSplit divides the train and test set and why he needed to create this 'for' loop in order to create strat_train_set and strat_test_set. Thanks, Greg AI: Your summary of stratified sampling is correct. split is the StratifiedShuffleSplit object that configures the stratified splitting (it stores n_splits, test_size and random_state), and split.split(housing, housing['income_cat']) is the method that actually performs it: it is a generator that yields one (train_index, test_index) pair of row-index arrays per requested split, stratified on the income categories. Because it can yield as many splits as you ask for via the n_splits parameter, it is consumed with a for loop; here n_splits=1, so the loop body runs exactly once and simply unpacks the two index arrays into the train and test dataframes.
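As a quick sanity check after the loop, you can verify that the stratification worked by comparing the category proportions in the full dataset and in the test split; they should be nearly identical:

print(housing['income_cat'].value_counts(normalize=True))
print(strat_test_set['income_cat'].value_counts(normalize=True))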
H: How does the meta Random Forest Classifier determine the final classification? I am trying to understand exactly how the meta random forest classifier determines the final prediction, I understand that there is a voting system and an aggregation from the decision trees is used to find the final prediction, I have read from here: Classification Random Forests in Python: "Random forest is an ensemble decision tree algorithm because the final prediction, in the case of a regression problem, is an average of the predictions of each individual decision tree; in classification, it's the average of the most frequent prediction" I read in the RFC source code: Ensemble/Forest: "The predicted class of an input sample is a vote by the trees in the forest, weighted by their probability estimates. That is, the predicted class is the one with highest mean probability estimate across the trees" Does this mean that for example: We have 3 Classes (A, B and C) with 4 estimators, each tree has a certain amount of final leaf nodes with a class prediction; Probabilities of Class A, B or C are a result of the number of the class prediction from the leaf nodes, i.e Tree1: Class A is predicted 3 times out of 10 possible times, the other 7 times a different class was predicted. Class A: [0.30, 0.25, 0.30, 0.25] | mean = 0.275 Class B: [0.10, 0.40, 0.10, 0.40] | mean = 0.250 Class C: [0.70, 0.65, 0.70, 0.65] | mean = 0.675 So the Meta Classifier would predict Class C as it has the highest mean probabilities. Is this correct? Or am I looking at this completely the wrong way? AI: Each data point $x$ that you are predicting necessarily only lands in one leaf node for each tree. So if I have N trees, and $K_A$ is the number of trees in which $x$ ends up in a leaf node which predicts A, then we can say $x$ is class $A$ with "probability" $K_A/N$, and similarly for other classes. Then you can just make an argmax prediction for which class $x$ belongs to based on the "probabilities" (I keep putting this in quotes because it isn't an actual probability, but it's sometimes ok to think of it this way.)
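In scikit-learn specifically, each tree contributes its leaf's full class-probability vector (the class proportions of the training samples that ended up in that leaf), and the forest averages those vectors; the $K_A/N$ counting view above coincides with this when the leaves are pure. A small sketch that makes the averaging explicit, assuming hypothetical arrays X, y and X_new:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=4, random_state=0).fit(X, y)

tree_probs = np.stack([tree.predict_proba(X_new) for tree in rf.estimators_])
mean_probs = tree_probs.mean(axis=0)             # matches rf.predict_proba(X_new)
pred = rf.classes_[mean_probs.argmax(axis=1)]    # matches rf.predict(X_new)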
H: In sklearn Pipeline, why are all parameters fit_transform(), but the last one can be just fit()? I am reading through Hands-On Machine Learning with Scikit-Learn & TensorFlow. We are going over scikit-learn Transformation Pipelines for preparing data for the machine learning algorithms. Here is the code (housing_num is a dataframe of the numerical attributes of the main dataframe housing): num_pipeline = Pipeline([ ('imputer', Imputer(strategy='median')), ('attribs_adder', CombinedAttributesAdder()), ('std_scaler', StandardScaler()) ]) housing_num_tr = num_pipeline.fit_transform(housing_num) The text explains: The Pipeline constructor takes a list of name/estimator pairs defining a sequence of steps. All but the last estimator must be transformers (i.e., they must have a fit_transform() method). Why do we fit_transform() some, but only fit() without transforming others? I am guessing this has to do with the difference between estimators and transformers, which I am confused on in this context. So when using Pipeline, you can only include one estimator that won't be transformed, and it has to be the last input? ("all but the last estimator must be transformers") AI: To center and scale the data (make it have zero mean and unit standard deviation), you subtract the mean and then divide the result by the standard deviation. $$z' = \frac{x-\mu}{\sigma}$$ (Another common transformation is $z = \ln(x)$.) You do that on the training set of data. But then you have to apply the same transformation to your testing set (e.g. in cross-validation), or to newly obtained examples before forecasting. And you have to use the same two parameter values $\mu$ and $\sigma$ that you used for centering the training set. Hence, every sklearn transformer's fit() just calculates the parameters (e.g. $\mu$ and $\sigma$ in the case of StandardScaler) and saves them as internal object state. Afterwards, you can call its transform() method to apply the transformation to any particular set of examples. fit_transform() joins these two steps and is used for the initial fitting of the parameters on the training set $x$, while also returning the transformed $x'$. Internally, it just calls first fit() and then transform() on the same data. As for the Pipeline: every intermediate step has to hand transformed data on to the next step, so those steps must be transformers (have fit_transform/transform); the final step is typically a predictor, which only needs fit() (and later predict()), so it does not have to be a transformer. That is what "all but the last estimator must be transformers" means.
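A minimal illustration of the same fit/transform split outside a pipeline, with hypothetical arrays X_train and X_test:

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)   # learns mu and sigma from the training data, then transforms it
X_test_scaled = scaler.transform(X_test)         # reuses the same mu and sigma; no refitting on the test data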
H: predicting next observation using HMMLearn.multinomialhmm(discrete hmm) I have implemented a HMM using hmmlearn: states = ['healthy','sick'] observations = ['sleeping','eating','pooping'] model = HMM(n_components=2) model.n_features = 3 model.startprob_ = [0.7, 0.3] model.transmat_ = [ [0.8, 0.2], [0.4, 0.6] ] model.emissionprob_ = [ [0.2, 0.6, 0.2], [0.4, 0.1, 0.5], ] Also, I have a sequence of observations: obs = np.array([0,0,1,0,2,0,1,2,0,1,0,2,0,1,1,2,0]) obs = obs.reshape(-1, 1) Now, I would like to predict the next observation (at t+1), but don't know how to do this. (I've read the documentation but haven't found anything) AI: I found out that there is no built-in function to do this. You can use model.predict() to get the most likely sequence of hidden states (computed with the Viterbi algorithm) and then derive the distribution of the next state (and of the next observation) from the transition and emission matrices.
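A small sketch of that idea, reusing the fitted model and obs from the question; note it conditions on the single Viterbi-decoded path, whereas a fully Bayesian version would start from the filtered state distribution instead:

import numpy as np

states = model.predict(obs)                    # Viterbi-decoded hidden state sequence
last_state = states[-1]

next_state_dist = np.asarray(model.transmat_)[last_state]           # P(z_{t+1} | z_t)
next_obs_dist = next_state_dist @ np.asarray(model.emissionprob_)   # P(x_{t+1} | z_t)
print(next_obs_dist, next_obs_dist.argmax())   # distribution and most likely next observation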
H: why does transform from tfidf vectorizer (sklearn) not work I'm transforming a text in tf-idf from sklearn. I made the model: from sklearn.feature_extraction.text import TfidfVectorizer corpus = words vectorizer = TfidfVectorizer(min_df = 15) tf_idf_model = vectorizer.fit_transform(corpus) And now I'm making vectors for different sets of words (documents), like: word_set = ['dog', 'cat', 'foo'] v = vectorizer.transform(word_set) But I want just one vector of these words, to compare to other documents. But when I use transform, the shape of v becomes: <3x56492 sparse matrix of type '<class 'numpy.float64'>' with 3 stored elements in Compressed Sparse Row format> I want a vector with shape 1x56492, and not 3x56492.. I'm certainly missing something here. Maybe you guys have some tips? Thank you very much in advance. AI: As far as I can tell it's interpreting your new word_set word_set = ['dog', 'cat', 'foo'] as three separate documents containing one word each, whereas if you did word_set = ['dog cat foo'] it would interpret this as a single new document containing those words. What behavior are you expecting from this function? Is corpus = words a list of document strings, or a list of single words? If it's the latter, this is likely not doing what you think it is doing, and you should instead make corpus a list of document strings.
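A one-line check of the second behaviour, reusing the fitted vectorizer and word_set from your code:

v = vectorizer.transform([' '.join(word_set)])   # a single document containing 'dog cat foo'
print(v.shape)                                   # (1, 56492): one row vector you can compare to other documents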
H: How to design batches in a stateful RNN I am using TF Eager to train a stateful RNN (GRU). I have several variable length time sequences about 1 minute long which I split into windows of length 1s. In TF Eager, like in Keras, if stateful=True, "the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch." (source) Thus, how should I design my batches? I obviously cant sample random windows from random sequences. I also cant split a sequence into windows and place adjacent windows in the same batch (e.g. batch 1 = [[seq1 0-1s], [seq 1 1-2s], [seq1 2-3s], ...]), as the state from the previous window wont get passed to the next window, as is the point of a stateful RNN. I was thinking of mixing sequences in the same batch as in: batch 1 = [[seq1 0-1s], [seq2 0-1s], [seq3 0-1s], ...] batch 2 = [[seq1 1-2s], [seq2 1-2s], [seq3 1-2s], ...] ... However, there the issue is that the sequences have different length, and thus some will finish before others. So what is the best way to implement this? (FYI, I couldn't find anything in the academic literature or blogoshhere which discusses this, so refs would be great) Thanks! AI: Your specific case After [seq1 0-1s] (1st sec of long sequence seq1) at index 0 of batch b, there is [seq1 1-2s] (2nd sec of the same sequence seq1) at index 0 of batch b+1, this is exactly what is required when we set stateful=True. Note that the samples inside each batch must be the same length, if this is done correctly, difference in sequence-length between (not inside) batches causes no problem. That is, when all samples from batch b are processed, then next batch b+1 will be processed, and so on and so forth. A general example As a general example, for stateful=True and batch_size=2, a data set like seq1: s11, s12, s13, s14 seq2: s21, s22, s23 seq3: s31, s32, s33, s34, s35 seq4: s41, s42, s43, s44, s45, s46 where sij denotes j-th time step, must be structured like batch 1 batch 2 batch 3 batch 4 0 s21, s22 s23, <pad> s31, s32, s33 s34, s35, <pad> ... 1 s11, s12 s13, s14 s41, s42, s43 s44, s45, s46 or like (with overlap) batch 1 batch 2 batch 3 0 s21, s22 s22, s23 s23, <pad> ... 1 s11, s12 s12, s13 s13, s14 where, for example, long sequence s21, s22, s23 (3 time steps) is broken down to two sub-sequences s21, s22 and s23, <pad>. Also, as you see, it is possible to have batches with different sequence lengths (by using a custom batch generator). Note that <pad> (padded values) should be masked to prevent RNN from considering them as actual values (more info in this post). We can also avoid using padded values by opting for batch_size=1 which might be too restrictive (more info in this post). Here are two examples of a sequence with 5 time steps: s11 s12 s13 s14 s15 example 1 23, 25, 27, 24, 28 # 5 temperature readings for t to t+4 example 2 I, like, LSTM, very, much # 5 128-dim word embeddings Some resources You may find this article on stateful vs stateless LSTM helpful. Some quotes from the article: The stateless LSTM with the same configuration may perform better on this problem than the stateful version. and When a large batch size is used, a stateful LSTM can be simulated with a stateless LSTM. This blog on Stateful LSTM in Keras by Philippe Remy Some opinions on Keras github, like this.
H: Is converting a numerical target to binary helpful? I'm trying to solve an ML problem where the target variable is numeric, let's say the pollution level in a city. But the client is not interested in predicting the actual amount of pollutants, they are just interested in knowing whether the pollution level is high or low based on an agreed upon threshold. (High if the PM2.5 level is above 200, Low otherwise). Should I treat it as a regression problem and take the numeric PM2.5 levels as target or as a classification problem where I make another feature of high/low pollution level based on the threshold and use that binary variable as a target? What are the advantages and disadvantages of both and what impact can it have on accuracy, if any? AI: When you convert a variable from numerical to binary, what happens is that you lose information about the magnitude of the variable and summarise it into a >=X variable. It has advantages and disadvantages depending on the use your model will be put to: if your model has to determine exactly "how high" the pollution is, then it is not a good idea to convert it because you are losing crucial information; but if the city mayor declares a curfew whenever the pollution exceeds x, then a binary output is not only a good idea but necessary for the decision you are supporting with it. When you convert the variable to binary, the hypothetical model you obtain carries a different kind of information than the model with the numerical variable: you will no longer be comparing exact values (measured with RMSE), you will be measuring membership of a set (measured with AUC). If your client only wants to know whether a threshold was exceeded or not, you should explain what is achievable with each model and ask whether that meets the requirements of the job.
H: Prediction interval for general non-linear regression I have been working on a predictive model. With each prediction, we need to provide a score to express the confidence about our prediction. So I am looking at prediction interval (PI). In linear regression, I believe these can be obtained and well-documented. However, I am yet to find much reference for non-linear regression (such svr, gbr or other blackbox method for regression). Two methods that I have seen are given below: 1) Using bagging, we can generate many point prediction of each new data point, and then we get the interval from the distribution of these predictions around each new point. 2) using Quantile regression to get the upper and lower bound of the new point. Personally, I do like the bagging method, although I don't feel very convinced. Hence, I am reaching out to the community to get a general opinion or some other ideas that I haven't seen so far. AI: Bootstrapping (which I think you are referring to as bagging, a specific algorithm that incorporates bootstrapping) is your friend here. Simply train N models using N datasets, where these N datasets are created by sampling from your original dataset with replacement. Using these N different models, generate N predictions on an observation of interest. You can now use any sort of method to derive the confidence interval of interest with these N predictions, whether that is using a normality assumption, empirical percentiles (simple bootstrapping), etc. The one drawback I see with this method is that if you have many observations to predict, this will quickly become computationally expensive. That being said, bootstrapping is easily made parallel. EDIT: It should be made clear that what I have described in this answer from many years ago are confidence intervals around the mean, not prediction intervals. There are other ways to obtain actual prediction intervals, however, without making parametric assumptions. One such interval that I have come across is called the jackknife+/cv+, which can be explored in the following paper: https://arxiv.org/abs/1905.02928 To summarize the cv+ algorithm: Train k models using k fold cross validation. For each observation in your training dataset, using your k models compute: $$R_i = |Y_i - \text{prediction}_i|$$ using the one model that did not have observation i in its training set. In other words, stitch the training set (that will now contain a new column R_i) back together using the k validation folds, like when stacking. For a new observation, use the $R_i$ values above and your K models, and compute $$W_i^- = \text{prediction}_i - R_i$$ and $$W_i^+ = \text{prediction}_i + R_i$$ where the prediction is again generated from the one model that did not have the ith observation in its training set. The (1-$\alpha$)% prediction interval will be the $\alpha$ and $1 - \alpha$ empirical quantiles of $W_i^-$ and $W_i^+$, respectively.
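A minimal sketch of the bootstrap procedure described above (giving confidence intervals around the mean prediction, as the edit clarifies), with hypothetical arrays X_train, y_train and X_new, and gradient boosting standing in for any black-box regressor:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.utils import resample

preds = []
for seed in range(200):                                   # 200 bootstrap resamples
    Xb, yb = resample(X_train, y_train, random_state=seed)
    preds.append(GradientBoostingRegressor(random_state=seed).fit(Xb, yb).predict(X_new))

lower, upper = np.percentile(np.array(preds), [2.5, 97.5], axis=0)   # 95% interval, per new observation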
H: what does "blocks" mean in the context of HDFS ecosystem? is it the same concept in the context of a single hard disk drive? According to Hortonworks HDFS has demonstrated production scalability of up to 200 PB of storage and a single cluster of 4500 servers, supporting close to a billion files and blocks. what does "blocks" mean in this post? AI: Essentially yes but the size is very different. A hard drives default block size is 512 bytes but Hadoops default block size is 128 Megabytes. Hadoop also replicates blocks to maintain integrity of the data.
H: finding optimal solution $w$ and classification accuracy Suppose you are given $6$ one-dimensional points: $3$ with negative labels $x_1 = −1$, $x_2 = 0$, $x_3 = 1$ and $3$ with positive labels $x_4 = −3$, $x_5 = −2$, $x_6 = 3$. In this question, we first compare the performance of linear classifier with or without kernel. Then we solve for the maximum margin classifier using SVM. Consider a linear classifier of the form $f(x) = sign(w_1x+w_0)$. Write down the optimal value of $w$ and its classification accuracy on the above 6 points. There might be more than one optimal solution, writing down one of them is enough. My attempt: I understand that the data isn't linearly separable and that there will be some error, but I don't get how to get the optimal value of $w$. Do I minimize $f(x)$? But how do I take the derivative of $f(x)$? Any guidance would be appreciated, I'm a little lost. AI: The best that we can do is to classify $5$ points correctly and sacrifice one point. We want to classify $-1,0,1$ as negative and $-3,-2$ as positive (we have to sacrifice the point at $3$). The boundary with maximum margin would be in the middle of $-2$ and $-1$. That is $-\frac32$. $$sign(-(2x+3))=sign(-2x-3).$$
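A quick numerical check of that classifier on the six points, in plain NumPy:

import numpy as np

x = np.array([-1, 0, 1, -3, -2, 3])
y = np.array([-1, -1, -1, 1, 1, 1])   # negative labels as -1, positive as +1
pred = np.sign(-2 * x - 3)
print((pred == y).mean())             # 5/6 ≈ 0.83: only the point at x = 3 is misclassified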
H: Default value of learning rate in adam optimizer - Keras I am working on a image classification problem using Transfer Learning. Parameters used given below: Adam optimizer with learning rate - 0.0001 adamOpti = Adam(lr = 0.0001) model.compile(optimizer = adamOpti, loss = "categorical_crossentropy, metrics = ["accuracy"]) For testing I used adam optimizer without explicitly specifying any parameter (default value lr = 0.001). With the default value of learning rate the accuracy of training and validation got stuck at around 50%. And when I use learning rate = 0.0001 in the first epoch itself I could see that the accuracy is going to 90%. Could you please help me understand 1. why with lower value of learning rate the accuracy is increasing rapidly? 2. Also which of the above used learning rate is the better? 3. How could I make use of decaying learning rate in Keras? Thank you AI: Learning rate is a very important hyperparameter, and often requires some experimentation. There are some good Related questions here, make sure to check those out. With too large a learning rate, you might bounce around an optimum, or you might start off by sling-shooting out to a part of the parameter space where the gradients vanish. (With too small, you might take too long to converge to an optimum, or you might find a poor local optimum. These effects are lessened by adams momentum effect.) That said, the best learning rate generally depends on the problem. Presumably the default was chosen pretty well for general use, but your finding isn't surprising. There is a parameter decay for simple decay, or you can set up a callback for step decays.
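For question 3, here is a minimal sketch of two common options in (older, standalone) Keras: the optimizer's built-in decay argument and a step-decay schedule via a callback. The 0.5 factor and 10-epoch period are purely illustrative choices:

from keras.optimizers import Adam
from keras.callbacks import LearningRateScheduler

adamOpti = Adam(lr=0.0001, decay=1e-6)       # learning rate shrinks slightly after every batch update

def step_decay(epoch):
    return 0.0001 * (0.5 ** (epoch // 10))   # halve the learning rate every 10 epochs

lr_schedule = LearningRateScheduler(step_decay)
model.fit(x_train, y_train, epochs=50, callbacks=[lr_schedule])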
H: Soft SVM solving for $b$ I don't really understand how to approach this problem. I know that $w=\sum_{n=1}^{N}{a_ny_nx_n}$ and $y_n(w^T\cdot x_n+b)=1. $ So I can solve for $b$ from that equation but I can't figure out how to apply equation(1) that I'm supposed to use. AI: From $$y^{(n)}(w^T\cdot x^{(n)} + b)=1,$$ since $y^{(n)}$ is binary, we have $$w^T\cdot x^{(n)} + b = y^{(n)}$$ That is $$b=y^{(n)}-w^T\cdot x^{(n)}.\tag{2}$$ Now, let's examine \begin{align}&\sum_{m \in S} \alpha_m y^{(m)}\langle x^{(n)}, x^{(m)}\rangle\\&=\sum_{m \in S} \alpha_m y^{(m)}\langle x^{(n)}, x^{(m)}\rangle + \sum_{m \notin S} 0\cdot y^{(m)}\langle x^{(n)}, x^{(m)}\rangle \\ &=\sum_{m \in S} \alpha_m y^{(m)}\langle x^{(n)}, x^{(m)}\rangle + \sum_{m \notin S} \alpha_m\cdot y^{(m)}\langle x^{(n)}, x^{(m)}\rangle\\ &=\sum_{m=1}^N \alpha_m y^{(m)}\langle x^{(n)}, x^{(m)}\rangle \\ &=\langle \sum_{m=1}^N \alpha_my^{(m)}x^{(m)}, x^{(n)}\rangle\\ &= w^Tx^{(n)} \tag{3}\end{align} Using $(2)$ and $(3)$, you should be able to obtain the conclusion.
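To spell out the steps the answer leaves to the reader: multiplying $y^{(n)}(w^Tx^{(n)}+b)=1$ by $y^{(n)}$ and using $(y^{(n)})^2=1$ (labels are $\pm 1$) gives $w^Tx^{(n)}+b=y^{(n)}$, and substituting $(3)$ into $(2)$ yields $$b = y^{(n)} - \sum_{m\in S}\alpha_m y^{(m)}\langle x^{(n)}, x^{(m)}\rangle,$$ which holds for any support vector $x^{(n)}$; in practice one often averages this expression over all support vectors for numerical stability.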
H: Model comparison with CV using standard error Discovering the ML world with sklearn, I'm testing a large panel of models onto my dataset. This is for learning purpose but also for work so I want the final model to be as accurate as possible, while I can progress in my understanding of ML. I've separated my dataset (16k rows) into 80% training and 20% testing, and I'm testing at least KNN, Logistic, DecisionTree, RandomForest, NaivesBayes and maybe SVC (if my computer can handle it), with some Bagging and Boosting when I find out how to. My training sample comes in 4 ways: 2 sets of features (95 and 11), standartised (with a StandardScaler) or not. My outcome is binary and I'm using a custom scorer "amelioration" which maximalise the number of positives for the 30th percentile (easier to get it with the code at the end of my post), along with specificity and roc_auc. For each dataset, I cross validate (stratified and shuffled with a random state, 5 folds, with repetition when possible) all hyperparameters I find relevant and repeat this for each training sample. For each crossvalidation, I refit using my scorer so I can compare the results amongst models and datasets. For models I've tested so far, my mean_test_amelioration range from +42% to +114%. Finally, I'll measure the performances of the selected model on the testing sample and report results. I guess this can feel quite cumbersome to a pro (and I'm probably building a tank to kill a fly), but I've already learned so much in this manner. I'm only comparing all these models on the mean_test_amelioration and do not take into account standard error (for instance). Could this lead to overfitting so that my final model won't generalize well? If yes, how could I take variability into account ? Any educationnal link is also very welcome. PS: As this could be relevant, here is my custom scorer code: def get_amelioration(y_true, y_pred, **kwargs): """ If I select 30% of my sample with this algorithm, I will have `amelioration`% more positives in my selection than without :use as: make_scorer(get_amelioration, needs_proba=True, N=30) """ N = kwargs.pop('N', False) if kwargs: raise TypeError('Unexpected **kwargs: %r' % kwargs) decisions = (y_pred > np.percentile(y_pred, 100-N)).astype(int) tn, fp, fn, tp = metrics.confusion_matrix(y_true, decisions).ravel() v = (fp+tp)/(tn+fp+fn+tp) r = tp/(fp+tp) r_base = np.mean(y_true) #around 15% in my sample, expected to be stable amelioration = 100*(r/r_base-1) # print("N=%i, v=%0.3f, amelioration=%0.3f" %(N,v, amelioration)) if v<0.75*N/100: return 0 return amelioration AI: It is possible, however, this is usually not of great concern. If you would like to take into account the standard error, one possibility is the following: You compute the standard error of the best performing model (using your amelioration metric) Consider all models that perform within one standard error (or less if you prefer) Select the model with the simplest decision boundary This is a heuristic(!) based on Occam's razor that simpler models tend to generalize better. However, keep in mind that for complicated problems simpler models are probably just that: simpler. They don't have to be more accurate. Alternatively, instead of 3. you can select the model with the smallest standard error, if your main goal is to limit your exposure to this kind of variability. Also keep in mind that there is No Unbiased Estimator of the Variance of K-Fold Cross-Validation, so this method brings with it its own uncertainty. 
In general, I would just keep an eye on the standard error and if it's not too concerning just go with the numerically best-performing model.
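A minimal sketch of the one-standard-error heuristic described above, assuming a hypothetical dict scores mapping each model name to its array of cross-validation fold scores on the amelioration metric:

import numpy as np

best = max(scores, key=lambda m: scores[m].mean())
se = scores[best].std(ddof=1) / np.sqrt(len(scores[best]))
candidates = [m for m in scores if scores[m].mean() >= scores[best].mean() - se]
# among `candidates`, pick the simplest model (or the one with the smallest spread)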
H: Using a feature in prediction even if it gets zero as p-value? I created two binary classification based logistic regression models and I got these results: Model 1: Accuracy: 67.51% AUC: 65.21% Model 2: Accuracy: 67,99% AUC: 65,70% The second model is created based on an additional column (Col_x). But, when I try to use the following code to show the correlation, I got the result: (0.16402709962654458, 0.0) from scipy import stats stats.pearsonr(data['y'] , data['Col_x'] ) I would like to know if I can considerate Model2 an improvement of Model1 even if I have a p-value = 0.0 AI: For the correlation problem, this basically sums up why I dislike univariate feature selection. Just because a feature has low correlation with your response in isolation, does not mean that the variable itself is not informative. The variable could relate in a non linear way (so transforming it could be useful), it could work well with other variables in a multivariate setting, and also could be used to generate other variables through feature engineering. With respect to a "low p value", note that a lower p value suggests that the variable is significant, that is, the effect of the variable on the response is statistically different from 0. So if you were to use p values as a means of feature selection, you would select variables with smaller values. However, this is not at all a good way to select variables if your goal is to improve predictive performance. P values are meant to test very specific statistical hypotheses related to statistical inference, not prediction. Removing variables with high p values will likely lead to a loss in predictive performance because p values don't even answer the right question in this context. Also, realize that by conducting multiple hypothesis (looking at individual p values for coefficients) you further increase your chance of a false positive beyond what you set your significance level at (i.e. the cutoff that you use to deem a variable to be significant). If you want to do feature selection within the context of a linear model, look into the LASSO/Elastic Net. These models will shrink the coefficients of variables to zero if they do not improve the fit of the model. Otherwise, if you only have a handful of variables then go with model 2 since it is the highest scoring model (on this specific test set anyway). Possibly repeat model evaluation on different partitions of the dataset to see if model 2 is consistently better than model 1. This would allow you form confidence intervals which could be useful in deternining if the extra column gives a significant performance increase over a simpler model. Finally, I recommend not using AUC or accuracy for comparing the performance of models. These metrics are not proper scoring rules and may lead you to selecting models that don't discriminate between your classes as well as others.
H: Need a little help Understanding how to build model's in Keras I am trying to make a CNN in Keras, and to test the validity of my model i am trying to get it to train on MNIST dataset, so i am sure that everything is working fine, but unfortunately model is barely training and i suspect that nothing updating. My model is : model=Sequential() #conv1_1 model.add(Conv2D(128,kernel_size=3, strides=1, padding='SAME', use_bias=False, activation='relu',name='conv1_1',input_shape=(28,28,1))) #conv1_2 model.add(Conv2D(128, kernel_size=3, strides=1, padding='SAME', use_bias=False, activation='relu',name='conv1_2')) model.add(MaxPooling2D(pool_size=2,strides=2)) #conv2_1 model.add(Conv2D(64, kernel_size=3, strides=1, padding='SAME', use_bias=False, activation='relu',name="conv2_1")) #conv2_2 model.add(Conv2D(64, kernel_size=3, strides=1, padding='SAME', use_bias=False, activation='relu',name='conv2_2')) model.add(MaxPooling2D(pool_size=2,strides=2)) model.add(Flatten()) model.add(Dense(1024, activation='relu',name='Dense1')) model.add(Dropout(0.2)) model.add(Dense(512, activation='relu',name='Dense2')) model.add(Dense(10, activation='softmax',name='output')) Compiled with: model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy']) model.fit(X_train,y_train,batch_size=10,validation_split=0.2,epochs=10) My X_train and y_train look like: plt.imshow(X_train[0].reshape(28,28)) plt.show() y_train[0] array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.]) Here are the Results of first 3 epochs: Epoch 1/10 48000/48000 [==============================] - 45s 927us/step - loss: 14.2813 - acc: 0.1140 - val_loss: 14.4096 - val_acc: 0.1060 Epoch 2/10 48000/48000 [==============================] - 44s 915us/step - loss: 14.2813 - acc: 0.1140 - val_loss: 14.4096 - val_acc: 0.1060 Epoch 3/10 48000/48000 [==============================] - 44s 924us/step - loss: 14.2813 - acc: 0.1140 - val_loss: 14.4096 - val_acc: 0.1060 Epoch 4/10 48000/48000 [==============================] - 45s 930us/step - loss: 14.2813 - acc: 0.1140 - val_loss: 14.4096 - val_acc: 0.1060 This is my first Keras Model, and i think i am missing something important here. AI: There are two things I can suspect. First, the dropout rate at the last layer seems way to high. Its better to have a lower dropout rate after each CNN layer. Secondly, you should use a bias in your CNN layers. Try out this code as a starting point and then you can start tuning your model from here. Load the data from keras.datasets import mnist import numpy as np (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train = x_train.astype('float32') / 255. x_test = x_test.astype('float32') / 255. print('Training data shape: ', x_train.shape) print('Testing data shape : ', x_test.shape) Import Keras stuff import keras from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten from keras.layers import Conv2D, MaxPooling2D from keras.callbacks import ModelCheckpoint from keras.models import model_from_json from keras import backend as K Now we reshape the data such that it can fit with the tensorflow backend. This requires the channel to be the last dimension. We will also set up our one-hot encoded outputs # The known number of output classes. 
num_classes = 10 # Input image dimensions img_rows, img_cols = 28, 28 # Channels go last for TensorFlow backend x_train_reshaped = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1) x_test_reshaped = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1) input_shape = (img_rows, img_cols, 1) # Convert class vectors to binary class matrices. This uses 1 hot encoding. y_train_binary = keras.utils.to_categorical(y_train, num_classes) y_test_binary = keras.utils.to_categorical(y_test, num_classes) Define the model model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(num_classes, activation='softmax')) model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy']) Train the model epochs = 10 batch_size = 128 # Fit the model weights. model.fit(x_train_reshaped, y_train_binary, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test_reshaped, y_test_binary)) Evaluate the model score = model.evaluate(x_test_reshaped, y_test_binary, verbose=0) print('Model accuracy:') print('Test loss:', score[0]) print('Test accuracy:', score[1])
H: Make the CNN say "I don't know" I am currently working on an image classification problem. To ease the implementation I used transfer learning in Keras with Resnet50 as the base model. I have 2 classes, however there are many cases in which the input image may not belong to either of the two classes. In such cases the CNN should be able to output Unknown. Note: I cannot create a separate class for unknown as it can come from any distribution. I read that a Bayesian neural network would help in such cases. Could you please help me in understanding how I can implement a Bayesian CNN with transfer learning. Any material or reference link would be really helpful. Thank you AI: You could put a one-class classification model before your CNN. This would mean that you treat both your classes as one and then frame it as an anomaly detection problem. There are several different ways of achieving this. One way could be to do dimensionality reduction on the images and then use that image encoding to train an outlier detector like a one-class SVM. You can get some ideas here: Learning Deep Features for One-Class Classification Anomaly Detection using One-Class Neural Networks You can also look at cbeleites' answer to this related question. This was actually where I learned about one-class classification on images and it goes into a bit more detail and explains some caveats with using it.
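A rough sketch of the "dimensionality reduction + one-class SVM" idea, assuming hypothetical arrays train_feats (encodings of your known-class training images, e.g. the CNN's penultimate-layer activations) and new_feats for incoming images; the 50 components and nu=0.05 are illustrative:

from sklearn.decomposition import PCA
from sklearn.svm import OneClassSVM

pca = PCA(n_components=50).fit(train_feats)                         # compress the image encodings
detector = OneClassSVM(nu=0.05, gamma='scale').fit(pca.transform(train_feats))

known = detector.predict(pca.transform(new_feats))                  # +1 = looks like the training data
# route samples flagged -1 to the 'Unknown' output; send the rest to your 2-class CNN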
H: How to interpret this 'Zipf' (survival, war casualties) plot? Help me interpret this chart please? What do the X and Y axes mean here? (I asked the author directly but they didn't respond for some reason.) Any thoughts on what 'Survival Function' might mean here? War Casualties from 1800 until today, n data points AI: Here, the survival function $Y'$ is the complementary CDF of war casualties: the fraction (empirical probability) of wars in the dataset with more than $X'$ casualties. If $X'$ is close to zero, this probability is close to one. When variable $Y'$ (survival function) depends on $X'$ (war casualties) according to Zipf's law, that means $$Y'=C'{X'}^{-\alpha}$$ which is a power-law relationship. If we take the logarithm of the relationship, we have $$\text{log}\,Y'=-\alpha\,\text{log}\,X'+\text{log}\,C'$$ By setting $X=\text{log}\,X'$, $Y=\text{log}\,Y'$, and $C=\text{log}\,C'$, it becomes $$Y=-\alpha X + C$$ This means if we plot the logarithm of the variables (a log-log plot), we should see a line with negative slope. Of course, in real-world scenarios, variables would follow this relationship only approximately. For example, in the plot, the slope becomes closer to zero when $X$ is smaller. Note that in the plot the axes still show the original variables $Y'$ (between 0 and 1) and $X'$ (on the order of millions).
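If you want to estimate the exponent $\alpha$ yourself from the plotted data, a quick sketch assuming arrays x and y holding the $X'$ and $Y'$ values:

import numpy as np

slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
alpha = -slope        # the tail exponent; intercept estimates log C'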
H: Splitting train/test sets by an identifier? I know sklearn has train_test_split() to split a train and test set. But I read that, even with setting a random seed, if your actual dataset is updated regularly, the random seed will reset with each updated dataset and take a different train/test split. Doing this, your ML algos will eventually cover the whole dataset, defeating the purpose of the train/test split because it'll eventually train on too much of the whole dataset over time. The book I'm reading (Hands-On Machine Learning with Scikit-Learn and Tensorflow) gives this code to split train/test by id: # Function to check test set's identifier. def test_set_check(identifier, test_ratio): return crc32(np.int64(identifier)) & 0xffffffff < test_ratio * 2**32 # Function to split train/test def split_train_test_by_id(data, test_ratio, id_column): ids = data[id_column] in_test_set = ids.apply(lambda id_: test_set_check(id_, test_ratio)) return data.loc[~in_test_set], data.loc[in_test_set] And it says when there's no ID column given, to create one either by indexing the rows or creating a unique index from one of the variables. My questions are: What is the 3rd line doing: crc32(np.int64(identifier)) & 0xffffffff < test_ratio * 2**32 What is the anonymous function doing in the 2nd to last line? lambda id_: test_set_check(id_, test_ratio) In practice, do you commonly split datasets by id in this manner? Thanks, Greg AI: In Auriel Geron's book, there is a short description of the approach: you could compute a hash of each instance’s identifier, keep only the last byte of the hash, and put the instance in the test set if this value is lower or equal to 51 (~20% of 256). This ensures that the test set will remain consistent across multiple runs, even if you refresh the dataset. The new test set will contain 20% of the new instances, but it will not contain any instance that was previously in the training set. While a full explanation of what exactly happens and why is probably best placed on StackOverflow, I can try to answer your questions, first with some background info. The method uses a cyclic redundancy check, which is a method of checking that the raw blocks of memory have not been damaged/changed. It is a way to ensure data integrity, e.g. in network traffic - checking if a message way altered between being sent and received. For train/test splits, it is checking the unique identifier of each sample. We have a column that gives each sample an ID - this should never be changed! Don't delete rows, only append to the end with new unique IDs. In this part: test_ratio * 2**32, the part $2^{32}$ represents the largest integer of a 32-bit system. 0xFFFFFFFF is a large number; it's the hexadecimal representation of $2^{32}-1$ To answer your questions: What is the 3rd line doing: crc32(np.int64(identifier)) & 0xffffffff < test_ratio * 2**32 Based on the information I gave above, we see the crc32 function finds the checksum value in memory (of the unique identifier). If we know the unique ID has never changed, then We ensure that crc32(np.int64(identifier)) & 0xffffffff will always return exactly the same numeric value, across all Python versions and platforms. Imagine we give IDs in the range 0-80 for train, and 81-100 for test. No we want to make sure a sample'd s ID falls in the first bucket. We check its ID is simple less that 81, right? Well the numeric value we made above is checked to be less than our test_ratio * 2**32, where 2**32 is the largest 32-bit number. 
In other words, the line checks whether the hashed ID falls below the cutoff test_ratio * 2**32; if it does, the sample is assigned to the test set, otherwise to the training set. What is the anonymous function doing in the 2nd to last line? lambda id_: test_set_check(id_, test_ratio) This simply applies our test_set_check function to each sample's unique identifier, using the apply method on a Pandas Series object (here it is one column of a Pandas DataFrame). In practice, do you commonly split datasets by ID in this manner? Not really... Scikit-Learn's train_test_split is often good enough. I think there are many other ways to remove bias and errors from your models before worrying too much about the impact of random splits. For example, snooping bias, whereby you analyse the entire dataset yourself before deciding on a model architecture/pipeline, thereby incorporating knowledge of the entire distribution, which inherently biases your model. There is also bias from overfitting, e.g. in sequential imaging data (think frames of videos) where the background is consistent even though the objects you might want to detect are not. Your model will learn what to expect based on the background, which is not robust! Here you might look into using a geographical split (not random at all). On a side note, there is also a slightly more robust way of setting random seeds in Python (instead of using NumPy's random seed generator). Have a look here for some differences. Helpful resources: https://stackoverflow.com/questions/36819849/detect-int32-overflow-using-0xffffffff-masking-in-python https://pynative.com/python-random-module/ https://stackoverflow.com/questions/30092226/how-to-calculate-crc32-with-python-to-match-online-results https://stackoverflow.com/questions/49331030/bitwise-xor-0xffffffff/49332291#49332291
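For completeness, here is a minimal sketch of how the two functions from the question could be used when the data has no natural ID column; the DataFrame, column name and 20% ratio are made up for illustration, and the approach is only safe if new rows are appended at the end and none are ever deleted (otherwise the row index is not a stable identifier).

import numpy as np
import pandas as pd
from zlib import crc32

# hypothetical, ever-growing dataset with no natural ID column
df = pd.DataFrame({"feature": np.random.randn(1000)})

# promote the row index to an "index" column and use it as the identifier
df_with_id = df.reset_index()
train_set, test_set = split_train_test_by_id(df_with_id, 0.2, "index")

print(len(train_set), len(test_set))  # roughly an 80/20 split

When the dataset is refreshed and the code is rerun, every row keeps the same ID and therefore the same hash, so no row ever migrates from the training set into the test set or vice versa.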
H: How to replace NaN values for image data? My data set has a total of 200 columns, where each column corresponds to the same pixel in all of my images. In total, I have 48,500 rows. The labels for the data range from 0-9. The data looks something like this: raw_0 raw_1 raw_2 raw_3 raw_4 0 120.0 133.0 96.0 155.0 66.0 1 159.0 167.0 163.0 185.0 160.0 2 45.0 239.0 66.0 252.0 NaN 3 126.0 239.0 137.0 NaN 120.0 4 226.0 222.0 153.0 235.0 171.0 5 169.0 81.0 100.0 44.0 104.0 6 154.0 145.0 76.0 134.0 175.0 7 77.0 35.0 105.0 108.0 112.0 8 104.0 55.0 113.0 90.0 107.0 9 97.0 253.0 255.0 251.0 141.0 10 224.0 227.0 84.0 214.0 57.0 11 NaN 13.0 51.0 50.0 NaN 12 82.0 213.0 61.0 98.0 59.0 13 NaN 40.0 84.0 7.0 39.0 14 129.0 103.0 65.0 159.0 NaN 15 123.0 128.0 116.0 198.0 111.0 Each column has around 5% missing values and I want to fill in these NaN values with something meaningful. However, I'm not sure how to go about this. Any suggestions would be welcome. Thank you! AI: Given you have images stretched out as columns in a table with ~48,500 rows, I am assuming you have the raw images that are 220x220 in dimension. You can use a function available via OpenCV called inpaint, which will restore missing pixel values (for example black pixels of degraded photos). Here is an image example. Top-left shows the image with missing values (in black). Top-right shows just the missing values (the mask). Bottom-left and bottom-right are the final output, comparing two different algorithms for filling the images. I would suggest trying both methods on your images to see what looks best. Have a look at the Documentation for more details on the algorithms themselves. Here is the documentation of the actual function. As for code, it will look something like this: import cv2 as cv # you will need to install OpenCV (the opencv-python package), which is imported as cv2 dst = cv.inpaint(img, mask, 3, cv.INPAINT_TELEA) the first argument is your image with missing values the second is the mask, with locations of where missing pixels are, i.e. which pixels should be filled/interpolated. third is the radius around missing pixels to fill fourth is the flag for the algorithm to use (see link above for two alternatives) For each image, you can generate the 8-bit input image and the mask with something like this: img = np.nan_to_num(image).astype(np.uint8) # inpaint needs an 8-bit image, so replace NaNs before converting mask = np.isnan(image).astype(np.uint8) # 1 where a pixel is missing, 0 elsewhere; inpaint expects an 8-bit mask Note: '==' doesn't work with np.nan
H: Why would one crossvalidate the random state number? Still learning about machine learning, I've stumbled across a kaggle (link), which I cannot understand. Here are lines 72 and 73: parameters = {'solver': ['lbfgs'], 'max_iter': [1000,1100,1200,1300,1400,1500,1600,1700,1800,1900,2000 ], 'alpha': 10.0 ** -np.arange(1, 10), 'hidden_layer_sizes':np.arange(10, 15), 'random_state':[0,1,2,3,4,5,6,7,8,9]} clf = GridSearchCV(MLPClassifier(), parameters, n_jobs=-1) As you can see, the random_state parameter is being tested across 10 values. What is the point of doing this? If one model performs better with some random_state, does it make any sense to use this particular parameter on other models? AI: I personally think that the general idea of optimising your model with different random seeds is not a good idea. There are many other, more important, aspects of the modelling process that you can worry about, tweak and compare before spending time on the effects of random initialisation. That being said, if you just want to test the effect of random initialisation of model weights on a final validation metric, this could be an approach to do so. Kind of the reverse argument to my point above. If you can show for different random seeds (ceteris paribus: with all other parameters equal) that the final model performs differently, it suggests that there is either inconsistency in the model, or even a bug in the code. I would not expect a well-validated model to give hugely differing results when run with a different random seed, so if it does, it tells me something weird is going on!
H: Data enrichment of geographical records I have a user_data table with various fields, some of them based on geography. I'd like to enrich the data with additional columns, like expected_income_in_region, city_population, life_expectancy_in_state, etc... for each user record. I'd like to use these extended fields in my SQL query (prior to additional Python manipulations). Is there a standard way of getting hold of this data, or do I need to scrape it from the US Census Bureau and format it myself? AI: If you are looking to use US Census data, the American FactFinder website: https://factfinder.census.gov/faces/nav/jsf/pages/index.xhtml allows you to search for Census data in different ways and then download tables in csv format. For example, on the Community Facts site, you can enter a state, county, city, town, or zip code and obtain the population of the geography that you entered. Then you can click on the subjects listed on the left side of the page (Age, Business and Industry, Education, etc.) to see other interesting information about your geography, or click the links under "Popular tables for this Geography" to see tables containing other data for your community. Or you can use the Advanced Search feature to search all available data using all of the search features provided by the site. Also, I don't know much about this myself, but there exists at least this one Github repository with "SQL scripts and other supporting files for importing recent American Community Survey releases from the US Census Bureau datasets". The scripts are released under the GNU General Public License.
H: If the set of all possible states changes each time, how can Q-learning "learn" anything? I found this resource that explains q-learning with a very simple example. Make it a 2D problem, a rectangle instead of a line, and it's still simple. The only difference is that now there are 2 more possible actions (up and down). My question is: if the length and height of the rectangle are random, as well as the starting position and the location of the Treasure, how can the bot apply the knowledge acquired to the new problem? Is there an evolved version of q-learning for problems with dynamical-states? AI: if the length and height of the rectangle are random, as well as the starting position and the location of the Treasure, how can the bot apply the knowledge acquired to the new problem? You have two possible approaches here, depending on how the problems are being presented to you: If the agent has time to learn/plan on each environment separately, then you need an agent that has capability to learn each environment. A simple tabular Q-learning agent already has this capability, up to a certain size of problem (where the number of states and actions would fit in memory and can be iterated over in simulation enough times). Beyond that size, provided you can come up with a fixed feature set capable of representing any shape and size of problem that the agent could be presented with, and used e.g. DQN or other approximation technique, then you still have a generalised learning bot. A bot that is generalised during training to attempt to solve new instances with variation needs to be trained with many variations and more state data. If shape, size and layout of the maze can change between episodes, then this data must become part of the state. This can expand the state space a lot, and requires different representations. A simple representation of grid spaces would be actual map of the grid as a rectangular "image", assuming the agent start, walls and goal position could be almost anywhere within the space. If the maze has lots of objects, then you can put each object type into a separate "channel" and use a Convolutional Neural Network as part of the Q function approximation. If the area is more sparse with just a few objects (e.g. just the agent, a single blocker and the treasure) then it would be easier to use a direct vector representation of positions of the objects and use a fully connected neural network. For a toy problem where key positions can change between episodes, yet this still can be solved easily by a tabular agent, see Open AI's Taxi-V2 which is an implementation of a classic control problem where the locations of a sub-goal and goal are randomised on each episode.
H: How many features can we input for a SVM to classify? I am new to SVM classifiers. I read on the internet that SVMs are binary classifiers and also many SVMs, as described in research papers, only take 2 features as the input. (e.g. https://scikit-learn.org/stable/auto_examples/svm/plot_iris.html) My question is, does it have to be 2 input features? Can we use more than 2? If so, how do we write this code in python? AI: There is no restriction on the number of features. The syntax is exactly the same. For your tutorial, rather than X = iris.data[:, :2] which takes only $2$ columns, just change it to a bigger number if those columns are in your dataframe. It is a common practice that the last column holds the $y$ values, in which case just drop that column by using $-1$. In some online tutorials, we use only $2$ features for the convenience of visualization. However, note that we do not want to use too many features, as we want to build a model that can be generalized. A simple model that works is better than a complicated model.
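For instance, a minimal sketch of the same iris tutorial using all four features instead of two (the RBF kernel and the default split are arbitrary choices):

from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

iris = datasets.load_iris()
X = iris.data            # all four features instead of iris.data[:, :2]
y = iris.target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = svm.SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
print(clf.score(X_test, y_test))   # accuracy on the held-out data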
H: Is linear regression suitable for these data? I have a data set predicting a continuous variable, $Y$. I have $15$ to $20$ potential feature variables, most of which are categorical and some of which are ordinal. These have been converted to numerical values. I have two questions. Is linear regression suitable in this case? If the variables do not show linear relationships with $Y$, is linear regression still suitable? Otherwise, which algorithms, hopefully existing in scikit-learn, might work? AI: You can employ the linear regression algorithm even for categorical data. The point is whether your data is learnable or not. For instance, take a look at your data and see whether an expert could really find the output by looking at the input vector. If that is possible, your task can be learnt with linear regression. About linearity, the point is that linear regression can also learn nonlinear mappings. You just have to provide enough higher-order polynomials of your current feature space, which is not an easy task. For instance, you can expand the current feature space by adding the square of each feature to it. You will observe that this may perform better than the plain model if your mapping is not linear, but you may still have error. Consequently, you have to supply more polynomial features, but you do not know which to use. An alternative to linear regression which does not need extra features is the multi-layer perceptron (MLP), a neural network that can learn nonlinear mappings directly. You can take a look at the official page of SKlearn for applying them. Furthermore, you can take a look here for applying them.
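To make both suggestions concrete, here is a minimal scikit-learn sketch; it assumes a numerically encoded feature matrix X_train and target y_train, and the polynomial degree and layer sizes are arbitrary choices rather than recommendations.

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# linear regression on an expanded feature space (squares and interaction terms)
poly_model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
poly_model.fit(X_train, y_train)

# multi-layer perceptron, which learns the nonlinearity itself
mlp_model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=1000))
mlp_model.fit(X_train, y_train)

Comparing the two pipelines with cross-validation on your data is the most reliable way to decide which of them is suitable.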
H: memory error while converting images into an array I am working on a facial recognition use case. I have 57k jpg images and am converting them into an array. While executing the program, I am getting a memory error. The function I am using: def image_array(l): features = [] for pgm in l: pic = image.load_img(pgm, target_size=(224, 224)) x = image.img_to_array(pic) x = np.expand_dims(x, axis=0) x = preprocess_input(x) features.append(x) npfeatures = np.array(features) img_dt = np.rollaxis(npfeatures, 1, 0) return img_dt[0] The input for this function is a list which looks like: l =['/home/user/image1.jpg','/home/user/image2.jpg','/home/user/image3.jpg'......] The error that I am getting: Traceback (most recent call last): File "FR.py", line 145, in <module> vec_image1 = image_array(final_df['image1'].values.tolist()) File "FR.py", line 140, in image_array npfeatures = np.array(features) MemoryError The imports that I used for above function are: > from keras.preprocessing import image > from keras.applications.vgg16 import preprocess_input AI: You can actually compute how much memory it will take to hold 57,000 images in memory (it is a lot!). You are also holding them twice: once in the features list and then again in npfeatures. This second part makes a full copy of the entire features list. Hence why it runs out of memory there. Here are some starting steps that should help you understand the limitations of your approach and perhaps get a working method: 1. You don't need this line: x = np.expand_dims(x, axis=0): They do that in the Keras documentation, because you need a single image to have an additional dimension for the batch size. You do not need that because you append many images to a list (features), and the length of that list is the same thing, the batch size (the number of images). 2. Do your own preprocessing For VGG16 (and VGG19), preprocess_input does not rescale to [-1, +1]; it subtracts the ImageNet channel means and reorders the channels to BGR, which you can replicate yourself on a NumPy array like this: x = x[..., ::-1] - [103.939, 116.779, 123.68] ... and then remove the line with preprocess_input. (The [-1, +1] scaling, x = x / 127.5 - 1, is what 'tf'-mode models such as Inception or MobileNet use.) 3. Break it down into smaller pieces: You can then either feed them directly into your models (if that is possible), or save the npfeatures to disk, one chunk at a time. 4. Try seeing how much memory your machine has while running this script: Add a simple line after features.append(x), like this: print("Loaded {} images".format(len(features))) If you are running this on a Linux machine or Mac OSX, try using a tool like htop in the terminal. There is also the system monitor... same for Windows. You should be able to see the memory consumption grow until the point your script crashes. Now you know how many images you can do in one cycle - the last printed number before the crash.
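As a sketch of point 3, you could process the list of paths in fixed-size chunks and write each preprocessed chunk to disk; the chunk size, output directory and file naming below are arbitrary, and the output directory is assumed to exist.

import numpy as np
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input

def save_features_in_chunks(paths, chunk_size=1000, out_dir="features"):
    # process the image list in fixed-size chunks and save each chunk to disk
    for start in range(0, len(paths), chunk_size):
        chunk = paths[start:start + chunk_size]
        batch = np.zeros((len(chunk), 224, 224, 3), dtype=np.float32)
        for i, path in enumerate(chunk):
            pic = image.load_img(path, target_size=(224, 224))
            batch[i] = image.img_to_array(pic)
        batch = preprocess_input(batch)   # works on a whole batch at once
        np.save("{}/chunk_{:04d}.npy".format(out_dir, start // chunk_size), batch)

Each saved chunk can later be loaded with np.load one at a time, so only chunk_size images ever sit in RAM.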
H: What will be the required time to process 500GB of images using an NVIDIA GEFORCE 930M GPU I have an image dataset of size 500GiB, and my system specs are NVIDIA GEFORCE 930M, 12GB of RAM and Intel Core i5. I have the following questions: Is it possible for such a large dataset to be used on my local machine? If yes, how much time will be required for one epoch or, equivalently, one iteration? Any links or references on how to compute the required processing time would be helpful. If my system is not good enough, what other possible solutions do I have? AI: The large size of your data is acceptable for deep learning and big data projects. Your system is also acceptable, though it is not powerful. If you have enough hard disk to store it all, that will suffice, which means you can train your network. The elapsed time for each epoch depends on multiple aspects. For instance, some important elements are the batch size and your vectorized implementation, the bottleneck between the disk and RAM, the bottleneck between RAM and GPU, the size of the model, the size of the training data, the memory size of your GPU alongside the size of your RAM, the size of each data point, the load which is imposed on your GPU by your OS, and so forth. The easiest way is to code your network and try it yourself. As I've mentioned, with the current setup you can train your network, but you may not have very fast computation. However, you can use some techniques to facilitate your training phase as much as possible. For instance, you have two main bottlenecks. The first bottleneck, which exists between disk and RAM, can be dealt with using generators. Namely, you can employ generators to decrease the number of disk calls. The other bottleneck, between RAM and GPU, can be handled using a vectorized implementation of your neural network. After loading your network, you can find the appropriate batch size to use all available GPU memory. I also want to point out that the current GPU you have may have space limitations. This can incur difficulties when your network is very large. In such cases, you won't be able to load your entire network onto your GPU.
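As a rough sketch of the generator idea, you can write a plain Python generator that loads only one batch of images from disk at a time and feed it to Keras; the paths, labels, batch size and 1/255 scaling here are placeholder assumptions.

import numpy as np
from keras.preprocessing import image

def batch_generator(paths, labels, batch_size=32):
    # loops forever, holding only one batch of images in RAM at a time
    while True:
        for start in range(0, len(paths), batch_size):
            batch_paths = paths[start:start + batch_size]
            X = np.array([image.img_to_array(image.load_img(p, target_size=(224, 224)))
                          for p in batch_paths]) / 255.0
            y = np.array(labels[start:start + batch_size])
            yield X, y

# hypothetical usage with an already-compiled Keras model:
# model.fit_generator(batch_generator(train_paths, train_labels),
#                     steps_per_epoch=len(train_paths) // 32, epochs=10)

Because only one batch is in memory at any moment, the 500GB dataset never has to fit in your 12GB of RAM; the price you pay is repeated disk reads, which is exactly the disk-to-RAM bottleneck mentioned above.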
H: Problem trying to build my own sklean transformer I build the following sklearn transformer : class Cat2Rat( BaseEstimator , TransformerMixin ) : def __init__( self , col2trans ) : self._col2trans = col2trans def fit( self, X, y = None ): return self def transform( self , X , y = None ) : for col in self._col2trans : p = X[ col ].value_counts() / X.shape[0] dic = dict( [ ( i , p[i] ) for i in ( X[ col ].value_counts() ).index ] ) X.replace( { col : dic } , inplace = True ) return X This transformer is replacing catagorical values by their rates. For example : myarray = np.array([ [ 1 , 1 , 3 , 'v' , 0 ] , [ 2 , 2 , 2 , 'v' , 1 ] , [ 4 , 5 , 1 , 'w' , 1 ] , [ 2 , 1 , 9 , 'w' , 1 ] , [ 1 , 0 , 4 , 'w' , 1 ] ] ) colnames = [ 'one', 'two', 'three' , 'four' , 'target' ] df = pd.DataFrame( myarray , columns = colnames ) Value 'v' ( 'w' ) for column 'four' is replaced by 2/5 ( 3/5 ). My purpose is to fit the transformer on df and apply it to another dataframe df2 : myarray2 = np.array([ [ 2 , 7 , 3 , 'v' , 0 ] , [ 9 , 2 , 2 , 'v' , 0 ] , [ 4 , 5 , 1 , 'w' , 1 ] ] ) colnames2 = [ 'one', 'two', 'three' , 'four' , 'target' ] df2 = pd.DataFrame( myarray2 , columns = colnames2 ) I am doing this that way : # Transformer instance trsf = Cat2Rat( [ 'four' ] ) # Fitting trsf.fit( df ) # Then applying trsf.transform( df2 ) But the rates are those of column 'four' values of df2 not df ( on which the transformer was fitted ). I must have been missing something on the way to properly build such a transformer. Could someone give some clue on how to fix the transformer so that it gives proper result? Thanks. AI: The definition of dic should happen inside the fit method; you're wanting the rates to be based on the dataframe passed to fit. So you'll want to set dic as another class attribute, to be referenced in the transform method. (This kind of replacement is called target encoding, and there's an existing package to do it in sklearn.)
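A minimal sketch of what that change could look like (the attribute name _dics is arbitrary; note that categories unseen during fit will map to NaN, which you may want to fill afterwards):

from sklearn.base import BaseEstimator, TransformerMixin

class Cat2Rat(BaseEstimator, TransformerMixin):
    def __init__(self, col2trans):
        self._col2trans = col2trans

    def fit(self, X, y=None):
        # learn one category -> rate mapping per column, from the fitted data only
        self._dics = {}
        for col in self._col2trans:
            self._dics[col] = (X[col].value_counts() / X.shape[0]).to_dict()
        return self

    def transform(self, X, y=None):
        X = X.copy()   # avoid mutating the caller's DataFrame
        for col in self._col2trans:
            # categories unseen during fit become NaN
            X[col] = X[col].map(self._dics[col])
        return X

With this version, trsf.fit(df) stores the rates computed on df, and trsf.transform(df2) applies those stored rates rather than recomputing them on df2.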
H: How is hinge loss related to primal form / dual form of SVM I'm learning SVM and many classic tutorials talk about the formulation of the SVM problem as a convex optimization problem: i.e. we have the objective function with slack variables, subject to constraints. Most tutorials go through the derivation from this primal problem formulation to the classic formulation (using Lagrange multipliers, getting the dual form, etc...). As I followed the steps, they make sense eventually after some time of learning. But then an important concept for SVM is the hinge loss. If I'm not mistaken, the hinge loss formula is completely separate from all the steps I described above. I can't find where the hinge loss comes into play when going through the tutorials that derive the SVM problem formulation. Now, I only know SVM as a classic convex optimization / linear programming problem with its objective function and slack variables that is subject to constraints. How is that related to hinge loss?? AI: Hinge loss for sample point $i$: $$l( y_i, z_i) = \max(0, 1-y_iz_i)$$ Let $z_i=w^Tx_i+b$. We want to minimize $$\min \frac1n \sum_{i=1}^nl(y_i, w^Tx_i+b)+\|w\|^2$$ which can be written as $$\min \frac1n \sum_{i=1}^n\max(0,1-y_i (w^Tx_i+b))+\|w\|^2$$ which can be written as $$\min \frac1n \sum_{i=1}^n \zeta_i + \|w\|^2$$ subject to $$\zeta_i \ge 0$$ $$\zeta_i \ge 1-y_i (w^Tx_i+b)$$ The constraints are exactly where the hinge loss enters: each term $\max(0, 1-y_i(w^Tx_i+b))$ is replaced by a slack variable $\zeta_i$ that must be at least as large as both arguments of the max, and since we are minimizing, at the optimum $\zeta_i$ equals that max. So the familiar soft-margin primal with slack variables is just the unconstrained hinge-loss formulation rewritten as a constrained minimization, i.e. a reformulation of a min-max problem.
H: How to find combined feature contributions in a random forest algorithm I am currently working on a pancreatic cancer dataset which has numerous features including smoking, family history, age, etc. I have used the feature_importances_ method in sklearn to find and plot the individual feature importances, but wanted to know how to find the importance of combined variables (e.g. are those who smoke and have a family history of pancreatic cancer much more likely than others?). I know it is not as simple as adding the individual feature contributions for the combined features, and wanted to know if there is a specific method/function that can be used to do that. I have also used treeinterpreter to find the feature contributions. AI: My recommendation would be to use Partial Dependence Plots, which show the marginal effect one or two features have on the predicted outcome. They plot the average predictions over a range of values of the predictor(s) you specify, with the influence of all other predictors "averaged out". If you are familiar with linear regression, it is like plotting the predicted value of the target for different values of a predictor multiplied by the estimated weight/coefficient. In your question, you are interested in two predictors (smoking and family history), so you would plot a two-way partial dependence plot showing the dependence of the target variable on joint values of smoking and family history. There is a PDPBox package you can install that produces a variety of very nice plots and has some tutorials: https://github.com/SauceCat/PDPbox. It claims to support all Scikit-Learn algorithms. There is also a Cross Validated Q&A on interpreting PDP's for Random Forests (produced with R): https://stats.stackexchange.com/questions/121383/interpreting-y-axis-of-a-partial-dependence-plots
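As a rough sketch of a two-way partial dependence plot, here is what the scikit-learn side could look like (the estimator, feature names and plotting helper are assumptions; depending on your scikit-learn version you may need to pass column indices instead of names, or use PartialDependenceDisplay.from_estimator, which replaces plot_partial_dependence in newer releases):

from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import plot_partial_dependence  # PartialDependenceDisplay in newer versions

# X is the feature DataFrame, y the pancreatic-cancer label (both assumed to exist)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# a single tuple requests a two-way PDP: the joint effect of the two features
plot_partial_dependence(rf, X, features=[("smoking", "family_history")])

The resulting heatmap shows how the predicted probability changes across joint values of smoking and family history, with all other features averaged out, which is exactly the "combined contribution" you are after.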
H: What is .numpy()? I searched for numpy.numpy() and tried replacing .numpy() with .np() because numpy is already imported as np here: Tensorflow tutorial But using `.np() returns an error. In the section, "Creating training examples and targets" there is: # Create training examples / targets char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int) #slices text_as_int into elements for dataset print(type(char_dataset)) for i in char_dataset.take(5): #from 0 to 4 print(i, i.numpy()) print(idx2char[i.numpy()]) That outputs: <class 'tensorflow.python.data.ops.dataset_ops.TensorSliceDataset'> tf.Tensor(18, shape=(), dtype=int64) 18 F tf.Tensor(47, shape=(), dtype=int64) 47 i tf.Tensor(56, shape=(), dtype=int64) 56 r tf.Tensor(57, shape=(), dtype=int64) 57 s tf.Tensor(58, shape=(), dtype=int64) 58 t So i is a tensor and .numpy() seems to convert that into the int representing the character in the text. However, I was looking for a more formal explanation. AI: That is part of TensorFlow's eager execution: Tensors can be explicitly converted to NumPy ndarrays by invoking the .numpy() method on them. There is no such thing as numpy.numpy(). There is no numpy function inside of the NumPy package. The numpy function is only in the TensorFlow package. The NumPy package is frequently imported with alias: import numpy as np After importing the NumPy package, you have access to NumPy's modules and functions: np.random.random_sample()
H: Why can't my neural network learn how to predict the squares of natural numbers? I want my neural network to learn to predict the square $n+1$ number having $n$ number. I am considering a regression problem. That's what I'm doing: from keras.preprocessing import sequence from keras.models import Sequential from keras.layers import Dense, Embedding, Dropout import numpy as np x = np.array([[int(i)] for i in range(1001)]) y = np.array([x*x for x in range(1001)]) model = Sequential() model.add(Dense(100, activation = 'relu', input_dim = 1)) model.add(Dense(50, activation = 'relu')) model.add(Dense(10, activation = 'relu')) model.add(Dense(1)) model.compile(loss='mse',optimizer='adam', metrics=['mae']) model.fit(x,y,epochs= 2500) pred = model.predict([1001]) print(pred) However, as a result, I get [[ 1000166.8125]] instead 1002001. Update: x = np.array([[int(i)] for i in range(80001)]) y = np.array([x*x for x in range(80001)]) print(x) print(y) model = Sequential() model.add(Dense(20, activation = 'relu', input_dim = 1)) model.add(Dense(20, activation = 'relu')) model.add(Dense(1)) adam = optimizers.Adam(lr=0.0002,beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False,) model.compile(loss='mse',optimizer=adam, metrics=['mae']) model.fit(x,y,epochs= 3000) pred = model.predict([80001]) print(pred) model.save_weights("test.h5") model_json = model.to_json() json_file = open("test.json", "w") json_file.write(model_json) json_file.close() result: [[ 4.81360333e+09]] AI: Decrease the number of hidden layers; you can omit the dense layer with $50$ neurons. Furthermore, train your network more. You should also provide more data. It is not much at the moment. Your current architecture is very deep for such a relatively easy task. Consequently, it needs more train time. You can just decrease the size of the current model by diminishing the number of hidden layers and neurons. For instance, use the following setting to see how you can train very fast and have a good accuracy. model = Sequential() model.add(Dense(20, activation = 'relu', input_dim = 1)) model.add(Dense(20, activation = 'relu')) model.add(Dense(1))
H: How is the linear regression cost function evolved? A couple of weeks ago I joined the Stanford University machine learning course on Coursera. In that course, they directly gave the cost function formula without explaining how this formula was derived. Can anyone help me by explaining how that cost function is derived? AI: What I remember is that they give you more insight in later lectures, but the main reason is maximum likelihood: you choose the parameters so as to maximize the probability of observing the data set you actually have. It is a good criterion for setting the parameters, but its weakness is that it may overfit your training data. You may want to take a look at Maximum Likelihood Estimation. The cost function which is discussed there is the sum of squared errors (least squares), and it can be derived from maximum likelihood under a Gaussian noise assumption. This means that you want to maximize the chance of generating your training set. In other words, you want to maximize $P(D|\theta)$, where $D$ can be considered your training set. Due to the fact that your data should be iid, you can write that probability as $\prod_i p(x_i|\theta)$. After some simplifications (taking the logarithm and dropping constants) you arrive at that cost function. You can take a look at MSE as Maximum Likelihood for the exact justification.
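For reference, a compact sketch of that derivation, assuming the targets are the model's prediction plus Gaussian noise (written in the course's $h_\theta$, $m$-examples notation):

$$y^{(i)} = h_\theta(x^{(i)}) + \epsilon^{(i)}, \qquad \epsilon^{(i)} \sim \mathcal{N}(0,\sigma^2)$$
$$p\big(y^{(i)}\mid x^{(i)};\theta\big) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\big(y^{(i)}-h_\theta(x^{(i)})\big)^2}{2\sigma^2}\right)$$
$$\log \prod_{i=1}^{m} p\big(y^{(i)}\mid x^{(i)};\theta\big) = \text{const} - \frac{1}{2\sigma^2}\sum_{i=1}^{m}\big(y^{(i)}-h_\theta(x^{(i)})\big)^2$$

Maximizing the log-likelihood over $\theta$ is therefore the same as minimizing $\frac{1}{2}\sum_{i}\big(h_\theta(x^{(i)})-y^{(i)}\big)^2$, which is exactly the least-squares cost function from the course, up to the constant $1/m$ scaling.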
H: Notation unclear - Radial Basis Function In Chapter 6 (Deep Forward Networks) on Page 193 of Deep Learning they talk about the design of Hidden Units. The Radial Basis Function is introduced as follows: $$ h_i = \exp\Big(-\frac{1}{\sigma_i^2}\|W_{:,i} - x\|^2\Big) $$ What does the colon as index for $W$ mean? AI: According to the notation page, it means the $i$-th column. $W$ is a template matrix: the closer $x$ is to the $i$-th column of $W$, the bigger the value of $h_i$.
H: Help with understanding cross-validation My understanding of cross-validation is that we divide our data set into parts 1-k, then use part 1 as a validation set and parts 2-k as a training set, then use part 2 as a validation set and the remainder as a training set, etc., until we've used each part as a validation set. What I don't totally understand is: What is the actual goal here? I get that we take an average of the k rounds of cross-validation that we do, but what is our output? I've read that we discard the result of each round after calculating the validation error, so how do we actually produce a model from this? What is the difference between a validation and test set, if any? i.e., we normally separate our data into training and test data, but do we then further divide our training data into validation sets, and keep our test data aside during the cross-validation process, or do we run cross-validation on the entire data set? AI: What is the actual goal here? I get that we take an average of the k rounds of cross-validation that we do, but what is our output? I've read that we discard the result of each round after calculating the validation error, so how do we actually produce a model from this? The goal is to obtain an accurate estimate of the model's performance. Cross-validation produces unbiased model predictions for every example in the dataset. We use these predictions to compute metrics, like accuracy or log loss, for the training dataset. When we are actually building a model, we will use the entire dataset rather than cross-validation. What is the difference between a validation and test set, if any? i.e., we normally separate our data into training and test data, but do we then further divide our training data into validation sets, and keep our test data aside during the cross-validation process, or do we run cross-validation on the entire data set? A validation set is a set of labeled examples used during the training phase, but not actually for training the model. This can be done in many ways. For example, when building a model through an iterative process (like gradient descent for neural networks), we can use a validation set to decide when to stop iterating. The performance on the training set almost always improves after each training step, but will eventually overfit and produce poor predictions on unseen data. In this scenario, we can test the model on the validation set to detect when overfitting occurs. A test set, on the other hand, is only used to evaluate the model after training is complete. Optimising our training for performance on the validation set means we are slightly biased towards it, and we should finally test on a truly held-out set of labeled data to evaluate the performance of the model. Cross-validation is intended to "replace" the need for a test set for providing an unbiased estimate of model performance, so you usually would not bother with having a test set when doing cross-validation. However, if you have a large dataset it is generally recommended to split into train/test rather than to use cross-validation.
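As a small illustration of how this looks in scikit-learn (X and y are assumed to be your features and labels; the estimator and k=5 are arbitrary choices):

from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.linear_model import LogisticRegression

model = LogisticRegression(max_iter=1000)

# k=5 cross-validation: five scores, one per held-out fold; report their mean
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(scores.mean(), scores.std())

# out-of-fold predictions for every sample, usable for confusion matrices etc.
preds = cross_val_predict(model, X, y, cv=5)

# once we are happy with the estimated performance, fit the final model on all the data
final_model = model.fit(X, y)

The five fold models are thrown away; only the performance estimate (and the final model trained on everything) is kept, which is exactly the point made above.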
H: Smaller network width than output size? I am trying to figure out if it makes sense that the width of the network could be smaller than the input/output size? So for example, I am giving the Neural Network 2048 numbers, and I am expecting 2048 numbers back. I would also like to use LSTMs, which take a lot of time/space etc to train, and having one or a few 2048 or larger LSTM layers connected to Dense layers would take a lot of space/time to train. But maybe that is the way to go? Does anyone have any experience with this type of problem? Thank you! AI: In a high dimensional space like the one you have, you can start by employing a very simple network, maybe just a single narrow layer, and investigate whether your data can already be represented well in that reduced feature space, which cannot be visualized. If you observe that you do not have good performance, you can increase the width or the number of layers of your network step by step. You can generalise this to other networks like RNNs. The point is that your data may be representable in a much smaller space even though it lives in a 2048-dimensional space, so a narrow hidden layer between input and output may be all you need.
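As a sketch of how a narrower-than-output network could look in Keras (the sequence length, layer sizes and loss are made-up choices; the point is only that the recurrent and hidden layers can be much narrower than the 2048-dimensional input/output):

from keras.models import Sequential
from keras.layers import LSTM, Dense

timesteps, n_features = 10, 2048   # hypothetical sequence length; 2048-dim frames

model = Sequential()
model.add(LSTM(128, input_shape=(timesteps, n_features)))  # narrow recurrent layer
model.add(Dense(256, activation="relu"))                   # narrow hidden layer
model.add(Dense(n_features))                               # widen back out to 2048 outputs
model.compile(loss="mse", optimizer="adam")
model.summary()

The narrow middle acts as a bottleneck (as in an autoencoder), which keeps the number of LSTM parameters manageable; if the reconstruction error is too high, widen the layers step by step.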
H: Pandas Groupby makes kernel die in Jupyter notebook/Python I have a groupby in jupyter-notebook that takes ages to run and after 10 minutes of running it says 'kernel died...', The groupby looks like this: df1.groupby(['date', 'unit', 'company', 'city'])['col1', 'col2', 'col3', 'col4', ... 'col20'].mean() All of the 'col' columns are float values. I am running everything locally. Any ideas? UPDATE: The shape of df1 is: (1360, 24) Memory and dtypes: dtypes: category(3), datetime64[ns](2), float64(17), int64(2) memory usage: 266.9 KB The unique size of city, date, company, unit: len(df1.date.unique()) = 789 len(df1.unit.unique()) = 76 len(df1.company.unique()) = 205 len(df1.city.unique()) = 237 I have 16GB of memory on MacBook Pro. UPDATE 2: It works only if I have date and unit inside the groupby columns as the only 2 columns. If I add either a company or city, it doesn't work anymore, it keeps running indefinitely. AI: I thought it might be because of the different types being used in the columns, but I created an example below, which works fine over mixed column types. The only real different is the size - that is why I think you are probably running out of memory. Working example I use int, str and datetime objects: In [1]: import pandas as pd In [2]: import datetime In [3]: df = pd.DataFrame({'Branch': 'A A A A A A A B'.split(), 'Buyer': 'Carl Mark Carl Carl Joe Joe Joe Carl'.split(), 'Quantity': [1, 3, 5, 1, 8, 1, 9, 3], 'Date':[datetime.datetime(2013, 1, 1, 13, 0), datetime.datetime(2013, 1, 1, 13, 5), datetime.datetime(2013, 10, 1, 20, 0), datetime.datetime(2013, 10, 2, 10, 0), datetime.datetime(2013, 10, 1, 20, 0), datetime.datetime(2013, 10, 2, 10, 0), datetime.datetime(2013, 12, 2, 12, 0), datetime.datetime(2013, 12, 2, 14, 0)]}) In [4]: df Out[4]: Branch Buyer Quantity Date 0 A Carl 1 2013-01-01 13:00:00 1 A Mark 3 2013-01-01 13:05:00 2 A Carl 5 2013-10-01 20:00:00 3 A Carl 1 2013-10-02 10:00:00 4 A Joe 8 2013-10-01 20:00:00 5 A Joe 1 2013-10-02 10:00:00 6 A Joe 9 2013-12-02 12:00:00 7 B Carl 3 2013-12-02 14:00:00 In [5]: df.shape Out[5]: (8, 4) Now I just repeat the dataframe again, but add one hour to each of the datetime values, just to increase the number of groupby combinations to expect: In [14]: df.iloc[0:8, 3] += datetime.timedelta(hours=1) Now perform a groupby over all columns, and sum only on Quantity (it is my only numeric column). The reuslts are as expected: In [16]: df.groupby(["Branch", "Buyer", "Quantity", "Date"])["Quantity"].sum() Out[16]: Branch Buyer Quantity Date A Carl 1 2013-01-01 13:00:00 1 2013-01-01 14:00:00 1 2013-10-02 10:00:00 1 2013-10-02 11:00:00 1 5 2013-10-01 20:00:00 5 2013-10-01 21:00:00 5 Joe 1 2013-10-02 10:00:00 1 2013-10-02 11:00:00 1 8 2013-10-01 20:00:00 8 2013-10-01 21:00:00 8 9 2013-12-02 12:00:00 9 2013-12-02 13:00:00 9 Mark 3 2013-01-01 13:05:00 3 2013-01-01 14:05:00 3 B Carl 3 2013-12-02 14:00:00 3 2013-12-02 15:00:00 3 Name: Quantity, dtype: int64 Break your problem down It might be difficult to break down your problem, because you need to whole data for the groupby operation. You could however save each of the groups to disk, perform the mean() computation on them separately and merge the results yourself. The name of each group is actually the combination of the groupby columns selected. This can be used to build the index of the reuslting dataframe. 
It could look something like this: for name, group in df1.groupby(['date', 'unit', 'company', 'city']): print("Processing groupby combination: ", name) # name is the current groupby key (a tuple) result = group.mean() # Series with the mean of each numeric column for this group _df = pd.DataFrame(data=[result], index=[name]) _df.to_csv("path/somewhere/" + "_".join(map(str, name)) + ".csv") You will then have a folder full of the results for each group and will have to just read them back in and combine them. Other methods It is known that Pandas does not handle many operations on huge datasets very efficiently (compared to e.g. the data.table package). There is the Dask package, which essentially does Pandas things in a distributed manner, but that might be overkill (and you'll of course need more resources!)
H: What predictive model to use to impute Gender? My data looks like this: birth_date has 634,990 missing values gender has 328,849 missing values Both of these are a substantial amounts since I have 900k entries, so I can't discard empty rows. For birth_date someone recommended using Multivariate imputation by Chained equation (MICE). I don't know what predictive model I should use for gender. Of the non-missing data, there are 5x more males than females. Can someone tell me what would be best practice here? What would be the best way to fill in the missing values for gender ? I'm using the data to predict bike-ride duration and final destination (I know they're shown on the table above) AI: I agree with Simon's advice. I find that the gains that you obtain from using any external method of imputation is often inferior to an internal method, and on top of this, exposes you to even more potential of severely screwing up with respect to data leakage. That being said, besides using an algorithm that automatically handles missing data for you (which often are models based off trees/rules, though they do not all use the same method of imputation), there are external based methods that might be of interest. I find that as you get more "fancier" the results are not enough of an improvement compared to the computational pain it is to use them. Starting with the simplest; 1) Mode imputation; simply use the most common gender in your training data set. For your test dataset, use the most common gender that exists in your training data set. Since there are 5x more males than females, this would result in you almost certainly assigning male to all observations with missing gender. Obviously, this doesn't use a whole lot of information besides the observed frequency of the class, but this method is pretty common and often "good enough". 2) kNN imputation; take the k most closest neighbours (that do not have missing genders) to the observation that you wish to impute gender for. Then, simply treat each of these k neighbours as a committee of "voters" who use their own gender as their vote. Weight each vote by how close they are (based off other variables that aren't missing) to the observation with the missing gender value. Whichever gender wins in votes gives you the imputed gender. This method to me, is a clear improvement over method 1) and is also quite fast. However, this will require you to center and scale your data (because we are using distances to define "closeness") and k is now a tuning parameter which further complicates matters. 3) Random Forest imputation; initially, use method 1) to temporarily fill in your missing genders (just mode impute). Then, run a random forest algorithm on the imputed dataset, generating N trees. Compute what is referred to as the "proximity matrix", where each $(i,j), i \ne j $ entry in this matrix (diagonal entries are all 0) is equal to the number of times observations $i$ and $j$ fall in the same terminal node through the entire forest divided by the number of trees in the forest. Using these proximities as weights, calculate a weighted vote of all the observations that do not have missing genders using their genders as their "vote". Change any prior "temporary" imputed genders from the initial mode imputation to what has been calculated by the random forest if they differ. Repeat (fit another random forest again), using the imputed genders from the previous random forest, until all observations converge to a single gender or until some stopping criteria. 
This method is incredibly costly but is probably pretty accurate (I haven't used it much because it is slow). You will also have to deal with an additional tuning parameter; namely how many variables you wish to randomly select in each split. 4) MICE: I haven't really studied this method too closely, but you seem to have mentioned it. One thing I will say is that all of these methods can be used with any kinds of missing data; categorical (like gender) or continuous (like birth_date, though for method 1) you would probably use mean/median imputation instead for continuous variables, and for methods 2 and 3) you would no longer use a "vote" but a weighted average). Ultimately, MICE is just one of many methods of imputation that you can use which is why one needs to properly validate their modelling choices within cross validation if you choose to use an external method of imputation. If you have the time, try a bunch of methods and use the highest performing one. Otherwise, use a method that seems "reasonable enough" given time constraints.
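For method 1), a minimal scikit-learn sketch (assuming train and test DataFrames with a gender column; the mode is learned on the training data only and reused on the test data, which avoids leaking test information into the imputation):

from sklearn.impute import SimpleImputer

imp = SimpleImputer(strategy="most_frequent")
train["gender"] = imp.fit_transform(train[["gender"]]).ravel()  # learn the mode on train only
test["gender"] = imp.transform(test[["gender"]]).ravel()        # reuse the same mode on test

Given the 5:1 male/female ratio in the observed data, this will assign "male" to essentially every missing value, which is exactly the "good enough but crude" behaviour described above; methods 2) and 3) trade extra computation for a less blunt fill.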
H: Communication between Keras and backends like TensorFlow, Theano Keras is a high-level neural network API, provided as a Python library, which uses TensorFlow, Theano or CNTK as its backend. What are the primary roles of the backend libraries? Is it implementation? Or is it computational heavy lifting using the GPU, threading, etc.? I couldn't find any good resources online to understand how Keras interacts with the TensorFlow or Theano (backend) session. Any such resource or direction for understanding the interaction is helpful! AI: Keras is a high-level API, but the backend is readily available. You simply access it by doing: from keras import backend as K K then exposes the backend's lower-level tensor operations through a common interface - when TensorFlow is the backend, working with K is much like working with tf directly, as if you had imported it with: import tensorflow as tf So you can use K to perform lower-level operations with the backend. For more information you can read the Keras backend documentation.
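A typical small example of using the backend directly is a custom loss function written with K ops, which then runs on whichever backend Keras is configured with (this is a generic sketch, not tied to any particular model):

from keras import backend as K

def custom_mse(y_true, y_pred):
    # built from backend ops, so it compiles down to TensorFlow, Theano or CNTK graph ops
    return K.mean(K.square(y_pred - y_true), axis=-1)

# hypothetical usage with an already-defined model:
# model.compile(optimizer="adam", loss=custom_mse)

This is the division of labour in a nutshell: Keras defines the model structure and training loop, while the backend builds and executes the computational graph (including GPU kernels and threading).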
H: Problem building dictionary from series I have a pandas dataframe with a column CAS_BRM_IDA of type category (even if its values seem to be of float type ... but they are not meaningful per se) I built the following series from that dataframe : p = df.groupby( [ 'CAS_BRM_IDA' ] ).mean()[ 'TOP_FRD'] print( p ) Result displayed : CAS_BRM_IDA 10.0 0.001131 13.0 0.000000 15.0 0.002038 17.0 0.000000 20.0 0.003802 ... missing 0.019549 I then tried to build a dictionary using that series : mydic = dict( [ ( i , p[i] ) for i in p.index ] ) But I got the following error message : TypeError: cannot do label indexing on <class 'pandas.core.indexes.category.CategoricalIndex'> with these indexers [10.0] of <class 'float'> Some details about the p index : CategoricalIndex( [10.0, 13.0, 15.0, 17.0, 20.0, 21.0, 30.0, 31.0, 40.0, 43.0, 50.0, 51.0, 56.0, 'missing'], categories=[10.0, 13.0, 15.0, 17.0, 20.0, 21.0, 30.0, 31.0, ...], ordered=False, name='CAS_BRM_IDA', dtype='category') I can't figure out where the problem is. The value 10.0 seems to have been interpreted as a float, which I thought it was not. How can I build a dict from a Series with a CategoricalIndex which contains floats? AI: Try the intrinsic pandas conversion: p.to_dict()
H: What is the minimum amount of data required for sales prediction with ML I have historical data from the MySQL DB which contains 33 months of data. The features in the data are state, depot, product type, purchase date, salesperson name, volume, and price. Using this information, I would like to predict/forecast product sales weekly, monthly, and yearly. Since I only have 34 months of data, I do not think that I can apply statistical modeling techniques like Exponential Smoothing or ARIMA. So, I just wanted to check if this data is sufficient for Machine learning modelings such as Linear Regression, SVM, or Random forest? Or any kind of feature engineering. Note: I am unable to get more data, as DB has only 34 months of data. Please suggest. AI: There is no general answer to this question. Ethan's answer is correct in stating that more data is almost always desirable (up to the amount that your hardware can process quickly enough to meet your needs). Less data means that you will be limited to fewer predictors (p >= n is typically a problem), and "noisier" results (point predictions will be less accurate, prediction ranges will be wider, classification accuracy will often be lower, etc.). Machine learning can be done with a sample size of one-- the results will just probably not be worth very much. If that's all the data you have, that's all that you have. The question isn't so much can ML be done here? as will the output of a specific ML technique be good enough to use? Answering that question requires a look at your results and your needs more so than a rule of thumb about data quantity.
H: What are "belief maps" and "affinity maps"? When evaluating Nvidia-Deep Object Pose Estimation, I came across these terms, belief maps and affinity maps. I haven't been able to find a satisfying answer online on what these terms mean. AI: In computer vision, pose estimation is the detection of an object's orientation and positioning. Belief Maps A team from Nvidia have proposed "Deep Object Pose Estimation", which has two main components: Detect objects in image and generate 2D keypoint mappings Project 2D mapping to 6D From what I can see, "belief maps" are generated by the first component of their system and describe elements of the 2D keypoint mappings that are then projected to 6D by the second component. From the article: The feedforward network takes as input an RGB image of size w×h×3 and branches to produce two different outputs, namely, belief maps and vector fields. There are nine belief maps, one for each of the projected 8 vertices of the 3D bounding boxes, and one for the centroids. Similarly, there are eight vector fields indicating the direction from each of the 8 vertices to the corresponding centroid... to enable the detection of multiple instances of the same type of object. Affinity Fields I've not come across affinity maps, but affinity fields are referred to above as inspiration for their approach to pose estimation, specifically in "Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields". That article aims to solve the problem of how to attribute limbs to people in video containing more than one person. Affinity maps are "an explicit nonparametric representation of the keypoints association that encodes both position and orientation of human limbs", which you can see a demo of in their video (which is pretty cool). So bringing it back to the first article, affinity fields are analogous to the vector fields mentioned above. They're used to associate body parts with a particular person in 5, or to associate vertices of belief maps to a centroid representing an object in 4. So belief maps represent the features extracted by their deep CNN, and vector/affinity fields help associate those vertices with an item that's been detected.
H: What measures can I use to find correlation between categorical features and binary label? For analyzing numerical features, we have correlation. What measures do we have to analyse the relevance of a categorical feature to the target value? If there isn't a direct measure, how can we achieve this? The Chi-squared test is known, but I can't find any implementation of it for categorical values. One other way is to label encode into numerical values, but that assigns a certain priority to higher valued labels. AI: Checking if two categorical variables are independent can be done with the Chi-Squared test of independence, where we perform a hypothesis test for that. Let's say A & B are two categorical variables; then our hypotheses are: H0: A and B are independent HA: A and B are not independent We create a Contingency table that counts the combinations of outcomes from the two variables. If the null hypothesis (H0) is correct, then the values of the contingency table should match the counts expected from the row and column totals, and we then check how far away from those expected counts the actual values are. For Example Suppose we have two variables in the dataset Obesity: Obese, Not obese Marital Status: Married, Cohabiting, Dating We observe the following data: Oij for a cell (i,j) is the observed count in the given data | dating | married | cohabiting | Total | -----------|------------:|:------------:|:------------:|:------------:| Obese | 81 | 147 | 103 | 331 | Not obese | 359 | 277 | 326 | 962 | Total | 440 | 424 | 429 | 1293 | Expected Counts Calculation i.e. Expected counts if H0 was true. Eij for a cell (i,j) is Eij = (row i total * column j total) / table total | dating | married | cohabiting | -----------|------------:|:------------:|:------------:| Obese | 113 | 109 | 110 | Not obese | 327 | 316 | 319 | Chi-square statistic calculation Assuming independence, we would expect that the values in the cells follow the expected counts with small deviations because of sampling variability, so we calculate the expected values under H0 and check how far the observed values are from them. We use the standardized squared difference for that and calculate the Chi-square statistic, which under H0 follows a χ2 distribution with df=(n−1)⋅(m−1), where n & m are the number of categories in the first & second variable respectively. \begin{equation} \chi^2 = \sum_i \sum_j \frac{(O_{ij} - E_{ij})^2}{E_{ij}} \end{equation} The χ2 value comes out to be 30.829 We can use R to find the p-value tbl = matrix(data=c(81, 147, 103, 359, 277, 326), nrow=2, ncol=3, byrow=T) dimnames(tbl) = list(Obesity=c('Obese', 'Not obese'), Marital_status=c('Dating', 'Married','Cohabiting')) chi_res = chisq.test(tbl) chi_res Pearson's Chi-squared test data: tbl X-squared = 30.829, df = 2, p-value = 2.021e-07 Since p-value < 0.05 we reject the null hypothesis, and we can conclude that obesity and marital status are dependent. There also exists Cramér's V, a measure of association that follows from this test. Putting the values in the formula, R code sqrt(chisq.test(tbl)$statistic / (sum(tbl) * min(dim(tbl) - 1 ))) 0.1544 So we can say there is a weak association between obesity and marital status (Cramér's V ranges from 0 to 1, so it has no sign or direction). I hope I am clear with the explanation.
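The same test is available in Python through SciPy; here is a minimal sketch reproducing the numbers above (chi2_contingency returns the statistic, p-value, degrees of freedom and the table of expected counts):

import numpy as np
from scipy.stats import chi2_contingency

tbl = np.array([[81, 147, 103],
                [359, 277, 326]])

chi2, p, dof, expected = chi2_contingency(tbl)
print(chi2, p, dof)   # ~30.83, ~2e-07, 2  (matches the R output)

# Cramer's V from the same quantities
n = tbl.sum()
cramers_v = np.sqrt(chi2 / (n * (min(tbl.shape) - 1)))
print(cramers_v)      # ~0.154

For a categorical feature against a binary label, you would build the contingency table from those two columns (e.g. with pd.crosstab) and run exactly the same test.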
H: Estimating the value of $\pi$ with a Monte Carlo dartboard: $<$ or $\leq$? I'm trying to figure out which is the proper way to estimate $\pi$ using the Monte Carlo method randomly distributing points in a square that also contains an inscribed circle. Some sources say to use the comparison of $\sqrt{x^2+y^2}\le 1$, while others use $\sqrt{x^2+y^2}<1$.                                            Here's some example code from a wikipedia article: def monte_carlo_pi(nsamples): acc = 0 for i in range(nsamples): x = random.random() y = random.random() if (x**2 + y**2) < 1.0: acc += 1 return 4.0 * acc / nsamples Instead of posting a long list of websites that use $\le 1$ or $< 1$, I've made the list and stored it on the following websites: See either: socrates.io or markdown.press or markdownshare for examples using less than and less than or equal to. AI: Short answer: Both formulations lead to the same answer. Mathematical explanation: In order to understand that let us look at two similar problems. Imagine we want to integrate a function $f(x)=x^2$ on two intervals $I_1=[0,1]$ (including both bounds) and $I_2=(0,1)$ (excluding both bounds). For $I_1$ we have $$\int_0^1 x^2~dx=1/3.$$ For the second interval, we need to introduce a positive dummy parameter $\varepsilon$ then we can calculate the integral as $$\lim_{\varepsilon\to 0}\int_{0+\varepsilon}^{1-\varepsilon}x^2~dx=1/3.$$ So the line of separation between both intervals does not contribute to the integral (area) because it has an infinitesimal width. The same argument can be applied to the circle area. But including the line of the circle will make the calculations easier because we do not need to introduce a dummy variable. Numerical explanation: Because of the numerical precision of your computer, it will be very unlikely that the generated random numbers will lead to points that are really on the line of the circle. Numerically it will be impossible to obtain any value in which at least one of the coordinates is irrational. Only if both values are rational and if $x^2+y^2=1$ they can lie on the circle. But this case is also very unlikely.
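If you want to convince yourself numerically, you can count how often a sampled point lands exactly on the boundary; with floating-point samples this essentially never happens, so < and <= give identical estimates (the sample size below is arbitrary):

import random

# count how often a random point lands exactly on the circle's boundary
hits = sum(1 for _ in range(10000000)
           if random.random() ** 2 + random.random() ** 2 == 1.0)
print(hits)   # almost certainly 0, so < and <= give the same estimate of pi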
H: Keeping part of a string in R I have a dataframe with the following column city <- c("Sydney NSW", "Newcastle NSW", "Liverpool NSW", "Broken Hill NSW") I want to maintain everything prior to NSW (space included). What Regex expression can be used in R for that? AI: Use strsplit(). If you want to remove the space between the two words (along with 'NSW'): city_clean <- unlist(c(strsplit(city, " NSW"))) Output: [1] "Sydney" "Newcastle" "Liverpool" "Broken Hill" It wasn't clear to me whether you wanted to keep the space or not. If you want to keep the trailing space after the city name city_clean <- unlist(c(strsplit(city, "NSW"))) Output: [1] "Sydney " "Newcastle " "Liverpool " "Broken Hill " If you prefer to use regex, here's another solution: city_clean <- gsub(" NSW", "", city) Output: [1] "Sydney" "Newcastle" "Liverpool" "Broken Hill"
H: Pandas throwing "Error tokenizing data. C error" while loading data sets from URL I am trying to work on the Titanic competition to get hands on experience with data science & machine learning. I tried to load up the datasets from GitHub but pandas threw the following error: ParserError: Error tokenizing data. C error: Expected 1 fields in line 32, saw 2 I tried to follow the advice of other SO users so I added skiprows=1 parameter in my pd.csv() call to skip the first row but it didn't work. import pandas as pd train_dataset = pd.read_csv("https://github.com/oo92/titanic-files/blob/master/train.csv", skiprows=1) test_dataset = pd.read_csv("https://github.com/oo92/titanic-files/blob/master/test.csv", skiprows=1) ground_truths = pd.read_csv("https://github.com/oo92/titanic-files/blob/master/gender_submission.csv", skiprows=1) train_dataset.head() AI: The path that you are accessing from is a Github repository page which is a webpage, it does not return CSV. You have to click on 'raw' option in Github and then pass the URL which in your case is: test = pd.read_csv('https://raw.githubusercontent.com/oo92/Titanic-Kaggle/master/test.csv')
H: Derivation of backpropagation for Softmax So, after a couple dozen tries I finally implemented a standalone nice and flashy softmax layer for my neural network in numpy. All works well, but I have a question regarding the maths part because there's just one tiny point I can't understand, like at all. Having any kind of activation function in the output layer, back-propagation looks like: $$ z = x\cdot w+b \\ a = f(z) \\ E = \frac{1}{N} \sum^{N}_{i=1} (a_{i} - y_{i})^{2} \\ \frac{\partial E}{\partial w} := \frac{\partial E}{\partial a_{i}} \cdot\frac{\partial a_{i}}{\partial z} \cdot \frac{\partial z}{\partial w} \\ \frac{\partial E}{\partial w} := (a_{i} - y_{i}) \cdot f^{'}(z) \cdot x $$ So by differentiating $ a_{l} $ with respect to $ z_{l} $, the result is the derivative of the activation function with $ z_{l} $ itself. Now, with Softmax in the final layer, this does not apply. If I use $ Softmax'(z_{l}) $ I get incorrect results, but I rather need $ Softmax'(a_{l}) $. Looking at a couple online materials like LINK in the Backpropagation phase section in the python code, the author also uses the activation itself in the argument for softmax's derivative and I've been differentiating there and back for the last 2 weeks, ... I don't understand why it works like that. In short, my question is: With specifically Softmax in the output layer, why is my $ \frac{\partial a_{l}}{\partial z_{l}} $ becomes $ Softmax'(a_{l}) $ instead of $ Softmax'(z_{l}) $ like with pretty much any other normal activation function that's out there? AI: We let $$a=\operatorname{Softmax}(z)$$ that is $$a_i = \frac{e^{z_i}}{\sum_{j=1}^N e^{z_j}}.$$ $a$ is indeed a function of $z$ and we want to differentiate $a$ with respect to $z$. The interesting thing is we are able to express this final outcome as an expression of $a$ in an elegant fashion. If you look at the section of "Derivative of Softmax Function" in your link, using the quotient rule: We can see that if $i=m$, \begin{align}\frac{\partial a_i}{\partial z_m} &=\left(\frac{e^{z_i}}{\sum_{j=1}^N e^{z_j}}\right)\left(\frac{\sum_{j=1}^Ne^{z_j}-e^{z_m}}{\sum_{j=1}^N e^{z_j}} \right)=a_i(1-a_m)\end{align} if $i\ne m$, \begin{align}\frac{\partial a_i}{\partial z_m} &=-\left(\frac{e^{z_m}}{\sum_{j=1}^N e^{z_j}}\right)\left(\frac{e^{z_i}}{\sum_{j=1}^N e^{z_j}} \right)=-a_m(a_i)\end{align} If you want to evaluate things in terms of $z$, you can still use the middle term, though using the formula $$\frac{da}{dz}=ae^T\circ (I-ea^T)$$ is more elegant, where $e$ is the column vector of all ones and $\circ$ denotes the element-wise (Hadamard) product; written out entrywise, this is just $\frac{da}{dz}=\operatorname{diag}(a)-aa^T$.
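A quick numerical check of that Jacobian, confirming that the expression built purely from the activations $a$ matches a finite-difference derivative with respect to $z$ (the dimension 5 is arbitrary):

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.random.randn(5)
a = softmax(z)

# Jacobian expressed purely through the activations a: diag(a) - a a^T
J_analytic = np.diag(a) - np.outer(a, a)

# central-difference check against perturbations of z
eps = 1e-6
J_numeric = np.zeros((5, 5))
for m in range(5):
    dz = np.zeros(5)
    dz[m] = eps
    J_numeric[:, m] = (softmax(z + dz) - softmax(z - dz)) / (2 * eps)

print(np.allclose(J_analytic, J_numeric, atol=1e-6))   # True

So even though the derivative is taken with respect to z, the result can be written entirely in terms of a, which is why implementations pass the activations rather than the pre-activations to the softmax derivative.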
H: How do we define the noisy-MAX canonical aggregator for Bayesian networks? In the book Probabilistic graphical models - principles and techniques, Daphne Koller and Nir Friedman introduce the noisy-OR canonical model for CPDs (in the independence of causal inference family of aggregators) and go on to say this can be extended to noisy-MAX. However the details on how to do this are omitted. The OpenMarkov project (http://www.openmarkov.org/) notes talk about a noisy-MAX model but again the details on how this is implemented are absent. The noisy-OR model takes in a number of binary variables $X_i$, for $i=1,2,...n$. It then transforms them to an intermediate set of binary random variables $Z_i$ such that $Z_i=1$ with probability $\lambda_i$ when $X_i=1$ and 0 otherwise. Adds in the binary variable $Z_0$, called the leak, which is 1 with probability $\lambda_0$ (and 0 otherwise). The resulting (binary) output is then the OR over all the $Z_j$, $j=0,1,2,...,n$. It is unclear how one should extend this to MAX, since directly replacing OR with MAX will yield exactly the same as with OR. (A series of 1s and 0s OR'd is the same result as MAX'ing it.) What is the methodology/algorithm/formula used for calculating the noisy-MAX? Answers should also whether the inputs and output can be generalised to non-binary (or even numerical/continuous) types. AI: As cited in the OpenMarkov tutorial that you linked, the noisyMAX is to extend noisyOR to non-binary variables. You can read more in section 4.1.5 here.
H: bert-as-service maximum sequence length I installed bert-as-service (bert-as-service github repo) and tried encoding some sentences in Japanese on the multi_cased_L-12_H-768_A-12 model. It seems to work as I am getting vectors of length 768 per word but np.shape() shows this for each sentence: np.shape(vec_j[0]): (25, 768) np.shape(vec_j[1]): (25, 768) np.shape(vec_j[2]): (25, 768) np.shape(vec_j[3]): (25, 768) np.shape(vec_j[4]): (25, 768) type: <class 'numpy.ndarray'> My sentences are short so there is quite a bit of padding with 0's. Still, I am unsure why this model seems to have a maximum sequence length of 25 rather than the 512 mentioned here: Bert documentation section on tokenization "Truncate to the maximum sequence length. (You can use up to 512, but you probably want to use shorter if possible for memory and speed reasons.)" AI: The default setting for max_seq_len is 25 as seen here under heading Server API: bert-as-service readme There is an open issue regarding this on the Github repo here and the creator seems to be implementing a feature: bert-as-service issues
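If you need longer inputs, the limit is a server-side setting rather than a property of the checkpoint: assuming a recent bert-as-service release, you can restart the server with a larger value, e.g. bert-serving-start -model_dir /path/to/multi_cased_L-12_H-768_A-12 -max_seq_len 64, keeping in mind that 512 is the hard upper bound for this BERT model and that longer sequences cost more memory and time.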
H: difference between predicting seen and unseen data I tried to test my model with seen and unseen data (seen data are the data I used to train the model). I found that as I increase the number of features, the seen data can be properly predicted, while when using a feature selection technique the unseen data can be properly predicted. Is there any explanation for this? Thanks in advance AI: What you want to do in building predictive models, in general, is to close the gap between the loss on seen data (training data) and unseen data (test data). Increasing the number of features will naturally make the hypothesis (the function that is meant to learn from your training data and will receive unseen inputs X to predict your target value Y) more complex, and at a certain point it can fit your training data perfectly. BUT there is a threshold (which, according to your results, you went above by increasing the number of features) where your model starts giving very good results on your training set and much poorer results on the test set. That is called overfitting, a problem caused by high variance in your model, which basically means you are allowing your function to be complex enough to fit your training data perfectly while becoming less able to predict unseen data (your test data). Go with the feature selection technique if it reduces the gap between your training loss and test loss! Notice in this example that increasing the number of features lowers the test loss at first but, at a certain point, the loss on the test set starts going up again.
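As a concrete, self-contained sketch of that gap (synthetic stand-in data; substitute your own dataset and feature selection technique):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

# synthetic stand-in data: many noisy features, only a few informative ones
X, y = make_classification(n_samples=200, n_features=100, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print('all features :', clf.score(X_train, y_train), clf.score(X_test, y_test))

sel = SelectKBest(f_classif, k=10).fit(X_train, y_train)
clf_sel = LogisticRegression(max_iter=2000).fit(sel.transform(X_train), y_train)
print('10 selected  :', clf_sel.score(sel.transform(X_train), y_train),
      clf_sel.score(sel.transform(X_test), y_test))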
H: Scatter Plot with Conditions Suppose I have a data frame name = ['A', 'B', 'C'] score = [2,4,6] I want to create a scatter plot with the following conditions: color the bubble green if the score is greater than 3 and red otherwise. I'd also like to label the bubble with its respective name. I'm only able to create a scatter plot with the bubble having the respective name. AI: This type of scatter plot is easy to produce with seaborn in Python. # example from: https://python-graph-gallery.com/?s=scatter # library & dataset import seaborn as sns import matplotlib.pyplot as plt df = sns.load_dataset('iris') # Use the 'hue' argument to provide a factor variable sns.lmplot( x="sepal_length", y="sepal_width", data=df, fit_reg=False, hue='species', legend=False) # Move the legend to an empty part of the plot plt.legend(loc='lower right') plt.show() The keyword hue designates which variable is used to color/group the data points. Also, have a look at the link provided for more examples.
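If you want to implement exactly the rule from the question (green if score > 3, red otherwise, each bubble labeled with its name), a minimal matplotlib sketch using the name/score lists as given:

import matplotlib.pyplot as plt

name = ['A', 'B', 'C']
score = [2, 4, 6]

colors = ['green' if s > 3 else 'red' for s in score]
plt.scatter(range(len(name)), score, c=colors, s=200)

# label each bubble with its name
for x, (n, s) in enumerate(zip(name, score)):
    plt.annotate(n, (x, s), ha='center', va='center')

plt.xticks(range(len(name)), name)
plt.show()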
H: statsmodels ols does not include all categorical values I am doing an ordinary least squares regression (in python with statsmodels) using a categorical variable as a predictor. There are 5 values that the categorical variable can have. However, after running the regression, the output only includes 4 of them. Here is what I am running: >>> from statsmodels.formula.api import ols >>> model = ols("normalized_score ~ C(general_subreddit)", data=df_feature) >>> results = model.fit() >>> results.summary() The output of the last command includes the following rows in the table: I can check the count of each of the categorical variables as follows: >>> from collections import Counter >>> Counter(df_feature["general_subreddit"]) Counter({nan: 20, 'community': 4159, 'ending_addiction': 3819, 'mental_health': 4650, 'other': 6920, 'relationships': 4318}) Ignoring the NaNs, why does the categorical value of "community" not appear in the model summary? AI: When a regression model is built with a categorical variable that has N levels, it only includes N-1 of them as dummy variables; the remaining level is used as the reference (baseline) by the model. What this means for your model as a whole is that the coefficient of each level (holding the remaining variables constant) is a comparison against that reference level. In your example, the "community" level is used as the reference. So, ending_addiction contributes 0.0749 units more than "community" to the dependent variable. Hit this up if it's not clear --> https://community.alteryx.com/t5/Alteryx-Designer-Discussions/In-the-Logistic-Regression-Report-Factor-Missing/m-p/10100/highlight/true#M5169
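If you want a different (or explicit) reference level, the Treatment contrast in the formula interface lets you choose it; a minimal sketch keeping the same formula:

from statsmodels.formula.api import ols

# make "other" the reference level instead of the alphabetically-first "community"
model = ols("normalized_score ~ C(general_subreddit, Treatment(reference='other'))",
            data=df_feature)
results = model.fit()
results.summary()   # now "community" appears as a row and "other" is the baseline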
H: Neural networks, optimization math intuition When I look into the following partial derivative, I see it as being the key element of any optimization algorithm out there. Correct me if I'm wrong, but this gets us the slope of the loss function, so we can go opposite to that slope, therefore minimizing the loss. $$\frac{\partial \theta}{\partial \mathcal{L}}$$ where: $\theta$ is the weights, and the $\mathcal{L}$ is the loss; Does that make sense? Is there any other calculation step that is arguably more fundamental to the optimization of neural networks other than this derivative? This topic is especially important for me right now, because I was thinking of tattooing this derivative, as a cool A.I. tattoo, and I want it to be fundamental and simple. AI: Note that $\frac{\partial L}{\partial \theta}$ is different from $\frac{\partial \theta}{\partial L}$. What you tried to describe seems to be $\frac{\partial L}{\partial \theta}$ where $\theta$ is a variable. If $\theta$ is high dimensional, sometimes we just use the $\nabla$ notation. Gradient descent is $$\theta_{n+1}=\theta_n-\gamma \nabla L(\theta_n)$$ Not everything is differentiable, and the gradient might not be well defined for some optimization problems. In the event that there are constraints, $L$ might need to take the role of the Lagrangian rather than the objective function. Gradient descent is just a means to find the parameters of a model. The gradient-based approach seems to be the norm for now, but things can change. What you proposed to tattoo is just the "gradient" or "slope". No objection, I just want to make sure you know what you would be tattooing.
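As a tiny illustration of that update rule (a hypothetical one-dimensional quadratic loss, purely to show the iteration):

import numpy as np

def loss(theta):        # L(theta) = (theta - 3)^2
    return (theta - 3.0) ** 2

def grad(theta):        # dL/dtheta = 2 * (theta - 3)
    return 2.0 * (theta - 3.0)

theta, gamma = 0.0, 0.1
for _ in range(100):
    theta = theta - gamma * grad(theta)   # theta_{n+1} = theta_n - gamma * grad L(theta_n)

print(theta)   # converges towards 3, the minimizer of the loss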
H: what is the first input to the decoder in a transformer model? The image is from url: Jay Alammar on transformers K_encdec and V_encdec are calculated in a matrix multiplication with the encoder outputs and sent to the encoder-decoder attention layer of each decoder layer in the decoder. The previous output is the input to the decoder from step 2 but what is the input to the decoder in step 1? Just the K_encdec and V_encdec or is it necessary to prompt the decoder by inputting the vectorized output (from the encoder) for the first word? AI: At each decoding time step, the decoder receives 2 inputs: the encoder output: this is computed once and is fed to all layers of the decoder at each decoding time step as key ($K_{endec}$) and value ($V_{endec}$) for the encoder-decoder attention blocks. the target tokens decoded up to the current decoding step: for the first step, the matrix contains in its first position a special token, normally </s>. After each decoding step $k$, the result of the decoder at position $k$ is written to the target tokens matrix at position $k+1$, and then the next decoding step takes place. For instance, in the fairseq implementation of the decoding, you can see how they create the target tokens matrix and fill it with padding here and then how they place an EOS token (</s>) at the first position here. As you have tagged your question with the bert tag, you should know that what I described before only applies to the sequence-to-sequence transduction task way of using the Transformer (i.e. when used for machine translation), and this is not how BERT works. BERT is trained on a masked language model loss which makes its use at inference time much different than the NMT Transformer.
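A heavily simplified, runnable sketch of that loop (greedy decoding; decoder and encoder_out here are random stand-ins, not a real Transformer, and exist only to show how the target tokens are fed back in):

import numpy as np

EOS_ID, VOCAB, MAX_LEN = 2, 10, 8
rng = np.random.default_rng(0)

def decoder(tgt_tokens, encoder_out):
    # stand-in for the real decoder: one row of logits per target position
    return rng.normal(size=(len(tgt_tokens), VOCAB))

encoder_out = rng.normal(size=(5, 16))   # stand-in for the encoder output (computed once)

tgt = [EOS_ID]                           # step 1: the target matrix holds only the special token
for step in range(MAX_LEN):
    logits = decoder(tgt, encoder_out)   # attends to encoder_out via K_encdec / V_encdec
    next_token = int(logits[-1].argmax())  # greedy prediction for the next position
    tgt.append(next_token)               # fed back in at the next decoding step
    if next_token == EOS_ID:
        break
print(tgt)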
H: Clustering vs Non Clustering problems? I'm just getting started with Andrew Ng's Machine Learning, wherein he explained the example of the cocktail party problem vs the gene clustering problem in order to explain the difference between clustering and non-clustering problems within unsupervised learning itself. However I still don't understand the difference between the two. Can someone please help clarify? AI: Both of the examples are clustering examples. Clustering is about grouping similar data points together when one is not given the labels. In the gene problem, one possible setting is that you are given DNA micro-array data. Your task is to learn how many types of people there are. This is an unsupervised learning problem; we are not given the labels. We just group people with similar types of genes together. In the cocktail party problem: there are two people in a room and there are microphones. We just record the audio, pass it to an algorithm and tell the algorithm, hey, learn the pattern if there is any. Each individual might have their own speech pattern/language/accent. The algorithm picks up such patterns on its own, detects that there are two people in the room, and can distinguish the speech of the two people. Let me give you a non-clustering example. A question of interest is to detect anomalies. What you are given could be the normal operating state of a machine, say its sensor readings. From those readings, you have to learn what is normal for the machine, and when you are given a new data point you have to figure out whether it should be considered normal.
H: What do each of the three SVM classes in R represent? Inspired by this post, I took a look at this doc SVN in R output this: Statistics by Class: Class: 1 Class: 2 Class: 3 Sensitivity 1.0000 0.9412 1.0000 Specificity 1.0000 1.0000 0.9762 Pos Pred Value 1.0000 1.0000 0.9796 Neg Pred Value 1.0000 0.9865 1.0000 Prevalence 0.2778 0.1889 0.5333 Detection Rate 0.2778 0.1778 0.5333 Detection Prevalence 0.2778 0.1778 0.5444 Balanced Accuracy 1.0000 0.9706 0.9881 What are each of the three classes? AI: Go to the kaggle site that you shared. Click on Data on the left. Click on datadict.txt. This is known as data dictionary, a file that help us understand the dataset. We can see 23: outcome - what eventually happened to the horse? - possible values: 1 = lived 2 = died 3 = was euthanized Those are the meaning of the $3$ classes. You might also like to print out hodata$outcome or do str(hodata$outcome).
H: Having trouble understanding None in the summary of my Keras model The above code is a sample of a CNN model built using Keras. The first layer is a convolutional layer which will receive images of input_shape = (64, 64, 3), thus meaning that the format of the images is in RBG. The output of the first layer is (None, 62, 62, 32), but isn't in supposed to be (62, 62, 32)? Where does the None come from? Also since my images are (64, 64, 3) doesn't it mean that the filter should have 3 dimensions to cater for the color channels of the image? AI: The output of the first layer is (None, 62, 62, 32), but isn't in supposed to be (62, 62, 32)? Where does the None come from? The None is just a placeholder saying that the network can input more than one sample at the time. If your input shape was (64, 64, 3) it would mean that your network could one hold one image at the same time. But you want to train with batches of varying length or predict several images at the same time, that's why it's (None, 64, 64, 3). As an example, if you would send in a batch of 10 images your shape would be (10, 64, 64, 3) and if you changed it to 25 you would have (25, 64, 64, 3). Also since my images are (64, 64, 3) doesn't it mean that the filter should have 3 dimensions to cater for the color channels of the image? Technically you could use a 3D filter with Conv3D on images. However, it is not something people do for RGB images. Probably because it gives worse results than using 2D. Instead you treat RGB as three separate 2D channels rather than one 3D channel. Using three dimensions is usually more relevant in video where you instead use time as your third dimension. Here is a Quora where it is discussed more at length: Why do we use 2D convolution for RGB images which are in fact 3D data cubes? Shouldn't we use 3D convolution for RGB and 4D for video?
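A minimal sketch reproducing that first layer (the exact model in the question is shown as an image, so this is only an assumption about its first layer; it just makes the None batch dimension and the 62x62x32 shape visible):

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
])
model.summary()
# First layer output shape: (None, 62, 62, 32)
#   None = batch dimension (any number of images at once)
#   62   = 64 - 3 + 1 with 'valid' padding
#   32   = number of filters; each 3x3 filter spans all 3 RGB channels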
H: Am I using GridSearch correctly or do I need to use all data for cross validation? I'm working with a dataset that has 400 observations, 34 features and quite a few outliers, some of them extreme. Given the nature of my data, these need to be in the model. I started by doing a 75-25 split on my data and leaving those 25% aside. With the train set, I used GridSearchCV with a RepeatedKFold of 10 folds and 7 repeats and this returned my best_estimator results, which when we go in .cv_results_ we see it's the mean_test_score metric. I then called this my "Cross Validation score". Then, with this model fit, I ran it on the test set as grid.score(X_test, y_test) and called this my Test score. def rf(df, score): X_train, X_test, y_train, y_test = train_test(df) params = {'n_estimators': [400, 700, 1000], 'max_features': ['sqrt', 'auto'], 'min_samples_split': [2, 3], 'min_samples_leaf': [1, 2, 3], 'max_depth': [50, 100, None], 'bootstrap': [True, False] } scorers = {'RMSE': make_scorer(rmse, greater_is_better=False), 'MAE': make_scorer(mean_absolute_error, greater_is_better=False), 'R2': make_scorer(r2_score)} cv = RepeatedKFold(n_splits=10, n_repeats=7) grid = GridSearchCV(estimator=RandomForestRegressor(random_state=random.seed(42)), param_grid=params, verbose=1, cv=cv, n_jobs =-1, scoring=scorers, refit = score) grid = grid.fit(X_train, y_train) print('Parameters used:', grid.best_params_) if score == 'RMSE': print('RMSE score on train:', round(-1*grid.best_score_,4)) print('RMSE score on test: ', round(-1*grid.score(X_test, y_test),4)) elif score == 'R2': print('R Squared score on train:', round(grid.best_score_,4)) print('R Squared score on test: ', round(grid.score(X_test, y_test),4)) elif score == 'MAE': print('MAE score on train:', round(-1*grid.best_score_,4)) print('MAE score on test: ', round(-1*grid.score(X_test, y_test),4)) When I set my metric to RMSE (the most important one), this is what it outputs: RMSE score on train: 8.489 RMSE score on test: 5.7952 Have I done this correctly? Can I consider this discrepancy acceptable? With Random Forest for example, if I deliberately ignore the gridsearch parameters and set my min_leaf_node to something like 10, my RMSE goes all the way up to 12 but it becomes very similar between the CV score and my test data. I'm experiencing similar results with SVR and MLP algorithms. This is part of my thesis and now I have my supervisor telling me I should be using all my data for cross-validation which I don't think is correct. My conclusion is that given the outliers in the model, without more observations, a discrepancy in results is to be expected, however I don't know if this conclusion is right or if I'm doing something wrong here. Running my model in a somewhat similar dataset with fewer outliers gives results closer to one another. RMSE score on train: 5.9731 RMSE score on test: 6.9164 AI: Your procedure is, from what I can tell, correct. You are correctly splitting your data into train/test, and then using your training data only to find optimal hyper-parameters. Using all of the training data and the hyper parameters found in cross validation, you are then evaluating your final model on the test set. Indeed, the outliers and the size of your dataset are the most probable causes for the large differences between validation and test. Basically, if the majority of these outlier observations fall in the test set after your initial splitting of your data it is highly likely that your test set scores will be larger than your validation scores. 
On the other hand, if these outliers are in your training set, you would expect the opposite to be true. This is because, regardless of what data your have in your training set for any random partitioning, these outlier observations are unlikely to be predicted well regardless of what you do, and because the size of your dataset is so small, these outliers will heavily impact your error estimate (especially with RMSE that is incredibly sensitive to outliers due to the squaring). Either way, report the test set scores only as your final estimates if you want to be honest. In scenarios in which the variance is large in my test set scores (usually due to a small data set and a large number of outliers), I would highly recommend repeating your entire model building procedure as you described but with a different seed each time (i.e. split your entire dataset into train/test with a different seed each time). Do this until you run out of patience or the variance in your estimate of model performance is small enough. Maybe form bootstrapped confidence intervals from these repetitions to give you information on how much your model's performance varies.
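A compact sketch of that repetition (assuming X, y, params and rmse are the objects from your code; the loop just re-runs your whole procedure with a different train/test partition each time and collects the test RMSE):

import numpy as np

test_rmses = []
for seed in range(20):                        # repeat the whole procedure 20 times
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=seed)
    grid = GridSearchCV(RandomForestRegressor(random_state=42), param_grid=params,
                        cv=RepeatedKFold(n_splits=10, n_repeats=7),
                        scoring=make_scorer(rmse, greater_is_better=False),
                        refit=True, n_jobs=-1)
    grid.fit(X_train, y_train)
    test_rmses.append(-grid.score(X_test, y_test))

print(np.mean(test_rmses), np.std(test_rmses))  # the spread tells you how unstable the estimate is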
H: How to convert Hindi/Telugu/Marathi text to vector for text classification problem? sentence = 'अच्छा होगा अगर इसमें और गहने ना हों' Which method will work for this task? Is any pretrained model available to convert this text to vectors? Please help by giving the code. AI: Hindi text is in Unicode and can be read as follows in Python 3 (in Python 3 the file contents are already a Unicode str, so no explicit decode step is needed): separators = [u"।", u",", u"."] text = open("indiantext.txt", encoding="utf-8").read() #this breaks the text on the white spaces, yielding a list of words: words = text.split() counter = 1 output = "" for word in words: #if the last char is a separator, and is joined to the word: if word[-1] in separators and len(word) > 1: #word up to the second to last char: output += word[:-1] + u"(%d) " % counter counter += 1 #last char output += word[-1] + u"(%d) " % counter else: output += word + u"(%d) " % counter counter += 1 print(output) It is not clear from the question what exactly you mean by a pretrained model for converting text into vectors, but assuming that you are asking about a language model for Hindi, ULMFiT is something you should refer to.
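If a simple, non-pretrained baseline is enough, scikit-learn's TfidfVectorizer works on Devanagari text as-is; this is a sketch with made-up example sentences, not a substitute for a pretrained model such as ULMFiT or a multilingual language model:

from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    'अच्छा होगा अगर इसमें और गहने ना हों',
    'यह उत्पाद बहुत अच्छा है',
]

vectorizer = TfidfVectorizer()            # whitespace/word-level tokens
X = vectorizer.fit_transform(sentences)   # sparse matrix: one row (vector) per sentence
print(X.shape)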
H: How to use SimpleImputer Class to replace missing values with mean values using Python? This is my code import numpy as np import matplotlib.pyplot as plt import pandas as pd #Importing Dataset dataset = pd.read_csv('C:/Users/Rupali Singh/Desktop/ML A-Z/Machine Learning A-Z Template Folder/Part 1 - Data Preprocessing/Data.csv') print(dataset) X = dataset.iloc[:, :-1].values Y = dataset.iloc[:, 3].values #Missing Data from sklearn.impute import SimpleImputer imputer = SimpleImputer(missing_values= np.nan, strategy='mean') X.fit[:, 1:3] = imputer.fit_transform(X[:, 1:3]) print(X) My data set: Country Age Salary Purchased 0 France 44.0 72000.0 No 1 Spain 27.0 48000.0 Yes 2 Germany 30.0 54000.0 No 3 Spain 38.0 61000.0 No 4 Germany 40.0 NaN Yes 5 France 35.0 58000.0 Yes 6 Spain NaN 52000.0 No 7 France 48.0 79000.0 Yes 8 Germany 50.0 83000.0 No 9 France 37.0 67000.0 Yes Error Message: File "C:/Users/Rupali Singh/PycharmProjects/Machine_Learning/data_preprocessing_Template.py", line 15, in <module> X.fit[:, 1:3] = imputer.fit_transform(X[:, 1:3]) AttributeError: 'numpy.ndarray' object has no attribute 'fit' AI: Your error comes from writing X.fit on your numpy array: fit() and fit_transform() are methods of the imputer, not of the data. Here's how I used it on a DataFrame: imr = SimpleImputer(missing_values=np.nan, strategy='median') imr = imr.fit(data[['age']]) data['age'] = imr.transform(data[['age']]).ravel() X.fit[:, 1:3] = imputer.fit_transform(...) is wrong: a numpy array has no fit attribute, so you can't index or assign to X.fit, hence your error! Use X[:, 1:3] = imputer.fit_transform(X[:, 1:3]) instead. Hope this helps!
H: Does it make sense to randomly select features as a baseline? In my paper, I am saying that the accuracy of classification is $x\%$ when using the top N features. My supervisor thinks that we should capture the classification accuracy when using N randomly selected features to show that the initial feature selection technique makes an actual difference. Does this make sense? I've argued that no one cares about randomly selected features, so this addition doesn't make sense. It's quite obvious that randomly selecting features will provide a worse classification accuracy, so there's no need to show that using any sort of feature ranking metric will be superior. AI: Your supervisor is right. Maybe not in the specific way of showing your solution's dominance on the problem, but at least in the main idea: he is right because you need a benchmark to prove your feature selection is better than doing nothing. Look for alternatives to show why your selection is better: A ranking over 5 possible selections: yours and another four. The alternative your supervisor suggested: random vs yours. The alternative your supervisor suggested, but improved: randomly select features M times, take the best of those runs, and show that your solution still beats it. Your solution vs all the features (the problem with this is that accuracy often improves with more variables, so you would have to use a measure which penalises using many features, such as AIC).
H: How to optimize the lambdas of a hybrid loss in a deep learning model I am using a generative adversarial deep learning model (GAN) with a hybrid loss represented by a linear combination of four losses with three $\lambda$'s, something like: $total\_loss = loss_1 + \lambda_1\times loss_2 + \lambda_2\times loss_3 + \lambda_3\times loss_4$. Is there a way to optimize these $\lambda$'s towards the best performance? provided that the computational complexity is not trivial. If my objective is for $loss_4$ to be minimum, should I use a high value for $\lambda_3$ compared to the other ones? But, how would that affect the overall performance, as all the models used in the GAN might play an important role in the final result. NB. $loss_1$ is based on MSE measure, while the other losses are based on $l_1$ norm. AI: When there is an optimization problem involving more than one objective function to be optimized simultaneously, it is called multi-objective optimization. Gradient descent methods can still be used to minimize the overall objective function. A paper entitled "Gradient-Based Multiobjective Optimization with Uncertainties" goes into greater detail. In PyTorch, autograd.backward can handle multiple objectives. You need to provide a list of tensors for backpropagation.
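For the simple fixed-lambda case, combining the terms and backpropagating once is all that is needed; a PyTorch-style sketch where the four losses and the lambda values are stand-ins for whatever your GAN actually computes:

import torch

pred = torch.randn(8, 1, requires_grad=True)        # stand-in for a generator output
target = torch.randn(8, 1)

loss_1 = torch.nn.functional.mse_loss(pred, target)  # MSE-based term
loss_2 = (pred - target).abs().mean()                # l1-style terms
loss_3 = pred.abs().mean()
loss_4 = (pred - 1).abs().mean()

lambda_1, lambda_2, lambda_3 = 10.0, 1.0, 100.0      # e.g. weight loss_4 heavily
total_loss = loss_1 + lambda_1 * loss_2 + lambda_2 * loss_3 + lambda_3 * loss_4
total_loss.backward()                                # one backward pass covers all four terms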
H: Can one use non converged results from Logistic Regression? I'm running Logistic Regression on a dataset for a classification problem. I used the model on the dataset when it was normalized and I had no problem with it converging. Now, I wanted to see the results without feature normalization and I am getting the warning: ConvergenceWarning: lbfgs failed to converge. Increase the number of iterations. "of iterations.", ConvergenceWarning) I've tried increasing the number of iterations and I have upped it to 8000 so far and am still getting the error. I wanted to ask, is this error is critical? Because, I'm still getting my cross validated results even after the error. What exactly is convergence? AI: An iterative algorithm is said to converge when, as the iterations proceed, the output gets closer and closer to a specific value. More precisely, no matter how small an error value you choose, if you continue long enough the function will eventually stay closer than that error value from some final value. In some circumstances, an algorithm will not converge; it could even diverge, where its output will undergo larger and larger oscillations, never approaching a useful result. More precisely, no matter how long you continue, the function value will never settle down within a range of any "final" value. You could use the results which did not converge, but it's not recommended.
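In practice, the usual ways to make lbfgs converge on unscaled features are to scale them inside a pipeline or to raise max_iter; a minimal sketch (X and y are your own data):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# scaling inside the pipeline usually makes lbfgs converge in far fewer iterations;
# max_iter is raised as a fallback
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
clf.fit(X, y)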
H: sklearn FeatureUnion vs ColumnTransformer I am trying to build a sklearn pipeline which does different transformations on numerical data, and different transformation on categorical data. In the process, I compare the results from ColumnTransformer vs FeatureUnion, and they are not the same. Please advise if the following are equivalent, if not what the problem is. The data is from kaggle https://www.kaggle.com/ronitf/heart-disease-uci from sklearn.model_selection import train_test_split cat_attribs = ['sex','cp','fbs','restecg','exang','ca','thal'] num_attribs = ['trestbps','chol','thalach','oldpeak','slope'] X_train,X_test,y_train,y_test = train_test_split(heart_df,y,test_size=0.25,random_state=100) Approach #1, using column transformer from sklearn.compose import ColumnTransformer ct = ColumnTransformer([('oneHot', OneHotEncoder(categories='auto'),cat_attribs) , ('minMax',MinMaxScaler(),num_attribs)]) ct_result = ct.fit_transform(X_train) Approach #2, using FeatureUnion from sklearn.base import BaseEstimator, TransformerMixin from sklearn.pipeline import FeatureUnion from sklearn.pipeline import Pipeline class DataFrameSelector(BaseEstimator, TransformerMixin): def __init__(self, attribute_names): self.attribute_names = attribute_names def fit(self, X, y=None): return self def transform(self, X): return X[self.attribute_names].values num_pipeline = Pipeline([('selector', DataFrameSelector(num_attribs)), ('minMax',MinMaxScaler())]) cat_pipeline = Pipeline([('selector', DataFrameSelector(cat_attribs)), ('oneHot',OneHotEncoder(categories='auto'))]) full_pipeline = FeatureUnion(transformer_list=[ ('num_pipeline', num_pipeline), ('cat_pipeline', cat_pipeline)]) fp_result= full_pipeline.fit_transform(X_train) AI: I've tried this with an sklearn builtin dataset rather than yours, but the only difference appears to be the order of the columns. Switching the order of the elements in the transformer lists produces the same results. (In both cases, the numeric columns and categorical one-hot encoded columns are separated from each other, but are placed in the order that they appear in the transformer list.)
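You can check this yourself by listing the transformers in the same order and comparing the outputs (this reuses the objects defined in the question; the toarray() calls are only there in case one of the results is sparse):

import numpy as np

# put the numeric transformer first, matching the FeatureUnion order
ct_same_order = ColumnTransformer([('minMax', MinMaxScaler(), num_attribs),
                                   ('oneHot', OneHotEncoder(categories='auto'), cat_attribs)])
a = ct_same_order.fit_transform(X_train)
b = full_pipeline.fit_transform(X_train)
a = a.toarray() if hasattr(a, 'toarray') else a
b = b.toarray() if hasattr(b, 'toarray') else b
print(np.allclose(a, b))   # expected True: same values, same column order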
H: Calculate a ranking function from classification features I am using 3 features (x1, x2, x3) for binary classification. All my feature values are in the 0 to 1 range (unit range). I obtained how important each feature was in classification as follows (i.e. feature importance) x1 --> 0.1 x2 --> 0.5 x3 --> 0.7 It is clear that feature 3 (x3) contributes the most, x2 the second and x1 the least in classification. I also performed correlation analysis to check if my features are positively or negatively correlated with the target (y) as follows. x1 --> positively correlated x2 --> positively correlated x3 --> negatively correlated I am wondering if it is possible to convert my classification features into a ranking function using feature importance and correlation. For instance, my suggestion looks as follows. ranking_score = 0.1*x1 + 0.5*x2 + 0.7*(1/x3) The reason for using (1/x3) in the above equation is that it is negatively correlated with the target (y). Please let me know if my ranking_score equation is statistically correct. If not, please let me know your suggestions. EDIT: Why is ranking important to me? My features are related to employee details (x1, x2, x3). At first I used these 3 features to classify efficient and 'inefficient' employees. Now, I want to rank the efficient employees based on these 3 features. The above equation I proposed is to facilitate this task. I am happy to provide more details if needed. AI: Generally speaking, moving from a classification space to a ranking space is not straightforward. In classification problems there is no meaning to the order between labels. This means that your suggested equation might not represent an order between labels at all. This is somewhat dependent on the feature space and the classification algorithm. Some classification algorithms (tree-based, for example) don't use the concept of distance in their search for the best fit. The separation created from the fit (which is used for classification) does not include how far the boundaries are from one another. Other classification algorithms (logistic regression, SVM, others) do have this notion of distance, and it can be translated into the probability of an instance belonging to each class. Using this probability as a ranking mechanism (instead of asking how efficient an employee is, asking what the probability is that they are efficient) might make sense.
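A minimal sketch of that probability-based ranking (the data here are synthetic stand-ins for your three employee features; any fitted classifier exposing predict_proba works):

import numpy as np
from sklearn.linear_model import LogisticRegression

# hypothetical data: 3 features per employee, binary label 1 = efficient
rng = np.random.default_rng(0)
X = rng.random((100, 3))
y = (X @ [0.1, 0.5, -0.7] + 0.35 > 0).astype(int)

clf = LogisticRegression().fit(X, y)

proba_efficient = clf.predict_proba(X)[:, 1]   # P(efficient) for each employee
ranking = np.argsort(-proba_efficient)         # employee indices, best first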
H: Train Naive Bayes Classifier For (a) I have calculated $P(G)=\frac{5}{8}$, $P(O|G)=\frac{2}{5}$, $P(B|G)=\frac{1}{5}$, $P(C|G)=\frac{4}{5}$, and $P(A|G)=\frac{4}{5}$. Now how do I calculate the maximum likelihood estimate of these values? And how do I go about part (b)? I get that $O,B,C,A$ are independent so I can multiply them to get the joint probability. But for values like $O_i$ for sample $i=9$, that is just $0$, since sample 9 doesn't have outdoor seating. And how am I supposed to calculate $P(G_i)$ if I don't know what $G_9$ is? AI: MLE is a somewhat abstractly defined concept, but in essence it is your best guess at a parameter. In this case we assume that the observed frequency is your best guess, so the frequencies you have already calculated are the maximum likelihood estimates. You want to calculate the probability of your observation (HasOutdoorSeating=0 in this case) given IsGoodRestaurant=1; that probability is not 0 (check the first sample, for instance).
H: Is numpy.corrcoef() enough to find correlation? I am currently working through Kaggle's titanic competition and I'm trying to figure out the correlation between the Survived column and other columns. I am using numpy.corrcoef() to matrix the correlation between the columns and here is what I have: The correlation between pClass & Survived is: [[ 1. -0.33848104] [-0.33848104 1. ]] The correlation between Sex & Survived is: [[ 1. -0.54335138] [-0.54335138 1. ]] The correlation between Age & Survived is:[[ 1. -0.07065723] [-0.07065723 1. ]] The correlation between Fare & Survived is: [[1. 0.25730652] [0.25730652 1. ]] The correlation between Parent-Children & Survived is: [[1. 0.08162941] [0.08162941 1. ]] The correlation between Sibling-Spouse & Survived is: [[ 1. -0.0353225] [-0.0353225 1. ]] The correlation between Embarked & Survived is: [[ 1. -0.16767531] [-0.16767531 1. ]] There should be higher correlation between Survived and [pClass, sex, Sibling-Spouse] and yet the values are really low. I'm new to this so I understand that a simple method is not the best way to find correlations but at the moment, this doesn't add up. This is my full code (without the printf() calls): import pandas as pd import numpy as np train = pd.read_csv("https://raw.githubusercontent.com/oo92/Titanic-Kaggle/master/train.csv") test = pd.read_csv("https://raw.githubusercontent.com/oo92/Titanic-Kaggle/master/test.csv") survived = train['Survived'] pClass = train['Pclass'] sex = train['Sex'].replace(['female', 'male'], [0, 1]) age = train['Age'].fillna(round(float(np.mean(train['Age'].dropna())))) fare = train['Fare'] parch = train['Parch'] sibSp = train['SibSp'] embarked = train['Embarked'].replace(['C', 'Q', 'S'], [1, 2, 3]) AI: On a side note, I don't think correlation is the correct measure of relation for you to be using, since Survived is technically a binary categorical variable. "Correlation" measures used should depend on the type of variables being investigated: continuous variable v continuous variable: use "traditional" correlation - e.g. Spearman's rank correlation or Pearson's linear correlation. continuous variable v categorical variable: use an ANOVA F-test / difference of means categorical variable v categorical variable: use Chi-square / Cramer's V
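A short sketch of those alternatives with scipy, reusing the variables already defined above (point-biserial for binary vs continuous, chi-square for binary vs categorical):

from scipy import stats
import pandas as pd

# binary Survived vs continuous Fare: point-biserial correlation
print(stats.pointbiserialr(survived, fare))

# binary Survived vs categorical Pclass: chi-square test of independence
contingency = pd.crosstab(survived, pClass)
chi2, p, dof, expected = stats.chi2_contingency(contingency)
print(chi2, p)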
H: Logistic Regression - ROC curve plots Sensitivity vs Specificity instead of (1-Specificity) I am new to Machine Learning and have been doing some practice on Logistic Regression. To evaluate the models, I've been trying to create some ROC plots. The package that i used is pROC. The model name is - 'model' dataset is 'data'. The code I used in R is: library(pROC) predictionData <- predict(model, newdata = data, type = "response") rocModel <- roc(data$y ~ as.numeric(predictionData>0.5)) plot(rocModel) Ideally, from what i have learned, ROC should plot Sensitivity or TPR (True Positive Rate) vs 1 - Specificity. But as shown in the picture below, it shows Sensitivity vs Specificity. Am I missing some obvious trick here or is something wrong with what I have done ? AI: As you can see, the specificity in the x-axis goes from 1.0 to 0.0 (backwards). So the effect is the same, you are seeing 1-Specificity
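Two small pROC notes, assuming a reasonably recent version of the package: plot(rocModel, legacy.axes = TRUE) draws the conventional 1 - Specificity axis instead, and passing the raw predicted probabilities (e.g. roc(data$y, predictionData)) rather than the thresholded as.numeric(predictionData > 0.5) gives you the full curve across all thresholds instead of a single operating point.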
H: Is it possible that a CNN has better accuracy than RNN in word classification? So I found something strange once I compared the accuracy of the prediction of a class for a question between a CNN and an RNN (GRU). The CNN achieved 0.87 accuracy over the RNN (GRU) with 0.7520 accuracy. In terms of question classification task, would it makes sense if CNN outperforms RNN or is it a rare case? AI: CNN considers only the current input while RNN considers the current input and also the previously received inputs. It can memorize previous inputs due to its internal memory Here's a comparative study on RNN vs CNN on different problems. As you can see RNN-GRU outperforms CNN on most of the cases
H: Weighted Linear Combination of Classifiers I am trying to build an ensemble of classifiers whereby I want my algorithm to learn a set of weights such that it can weight the outputs of different classifiers for a set of data points. I am wondering, how would I go about learning these weights? I tried using automatic differentiation but the weights are not moving at all (no gradient information). Does anyone know how I can fix this? AI: I don't know how to fix your automatic differentiation, but I can show you what I did (and I have seen others do too) when I wanted to achieve the same thing. You can fit a linear meta-classifier on the outputs of your classifiers that you want to ensemble. Here is the implementation from my scikit toolbox: ''' ------------------------------------------------------- Stack Classifier - extrakit-learn Author: Simon Larsson <[email protected]> License: MIT ------------------------------------------------------- ''' from sklearn.base import BaseEstimator, ClassifierMixin from sklearn.utils.validation import check_X_y, check_array, check_is_fitted import numpy as np class StackClassifier(BaseEstimator, ClassifierMixin): ''' Stack Classifier Ensemble classifier that uses one meta classifier and several sub-classifiers. The sub-classifiers give their output to the meta classifier which will use them as input features. Parameters ---------- clfs : Classifiers whose output will assist the meta_clf, list classifier meta_clf : Ensemble classifier that makes the final output, classifier drop_first : Drop first class probability to avoid multi-collinearity, bool keep_features : If original input features should be used by meta_clf, bool refit : If sub-classifiers should be refit, bool ''' def __init__(self, clfs, meta_clf, drop_first=True, keep_features=False, refit=True): self.clfs = clfs self.meta_clf = meta_clf self.drop_first = drop_first self.keep_features = keep_features self.refit = refit def fit(self, X, y): ''' Fitting of the classifier Parameters ---------- X : array-like, shape (n_samples, n_features) The training input samples. y : array-like, shape (n_samples,) The target values. An array of int. Returns ------- self : object Returns self. ''' X, y = check_X_y(X, y, accept_sparse=True) # Refit of classifier ensemble if self.refit: for clf in self.clfs: clf.fit(X, y) # Build new tier-2 features X_meta = build_meta_X(self.clfs, X, drop_first=self.drop_first, keep_features=self.keep_features) # Fit meta classifier, Stack the ensemble self.meta_clf.fit(X_meta, y) # set attributes self.n_features_ = X.shape[1] self.n_meta_features_ = X_meta.shape[1] self.n_clfs_ = len(self.clfs) return self def predict_proba(self, X): ''' Probability prediction Parameters ---------- X : {array-like, sparse matrix}, shape (n_samples, n_features) The prediction input samples. Returns ------- y : ndarray, shape (n_samples,) Returns an array of probabilities, floats. ''' X = check_array(X, accept_sparse=True) check_is_fitted(self, 'n_features_') # Build new tier-2 features X_meta = build_meta_X(self.clfs, X, drop_first=self.drop_first, keep_features=self.keep_features) return self.meta_clf.predict_proba(X_meta) def predict(self, X): ''' Classification Parameters ---------- X : {array-like, sparse matrix}, shape (n_samples, n_features) The prediction input samples. Returns ------- y : ndarray, shape (n_samples,) Returns an array of classifications, bools.
''' X = check_array(X, accept_sparse=True) check_is_fitted(self, 'n_features_') # Build new tier-2 features X_meta = build_meta_X(self.clfs, X, drop_first=self.drop_first, keep_features=self.keep_features) return self.meta_clf.predict(X_meta) def build_meta_X(clfs, X=None, drop_first=True, keep_features=False): ''' Build features that include the outputs of the sub-classifiers Parameters ---------- clfs : Classifiers whose output will assist the meta_clf, list classifier X : {array-like, sparse matrix}, shape (n_samples, n_features) The prediction input samples. drop_first : Drop first proba to avoid multi-collinearity, bool keep_features : If original input features should be used by meta_clf, bool Returns ------- X_meta : {array-like, sparse matrix}, shape (n_samples, n_features + n_clfs*classes) The prediction input samples for the meta clf. ''' if keep_features: X_meta = X else: X_meta = None for clf in clfs: if X_meta is None: if drop_first: X_meta = clf.predict_proba(X)[:, 1:] else: X_meta = clf.predict_proba(X) else: if drop_first: y_ = clf.predict_proba(X)[:, 1:] else: y_ = clf.predict_proba(X) X_meta = np.hstack([X_meta, y_]) return X_meta This would allow you to use any meta-classifier, but with linear models like ridge/lasso/logistic regression it will act as learned linear weights of your ensemble classifiers. Like this: from sklearn.datasets import make_classification from sklearn.tree import DecisionTreeClassifier from sklearn.linear_model import LogisticRegression from xklearn.models import StackClassifier X, y = make_classification(n_classes=2, n_features=4, n_samples=1000) meta_clf = LogisticRegression(solver='lbfgs') ensemble = [DecisionTreeClassifier(max_depth=1), DecisionTreeClassifier(max_depth=5), DecisionTreeClassifier(max_depth=10)] stack_clf = StackClassifier(clfs=ensemble, meta_clf=meta_clf) stack_clf.fit(X, y) print('Weights:', stack_clf.meta_clf.coef_[0],' Bias: ', stack_clf.meta_clf.intercept_) output: Weights: [0.50017775 2.2626092 6.30510687] Bias: [-4.82988374]
H: Hierarchical clustering with precomputed cosine similarity matrix using scikit learn produces error We want to use cosine similarity with hierarchical clustering and we have cosine similarities already calculated. In the sklearn.cluster.AgglomerativeClustering documentation it says: A distance matrix (instead of a similarity matrix) is needed as input for the fit method. So, we converted cosine similarities to distances as distance = 1 - similarity Our python code produces error at the fit() method at the end. (I am not writing the real value of X in the code, since it is very big.) X is just a cosine similarity matrix with values converted to distance as written above. Notice the diagonal, it is all 0.) Here is the code: import pandas as pd import numpy as np from sklearn.cluster import AgglomerativeClustering X = np.array([[0,0.3,0.4],[0.3,0,0.7],[0.4,0.7,0]]) cluster = AgglomerativeClustering(affinity='precomputed') cluster.fit(X) The error is: runfile('/Users/stackoverflowuser/Desktop/4.2/Pr/untitled0.py', wdir='/Users/stackoverflowuser/Desktop/4.2/Pr') Traceback (most recent call last): File "<ipython-input-1-b8b98765b168>", line 1, in <module> runfile('/Users/stackoverflowuser/Desktop/4.2/Pr/untitled0.py', wdir='/Users/stackoverflowuser/Desktop/4.2/Pr') File "/anaconda2/lib/python2.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 704, in runfile execfile(filename, namespace) File "/anaconda2/lib/python2.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 100, in execfile builtins.execfile(filename, *where) File "/Users/stackoverflowuser/Desktop/4.2/Pr/untitled0.py", line 84, in <module> cluster.fit(X) File "/anaconda2/lib/python2.7/site-packages/sklearn/cluster/hierarchical.py", line 795, in fit (self.affinity, )) ValueError: precomputed was provided as affinity. Ward can only work with euclidean distances. Is there anything that I can provide? Thanks already. AI: According to sklearn's documentation: If linkage is “ward”, only “euclidean” is accepted. If “precomputed”, a distance matrix (instead of a similarity matrix) is needed as input for the fit method. So you need to change the linkage to one of complete, average or single. If you try this it works: import numpy as np from sklearn.cluster import AgglomerativeClustering X = np.array([[0,0.3,0.4],[0.3,0,0.7],[0.4,0.7,0]]) #cluster = AgglomerativeClustering(affinity='precomputed', linkage='complete') #cluster = AgglomerativeClustering(affinity='precomputed', linkage='average') cluster = AgglomerativeClustering(affinity='precomputed', linkage='single') cluster.fit(X)
H: Why Decision Tree Classifier is not working with categorical value? I am learning my way through this, so please be easy on me if you find any mistakes, I could really use a professional opinion here. Thx. I am trying to model a Decision Tree Classifier as part of an ensemble (soft voting system). The problem is that I have the categorical features already converted to integer values (i.e. if blood pressure is within the medium range, then blood pressure = 2 in the dataset). The categories for the feature variables are as follows: feature#1 (total cholesterol) = 1, 2, 3 feature#2 (Systolic Blood Pressure) = 1, 2, 3 feature#3 (diastolic Blood Pressure) = 1, 2, 3 feature#4 (smoking rate) = 1, 2, 3 The target/class variable (stroke) = 1, 2 (yes and no, respectively). The problem is that when I draw the DT (please check the picture), I expect to get a splitting decision where a certain feature equals one of the above-mentioned values (i.e. 1, 2, 3). However, it is giving me other values like 1.5, 0.5, etc. I have included the code and a sample dataset for anyone who can help me out. btw, I need to keep the current dataset in the number format as I am using it in conjunction with other classifiers for the soft voting classifier. Here is the dataset and this is the code for reference, I am using Spyder with Python 3.7 AI: It's all OK. Precisely because your values are integers such as 1 and 2, the tree chooses 1.5 as the cut-off. In other words, the tree still separates the values 1 and 2 (your integer categories); 1.5 is simply the threshold placed between those categories, since scikit-learn's trees treat every feature as numeric and split on real-valued thresholds.
H: Why multiply by 2 when calculating partial derivatives during backpropagation? I'm wondering why we multiple by 2 when calculating partial derivatives. I'm referencing the 2's that I've circled below, from here. We also see this in the python implementation, m_deriv += -2*X[i] * (Y[i] - (m*X[i] + b)), there is a 2. How can I understand the intuition here? My math is not strong so apologies in advance if this is a silly question. AI: You're using the Mean Square Error $\Sigma\frac{1}{N}(y-(Wx+b))^2$as the loss function, if you take the derivative, you will have the $2$. In some materials, we will use $\frac{1}{2}\Sigma\frac{1}{N}(y-(Wx+b))^2$ as the loss function to cancel out the $2$. In fact, this doesn't matter at all and it has no impacts on params optimization.
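Writing the chain rule out for a single term makes the 2 explicit: $$\frac{\partial}{\partial m}\big(y_i-(mx_i+b)\big)^2 = 2\big(y_i-(mx_i+b)\big)\cdot(-x_i) = -2x_i\big(y_i-(mx_i+b)\big)$$ Averaging over the $N$ samples gives exactly the m_deriv line in the Python code, and using $\frac{1}{2N}\Sigma(y-(mx+b))^2$ as the loss cancels the 2 without changing where the minimum is.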
H: What is the effect of KL divergence between two Gaussian distributions as a loss function in neural networks? In many deep neural networks, especially those based on VAE architecture, a KL divergence term is added to the loss function. The divergence is computed between the estimated Gaussian distribution and prior. Since Gaussian distribution is completely specified by mean and co-variance, only those two parameters are estimated by the neural network. For Gaussian distributions, KL divergence has a closed form solution. By minimizing KL divergence, we bring the estimated distribution closer to the prior. My question is, since Gaussian distribution is completely specified by mean and co-variance, why don't we just take MSE between estimated parameters and prior parameters? Minimizing MSE between mean and co-variance also brings the two distributions closer. Does taking KL divergence have any significance? AI: It's too strong of an assumption (I am answering generally, I am sure you know. Coming to VAE later in post), that they are Gaussian. You can not claim that distribution is X if Moments are certain values. I can bring them all to the same values using this. Hence if you can not make this assumption it is cheaper to estimate KL metric BUT with VAE you do have information about distributions, encoders distribution is $q(z|x)=\mathcal{N}(z|\mu(x),\Sigma(x))$ where $\Sigma=\text{diag}(\sigma_1,\ldots,\sigma_n)$, while the latent prior is given by $p(z)=\mathcal{N}(0,I)$. Both are multivariate Gaussians of dimension $n$, for which in general the KL divergence is: $$ \mathfrak{D}_\text{KL}[p_1\mid\mid p_2] = \frac{1}{2}\left[\log\frac{|\Sigma_2|}{|\Sigma_1|} - n + \text{tr} \{ \Sigma_2^{-1}\Sigma_1 \} + (\mu_2 - \mu_1)^T \Sigma_2^{-1}(\mu_2 - \mu_1)\right] $$ where $p_1 = \mathcal{N}(\mu_1,\Sigma_1)$ and $p_2 = \mathcal{N}(\mu_2,\Sigma_2)$. In the VAE case, $p_1 = q(z|x)$ and $p_2=p(z)$, so $\mu_1=\mu$, $\Sigma_1 = \Sigma$, $\mu_2=\vec{0}$, $\Sigma_2=I$. Thus: \begin{align} \mathfrak{D}_\text{KL}[q(z|x)\mid\mid p(z)] &= \frac{1}{2}\left[\log\frac{|\Sigma_2|}{|\Sigma_1|} - n + \text{tr} \{ \Sigma_2^{-1}\Sigma_1 \} + (\mu_2 - \mu_1)^T \Sigma_2^{-1}(\mu_2 - \mu_1)\right]\\ &= \frac{1}{2}\left[\log\frac{|I|}{|\Sigma|} - n + \text{tr} \{ I^{-1}\Sigma \} + (\vec{0} - \mu)^T I^{-1}(\vec{0} - \mu)\right]\\ &= \frac{1}{2}\left[-\log{|\Sigma|} - n + \text{tr} \{ \Sigma \} + \mu^T \mu\right]\\ &= \frac{1}{2}\left[-\log\prod_i\sigma_i^2 - n + \sum_i\sigma_i^2 + \sum_i\mu^2_i\right]\\ &= \frac{1}{2}\left[-\sum_i\log\sigma_i^2 - n + \sum_i\sigma_i^2 + \sum_i\mu^2_i\right]\\ &= \frac{1}{2}\left[-\sum_i\left(\log\sigma_i^2 + 1\right) + \sum_i\sigma_i^2 + \sum_i\mu^2_i\right]\\ \end{align} You see that the mean minimisation is the same so the only cost factor is between these stds and the covariance matrix values. We can see that its more expensive to evaluate these integrals in covariance matrix and then minimise them, then just to minimise these std. TL;DR cheaper, but you are right it could be done. Good question!
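The last line is what typically gets implemented as the KL loss term; a PyTorch-style sketch (mu and log_var are stand-ins for the encoder outputs, with log_var = log sigma^2):

import torch

mu = torch.randn(16, 8)        # stand-ins for encoder outputs: batch of 16, latent dim 8
log_var = torch.randn(16, 8)

# 0.5 * sum( sigma^2 + mu^2 - 1 - log sigma^2 ), summed over latent dims, averaged over the batch
kl = 0.5 * torch.sum(log_var.exp() + mu.pow(2) - 1.0 - log_var, dim=1).mean()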
H: How to interpret Shapley value plot for a model? I was trying to use the Shapley value approach for understanding the model predictions. I am trying this on an XGBoost model. My plot looks as below. Can someone help me interpret this? Or confirm my understanding is correct? My interpretation 1) High values of Feature 5 (indicated by rose/purple combination) - lead to prediction 1 2) Low values of Feature 5 (indicated by blue) - lead to prediction 0 3) Steps 1 and 2 apply to Feature 1 as well 4) Low values of Feature 6 lead to prediction 1 and high values of Feature 6 lead to prediction 0 5) Low values of Feature 8 lead to prediction 1 and high values of Feature 8 lead to prediction 1 as well. If a point is towards the extreme of the x-axis (say between 1 and 2, or 2 and 3), it means the low values (in this case) of this feature have a huge impact on prediction 1. Am I right? 6) Why don't I see all my 45 features in the plot, irrespective of importance/influence? Shouldn't I be seeing no color when they have no importance? Why is it that I only see around 12-14 features? 7) What role do Feature 43, Feature 55, Feature 14 play in the prediction output? 8) Why does the SHAP value range from -2 to 2? Can someone help me with this? AI: 1.-2. Not always: there are some blue points also. 3.-5. Yes. 6. It depends on the SHAP plot you are using; on some of them the default is to suppress less important features and not even plot them. 7. They are discriminatory, but not as much; you can reconcile them with some other feature selection technique and decide if you want to keep them. 8. The range of the SHAP values is only bounded by the output magnitude range of the model you are explaining. The SHAP values will sum up to the current output, but when there are canceling effects between features, some SHAP values may have a larger magnitude than the model output for a specific instance. If you are explaining a model that outputs a probability then the range of the values will be -1 to 1, because the range of the model output is 0 to 1. If you are explaining a model that outputs a real number or log odds, the SHAP values could be larger since the model outputs can be larger.
H: Graph isn't an attribute in TensorFlow? Very basic question Hi, I am just starting to learn how to use TensorFlow and this is one of my first projects. I am simply trying to create a graph. This is my code: import tensorflow as tf graph1 = tf.Graph() I cannot seem to create a Graph. The compiler threw an AttributeError (not exactly sure what that is)... AttributeError Traceback (most recent call last) The compiler said the issue was with line two in creating the graph. ----> 2 graph1 = tf.Graph() AttributeError: module 'tensorflow' has no attribute 'Graph' If it is of any use I am using a Jupyter Notebook in JetBrain's PyCharm. I believe I imported everything correctly but I am not sure if using PyCharm will mess anything up. Any help would be very appreciated!! :) AI: Two options: (a) reinstall TensorFlow and try again, or (b) downgrade to version 1.13.1 or lower.
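Before reinstalling, it may also be worth checking what is actually being imported; a common cause of "module 'tensorflow' has no attribute ..." errors is a local file or folder named tensorflow shadowing the real package, or a partially installed build:

import tensorflow as tf

print(tf.__version__)   # confirm a real TensorFlow build is importing
print(tf.__file__)      # if this points at a local tensorflow.py, rename that file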
H: F1 score vs accuracy, which metric is more important? I have two multiclass classification models for making predictions (the number of classes is three, to be precise). One is a Keras neural network, the other is a Gradient Boosting Classifier from the scikit-learn library. I have noticed that after training on the same data, the GBC has a higher accuracy score, while the Keras model has a higher F1 score. Which model should I use for making predictions on future data? Which metric is more important? AI: Well, it is highly dependent on your use case and how your data is distributed. Let me break down the pros and cons in practice: F1-Score Pros: Takes into account how the data is distributed. Useful when you have imbalanced classes. Cons: Less interpretable. Precision and recall are more interpretable than the f1-score, since they measure the type-1 and type-2 errors respectively, whereas the f1-score measures the trade-off between the two. When the positive class is the minority class, the score is quite sensitive to label changes on the samples whose ground truth is positive. Accuracy Pros: Easy to understand. Cons: It does not take into account how the data is distributed (example case below). This can be crucial and might lead to an incorrect conclusion. Here is an example depicting the cons of accuracy. Without loss of generality, consider a binary classification task. Imagine a dataset of 100 samples with 90 negative samples and 10 positive samples. Suppose you have a classifier that predicts all negative. You will have an accuracy of 90%, but if you consider the f1 score, you will actually get 0, because the recall (which is a component of the f1 score) is 0. In practice, for multi-class classification models (which is your use case), accuracy is mostly favored. F1 is usually used for multi-label or binary problems where the classes are highly imbalanced.
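That all-negative example in numbers, as a minimal runnable check:

import numpy as np
from sklearn.metrics import accuracy_score, f1_score

y_true = np.array([0] * 90 + [1] * 10)   # 90 negatives, 10 positives
y_pred = np.zeros(100, dtype=int)        # classifier that predicts all-negative

print(accuracy_score(y_true, y_pred))    # 0.9
print(f1_score(y_true, y_pred))          # 0.0 (recall is 0)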
H: Binary classifier using Keras with backend Tensorflow with a Binary output I am trying to build a binary classifier with tensorflow.keras Currently unable to identify a solution to having the model generating only 0s and 1s. The code for compiling my tensorflow model. from __future__ import absolute_import, division, print_function, unicode_literals import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers def build_model(train_dataset): model = keras.Sequential([ layers.Dense(10, activation='relu', input_shape=[len(train_dataset.keys())]), layers.Dense(1, activation='sigmoid') ]) optimizer = 'adam' model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall(), tf.keras.metrics.Accuracy()]) return model model = build_model(X) After calling model.fit(X,y) and model.predict(X_test) an array of numbers are produced as prediction values: array([[8.3726919e-01], [9.1233850e-04], [8.3726919e-01], ..., [4.6819448e-05], [1.5565268e-08], [0.0000000e+00]], dtype=float32) My current solution to the problem is to get the output as a number in the range (0,1) from the sigmoid activation function; and then transforming the values to 0 or 1 reset_predictions = [] for p in predictions: if p >= 0.5: reset_predictions.append(1) else: reset_predictions.append(0) However this is not ideal; what I hope to achieve is for the model to classify 0 or 1 for me without an external function. AI: I don't believe that's possible, in order for the model to return 0 or 1, your activation function on the output layer would have to return 0 or 1, which would mean that the activation function is non-differentiable, and you cannot do that. Also you can simplify your transformer function to: In [20]: predictions = np.array([[0.1], [0.9], [0.3], [0.6]]) In [21]: (predictions[:, 0] > 0.5).astype(np.int8) Out[21]: array([0, 1, 0, 1], dtype=int8) So it is very little extra work, and it gives your more flexibility if you want to see how confident the model is about the prediction, and you can change the threshold if you like.
H: Why can't I normalize my data using preprocessing.normalize? code: df['Miles'] = preprocessing.normalize(df['Miles']) Error msg: ValueError: Expected 2D array, got 1D array instead: array=[33.3304678 33.3304678 33.3304678 ... 3.00959404 3.00959404 3.00959404]. Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample. AI: sklearn.preprocessing.normalize is vector-norm normalization. By default it uses the L2 norm, $x_{norm}=\sqrt{\Sigma{x_i^2}}$. For example, if there's one vector $(1,2,3)$, then $x_{norm}=\sqrt{1^2+2^2+3^2}=3.7416$, and if you normalize the vector, it becomes $(\frac{1}{3.7416},\frac{2}{3.7416},\frac{3}{3.7416})$. If that's what you want, try # convert to numpy array with shape (rows, 1) df_array = df['Miles'].values.reshape(-1,1) df['Miles'] = preprocessing.normalize(df_array, axis=0) Note the axis=0 setting; otherwise normalization is done row-wise (the default is axis=1) and, with a single column, you would get a vector of all 1s. I think you may want to scale the values rather than normalize them: try mean-std scaling with sklearn.preprocessing.scale or min-max scaling with sklearn.preprocessing.minmax_scale. Be sure that you reshape the array first.
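A quick sketch of those scaling alternatives (same reshape trick; minmax_scale maps the column into [0, 1] and scale standardises it to zero mean and unit variance):

from sklearn.preprocessing import minmax_scale, scale

df_array = df['Miles'].values.reshape(-1, 1)
df['Miles_minmax'] = minmax_scale(df_array).ravel()   # values mapped into [0, 1]
df['Miles_standard'] = scale(df_array).ravel()        # zero mean, unit variance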
H: How do CNNs find different feature maps? Assume I have a CNN whose first (conv) layer takes a 1-channel signal (the input) and gives a 2-channel output. Let's further assume that the rest of the net has a symmetric architecture from the point of view of either of those channels. Moreover, let's assume all the weights are initialized to zero (or to the same number). How do we know that the feature maps (kernels of the channels) after training are not identical, given the symmetry of such a network? Specifically, why is it unreasonable to end up with kernels that have identical feature maps? AI: You shouldn't initialize all the weights to zero (or to any identical value); use random initialization instead. If all weights start out equal in a symmetric architecture, the two channels receive identical gradients at every update, so their kernels remain identical throughout training and the feature maps never diverge. Random initialization breaks this symmetry, which is what allows the channels to learn different features.
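As a small illustration (a sketch; in Keras the default kernel initializer is already the random glorot_uniform, so you only hit the symmetry problem if you override it):
from tensorflow import keras

# random initialization (the Keras default) lets the two output channels diverge
conv_ok = keras.layers.Conv2D(filters=2, kernel_size=3,
                              kernel_initializer='glorot_uniform',
                              bias_initializer='zeros')

# forcing identical initial weights keeps the two channels identical at every step
conv_bad = keras.layers.Conv2D(filters=2, kernel_size=3,
                               kernel_initializer='zeros',
                               bias_initializer='zeros')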
H: How to get sensitivity and specificity for multi-class classification for each fold of cross validation? I am working on a multi-class classification problem with 4 classes. I am applying 5-fold cross-validation and would like to get the sensitivity (recall) and specificity score for each of those folds. I found out that with the cross_validate function I can provide a list of scoring parameters to be computed for each fold.
scoring = {'accuracy' : make_scorer(accuracy),
           'precision' : make_scorer(precision_score),
           'recall' : make_scorer(recall_score),
           'f1_score' : make_scorer(f1_score)}
cross_validate(neural_network, data, y, cv=5, scoring=scoring)
However, this generates an error because these functions (except accuracy) only work for binary classification out of the box, not for multi-class. So I decided to write my own scorers for sensitivity and specificity that return the mean of the 4 per-class values (one per class). I return the mean, and not the 4 individual values, because scorer functions that return multiple values are not permitted; that is fine with me since I only want their mean. Here's what I tried:
def sensitivity(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)
    FP = cm.sum(axis=0) - np.diag(cm)
    FN = cm.sum(axis=1) - np.diag(cm)
    TP = np.diag(cm)
    TN = cm.sum() - (FP + FN + TP)
    Sensitivity = TP / (TP + FN)
    return np.mean(Sensitivity)

def specificity(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)
    FP = cm.sum(axis=0) - np.diag(cm)
    FN = cm.sum(axis=1) - np.diag(cm)
    TP = np.diag(cm)
    TN = cm.sum() - (FP + FN + TP)
    Specificity = TN / (TN + FP)
    return np.mean(Specificity)

scoring = {'sensitivity' : make_scorer(sensitivity),
           'specificity' : make_scorer(specificity)}
cross_validate(neural_network, data, y, cv=5, scoring=scoring)
But it still throws the same error: ValueError: Classification metrics can't handle a mix of multilabel-indicator and multiclass targets. I don't know what's not working here. I just want the mean sensitivity over classes and the mean specificity over classes for each of the 5 folds. What is wrong with my approach, and is there a simpler way to do this? AI: The error comes from confusion_matrix(). Scikit-learn distinguishes several "types of target", three of which matter here: np.array([1, 0, 2]) is multiclass; its one-hot encoding np.array([[0,1,0],[1,0,0],[0,0,1]]) is multilabel-indicator; and np.array([[0.3,0.4,0.3],[0.7,0.2,0.1],[0.1,0.1,0.8]]), i.e. predicted probabilities, is continuous-multioutput. Both inputs of confusion_matrix must be of type multiclass, but your neural network works with one-hot labels and outputs probabilities. You can convert both inside your scorers, e.g. confusion_matrix(y_true.argmax(axis=1), y_pred.argmax(axis=1)): this turns y_true from multilabel-indicator and y_pred from continuous-multioutput into multiclass class indices (rounding the probabilities first is not necessary, since argmax already picks the predicted class).
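A sketch of the modified sensitivity scorer (assuming, as above, that y_true arrives one-hot encoded and y_pred as class probabilities; the specificity scorer can be adapted the same way):
import numpy as np
from sklearn.metrics import confusion_matrix, make_scorer

def sensitivity(y_true, y_pred):
    # collapse one-hot labels / probability outputs to class indices
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    if y_true.ndim > 1:
        y_true = y_true.argmax(axis=1)
    if y_pred.ndim > 1:
        y_pred = y_pred.argmax(axis=1)
    cm = confusion_matrix(y_true, y_pred)
    TP = np.diag(cm)
    FN = cm.sum(axis=1) - TP
    return np.mean(TP / (TP + FN))   # macro-averaged recall over the 4 classes

sens_scorer = make_scorer(sensitivity)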
H: Is it possible to change a pandas column data type within a sklearn pipeline? The sklearn pipeline I am using has multiple transformers, but one of the initial transformers returns a numerical type while the next one takes object-type variables. Basically I need to squeeze in a:
data[col] = data[col].astype(object)
for the required columns within the pipeline. Is there any way to do it? Note: I am using Feature-engine transformers. AI: You sure can. One solution off the bat is to extend the transformer that takes object-type variables. What does that mean? The libraries you mention all follow the sklearn fit/transform convention, so all you have to do is subclass the transformer and cast the columns before delegating to the parent, something like:
class modifiedTraf(oldTraf):
    def __init__(self):
        super().__init__()
        # ... any extra setup ...

    def fit(self, X, y=None):
        X[col] = X[col].astype(object)
        super().fit(X, y)
        return self

    def transform(self, X):
        X[col] = X[col].astype(object)
        return super().transform(X)
Learn more about overriding sklearn classes and the super() call, for example, here.
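Alternatively, if you would rather not subclass anything, a lightweight option is to drop a FunctionTransformer between the two existing steps. This is a sketch: the step names, cols_to_cast, and the numeric_transformer / categorical_transformer objects are placeholders for your own pipeline, and it assumes a recent scikit-learn where FunctionTransformer passes DataFrames through unchanged (validate=False by default):
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer

cols_to_cast = ['col_a', 'col_b']           # hypothetical column names

def cast_to_object(X):
    X = X.copy()                            # avoid mutating the caller's frame
    X[cols_to_cast] = X[cols_to_cast].astype(object)
    return X

pipe = Pipeline([
    ('numeric_step', numeric_transformer),          # your existing transformer
    ('to_object', FunctionTransformer(cast_to_object)),
    ('categorical_step', categorical_transformer),  # your existing transformer
])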
H: building a 2-layer LSTM for time series prediction using tensorflow From the Tensorflow tutorials I am experimenting with time series using LSTMs. In the 'multi-step prediction' section, the LSTM tutorial says: Since the task here is a bit more complicated than the previous task, the model now consists of two LSTM layers. Finally, since 72 predictions are made, the dense layer outputs 72 predictions. where the previous task was prediction of a single point. How do we know how many layers a problem requires (here, 2)? Then, from an implementation point of view, using the Python Tensorflow library:
multi_step_model = tf.keras.models.Sequential()
multi_step_model.add(tf.keras.layers.LSTM(32, return_sequences=True, input_shape=x_train_multi.shape[-2:]))
multi_step_model.add(tf.keras.layers.LSTM(16, activation='relu'))
multi_step_model.add(tf.keras.layers.Dense(72))
Why is there a need to add a Dense(72) layer? What is the Dense() layer doing? (Reading the docs doesn't really help.) AI: First question: how many layers? This is an architectural question and one of the most important ones when constructing a NN. Generally, the more complex the task, the more layers you should use (up to a certain point, beyond which it becomes overkill; this is part of the motivation for ResNet). If you are looking for guidelines there are some good posts, but the research and general trend nowadays is that we tend to over-do depth, and good results can be achieved with some smart tricks without going very deep. TL;DR it depends on the problem, but not as deep as we think for 98% of problems. Second question: why is there a need to add a Dense(72) layer, and what is Dense() doing? As you quoted: since 72 predictions are made, the dense layer outputs 72 predictions. In short, the Dense layer produces the output vector, one unit per predicted time step. In long, a dense layer is a matrix-vector multiplication whose matrix entries are the trainable parameters (weights) updated during backpropagation; if you have seen the mathematical representation of a NN with matrices (which is exactly how the power of GPUs is exploited), that is precisely what this dense layer is. Here the second LSTM returns a 16-dimensional vector, and the Dense layer maps it to the 72 output values.
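A minimal sketch that shows the shape flow (assuming, for illustration, windows of 120 past time steps with 3 features each; in the tutorial these numbers come from x_train_multi.shape[-2:]):
import tensorflow as tf

multi_step_model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(32, return_sequences=True, input_shape=(120, 3)),  # (None, 120, 32)
    tf.keras.layers.LSTM(16, activation='relu'),                            # (None, 16)
    tf.keras.layers.Dense(72)                            # (None, 72): one unit per future time step
])
multi_step_model.summary()   # prints the output shapes noted above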
H: How to order the data with respect to data type I have a large data set (82 variables). Is there any way to arrange the data so that all the numerical variables come first, followed by the categorical variables, so that I can run hypothesis tests or exploratory data analysis (EDA) in a loop? If not, is there any simpler way to do EDA, because I can't check the correlation or the chi-square test for each variable by hand. AI: Let your data frame be df. First get the numeric columns:
num_col = df.select_dtypes('number').columns
Then get the remaining columns:
non_num_col = set(df.columns) - set(df.select_dtypes('number').columns)
Merge as required:
df = pd.concat([df[num_col], df[list(non_num_col)]], axis=1)
The columns are now in the desired order.
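Building on that, a sketch of the kind of loop you mention (assuming scipy is available and that 'outcome' is a hypothetical categorical target column in your data):
import pandas as pd
from scipy.stats import chi2_contingency

# pairwise correlations for all numeric columns at once
corr_matrix = df[num_col].corr()

# chi-square test of each categorical column against the target column
for col in non_num_col:
    table = pd.crosstab(df[col], df['outcome'])      # 'outcome' is hypothetical
    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"{col}: p-value = {p_value:.4f}")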
H: Text summarization with limited number of words I am reviewing summarization techniques and haven't (yet) found an approach that limits the length of a summary, for example a summarization function that gives me a summary of fewer than 500 words. Can you point me in the right direction? Are there approaches/implementations out there that try to solve this challenge? I appreciate your replies! AI: You sure can. For example, in extractive approaches based on latent semantic analysis you fix the number of topics beforehand (which is simply the size of the decomposition matrix), rank sentences by how strongly they load on those topics, and then keep adding the top-ranked sentences until your word budget is reached.
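A rough sketch of that idea (assuming the text is already split into a list of sentences, and using only scikit-learn and numpy; this is one possible way to enforce the word budget, not a reference implementation):
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

def lsa_summary(sentences, max_words=500, n_topics=3):
    """Return the highest-scoring sentences whose total length stays under max_words."""
    X = TfidfVectorizer(stop_words='english').fit_transform(sentences)
    svd = TruncatedSVD(n_components=min(n_topics, X.shape[1] - 1))
    loadings = svd.fit_transform(X)                 # sentence-by-topic matrix
    scores = np.abs(loadings).sum(axis=1)           # overall importance per sentence

    chosen, used = [], 0
    for idx in np.argsort(scores)[::-1]:            # most important sentences first
        n_words = len(sentences[idx].split())
        if used + n_words > max_words:
            continue                                # skip sentences that would overflow the budget
        chosen.append(idx)
        used += n_words
    return [sentences[i] for i in sorted(chosen)]   # restore original sentence order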
H: How to interpret predicted data from a keras model I tried building a keras model to classify leaves from the leaf classification dataset on Kaggle. After I compiled and trained the model, I used it to predict the name of the leaves in the testing images, but all I got is an array of integers. How can I exactly interpret those numbers in order to get the names of the leaves. model = Sequential() model.add(Dense(128, kernel_initializer="uniform", input_dim= 192, activation='relu')) model.add(Dropout(0.25)) model.add(Dense(99, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model_history = model.fit(x=X_train,y=Y_train, epochs=500, batch_size= 32, validation_data=(X_val, Y_val), verbose=1) predictions = model.predict(test_arr, batch_size=32, verbose=1) computed_predictions = np.argmax(predictions, axis=1) computed_predictions array([51, 50, 1, 19, 14, 3, 3, 28, 84, 8, 43, 74, 75, 10, 52, 46, 45, 73, 13, 71, 61, 68, 57, 77, 1, 70, 28, 15, 35, 70, 53, 74, 47, 50, 4, 36, 14, 69, 36, 93, 8, 32, 8, 9, 71, 70, 38, 23, 26, 18, 17, 5, 55, 94, 14, 86, 62, 33, 51, 54, 88, 56, 21, 59, 65, 11, 48, 5, 13, 4, 54, 57, 29, 7, 31, 98, 92, 84, 25, 10, 61, 43, 85, 24, 1, 2, 23, 83, 40, 22, 48, 90, 25, 21, 37, 56, 41, 95, 7, 49, 98, 77, 3, 12, 31, 84, 53, 96, 64, 72, 93, 93, 67, 30, 8, 88, 60, 87, 6, 57, 34, 34, 60, 17, 75, 27, 51, 73, 39, 23, 38, 2, 41, 61, 24, 97, 29, 28, 68, 81, 42, 51, 86, 62, 60, 52, 95, 81, 42, 96, 95, 20, 59, 35, 86, 1, 26, 38, 43, 75, 20, 60, 46, 79, 22, 79, 69, 87, 65, 97, 75, 21, 29, 21, 11, 10, 58, 94, 27, 22, 15, 45, 89, 54, 43, 5, 23, 94, 40, 49, 89, 72, 36, 11, 81, 95, 18, 91, 29, 64, 80, 6, 78, 45, 28, 9, 78, 90, 44, 89, 92, 13, 2, 59, 0, 96, 70, 32, 29, 78, 91, 55, 44, 38, 5, 60, 49, 58, 93, 67, 92, 88, 90, 79, 25, 37, 18, 0, 76, 27, 70, 71, 44, 70, 32, 90, 30, 82, 34, 30, 82, 96, 48, 65, 57, 64, 26, 53, 69, 73, 9, 3, 83, 26, 30, 63, 17, 22, 36, 63, 12, 78, 36, 14, 27, 25, 67, 38, 20, 54, 76, 69, 67, 97, 80, 44, 92, 69, 23, 21, 11, 51, 33, 77, 16, 11, 97, 1, 52, 39, 24, 52, 42, 17, 2, 73, 96, 83, 88, 9, 63, 50, 16, 37, 87, 95, 3, 35, 83, 60, 59, 58, 0, 79, 62, 38, 93, 68, 69, 46, 19, 46, 94, 18, 0, 33, 89, 40, 62, 48, 42, 6, 31, 91, 73, 81, 12, 85, 26, 6, 79, 2, 22, 35, 43, 6, 80, 78, 82, 5, 61, 37, 43, 33, 69, 56, 71, 45, 59, 42, 66, 86, 98, 83, 90, 64, 82, 11, 79, 56, 56, 49, 48, 20, 74, 15, 33, 49, 89, 44, 7, 35, 14, 55, 23, 34, 44, 32, 30, 36, 9, 72, 31, 61, 50, 82, 34, 28, 22, 92, 72, 11, 19, 4, 87, 51, 80, 39, 84, 32, 66, 36, 41, 31, 80, 4, 26, 68, 96, 20, 36, 34, 39, 56, 73, 76, 84, 7, 67, 37, 8, 95, 85, 62, 10, 65, 41, 2, 83, 86, 41, 52, 3, 49, 47, 76, 52, 11, 26, 88, 71, 45, 39, 66, 87, 75, 74, 7, 64, 65, 78, 63, 56, 21, 61, 88, 62, 91, 59, 12, 74, 15, 85, 8, 66, 57, 83, 82, 72, 58, 96, 7, 67, 66, 57, 66, 92, 35, 18, 9, 54, 91, 65, 19, 15, 10, 24, 71, 69, 48, 39, 98, 16, 19, 45, 74, 6, 69, 42, 34, 71, 47, 85, 28, 85, 47, 25, 27, 58, 68, 84, 97, 63, 97, 76, 81, 87, 77, 14, 0, 28, 41, 14, 12, 33, 86, 46, 4, 4, 47, 30, 19, 58, 13, 77, 98, 5, 49, 72, 53, 32, 77, 40, 68, 26, 92, 16, 81, 37, 14, 93, 80, 53, 46, 25, 50, 17, 37, 93, 0, 20, 54, 10, 91, 40, 81, 53, 18, 27, 1, 12, 54, 73, 15], dtype=int64) AI: Simple do backwards transformation of y_pred. You label-encoded Y_train (i.e. 
each of the integers 0-98 stands for one leaf species). All you need to do is invert the mapping from integer label back to species name, which is stored in the LabelEncoder you fitted on the training labels. Something like this:
le = LabelEncoder()
# ... rest of the preprocessing ...
Y_train = le.fit_transform(train_species_names)   # train_species_names: the raw species-name labels
predicted_species = le.inverse_transform(computed_predictions)
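For completeness, a tiny round-trip example (the species names here are made up just for illustration):
import numpy as np
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
y_int = le.fit_transform(['Acer_Opalus', 'Quercus_Rubra', 'Acer_Opalus'])  # hypothetical species
print(y_int)                                   # [0 1 0]
print(le.inverse_transform(np.array([1, 0])))  # ['Quercus_Rubra' 'Acer_Opalus']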
H: Use of Standardizer to handle outliers? I have a dataset with 60 columns and 5K records. A few columns have outliers. I understand that there are multiple approaches to handling outliers. I don't actually wish to drop the data just because it is an outlier, because I am not sure whether it really is an outlier or a meaningful value (the way income can be skewed). So what I did was standardize the columns using the standard scaler. Is it okay to do this? I am assuming that by doing this standard scaling I have retained the data and also got rid of the outliers. I referred to a couple of posts (post1, post2) but couldn't get a clear idea. Is this what standardization does? Can anyone suggest another way to handle outliers without dropping the records? AI: Standardizing may not be the best option here. Standardized values are not bounded (unlike, say, min-max normalization to [-1, 1]); they remain distribution dependent, so an extreme raw value is still an extreme value, just expressed in standard deviations. More importantly, rescaling does not address what makes outliers problematic in the first place: the issue is not that their values are large, but that they lie in a region that is not representative of future data, and the model will still learn from them. One way to handle them without dropping records is binning or clipping (winsorizing): for example, replace every value that lies more than some number of standard deviations from the mean, or beyond the 99.5% quantile, with that boundary value X.
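A small sketch of the clipping idea (here 'income' stands in for any of your skewed columns, and the 0.5% / 99.5% cut-offs are just example choices):
# clip (winsorize) a skewed column at its 0.5% and 99.5% quantiles
low, high = df['income'].quantile([0.005, 0.995])
df['income_clipped'] = df['income'].clip(lower=low, upper=high)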
H: Prediction vs causation in a ML project I am performing a classification task and was able to identify significant predictors (important features, using Random Forest) that help separate the classes or influence the outcome. But I read online that prediction models are not causal models. Let's say my prediction model says that Age is one of the significant factors that influence the outcome (death); how can I prove that Age is a cause of death? I read that an intervention/change on strong predictors of a model will not necessarily impact the outcome. How can I find the list of factors that really cause a change in the outcome? Currently what I do is run a RF model to identify the important features and communicate that these are the top 5 features that seem to influence the outcome. How can I prove that it is causation and not just correlation? AI: ML questions are concerned mainly with prediction, but we can extrapolate, in a certain sense, towards causality. First of all, these are two different modelling approaches: causal inference is focused on knowing what happens to $Y$ when you change $X$; prediction is focused on knowing the next $Y$ given $X$. Established causal approaches include randomised experiments, do-calculus, and so on. So how can we extrapolate from a standard predictive ML model towards causal questions? Counterfactual explanations: we simulate counterfactuals for the predictions of a machine learning model by changing the feature values of an instance before making the prediction and analyzing how the prediction changes. Read more about it here; there is a Python library called alibi that implements it. Keep in mind that this tells you what the model would predict under the change, which is evidence about the model, not proof of a real-world causal effect; for the latter you still need an experimental design or explicit causal identification assumptions.
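As a very rough illustration of the counterfactual idea (a sketch only: model is assumed to be a fitted classifier with predict_proba, x a single feature row as a numpy array, and age_idx the hypothetical column index of Age; this probes the model's behaviour, not real-world causality):
import numpy as np

for delta in [-10, -5, 0, 5, 10]:
    x_cf = x.copy()
    x_cf[age_idx] += delta                      # perturb only the Age feature
    risk = model.predict_proba(x_cf.reshape(1, -1))[0, 1]
    print(f"Age shifted by {delta:+d} years -> predicted risk {risk:.3f}")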
H: What is the meaning of likelihood? I am studying Bayes probability applied to machine learning, and I have encountered the concept of likelihood, which I don't understand. I have seen that the Bayes rule is: $P(A|B)=\frac{P(B|A)P(A)}{P(B)}$ where $P(A|B)$ is the posterior (a conditional probability), $P(A)$ is the prior, and $P(B|A)$ is the likelihood, but I don't understand what the likelihood is. Can somebody please help me? AI: Theory Let us use notations that are less confusing than in your question: the likelihood of a probability distribution $P$ indexed by parameters $\theta$ (namely, $P_\theta$), given a set of observations $\{x_i\}$, is the probability that the observations were yielded by this distribution, given $\theta$. This translates into: $$\mathcal{L}(\theta \mid x) = P_\theta(X_1=x_1, \ldots, X_n = x_n)$$ which can also be written $P(X = x\mid\theta)$. The link with Bayesian statistics is the reversal of the conditioning: $\theta\mid x$ versus $x\mid\theta$. The higher this probability, the more likely it is that the observations were yielded by this distribution. Example Consider a set of 10 heads/tails draws, which results in 7 heads and 3 tails. The likelihood that the coin is balanced ($p_\text{heads} = p_\text{tails} = 0.5$), given the results, is: $$\mathcal{L}_\text{balanced} = P(\text{7 heads, 3 tails} \mid p_\text{heads} = 0.5) = \binom{10}{7}\left(\frac{1}{2}\right)^7\left(\frac{1}{2}\right)^3 \approx 0.117$$ And the likelihood that the probability of heads is 0.8 is: $$\mathcal{L}_{p_\text{heads}=0.8} = P(\text{7 heads, 3 tails} \mid p_\text{heads} = 0.8) = \binom{10}{7}0.8^7\,0.2^3 \approx 0.201$$ From this, you can tell that it is more likely that the coin has an 80% probability of heads than that it is balanced.
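A quick numerical check of those two likelihoods (a sketch using scipy's binomial pmf):
from scipy.stats import binom

# P(7 heads in 10 flips | p_heads)
print(binom.pmf(7, 10, 0.5))  # ~0.117
print(binom.pmf(7, 10, 0.8))  # ~0.201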