H: Using data augmentation for a frozen pre-trained model I was following this article on transfer learning: https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html In the section "Using the bottleneck features of a pre-trained network: 90% accuracy in a minute", the authors mention: "Note that this prevents us from using data augmentation" I am not very clear about this; is there a rule that discourages the use of data augmentation when the pre-trained model is totally frozen? AI: No, you can definitely use data augmentation when the layers of the pre-trained model are frozen. In this article, the author refers to the size of VGG16, which is quite a large CNN. Also, he trains the model on his CPU, which slows down training even more. Therefore, he does not want to use data augmentation, as that would increase the training time even more: Running VGG16 is expensive, especially if you're working on CPU, and we want to only do it once. Note that this prevents us from using data augmentation. The key point is that the bottleneck approach runs the frozen VGG16 base once over the data and caches the resulting features; with augmentation, every epoch produces new images, so the features could no longer be cached and the expensive base would have to be run again and again. It is a cost trade-off, not a rule.
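A minimal Keras sketch of the alternative the answer describes - augmented batches flowing through a frozen VGG16 base at training time - assuming a binary task, 150x150 images and a data/train directory; the layer sizes and augmentation parameters are arbitrary, not taken from the article:

from keras.applications import VGG16
from keras.models import Model
from keras.layers import Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator

base = VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))
for layer in base.layers:
    layer.trainable = False   # freeze the whole pre-trained model

x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
out = Dense(1, activation='sigmoid')(x)
model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])

# Augmented images are generated on the fly; the frozen base simply recomputes
# its features for every augmented batch (slower than cached bottleneck features, but valid).
train_gen = ImageDataGenerator(rescale=1./255, rotation_range=20, horizontal_flip=True)
train_flow = train_gen.flow_from_directory('data/train', target_size=(150, 150),
                                           batch_size=32, class_mode='binary')
model.fit_generator(train_flow, steps_per_epoch=100, epochs=5)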
H: Why do seaborn.dist and pyplot.hist generate two different looking histograms on the same data? I'm looking at telecom customers data. Two of the variables I'm looking at currently are: Monthly Charges - The total amount charged to the customer monthly. Is Senior Citizen - Whether the customer is a senior citizen. I'm trying to plot two histograms to see if the distributions for non-senior and senior citizens is different. If I use seaborn's distplot then I get the following result And if I use pyplot hist then I get the following result In the first plot the blue one towers above the orange ones in the range ~70-120 whereas in the second image the blue one always stays below the orange one. What is the difference between these two? AI: The first returns a probability density of the distributions. As you can see, they integrate to 1, i.e. they cover the same area (because they are probabilities, not the raw data). The second returns actual frequencies, and that's why you have the actual scale of the data. Different histograms having different scales.
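A small sketch of how to make the two plots comparable, assuming the dataframe is called df with columns MonthlyCharges and SeniorCitizen (names are guesses based on the question):

import matplotlib.pyplot as plt
import seaborn as sns

senior = df[df.SeniorCitizen == 1].MonthlyCharges
non_senior = df[df.SeniorCitizen == 0].MonthlyCharges

# distplot normalises to a probability density by default ...
sns.distplot(non_senior, label='non-senior')
sns.distplot(senior, label='senior')
plt.legend()
plt.show()

# ... so to get a comparable picture from pyplot, normalise the counts too.
plt.hist([non_senior, senior], bins=30, density=True, label=['non-senior', 'senior'])
plt.legend()
plt.show()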
H: Getting Value Error while training a model for binary classification While training a sequential model using Keras, Im getting this error The model summary is shown below Layer (type) Output Shape Param # ================================================================= dense_1 (Dense) (None, 512) 20480512 _________________________________________________________________ activation_1 (Activation) (None, 512) 0 _________________________________________________________________ dropout_1 (Dropout) (None, 512) 0 _________________________________________________________________ dense_2 (Dense) (None, 512) 262656 _________________________________________________________________ activation_2 (Activation) (None, 512) 0 _________________________________________________________________ dropout_2 (Dropout) (None, 512) 0 _________________________________________________________________ dense_3 (Dense) (None, 2) 1026 _________________________________________________________________ activation_3 (Activation) (None, 2) 0 ================================================================= I used the below steps to train the model, for binary classification, model = Sequential() model.add(Dense(512, input_shape=(vocab_size,))) model.add(Activation('relu')) model.add(Dropout(0.3)) model.add(Dense(512)) model.add(Activation('relu')) model.add(Dropout(0.3)) model.add(Dense(num_labels)) model.add(Activation('softmax')) num_labels = 2 for the above code The error is shown below. ValueError: Error when checking target: expected activation_3 to have shape (2,) but got array with shape (1,) AI: The problem is the shape of y_train. You're defining an output layer of 2 neurons, and only passing labels with a shape of (n_samples, 1). You need to convert this to shape (n_samples, num_labels) and can use keras.utils.to_categorical to do so easily. The number of neurons in your output layer has to match the second dimension of the label array, so you transform it from one dimensional like this: [0,1,0,0,0,1] To two dimensional like this: [[1,0], [0,1], [1,0], [1,0], [1,0], [0,1]] When you make your predictions, softmax activation will give you a 'probability' for each class like so: [[0.97,0.03], [0.85,0.15], ... ]]
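A short sketch of the conversion described above, using keras.utils.to_categorical; x_train and the commented fit call stand in for the asker's own data:

from keras.utils import to_categorical
import numpy as np

y_train = np.array([0, 1, 0, 0, 0, 1])                   # shape (n_samples,)
y_train_onehot = to_categorical(y_train, num_classes=2)  # shape (n_samples, 2)
print(y_train_onehot)
# [[1. 0.]
#  [0. 1.]
#  [1. 0.]
#  [1. 0.]
#  [1. 0.]
#  [0. 1.]]
# model.fit(x_train, y_train_onehot, epochs=5)  # now matches the 2-unit softmax layer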
H: How to barplot output of pandas.describe() from multiple datasets I'm trying to compare the differences and similarities between 10 dataframes. I have decided to df.describe() each dataframe in turn and accumulate the results into a new dataframe. count mean std min 25% 50% 75% max run 0 38 11.9394 3.99795 2.66622 9.00963 13.6531 14.6516 18.2803 1 75 13.7902 2.69114 8.06895 13.5017 14.3492 15.4146 17.4614 2 17 13.9666 1.12535 11.1525 13.7025 14.1217 14.6637 15.6118 3 21 13.2841 2.81016 6.25177 13.198 14.0382 15.1457 16.2141 4 29 11.5376 3.35056 6.70377 8.43451 12.8287 14.7004 16.155 5 11 12.5245 3.0237 6.01391 11.0818 13.6772 14.6237 15.527 6 32 13.7039 2.36393 6.95464 13.6765 14.1967 14.8114 17.3966 7 11 13.9055 2.03886 10.5235 12.6321 13.9394 14.5784 18.0726 8 19 13.2579 1.80329 9.00478 13.0772 13.8909 14.1755 15.0772 9 28 13.2817 3.61778 5.64462 9.90116 14.6581 15.6785 18.7766 I thought from this point it would be trivial to do a barplot where each bar was a different variable (the columns) and they where hue'd according to which dataframe the variable was from(the rows). However I can't work out how to split up the columns. sns.barplot(data = describedWidth) outputs the following graph Thanks in advance AI: You need to get used to so called wide and long table format, from there you should get the trick rapidly. By then, I would use unstack in order to have three columns and do something like this: unstacked_data = describedWidth.unstack() unstacked_data = unstacked_data.reset_index() unstacked_data.columns = ['metric', 'run', 'y'] sns.barplot(data=unstacked_data, x="metric", y="y", hue="run" ) Which should give a table that has the following format: And final result:
H: "Stationarity of statistics" and "locality of pixel dependencies" I'm reading the ImageNet Classification with Deep Convolutional Neural Networks paper by Krizhevsky et al, and came across these lines in the Intro paragraph: Their (convolutional neural networks') capacity can be controlled by varying their depth and breadth, and they also make strong and mostly correct assumptions about the nature of images (namely, stationarity of statistics and locality of pixel dependencies). Thus, compared to standard feedforward neural networks with similarly-sized layers, CNNs have much fewer connections and parameters and so they are easier to train, while their theoretically-best performance is likely to be only slightly worse. What's meant by "stationarity of statistics" and "locality of pixel dependencies"? Also, what's the basis of saying that CNN's theoretically best performance is only slightly worse than that of feedforward NN? AI: At the time of writing the article, CNNs were not yet a particularly popular architecture for neural networks (and neural networks in themselves weren't as popular and common as today). In fact this article can be seen as the one who started the current age of deep learning as a default machine learning approach. All this intro was for the soul purpose of saying that by "standard feed forward neural networks", the author meant neural networks that consist only out of dense layers (fully connected). So to answer your last question first, in theory, anything a convolutional layer can do, a fully connected layer with the same number of input parameters and output parameters can also accomplish. i.e. you can produce a convolutional layer from a fully connected one (it will just have the same weights repeat themselves in a pattern of the same size as the convolution kernel), and you can also produce a lot more operations that a simple convolutional layer cannot perform. The simplest example can be a location dependent convolutional layer (where different areas of the image are convolved with different kernels). In reality, the search space for optimal parameters of such a layer would be so huge (and so non-convex), that it won't be able to converge. The limitations on the convolutional layer are actually what helps her shine, which brings us to your first question. The "locality of pixel dependencies" is exactly what allows such a limited operation, to achieve such fine results. The meaning of the sentence is very simple, close pixels are very likely to be dependent on each other, so we can and should leverage this dependency to process them together (as is done with convolutional kernels). Regarding the "stationary of statistics" phrase, I'm not 100% sure, but I think it refers to the fact that at small local patches, there is a higher probability of finding recurring patterns (the smaller the patch size, the smaller the possible variance of the patch). However as I said, I'm not sure about this one.
H: How to compare different similarity measurements in text clustering? I have a dataset which contains vectors generated from subtitles (each column represents a genre, each row is a movie name), my purpose is to find the most similar movie titles, I want to use different distance/similarity measurements and compare them, what is the best method to use? For now, I tried L1 distance, cosine similarity, Euclidean distance, Mahalanobis Distance, I got the results of top n most similar titles, but all the results seem very reasonable, how can I compare them to see which method perform best? I also tried to do k-means, when I implement K-means clustering, it used Euclidean distance by default, how to use other distance to implement K-means?Also any suggestions about other similarity measurements? Many thanks AI: I would try a different approach than clustering. For now, I tried L1 distance, cosine similarity, Euclidean distance, Mahalanobis Distance First, you could have a look at approximate string matching measures. These are likely to give you much better similarity results on a pair of movie titles. It's usually a good idea to use not only word-based measures but also character-based or char n-grams based measures. how can I compare them to see which method perform best? A proper evaluation framework would require annotating manually a large amount of pairs of titles as similar/not similar (or even a degree of similarity). Unless you have a lot of time, this is completely impractical because there is certainly a massive imbalance between positive and negative pairs. So instead you could use bootstrapping, which means running a few similarity measures on your data, extract the top N pairs for each measure, then manually annotate only these. It's likely that this would give you a high amount of (rare) positive cases, and you can build a labeled dataset by assuming that other instances are negative. It's obviously a simplification, otherwise you can take the time to annotate a lot of negative cases as well (it's still much faster than without bootstrapping, since you already have your positive cases). my purpose is to find the most similar movie titles, I want to use different distance/similarity measurements and compare them, what is the best method to use? Based on the dataset you have built, you can now train a supervised model, with a pair of titles as instance. You can use various similarity measures as features, and you should vary the type of similarity (char-based, ngram-based, word-based) across these features in order to provide the model with a diversity of characteristics. Then you can predict the similarity between any two pairs. This gives you a graph of similarity relations between all the movies, from which you can extract groups which are similar together. Note that this is just a general strategy, many parts of it can be refined/adapted to your data and of course it depends how much time you want to spend on this problem.
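As one concrete illustration of a character-n-gram-based measure (one possible feature among several, not the whole strategy described above), here is a scikit-learn sketch; the titles are made up:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

titles = ["The Godfather", "The Godfather Part II", "Goodfellas"]

# Character 2-3-grams capture spelling-level similarity between titles,
# which complements word-based or genre-based vectors.
vectorizer = TfidfVectorizer(analyzer='char_wb', ngram_range=(2, 3))
X = vectorizer.fit_transform(titles)
sim = cosine_similarity(X)   # pairwise similarity matrix between the titles
print(sim.round(2))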
H: Is there any method to determine which clustering algorithm to use on a particular dataset? I'm having a hard time getting kmeans to cluster data effectively. It fails to segment data well even for a simple attribute with 5 categories. I'm aware of DBSCAN, Hierarchical Clustering and GMM. However, just wanted to know if there's any way (visual or otherwise) to narrow down the clustering algorithm which might work on the dataset in question, before I start to write the code for each of these algorithms. Thanks in advance. AI: No. Clustering is an explorative technique, it is subjective what is good, and the best clusters are those that are "interpretable but unexpected", a property that you cannot quantity with statistics. So it is a trial-and-error task. Furthermore, data preparation is much more important than the choice of clustering algorithm. On badly prepared data, none will work. Last but not least, categoricial data is a huge problem. It lacks detail for most clustering approaches - treating this as binary variables is much too coarse and tends to produce bad solutions (such as tiny "clusters" and trivial splits on a single variable). This is likely a problem of the data, not the algorithm. Similar issues can be seen with integer attributes or any other attribute that has only very few discrete levels (including Likert-like-scale questionnaires). Methods such as k-modes exist for categoricial data, but often don't produce better results either...
H: How to define a neural model as linear or non linear TL;DR: in what sense is the model of a neuron seen in the image above nonlinear? In chapter 1, section 1.3 MODELS OF A NEURON of Simon Haykin's Neural Networks book, the standard model of a single neuron is described and visualised in the picture above. Haykin states that this model, which consists of a set of inputs x1..xm, their corresponding weights w1..wm, a linear combiner that sums the weighted inputs and the bias (b) and an activation function that takes that sum and produces the output, is nonlinear. So, my question is, isn't the output linearly dependent on the input? For example, if the neuron only takes one input, x1, then the linear combiner takes the form v = x1 + b and the activation function is φ(v). So, the only way that I can see this model being nonlinear is if the activation function is nonlinear. But there are clearly cases where the activation function is linear (like the piecewise-linear function described in that same section of the book). So how can this model be inherently nonlinear? I realise that this isn't a major concern, but I'd like to understand every part of the book before moving on, and this has been bugging me since I saw it. Thanks to everyone in advance for your answers. AI: You are right, for the model to be non linear, the activation function must be non linear. If the activation function is for example the identity function, your model won't be non linear, even if you stack multiple hidden layers. Indeed, the output of the neurons will only just be a linear composition of the inputs combined with weights. I think the author says this neuron model is non-linear because in practice a linear activation function is almost never used. Some functions look like linear functions, but in fact these are not. Here is a demonstration for the ReLU activation function, which look like a linear function but is mathematically not : https://datascience.stackexchange.com/a/26481/73209 I guess it is the same for the Piecewise Linear Unit (PLU) activation function.
H: unimportant features impact on model's performance Using XGBoost and RandomForests, do unimportant features (according to the feature_importances_ attribute) hurt the model's performance? Do I need to carefully select highly correlated and import features? Or do I throw everything in and hope that it can correctly add some information on the target variable? AI: Yes, unimportant features can hurt the model's performance. This happens in my experience in a few ways: Efficiency - they make the fitting process slower. Particularly if you're one-hot encoding categorical features, and end up with a large and useless sparse matrix. If these reductions in efficiency force you into taking steps that might impact on actually informative features, like a PCA transformation, you're then potentially also directly impacting on predictive ability. A minimum number of samples is needed per feature for models to effectively learn. By including redundant features, this sample:feature ratio is diminished, which makes it more difficult for many models to determine whether or not a feature is a useful predictor. You make it more likely to overfit to the "noisy" features. You can recursively reject features that the model's feature_importances_ routine has decided are unimportant using sklearns recursive feature elimination. Or, in the exploratory phase of building your model, you can assess predictive power using visualisation or hypothesis testing. This nicely highlights the improvement in performance by ditching redundant features: https://scikit-learn.org/stable/auto_examples/feature_selection/plot_rfe_with_cross_validation.html#sphx-glr-auto-examples-feature-selection-plot-rfe-with-cross-validation-py
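A minimal sketch of the recursive feature elimination mentioned above, on synthetic data (replace make_classification with your own X and y):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV

X, y = make_classification(n_samples=500, n_features=25, n_informative=5, random_state=0)

# Repeatedly drops the least important features according to feature_importances_
# and keeps the subset with the best cross-validated score.
selector = RFECV(RandomForestClassifier(n_estimators=100, random_state=0),
                 step=1, cv=5, scoring='accuracy')
selector.fit(X, y)
print("Optimal number of features:", selector.n_features_)
print("Kept feature mask:", selector.support_)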
H: Pandas change value of a column based another column condition I have values in column1, I have columns in column2. What I want to achieve: Condition: where column2 == 2 leave to be 2 if column1 < 30 elsif change to 3 if column1 > 90. Here is what i did so far, the problem is 2 does not change to 3 where column1 > 90. filter1 = data['column1'] for x in filter1: if x < 30: data['column2'] = data['column2'].replace([2], [2]) else: data['column2'] = data['Output'].replace([2], [3]) AI: What I want to achieve: Condition: where column2 == 2 leave to be 2 if column1 < 30 elsif change to 3 if column1 > 90 This can be simplified into where (column2 == 2 and column1 > 90) set column2 to 3. The column1 < 30 part is redundant, since the value of column2 is only going to change from 2 to 3 if column1 > 90. In the code that you provide, you are using pandas function replace, which operates on the entire Series, as stated in the reference: Values of the Series are replaced with other values dynamically. This differs from updating with .loc or .iloc, which require you to specify a location to update with some value. This means that for each iteration of for x in filter1 your code performs global replacement, which is not what you want to do - you want to update the specific row of column2 that corresponds to x from column1 (which you are iterating over). the problem is 2 does not change to 3 where column1 > 90 This is truly strange. I would expect the code you provided to have changed every instance of 2 in column2 to 3 as soon as it encountered an x >= 30, as dictated by your code conditional statement (the execution of the else branch). This discrepancy may stem from the fact that you are assigning to column2 the result of global replacement performed on the column Output (the contents of which are unknown). In any case, if you want your program to do something under a specific condition, such as x > 90, it should be explicitly stated in the code. You should also note that the statement data['column2'] = data['column2'].replace([2], [2]) achieves nothing, since 2 is being replaced with 2 and the same column is both the source and the destination. What you could use to solve this particular task is a boolean mask (or the query method). Both are explained in an excellent manner in this question. Using a boolean mask would be the easiest approach in your case: mask = (data['column2'] == 2) & (data['column1'] > 90) data['column2'][mask] = 3 The first line builds a Series of booleans (True/False) that indicate whether the supplied condition is satisfied. The second line assigns the value 3 to those rows of column2 where the mask is True.
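As a variation on the last two lines, .loc performs the selection and the assignment in a single step, which avoids the SettingWithCopyWarning that chained indexing like data['column2'][mask] = 3 can trigger; the example data below is made up:

import pandas as pd

data = pd.DataFrame({'column1': [20, 95, 100, 50],
                     'column2': [2, 2, 2, 7]})

mask = (data['column2'] == 2) & (data['column1'] > 90)
# select the matching rows of column2 and assign in one step
data.loc[mask, 'column2'] = 3
print(data)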
H: Is it normal that a classifier always wrongly predicts the same samples? I'm trying to improve the accuracy of a classifier, a random forest. I built different models with the same hyperparameters but with different random seeds, trained them on the same training data, used the same test data to make the predictions and compared the results. I discovered that 50% of the errors were always made on the same samples. Therefore, do these samples which are always wrongly predicted deserve particular attention, or is this to be expected? I hope the question is clear enough. AI: What you are experiencing is pretty normal given your approach. A random forest is an ensemble of decision trees where models are trained in parallel using bootstrapped samples (a technique called bagging). Even though the decision trees are randomized (a thorough explanation of random_state can be found in this question), they still rely on an internal criterion (such as the Gini index by default in RandomForestClassifier) to split the nodes and define decision paths. The fact that some of your samples are consistently being misclassified regardless of the random state is an indication of their objective difficulty when using this specific criterion. Therefore, do these samples which are always wrongly predicted deserve particular attention, or is this to be expected? You are absolutely correct with your first thought. Paying particular attention to wrongly predicted samples in ensembles is the goal of a technique called boosting. The main idea is to train ensemble models in sequence, with new learners focusing on data points that the rest of the ensemble has previously failed on. A great overview of ensemble approaches is presented in this answer, which I highly recommend. As far as boosting algorithms go, there are different flavors as well: you might want to try sklearn's AdaBoost and gradient tree boosting implementations, or XGBoost. These may help you finally defeat those pesky hard-to-classify samples, but be aware that bagging (your current model) has its own perks that boosting lacks.
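A minimal sketch of the boosting idea with scikit-learn's AdaBoost; X_train and y_train stand for the asker's own training data and the hyperparameters are arbitrary:

from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Boosting: each new tree is fitted with extra weight on the samples the
# previous trees got wrong - exactly the "hard" samples described above.
boosted = AdaBoostClassifier(
    base_estimator=DecisionTreeClassifier(max_depth=2),
    n_estimators=200, learning_rate=0.5, random_state=0)

# X_train, y_train: your existing training data
print(cross_val_score(boosted, X_train, y_train, cv=5).mean())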
H: SRGAN Generator Architecture: Why is it possible to do this elementwise sum? Consider the first residual block. Its first convolution layer takes in inputs: THe PRELU's output 64 filters(64 outputs) each one being 3*3 with a stride of (1 ; 1) So I think that the output of this convolution will have a different shape than the PRELU's output's one. Then there is a second convolution layer in this same first residual block. So the shape will be even more different! The first residual blocks ends with an elementwise sum... Which has these problematic inputs: PRELU's output The Conv2D, BN, PRELU, Conv2D, BN output The problem is that, as explained above, the PRELU's output will have a shape quite different than "the Conv2D, BN, PRELU, Conv2D, BN output"'s shape. So how is it theoretically possible (then technically, using Keras) to do this elementwise sum? Edit: by "shape", I mean "width, height and eventually depth". AI: It's possible by adding 0 as padding to the tensor. What is currently refered to "same padding" (in particular in Keras).
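A tiny sketch showing the effect of 'same' padding in Keras: with stride 1 the output keeps the input's height and width, so the element-wise sum at the end of the residual block is well defined (the 96x96x64 input shape is just an example, not taken from the paper):

from keras.models import Sequential
from keras.layers import Conv2D

m = Sequential()
# padding='same' zero-pads the borders so the spatial size is preserved
m.add(Conv2D(64, (3, 3), strides=(1, 1), padding='same', input_shape=(96, 96, 64)))
print(m.output_shape)   # (None, 96, 96, 64) - same height/width as the input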
H: CrossValidation using glmnet and very high values of Lambda? I am trying to run crossvalidation (folds=10) using glmnet library on my dataset. My outcome of interest is BMI and predictors include a set of clinical variables. My final goal is to use elastic-net regression to select features and also predict BMI. For crossvalidation, I am using a range of alpha from 0 to 1 using a 0.1 increment. I am using min CVM to decide the values of lambda, with my current code I get extremely high estimates for lambda. Are higher values of lambda acceptable or I am failing to see here? Below is my code snippet. I appreciate all your help and comments. size <- floor(nrow(DataFile) * 0.7) Train_rows <- sample(rownames(DataFile),size=size,replace=FALSE) Train_Data <- DataFile[Train_rows,] Train_bmi <- phenoFile[Train_rows,]$BMI ####### Cross Validation Alpha and Lambda ##### myAlpha <- seq(0,1,by=0.1) findAlpha_lambda <- function(iAlpha){ Train_Data <- as.matrix(Train_Data) crossModel <- cv.glmnet(Train_Data,Train_bmi,alpha=iAlpha) myLambda <- crossModel$lambda.min myCVM <- min(crossModel$cvm) title <- paste(iAlpha,myLambda,sep="_") return(c(iAlpha,myLambda,myCVM)) } myFrame <- as.data.frame(do.call(rbind,lapply(myAlpha,findAlpha_lambda))) colnames(myFrame) <- c('Alpha','Lamda','CVM') myFrame <- myFrame[order(myFrame$CVM),] print(myFrame) Alpha Lamda CVM 1 0.0 50.9839208 54.25337 2 0.1 1.7901432 54.37151 3 0.2 3.1427680 54.75240 4 0.3 2.1949422 57.68935 5 0.4 1.8927376 61.68384 9 0.8 1.2510439 63.69622 6 0.5 1.0933677 64.68333 8 0.7 0.2441112 64.73192 7 0.6 2.2050751 65.01727 11 1.0 1.3860429 65.17181 10 0.9 0.5042962 65.70732 AI: I don‘t think a large lambda is a problem per se. It just means that a lot of regularization is going on (under Ridge). See here: https://stats.stackexchange.com/questions/212056/ridge-lasso-lambda-greater-than-1 Here is a good tutorial from the authors of glmnet. I suggest you check your approach, i.e by looking at various figures as shown in the tutorial, especially plot(cvfit) might be instructive. Also when you go through the tutorial, you see quite „large“ values of lambda (note that log of lambda is plotted). https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html
H: Can a Logsitic Regression model continue making predictions after removing predictions from the data set? I have a logistic regression model that predicts churn (0 vs. 1). I was asked to use the model to predict on a historical group of non-churners, remove anyone who was marked as a churner, and then increase one variable while keeping the rest constant to see how the prediction changes. Interestingly, it predicts zero churns after removing this first cohort of predicted churn. Increasing this one variable seems to have no impact after the initial cohort is removed, although this variable has the greatest feature importance in the model. Is this simply functioning as expected? Here is the sample code being used: r1_pred = logisticRegr_balanced.predict(dfa) dfa['Churn Prediction'] = r1_pred print("Non-Churners: "+str(len(dfa[dfa['Churn Prediction']==0]))) print("Churners: "+str(len(dfa[dfa['Churn Prediction']==1]))) print("Percent Churn: " +str((len(dfa[dfa['Churn Prediction']==1]))/len(dfa['Churn Prediction']))) Results in: Non-Churners: 70611 Churners: 19609 Percent Churn: 0.21734648636665926 Then I create a new dataframe with only the survivors, and increment the "Customer Life" variable by 30 days. dfa_30 = dfa[dfa['Churn Prediction']==0] dfa_30 = dfa_60.drop('Churn Prediction', axis=1, inplace=False) dfa_30.CustomerLife = dfa_30.CustomerLife + 30 r30_pred = logisticRegr_balanced.predict(dfa_30) dfa_30['Churn Prediction'] = r30_pred print("Non-Churners: "+str(len(dfa_30[dfa_30['Churn Prediction']==0]))) print("Churners: "+str(len(dfa_30[dfa_30['Churn Prediction']==1]))) print("Percent Churn: " +str((len(dfa_30[dfa_30['Churn Prediction']==1]))/len(dfa_30['Churn Prediction']))) Results in: Non-Churners: 70611 Churners: 0 Percent Churn: 0.0 Is the model no longer able to predict churn because it has categorized everything on the binary scale in the first prediction, so all that's left are "permanent" survivors? AI: The model is able to predict churn on any sample. The issue is that increasing the „customer life“ by 30 days does not lead to any change in the prediction (based on what the model has learned). I think you can check two things: A) More of a fundamental thing, try to improve the model, e.g. by applying regularization to get a better fit on you features. GLMNET is the thing to go for in this case: https://web.stanford.edu/~hastie/glmnet_python/ B) Check how „sensitive“ your predictions are to changes in features. You could plot or look at predictions in case you increase „customer life“ by say 10, 20, 30, ..., 100 or so days. Remember that you basically predict the probability of churn. So it is possible that (some) probabilities at 30 days of customer life are just below 50%. If you gradually look at increases of customer life, you get a good idea of how churn changes in X. This is kind of a marginal effect (dy/dX).
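One way to implement the sensitivity check suggested in B), assuming dfa_base holds the surviving cohort without the prediction column and logisticRegr_balanced is the fitted model from the question (both names are assumptions):

import numpy as np

# Sweep CustomerLife and watch how the predicted churn *probability* moves,
# rather than only the hard 0/1 label produced by predict().
for extra_days in range(0, 101, 10):
    df_shift = dfa_base.copy()
    df_shift['CustomerLife'] = df_shift['CustomerLife'] + extra_days
    proba = logisticRegr_balanced.predict_proba(df_shift)[:, 1]
    print(extra_days, "days -> mean churn probability:", np.round(proba.mean(), 4))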
H: Is Nvidia Jetson product family also suitable for machine learing model training? I recently came accross these products (Nvidia Jetson) and they are all tagged as "edge", so i think they are designed only for machine learning inference and not model training. They are quite interesting for their low power consumpion and price (eg: Jetson Nano) so i hope they are suitable also for model trainig. So i would ask if someone may clarify this aspect about the focus of the product. AI: Training a neural network involves a lot more computation than inferencing from a pre-trained one. During training, a neural network must go through both forward propagation and back propagation steps. The back propagation step is computationally expensive, since it requires many gradient computations and updates to weight variables. During inference, the only computation needed is forward propagation, where the input is multiplied to the weights to produce a prediction. Boards like the NVIDIA Jetson are optimized for fast matrix multiplication, hence why they are effective during inference but not as much for training. Another reason for the Jetson's popularity is its small form factor. Architects choose it because it can fit within the size and price requirements of the product while also offering decent performance. If you had unlimited budget and can fit a GTX Titan in your product for inferencing, you would do that, right? I worked in the autonomous vehicle sector for a while and that there is no way we could fit a machine with any full-size GPU in a car. There are no size limitations for training models, so naturally you would go after the machines with more powerful and efficient GPUs.
H: Handling unwanted negative numbers With me is a dataset collected from IoT sensors with one column labeled “Soil Humidity” measured in percentage. It stands to reason then that all the values be positive percentages, however there’s a mix of negative percentage in there which is undesirable. Is there a way to handle unwanted negative numbers in pandas python before running it through a machine learning model. AI: How to handle invalid values like this is an extremely common problem in machine learning, since most datasets contain errors of some kind. There are a few ways to do it. For example, you could set them all to 0: df.loc[df.SoilHumidity < 0, 'SoilHumidity'] = 0 Or you could fill them with the avg(SoilHumidity), and create an extra feature to flag to the model that they were missing: import numpy as np df['SoilHumidityInvalid'] = np.where(df.SoilHumidity < 0, 1, 0) df.loc[df.SoilHumidity < 0, 'SoilHumidity'] = df.SoilHumidity.mean() Or, you can try to impute them somehow. Either by back or forward filling (I.E. taking the value from the next or the previous row in your dataset) or by creating a model that uses the other features of your dataset to predict what these invalid values should be. The right method can depend; sometimes domain knowledge guides you (i.e. if you know the sensor can mistakenly read negatives when it should read 0, then you know to fill with 0). Failing that, I would just try a couple of methods and use cross-validation to see which improves your model the most.
H: What should I learn for deep learning? I have studied the basics of machine learning, algebra, statistics and probability. Now I want to start with deep learning for the field of image recognition and classification. I don't know where to start with deep learning; can anyone suggest a syllabus, i.e. where to start and where to end? AI: Have a look at fast.ai (https://www.fast.ai); I have only heard the best about it. Supposedly, it's completely free and easy to get into. Other often-mentioned resources are the courses by Andrew Ng (https://www.coursera.org/specializations/deep-learning and/or https://www.coursera.org/learn/machine-learning) or the (also free) book by Ian Goodfellow (https://www.deeplearningbook.org).
H: I have a 20.2 GB dataset which consists entirely of 1 file. Need help on how to open the file Whenever I am dealing with datasets that have weird extensions that I have not encountered before, I typically open them with Notepad to see what the data looks like before using pandas to analyze them. But this dataset (link), which has an extension of .tr (stands for trace files, I suppose), is not supported by Notepad due to its huge size (that's what the error message says). Which software should I use to open the trace file? Also, is there a safe way to segment the file into smaller parts, as I typically upload my data to Kaggle and work on their kernel? I am not sure if this is the right place to post; if not, please point me towards the correct place before removing the post. AI: This file should be a plain text file with n rows and m columns separated by a tabulator. Here is another much smaller file which they offer for download and claim has the same data format. Since the file is too big to fit in memory, you can use the data format HDF5. This will allow you to load slices of your dataset without having to load the whole file into memory. However, first you need to convert the file to a .h5 file. This can be done with h5py. I put the following code together right now. It reads the text file line by line and writes each line to the .h5 file. It works fine on the smaller file which they offer for download. I haven't tested it on a larger file, however. If you try it out, let me know how it goes.

import h5py
import numpy as np

N_COLS = 3  # adjust to your number of columns
n_rows_dataset = 1000

file = h5py.File('/path/to/your/output.h5', 'w')
dataset = file.create_dataset(
    'my_dataset',
    (n_rows_dataset, N_COLS),
    chunks=(5, N_COLS),
    maxshape=(None, N_COLS))

current_row = 0
# this does not load the whole file but creates an iterator instead
for line in open("/path/to/the/large/file"):
    if current_row == n_rows_dataset-1:
        # grow the dataset in blocks of 1000 rows as we go
        n_rows_dataset = n_rows_dataset + 1000
        dataset.resize((n_rows_dataset, N_COLS))
    dataset[current_row] = np.fromstring(line, sep=" ", dtype=np.float64)
    current_row += 1

# trim the unused pre-allocated rows and flush everything to disk
dataset.resize((current_row, N_COLS))
file.close()

Note: You can find out the number of columns of your dataset by printing the first lines of your file:

i = 0
for line in open("/path/to/the/large/file"):
    print(line)
    i += 1
    if i >= 5:
        break
H: How to generate 12 independent random weights which all add up to one I'm using Palisade's @Risk software with a triangular distribution to generate 12 random weights which must add up to one, but I get a lot of negative numbers. Is there a straightforward way to set this up? AI: I am afraid I do not know the software you mention, but I can show you the principles and suggest why maybe you are getting negative numbers. I will do this in Python, making use of the numerical library, numpy. I import numpy and generate 12 random integers between 0 and 9 (10 is excluded as upper limit): In [1]: import numpy as np; samples = np.random.randint(0, 10, 12) In [2]: samples Out[2]: array([8, 5, 8, 4, 4, 7, 2, 2, 0, 5, 9, 1]) To scale the values to a range that makes their sum equal to 1, we can do the following. First sum up all values: In [3]: total = np.sum(samples) Now simply divide each value by the sum (the division happens here individually for each element of samples: In [4]: normalised = samples/total In [5]: normalised Out[5]: array([0.14545455, 0.09090909, 0.14545455, 0.07272727, 0.07272727, 0.12727273, 0.03636364, 0.03636364, 0. , 0.09090909, 0.16363636, 0.01818182]) We can see that the result does indeed sum to 1: In [6]: np.sum(normalised) Out[6]: 1.0 What you may have is a set of samples that contains some negative numbers, like the following in samples_neg, with ten integers ranging from -5 to +9: In [7]: samples_neg = np.random.randint(-5, 10, 10) In [8]: samples_neg Out[8]: array([ 4, 7, 0, 4, -3, 0, 9, -3, 9, 3]) We can follow the same recipe as before, summing the values and dividing each value by the sum: In [9]: total_neg = np.sum(samples_neg) In [10]: normalised_neg = samples_neg / total_neg We see that the result this time includes negative values, as you mentioned: In [11]: normalised_neg Out[11]: array([ 0.13333333, 0.23333333, 0. , 0.13333333, -0.1 , 0. , 0.3 , -0.1 , 0.3 , 0.1 ]) However, this does still satisfy the constraint you originally had, which was that they sum to 1: In [12]: np.sum(normalised_neg) Out[12]: 0.9999999999999999 # this is 1, within rounding errors of floating point values A suggestion would be to first normalise the values in a range of [0, 1] and afterwards, re-weight the values such that their sum is 1.
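I cannot show the @Risk setup itself, but the same idea in numpy: if the triangular distribution is parameterised with a non-negative support, no negative weights can appear after rescaling (the left/mode/right values below are arbitrary):

import numpy as np

# Draw 12 samples from a triangular distribution whose support is strictly
# non-negative (left=0, mode=0.05, right=0.2), then rescale so they sum to 1.
samples = np.random.triangular(left=0.0, mode=0.05, right=0.2, size=12)
weights = samples / samples.sum()
print(weights, weights.sum())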
H: sckit-learn Cross validation and model retrain I want to train a model and also perform cross validation in scikit-learn, If i want to access the model (For instance to see the parameter's selected and weights or to predict) i will need to fit it again. I believe and according to my understanding if CV is training the model for performing k-fold validation, do we need to train again as any of k-models are not trained on the whole training data. I have seen cross_val_predict method and it is not exactly what i need as it is for predicting score and i want model object. My main confusion is that for hyper parameter tuning and in grid search we can have best paramters,but in cross validation only do we need to call the fit method after CV step AI: "I want to train a model and also perform cross-validation in scikit-learn" By this, I assume you meant saying, "I want to assess different model performace using K-fold Cross-validation approach and based on the performance, I want to select a model fitting" Primarily, K-fold CV will divide the training data into a number of folds k, and then will fit and evaluate the model. Reference from chapter 5 in ISLR: Cross-validation can be used to estimate the test error associated with a given statistical learning method in order to evaluate its performance, or to select the appropriate level of flexibility. I try to explain with a sample code: from sklearn import svm from sklearn.model_selection import KFold from sklearn.model_selection import cross_val_score # Training Data feature_cols=['X1', 'x2', 'x3'] X = train_data[feature_cols] y = train_data.target #Build a svm classifier model model_svm = svm.SVC(C=100, gamma=1) #Initialise the number of folds k for doing CV kfold = KFold(10, False, 2) #Evaluate the model using k-fold CV cross_val_scores = cross_val_score(model_svm, X, y, cv=kfold, scoring='accuracy') #Get the model performance metrics print("Mean: " + str(cross_val_scores.mean())) You can do k-fold CV for different model parameters say (different C and gamma values in SVM) or even different models altogether (say Logistic regression) and choose the best model. Once you chose the best performing model you then can proceed to fit the model for making the prediction. #Fit the model model_svm.fit(X,y) #Prediction on test set model_svm.predict(test_data) "I believe and according to my understanding if the CV is training the model for performing k-fold validation, do we need to train again as any of k-models are not trained on the whole training data." In the above sample code, the number of folds k=10. So CV will be done 10 iterations with each training set containing (k − 1)n/k = 9(n/10) observations and n/10 observations for testing. So no, you don't need to train again as K-fold CV does this procedure for you. From ISLR: K-fold CV approach involves randomly dividing the set of observations into k groups, or folds, of approximately equal size. The first fold is treated as a validation set, and the method is fit on the remaining k − 1 folds. This procedure is repeated k times; each time, a different group of observations is treated as a validation set. "My main confusion is that for hyper parameter tuning and in grid search we can have best paramters,but in cross validation only do we need to call the fit method after CV step" Again, the k-fold CV method is used for model assessment. 
What you can do is to use the grid search for selecting the best parameters and say the GridSearch method to use K-fold Cross-validation to selecting the best parameters. Also in a way Yes, you will call the fit method after CV because you would have chosen the best performing method and be proceeding to fit the model. from sklearn import svm from sklearn.model_selection import KFold from sklearn.model_selection import GridSearchCV # Training Data feature_cols=['X1', 'x2', 'x3'] X = train_data[feature_cols] y = train_data.target #Build a svm classifier model model_svm = svm.SVC(C=100, gamma=1) #Initialise the number of folds k for doing CV kfold = KFold(10, False, 2) Cs = [0.001, 0.01, 0.1, 0.3, 1, 3, 10, 100] gammas = [0.001, 0.01, 0.1, 0.3, 1, 3, 10, 100] param_grid = {'C': Cs, 'gamma' : gammas} grid_search = GridSearchCV(model_svm, param_grid, cv=kfold, scoring='accuracy') grid_search.fit(X, y) grid_search.best_params_ I hope this clears thing up.
H: Tensorflow 2 eager vs graph mode I've been working through the tensorflow-2.0.0 beta tutorials. In the advanced example a tensorflow.keras subclass is used. The presence of the @tf.function decorator on train_step and test_step means the model executes in graph mode (not sure if that's the correct terminology, I mean oposite to eager mode). If I remove these decorators I can single step right into the model call function and see the input/output tensor for each layer which is neat. My question is, is there a programatic way to enable/disable the @tf.function decorators. Commenting them out to switch between eager and graph mode doesn't seem particularly scaleable but it's certainly useful for debugging/learning) AI: You could always write two functions (one with the decorator and one without) and call whichever suits you... For example @tf.function() def graph_function() # This function will operate in graph mode ... def eager_function() # This function will operate in eager mode ... if tf.executing_eagerly() my_function = eager_function else: my_function = graph_function # You proceed to my_function from now on I don't know if there is a better way but I've seen this a lot being used in the tensorflow official repository on github.
H: Any issue with "overlapping" sliding windows in time-series data analysis? I am developing some classification/regression models form accelerometry time-series data. So far, I have created datapoints by extracting features from non-overlapping sliding windows of the time-series data. I would like to try using overlapping windows as well. However, I was wondering whether it is conceptually sound or there might be some caveats to keep in mind as the data is reused in the overlapping windows. AI: In general I don't see anything wrong with overlapping windows, it might make perfect sense depending on your task. In fact some learning models (e.g. for sequence labeling) do use features based on past data points, which is conceptually similar to having overlaps between them. However you need to be careful about the fact that this makes data points depend on each other, so of course the preprocessing must be done separately for the training and test set.
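For illustration, a small sketch of extracting overlapping windows from a 1-D signal; window_size and step are arbitrary, and the point about keeping training and test preprocessing separate applies here too:

import numpy as np

def sliding_windows(signal, window_size, step):
    """Return overlapping windows; step < window_size gives the overlap."""
    return np.array([signal[start:start + window_size]
                     for start in range(0, len(signal) - window_size + 1, step)])

x = np.arange(20)                                      # stand-in for an accelerometer trace
windows = sliding_windows(x, window_size=8, step=4)    # 50% overlap
print(windows.shape)                                   # (4, 8)
# Split train/test *before* windowing so overlapping windows never straddle the two sets.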
H: Which service could I use to train my networks? My laptop's Intel i7 3630QM 2.4GHZ, 8Gb RAM and GXForce 670M are clearly not sufficient... By reading some papers, I've written an SRGAN with Python Keras. At runtime there is no error but training only 2 images (324*324) with 1 epoch, batch size of 2 lasts too much...perhaps more than 1 hour of training... Which service could I use to train my deep neural networks, see the console outputs, eventually see plots, and download the trained model? Are there such paid services? Free services? The idea would be that I query the compute server from my laptop. AI: You can use google colab which is free. It proposes notebooks usage, which are hosted on a server with good GPUs for machine and deep learning. You can write python code and train your models in real-time. https://colab.research.google.com/ However, if you are inactive for a medium period, the platform will eject you. So if you plan to train really big models durings days and weeks, you should use cloud plateforms like AWS or Google Cloud. Note that these plateforms can quickly become expensive (but you often get some free credit when you register).
H: How to feed data for ngram model? I want to train an ngram language model Let's say I have the following corpus: The sliding cat is not able to dance He is only able to slide Because obviously he is the sliding cat I am planning to use tf.data.Dataset to feed my model, which is fine But I don't know if it is better to use a sliding window to iterate through my copus or simply feed my corpus n words at a time Using a sliding window, my model (assuming a bigram) will see: The sliding sliding cat cat is is not ... Going n word at a time: The sliding cat is not able ... I'd appreciate any recommandation, thanks AI: You should definitely use a sliding window. An n-gram language model represents the probabilities for all the n-grams. If it doesn't see a particular n-gram in the training data, for example "sliding cat", it will assume that this n-gram has probability zero (actually zero probabilities are usually replaced with very low probability by smoothing, in order to account for out-of-vocabulary n-grams). This would result in a zero probability for a sentence which was actually in the training case (or a very low probability with smoothing). Also it's common to use "padding" at the beginning and end of every sentence, like this: #SENT_START# The The sliding sliding cat cat is is not ... to dance dance #SENT_END# This gives the model indications about the words more likely to be at the beginning or end (it also balances the number of n-grams by word in a sentence: exactly $n$ even for the first/last word).
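A minimal sketch of the padded sliding window for bigrams (pure Python, independent of how you then feed tf.data.Dataset):

def bigrams_with_padding(sentence):
    tokens = ["#SENT_START#"] + sentence.split() + ["#SENT_END#"]
    # sliding window of size 2 over the padded token list
    return list(zip(tokens, tokens[1:]))

print(bigrams_with_padding("The sliding cat is not able to dance"))
# [('#SENT_START#', 'The'), ('The', 'sliding'), ('sliding', 'cat'), ...]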
H: Is RL applied to animal dispersion a valid approach? I have an agent which has a medium-sized, discrete set of actions $A$: $10<|A|<100$. The actions can be taken over an infinite horizon of 1 second per timestep $t$. The world is essentially pictures from a static camera and the states $S$ are the amount of animals detected on the picture (assuming perfect detection for simplicity). We can safely say that $max(S) < 200$. Each action $a$ is meant to lower the amount of detections because of dispersion or fear from the animals. The reward is the difference of detections from $s_{-1}$ to $s$. Admittedly I am only a beginner in RL, so I haven't seen much more than MDP's. I believe that this isn't a Markov problem since the states aren't independent (past actions having an impact on the outcome of the current action). That being said, I'm wondering if there's a specific RL algorithm for this setup or if RL even is the right way to go ? AI: Admittedly I am only a beginner in RL, so I haven't seen much more than MDP's. I believe that this isn't a Markov problem since the states aren't independent (past actions having an impact on the outcome of the current action). If there are enough hidden variables, then this could be a real problem for you. A policy maps states to actions - and searching for an optimal policy will always map the same action to the same state. If this is some kind of automated scarecrow system, then the animals are likely to become habituated to the "best" action. Two ways around that: If the habituation is slow enough, you might get away with treating the environment as having the simple state that you suggest and have an agent which constantly learns and adapts to changes in what will trigger animals to leave. This would be an environment with non-stationary dynamics (the same action in the same state will, over time, drift in terms of its expected reward and next state). For a RL agent, this could just be a matter of sticking with a relatively high exploration rate and learning rate. If the habituation is fast or adaptive, then you have to include some memory of recent actions used in the state representation. This will make the state space much larger, but is unavoidable. You could try to keep that memory inside the agent - using methods that work with Partially Observable MDPs (POMDPs) - it doesn't really make the problem any easier, but you may prefer that representation. The reward is the difference of detections from $s_{-1}$ to $s$. You need to re-think the reward system if habituation is fast/adaptive, as it will fail to differentiate between good and bad policies. All policies will end up with some distribution of detections, and all policies will have a mean total reward of 0 as a result. Your reward scheme will average zero in the long term for random behaviour, and also will average zero if the number of creatures remains at 200 the whole time, or is zero the whole time. Presumably having zero creatures would be ideal, whilst having 200 would be terrible - you need to be able to differentiate between those two scenarios. You want to minimise the number of creatures in the area consistently, so a simple reward scheme is just negative of the number of visible animals, per time step. Maybe scale this - e.g. divide by 100, or by some assumed background rate where you would get -1 reward per time step if the number of animals is the same on average as if the agent took no action. 
You could measure this average using the camera over a few days when the agent is not present or simply not active. It doesn't need to be super accurate, teh scaling is just for convenience - you could think of an agent that gets a mean reward of -0.2 per time step is five times better than having no agent at all, whilst an agent that scores -1.5 per time step might be attracting creatures for all you know! That being said, I'm wondering if there's a specific RL algorithm for this setup or if RL even is the right way to go ? The problem does seem like a good match to RL. There are other ways of searching for good policy functions, such as genetic algorithms, that might also apply. However, you will still need to equip the agent with either continuous learning so it can re-find the best action as it changes, or with a memory of recent actions, depending on the speed of habituation. You may even need both as smart animals like birds or mammals can adapt in long and short term in different ways.
H: perform cluster on a multiple dimensional data in R I have a data set which has 2488 samples and each sample has 13 features.Now I want to perform cluster on this data set in R but I found k-means method usually for two dimensions data.So can any one help me? Many thanks! AI: Short answer You can use any number of dimensions for k-means. Therefore, you can use the standard k-means library for R as long as: your data contains only metric variables and every column is scaled to the same range. See this related question. If your data is not scaled: Scale it. If your data contains only categorical variables: Use k-modes. [R] If your data contains categorical and metric variables: Use k-prototypes. [R] Long answer How does k-means work? The algorithm randomly initializes k points in your data. These are your cluster centers. It assigns every data point to the nearest cluster center. Usually, this is done by calculating the Euclidian distance of the data point to each of the k cluster centers. Now you have k clusters. It chooses the centroid of each of the k clusters as the new cluster center. The steps 2 and 3 are repeated until the solution converges or the maximum number of steps is reached. What this means The Euclidian distance can be calculated in n-dimensional space. Therefore, you can use any number of dimensions you want. However, if your data is not scaled, the distance calculations will be different for every feature. Therefore, you need to scale it. Since Euclidian distance won't make a lot of sense with categorical or with mixed data, you have to use an algorithm that uses Hamming distance instead for these use cases. k-modes uses only Hamming distance. k-prototypes uses Hamming distance for categorical and Euclidian distance for metric variables.
H: Which method to use to remove trend from time series? From what I understand, differencing is necessary to remove the trend and seasonality of a time series. So I assumed it basically does the same thing as signal.detrend from the scipy library. But I tried differencing and then, separately, used signal.detrend and my time series looked completely different. Original: Differencing: Imported libraries: The x axis represents months and the y axis is sales. The colours on the first two charts just represent three different years. AI: Detrend does a least squares fit (linear or constant) and subtracts this from your data points. You can look this up in the docs. Simply taking the difference between consecutive data points will in general lead to other results. In general the regression based detrending seems to be more reasonable. You could also think about using random sample consensus (RANSAC) to be more robust to outliers.
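A small sketch contrasting the two operations on a synthetic trend-plus-seasonality series:

import numpy as np
from scipy import signal

t = np.arange(100)
series = 0.5 * t + 10 * np.sin(t / 5) + np.random.randn(100)  # trend + seasonality + noise

detrended = signal.detrend(series)   # subtracts a least-squares straight line,
                                     # keeps the series on its original time index
differenced = np.diff(series)        # y[t] - y[t-1]; one observation shorter,
                                     # removes the trend but changes the scale/meaning
print(len(series), len(detrended), len(differenced))  # 100 100 99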
H: Differences between a Statistician and a Data Analyst in industry What is the Difference in the job of a Statistician and a Data Analyst In Industry? My take is that although both analyse data, a Statistician deals with the more theoretical aspects of data such as using mathematics to analyse data and to create mathematical models of data. So their work is more towards the mathematics side of things. Whereas the job of a Data Analyst deals more with the programming and side of things and their job is more practical in nature and they focus more on the implementation side. Im more of a theoretical person and im more mathematically orientated and i wanted to do a job that involves lots of math, so is a statistician job more suited for me rather than a Data Analyst? AI: A major difference is the job market: you'll find a lot of job ads for data analysts/scientists, very few for theoretical statisticians. In most sectors (there are some exceptions, in banking for example), companies are interested in applying existing models to their data because this is what can increase their profits. Devising new theoretical models is more on the research side of innovation. Most "pure research" job opportunities are in academia, although some big companies have research departments as well.
H: For a regression model, can you transform all your features to linear to make a better prediction? I was thinking. Would it be a good approach to check your features one by one (assuming you have a manageable amount of them) and see the relationship they have with your target variable, if they have a non linear relationship then transform each of those features using their appropriate function for each case to make them linear? In my mind if you do this your are guaranteed to have a better Linear model and also you are able to perform hypothesis testing on each feature to see the relevance of them, giving you the chance to perform some feature selection as well. I know that the interpretability of model will be thrown out of the window, but the model will give a much better performance. Basically you could potentially end up with a model with only engineer features (assuming that all of them have a non linear relationship) Would this approach be acceptable and it is worth exploring? AI: Your idea is good, but you are not the first with this idea. You can use Generalized Additive Models (GAM) with regression splines to check and/or add non-linearity in a linear regression setup. There is a clear advantage over looking at just descriptive figures one-by-one (with manual feature generation), since you can estimate a whole model with extreme flexibility. Alternatively, you can simply do a linear regression with a lasso penalty. Add polynomials to your $X$, and let the lasso „shrink“ irrelevant features to zero. The book „Introduction to Statistical Learning“ covers these topics in Sections 6,7. BTW: even with polynomials, your model can retain interpretability if you care for this. Polys are relatively easy to interpret. However, under a GAM approach, interpretation is a little more difficult. Maybe as a note: you want to make features linear (which can be a problem since you can only apply linear transformations to the data). The approaches proposed above aim at making your (linear) model more flexible to cope with non-linearity in data.
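A minimal sketch of the second suggestion (polynomial terms plus a lasso penalty) with scikit-learn rather than a dedicated GLMNET package; X_train and y_train stand for your own regression data:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LassoCV

# Add polynomial terms for non-linearity, then let the lasso shrink the
# coefficients of the useless ones to exactly zero (built-in feature selection).
model = make_pipeline(PolynomialFeatures(degree=3, include_bias=False),
                      StandardScaler(),
                      LassoCV(cv=5))
model.fit(X_train, y_train)                  # X_train, y_train: your regression data
print(model.named_steps['lassocv'].coef_)    # zeros mark the dropped terms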
H: Where can Batch Normalization be used? CNNs or everywhere? Should BatchNormalization be used only in CNNs or can they be used in Fully Connected Networks, Recurrent networks as well? AI: A Batch Normalization layer essentially is normalizing outputs of your hidden units, so their outputs would always have a similar scale. By eliminating such internal covariate shift in outputs of hidden layers, your deeper layers become more independent from their previous layers. Nothing here is CNN specific - Batch Normalization may be applied to Fully Connected and Recurrent neural networks as well, but they are more useful with deep neural networks, which tend to accumulate this shift with each layer activated during Forward Propagation.
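For example, a fully connected Keras model with Batch Normalization between the dense layers (the layer sizes and the 20-feature input are arbitrary):

from keras.models import Sequential
from keras.layers import Dense, BatchNormalization, Activation

# Batch Normalization in a plain fully connected network - nothing CNN-specific.
model = Sequential([
    Dense(128, input_dim=20),
    BatchNormalization(),
    Activation('relu'),
    Dense(128),
    BatchNormalization(),
    Activation('relu'),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()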
H: DC GAN with Batch Normalization not working I'm trying to implement DC GAN as they have described in the paper. Specifically, they mention the below points Use strided convolutions instead of pooling or upsampling layers. Use only one fully connected layer Use Batch Normalization: Directly applying batchnorm to all layers resulted in sample oscillation and model instability. This was avoided by not applying batchnorm to the generator output layer and the discriminator input layer. use ReLU for generator and Leaky ReLU for discriminator I tried to implement a GAN for MNIST dataset. It is outputting garbage. I tried changing learning rate from 0.01 to 0.00001 optimizer momentum as 0.5, 0.9 Using BatchNormalization before and after activation layer BatchNormalization momentum as 0.5, 0.9, 0.99 Training for upto 3,00,000 iterations But nothing is working. I'm just getting garbage output. But I noticed two strange things Both generator and discriminator loss are going to 0, accuracy going to 1. How is this possible? If I remove all Batch Normalization layers from discriminator, the model starts working. Why? The paper suggests to use BatchNormalization, but it is working otherwise. Any help, tips or suggestions is highly appreciated. Thanks! Here is my full code: MnistModel07.py import numpy from keras import Sequential from keras.engine.saving import load_model from keras.initializers import TruncatedNormal from keras.layers import Activation, BatchNormalization, Conv2D, Conv2DTranspose, Dense, Flatten, LeakyReLU, Reshape from keras.optimizers import Adam from DcGanBaseModel import DcGanBaseModel class MnistModel07(DcGanBaseModel): def __init__(self, verbose: bool = False): super().__init__(verbose) self.generator_model = None self.discriminator_model = None self.concatenated_model = None self.verbose = verbose def build_models(self): self.generator_model = self.build_generator_model() self.discriminator_model = self.build_discriminator_model() self.concatenated_model = self.build_concatenated_model() self.print_model_summary() def build_generator_model(self): if self.generator_model: return self.generator_model generator_model = Sequential() generator_model.add(Dense(7 * 7 * 512, input_dim=100, kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.02))) generator_model.add(Activation('relu')) generator_model.add(BatchNormalization(momentum=0.9)) generator_model.add(Reshape((7, 7, 512))) generator_model.add(Conv2DTranspose(256, 3, strides=2, padding='same', kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.02))) generator_model.add(Activation('relu')) generator_model.add(BatchNormalization(momentum=0.9)) generator_model.add(Conv2DTranspose(128, 3, strides=2, padding='same', kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.02))) generator_model.add(Activation('relu')) generator_model.add(BatchNormalization(momentum=0.9)) generator_model.add(Conv2D(1, 3, padding='same', kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.02))) generator_model.add(Activation('tanh')) return generator_model def build_discriminator_model(self): if self.discriminator_model: return self.discriminator_model discriminator_model = Sequential() discriminator_model.add(Conv2D(128, 3, strides=2, input_shape=(28, 28, 1), padding='same', kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.02))) discriminator_model.add(LeakyReLU(alpha=0.2)) discriminator_model.add(Conv2D(256, 3, strides=2, padding='same', kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.02))) discriminator_model.add(LeakyReLU(alpha=0.2)) 
discriminator_model.add(BatchNormalization(momentum=0.9)) discriminator_model.add(Flatten()) discriminator_model.add(Dense(1, kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.02))) discriminator_model.add(Activation('sigmoid')) return discriminator_model def build_concatenated_model(self): if self.concatenated_model: return self.concatenated_model concatenated_model = Sequential() concatenated_model.add(self.generator_model) concatenated_model.add(self.discriminator_model) return concatenated_model def print_model_summary(self): self.verbose_log(self.generator_model.summary()) self.verbose_log(self.discriminator_model.summary()) self.verbose_log(self.concatenated_model.summary()) def build_dc_gan(self): """ Binary Cross-Entropy Loss is used for both Generator and Discriminator Discriminator: loss = -log(D(x)) when x is real image and loss=-log(1-D(x)) when x is fake image Optimizer minimizes this loss. This is equivalent to maximize over D(x) as specified in original GAN paper Generator: loss = -log(D(G(z)) Optimizer minimizes this loss. This is the second loss function defined in paper, not the one in min-max definition Since while training Generator we are not minimizing log(1-D(G(z))), the analytical results we derived won't hold for generator part. Ideally, Discriminator loss = -ln(0.5); Generator loss = -ln(0.5) = 0.693 metrics = accuracy: binary_accuracy is used https://github.com/keras-team/keras/blob/d8b226f26b35348d934edb1213061993e7e5a1fa/keras/engine/training.py#L651 https://github.com/keras-team/keras/blob/c2e36f369b411ad1d0a40ac096fe35f73b9dffd3/keras/metrics.py#L6 Binary_accuracy: Average of correct predictions Discriminator: Ideally, discriminator should be completely confused i.e. accuracy=0.5 Generator: Ideally, Generator should be able to fool discriminator. So, accuracy=1. But, since Discriminator is confused, it randomly flags some images as fake. So, accuracy=0.5 """ self.build_models() self.discriminator_model.trainable = True optimizer = Adam(lr=0.0002, beta_1=0.5, beta_2=0.999, decay=0) self.discriminator_model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy']) self.discriminator_model.trainable = False optimizer = Adam(lr=0.0002, beta_1=0.5, beta_2=0.999, decay=0) self.concatenated_model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy']) def train_on_batch(self, images_real: numpy.ndarray): # Generator output has tanh activation whose range is [-1,1] images_real = (images_real.astype('float32') * 2 / 255) - 1 # Generate Fake Images batch_size = images_real.shape[0] noise = numpy.random.uniform(-1.0, 1.0, size=[batch_size, 100]) images_fake = self.generator_model.predict(noise) # Train discriminator on both real and fake images x = numpy.concatenate((images_real, images_fake), axis=0) y = numpy.ones([2 * batch_size, 1]) y[batch_size:, :] = 0 d_loss = self.discriminator_model.train_on_batch(x, y) # Train generator i.e. 
concatenated model # Note that in concatenated model, training of discriminator weights is disabled noise = numpy.random.uniform(-1.0, 1.0, size=[batch_size, 100]) y = numpy.ones([batch_size, 1]) g_loss = self.concatenated_model.train_on_batch(noise, y) return g_loss, d_loss def generate_images(self, num_images=1, noise=None) -> numpy.ndarray: if noise is None: noise = numpy.random.uniform(-1, 1, size=[num_images, 100]) # Generator output has tanh activation whose range is [-1,1] images = (self.generator_model.predict(noise) + 1) * 255 / 2 images = numpy.round(images).astype('uint8') return images def save_generator_model(self, save_path): self.generator_model.save(save_path) def save_generator_model_data(self, json_path, weights_path): with open(json_path, 'w') as json_file: json_file.write(self.generator_model.to_json()) self.generator_model.save_weights(weights_path) def load_generator_model(self, model_path): self.generator_model = load_model(model_path) def load_generator_model_weights(self, weights_path): self.generator_model.load_weights(weights_path) def save_discriminator_model(self, save_path): self.discriminator_model.save(save_path) def save_discriminator_model_data(self, json_path, weights_path): with open(json_path, 'w') as json_file: json_file.write(self.discriminator_model.to_json()) self.discriminator_model.save_weights(weights_path) def load_discriminator_model(self, model_path): self.discriminator_model = load_model(model_path) def load_discriminator_model_weights(self, weights_path): self.discriminator_model.load_weights(weights_path) def save_concatenated_model(self, save_path): self.concatenated_model.save(save_path) def save_concatenated_model_data(self, json_path, weights_path): with open(json_path, 'w') as json_file: json_file.write(self.concatenated_model.to_json()) self.concatenated_model.save_weights(weights_path) def load_concatenated_model(self, model_path): self.concatenated_model = load_model(model_path) def load_concatenated_model_weights(self, weights_path): self.concatenated_model.load_weights(weights_path) MnistTrainer.py import datetime import os import time import numpy from keras.datasets import mnist from matplotlib import pyplot as plt from evaluation.EvaluationMetricsWrapper import ClassifierData, Evaluator from utils import CommonUtils, GraphPlotter from utils.CommonUtils import check_output_dir class MnistTrainer: def __init__(self, model, classifier_data: ClassifierData, verbose=False): self.x_train = self.get_train_data() self.dc_gan = model(verbose=verbose) self.dc_gan.build_dc_gan() self.evaluator = Evaluator(classifier_data, num_classes=10) if classifier_data is not None else None self.verbose = verbose @staticmethod def get_train_data(): (x_train, y_train), _ = mnist.load_data() x_train = x_train.reshape(x_train.shape[0], x_train.shape[1], x_train.shape[2], 1) return x_train def train(self, train_steps, batch_size, loss_log_interval, save_interval, output_folder_path=None): self.verbose_log('Training begins: ' + datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')) if output_folder_path is not None: CommonUtils.check_output_dir(output_folder_path) loss_file_path = os.path.join(output_folder_path, 'TrainLosses.csv') self.initialize_loss_file(loss_file_path) self.sample_real_images(output_folder_path) if self.evaluator is not None: metrics_filepath = os.path.join(output_folder_path, 'Evaluation/EvaluationMetrics.csv') self.initialize_metrics_file(metrics_filepath) for i in range(train_steps): # Get real (Dataset) Images images_real = 
self.x_train[numpy.random.randint(0, self.x_train.shape[0], size=batch_size), :, :, :] g_loss, d_loss = self.dc_gan.train_on_batch(images_real) if output_folder_path is not None: # Save train losses, models, generate sample images if (i + 1) % loss_log_interval == 0: # noinspection PyUnboundLocalVariable self.append_losses(loss_file_path, i + 1, g_loss, d_loss) if (i + 1) % save_interval == 0: self.save_models(output_folder_path, i + 1) self.generate_images(output_folder_path, i + 1) if self.evaluator is not None: # noinspection PyUnboundLocalVariable self.append_metrics(metrics_filepath, i + 1) if output_folder_path is not None: # Plot the loss functions and accuracy graph_file_path = os.path.join(output_folder_path, 'LossAccuracyPlot.png') GraphPlotter.plot_loss_and_accuracy(loss_file_path, graph_file_path) if self.evaluator is not None: metrics_graph_path = os.path.join(output_folder_path, 'Evaluation/EvaluationMetrics.png') GraphPlotter.plot_evaluation_metrics(metrics_filepath, metrics_graph_path) self.verbose_log('Training ends: ' + datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')) @staticmethod def initialize_loss_file(loss_file_path): line = 'Iteration No, Generator Loss, Generator Accuracy, Discriminator Loss, Discriminator Accuracy, Time\n' with open(loss_file_path, 'w') as loss_file: loss_file.write(line) def append_losses(self, loss_file_path, iteration_no, g_loss, d_loss): line = '{0:05},{1:2.4f},{2:0.4f},{3:2.4f},{4:0.4f},{5}\n' \ .format(iteration_no, g_loss[0], g_loss[1], d_loss[0], d_loss[1], datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')) with open(loss_file_path, 'a') as loss_file: loss_file.write(line) self.verbose_log(line) def save_models(self, output_folder_path, iteration_no): models_save_dir = os.path.join(output_folder_path, 'TrainedModels') if not os.path.exists(models_save_dir): os.makedirs(models_save_dir) self.dc_gan.save_generator_model( os.path.join(models_save_dir, 'generator_model_{0}.h5'.format(iteration_no))) self.dc_gan.save_generator_model_data( os.path.join(models_save_dir, 'generator_model_arch_{0}.json'.format(iteration_no)), os.path.join(models_save_dir, 'generator_model_weights_{0}.h5'.format(iteration_no)) ) self.dc_gan.save_discriminator_model( os.path.join(models_save_dir, 'discriminator_model_{0}.h5'.format(iteration_no))) self.dc_gan.save_discriminator_model_data( os.path.join(models_save_dir, 'discriminator_model_arch_{0}.json'.format(iteration_no)), os.path.join(models_save_dir, 'discriminator_model_weights_{0}.h5'.format(iteration_no)) ) self.dc_gan.save_concatenated_model( os.path.join(models_save_dir, 'concatenated_model_{0}.h5'.format(iteration_no))) self.dc_gan.save_concatenated_model_data( os.path.join(models_save_dir, 'concatenated_model_arch_{0}.json'.format(iteration_no)), os.path.join(models_save_dir, 'concatenated_model_weights_{0}.h5'.format(iteration_no)) ) def sample_real_images(self, output_folder_path): filepath = os.path.join(output_folder_path, 'MNIST_Sample_Real_Images.png') i = numpy.random.randint(0, self.x_train.shape[0], 16) images = self.x_train[i, :, :, :] plt.figure(figsize=(10, 10)) for i in range(16): plt.subplot(4, 4, i + 1) image = images[i, :, :, :] image = numpy.reshape(image, [28, 28]) plt.imshow(image, cmap='gray') plt.axis('off') plt.tight_layout() plt.savefig(filepath) plt.close('all') def generate_images(self, output_folder_path, iteration_no, noise=None): gen_images_dir = os.path.join(output_folder_path, 'Generated_Images') if not os.path.exists(gen_images_dir): os.makedirs(gen_images_dir) 
filepath = os.path.join(gen_images_dir, 'MNIST_Gen_Image{0}.png'.format(iteration_no)) images = self.dc_gan.generate_images(16, noise) plt.figure(figsize=(10, 10)) for i in range(16): plt.subplot(4, 4, i + 1) image = images[i, :, :, :] image = numpy.reshape(image, [28, 28]) plt.imshow(image, cmap='gray') plt.axis('off') plt.tight_layout() plt.savefig(filepath) plt.close('all') def initialize_metrics_file(self, filepath: str): check_output_dir(os.path.split(filepath)[0]) with open(filepath, 'w') as metrics_file: metrics_file.write('Iteration No,' + ','.join(self.evaluator.get_metrics_names()) + '\n') def append_metrics(self, filepath: str, iteration_no): metrics = self.evaluator.evaluate(self.dc_gan) with open(filepath, 'a') as metrics_file: metrics_file.write(str(iteration_no) + ',' + ','.join(map(str, metrics)) + '\n') def verbose_log(self, log_line): if self.verbose: print(log_line) def main(): """ Execute in src directory """ from mnist.MnistModel05 import MnistModel05 train_steps = 10000 batch_size = 128 loss_log_interval = 10 save_interval = 100 output_folder_path = '../Runs/Run01' classifier_name = 'MnistClassifier06' classifier_filepath = '../../../../DiscriminativeModels/01_MNIST_Classification/src/MnistClassifierModel06.py' classifier_json_path = \ '../../../../DiscriminativeModels/01_MNIST_Classification/Runs/MnistClassifier06/Run01/TrainedModels' \ '/MNIST_Model_Arch_30.json' classifier_weights_path = \ '../../../../DiscriminativeModels/01_MNIST_Classification/Runs/MnistClassifier06/Run01/TrainedModels' \ '/MNIST_Model_Weights_30.h5' classifier_data = ClassifierData(classifier_name, classifier_filepath, classifier_json_path, classifier_weights_path) mnist_trainer = MnistTrainer(model=MnistModel05, classifier_data=classifier_data, verbose=True) mnist_trainer.train(train_steps, batch_size, loss_log_interval, save_interval, output_folder_path) del mnist_trainer.dc_gan return if __name__ == '__main__': start_time = time.time() print('Program Started at {0}'.format(time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(start_time)))) try: main() except Exception as e: print(e) end_time = time.time() print('Program Ended at {0}'.format(time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(end_time)))) print('Total Execution Time: {0}s'.format(datetime.timedelta(seconds=end_time - start_time))) AI: Golden Rule: In Keras, if using Batch Normalization layer, train the discriminator on real and fake images separately. Don't combine them. I was able to solve it by changing the discriminator training code as follows: d_loss = self.discriminator_model.train_on_batch(images_real, numpy.ones((batch_size, 1))) d_loss = self.discriminator_model.train_on_batch(images_fake, numpy.zeros((batch_size, 1))) With this change, the issue of generator and discriminator accuracy being at 1 was also solved. I guess combining real and fake images in a single batch causes some problem with the Batch Normalization in Keras. That was the problem. Why that causing the problem, I have no idea.
H: RL Sutton book, initial estimate of q*(a) for 10 arm testbed The Sutton book does not mention what the initial estimate is for q*(a) before the first reward is received. In this code repo that seems to go along with the book: Sutton code repo They have initialized it with 0 per the snippet below: def __init__(self, kArm=10, epsilon=0., initial=0., stepSize=0.1, sampleAverages=False, UCBParam=None, gradient=False, gradientBaseline=False, trueReward=0.): But the explanation for Figure 2.1 that shows the distribution of rewards for the 10 arms of the bandit says, Figure 2.1: An example bandit problem from the 10-armed testbed. The true value q*(a) of each of the ten actions was selected according to a normal distribution with mean zero and unit variance, and then the actual rewards were selected according to a mean q*(a) unit variance normal distribution, as suggested by these gray distributions. So should I initialize instead with np.random.randn()? Edit: (figure of the reward distributions omitted) AI: The description you quote explains how the true values will be set in the test when setting up a test run. This is necessary to fully state how the test works. Initialisation of your estimates is a different issue. If you know something about the distributions of the true action values, then it would make sense to use that. For instance, you could set all action values to the mean expected true value, which is $0$. However, you may also use $0$ if you have no idea about the true values, as it is a simple arbitrary value. Setting the estimates to something drawn from the same distribution is not unreasonable, as that itself is an unbiased estimate of the same mean. However, it does not really serve you well here, because it adds variance to the initial estimates (as well as the possibility of them being closer to the true values, they are equally likely to be worse), which on average slows down some agent types slightly.
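To make the two initialisation options concrete, here is a minimal numpy sketch of the 10-armed testbed setup (variable names are made up for illustration):

import numpy as np

k = 10
q_true = np.random.randn(k)        # true action values, N(0, 1), fixed for one run

# option 1: arbitrary constant initial estimates (what the repo does)
q_estimate = np.zeros(k)

# option 2: draw initial estimates from the same distribution as the true values
# q_estimate = np.random.randn(k)  # unbiased, but adds variance at the start

# rewards observed during the run are then N(q_true[a], 1)
a = 3
reward = np.random.randn() + q_true[a]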
H: When I use SHAP for classification problem, it shows an output that is not 0 or 1. How can I overcome this? I'm using Pima Indians Diabetes Database(https://www.kaggle.com/uciml/pima-indians-diabetes-database). I made predictions using XGboost and I'm trying to analyze the features using SHAP. However when I use force_plot with just one training example(a 1x8 vector) it shows that my output is -2.02. This is a classification problem, I shouldn't be seeing such a value. I'm new in SHAP and I don't know what the problem is. Here is my code: import numpy as np import xgboost as xgb import sklearn as skl import shap dataset=np.loadtxt("diabetes.csv", delimiter=",") X=dataset[:,0:8] Y=dataset[:,8] seed=7 test_size=0.33 X_train, X_test, y_train, y_test=skl.model_selection.train_test_split(X, Y, test_size=test_size, random_state=seed) shap.initjs() model=xgb.XGBClassifier() model.fit(X_train, y_train) predictions=model.predict(X_test) accuracy=skl.metrics.accuracy_score(y_test, predictions) print(accuracy*100) explainer = shap.TreeExplainer(model) shap_values = explainer.shap_values(X_train) shap.force_plot(explainer.expected_value, shap_values[0,:].reshape(1, 8), X_train[0,:].reshape(1, 8)) Accuracy of my model is: 77,95. AI: The default link function is the identity, so you are seeing log-odds rather than probabilities. To see the probability, try adding link='logit' to your call of force_plot like so: shap.force_plot(explainer.expected_value, shap_values[0,:].reshape(1, 8), X_train[0,:].reshape(1, 8), link='logit') You can read more at the SHAP documentation site.
H: Dealing with new outliers after capping I'm trying to cap outliers in a column of my pandas DataFrame. Here's the boxplot for a column of my original data. So, using code from this stackoverflow answer, I tried capping outliers. Here's how the capped column looks, with new outliers beyond the upper bound. My problem now is that I can't keep capping outliers forever, right? What do I do now? AI: Once is all it takes; you're done. The point of dealing with outliers is that they are rare. Forcing your model to fit to those outliers can make its performance against inliers (which it will encounter far more often) worse, and that's A Bad Thing. Although you now have values that look like outliers when you plot them, this is an artifact of your earlier transformation - those values should be within the "normal" expected range of values for your data.
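For reference, a quantile-based cap is a one-line operation in pandas, applied once and then left alone; a small sketch on made-up data (column name and quantile levels are illustrative):

import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
df = pd.DataFrame({'charge': np.append(rng.normal(100, 10, 500), [400, 500])})

lower, upper = df['charge'].quantile([0.01, 0.99])
# cap once; do not recompute the bounds on the already-capped column
df['charge_capped'] = df['charge'].clip(lower=lower, upper=upper)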
H: How to train a neural network for high recall? I would like to train a neural network for named entity recognition to tag an unlabeled dataset of texts. The generated labels will then be checked via a crowdsourcing platform. The goal is to annotate the dataset. Therefore, the neural net should find all possible entities in the text, i.e. have high recall rather than precision. What would be the best way to train a neural network for high recall, i.e. assigning lower cost to false positives than to false negatives? Could the loss function be changed from negative log likelihood to something else to encourage high recall? AI: There is a very simple way of increasing the recall of a network without requiring retraining: A network's output is the probability it gives that a sample belongs to a class, let's say class $0$ or class $1$. The output of the net would look like this $(0.53, 0.47)$. This means that the net gives a $53\%$ probability of the sample being class $0$ and a $47\%$ probability of it being class $1$. Normally we check which class has the highest probability and consider that to be the network's prediction. However, you could set a threshold over which the sample would be considered to be class $1$. This way you would get more samples predicted to class $1$, which would increase the class' recall (at the cost of its precision though).
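As an illustration of the thresholding trick from the answer above (nothing NER-specific here; the probabilities are made up):

import numpy as np

# probs: network output, shape (n_samples, 2); column 1 = probability of "entity"
probs = np.array([[0.53, 0.47],
                  [0.20, 0.80],
                  [0.65, 0.35]])

# default decision: argmax, i.e. a 0.5 threshold
default_pred = probs.argmax(axis=1)                          # -> [0, 1, 0]

# recall-oriented decision: call it an entity already at 0.3
threshold = 0.3
high_recall_pred = (probs[:, 1] >= threshold).astype(int)    # -> [1, 1, 1]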
H: What approach to use to find the best customers from data? I'm working on this project where the objective is to find certain good leads/customers from the existing customer dataset. I tried the RFM method for scoring but there is no data regarding money or any quantity. The dataset has: customer_id, date (date when the meeting is done), next_action_date (if any next meeting is scheduled that day), status (done, missed, cancelled). So based only on meetings, I need to score a lead or customer. So I started with normal control flow statements like: if a meeting was scheduled and missed - 5 points; if a meeting was scheduled and completed - 10 points; if a meeting was scheduled and cancelled - 2 points. Basically, there are a lot of cases like these by which I need to score a customer. So how should I write Python code for this; I mean, what would the approach be? I can get the data from an API for a customer. What I thought is to create features like days since last visit, number of meetings done successfully, and more, and add all these to another table. My problem is what kind of code I have to write for this. Please explain in simple words if someone can. I'm new at this. AI: Your intuitions are good: in general you need your data to contain all the possible indications (features) which might help find the answer. And your thinking is also correct about creating this "other table": the reason why you need it is that for this problem you need each instance to correspond to a customer. Your original data is not organized by customer, it's organized by meeting. So it makes sense to organize your data with features by customer such as these:
days_since_last_visit  number_meetings_scheduled  number_meetings_attended  ...
15   4  2
189  3  1
24   2  2
...
In general it's not recommended to assign scores yourself, because you probably don't know what the optimal values are. For example, is a customer better when they schedule 10 meetings and attend 3, or schedule 4 and attend 2? It's usually better to give all the raw values you have to the ML algorithm and let it calculate the best way to use them. An important point to define is: what is a good customer exactly? If you think your scoring really is accurate enough to define a good customer, you can calculate the score and rank the customers, then you're done: this is a heuristic, because you calculate the answer directly based on your knowledge of the problem. Now assuming you are not so sure and want to use ML: If you have a sample with labels which say whether a customer is "good" or not, then you have a supervised classification problem: the goal will be to train a model able to predict the class (category) for any customer based on the features. If you have a sample with numeric values indicating "how good" a customer is, then you have a supervised regression problem. Again the goal is to train a model which predicts the value for any customer based on the features. For both cases above I'd suggest you start with simple methods such as decision trees or SVM (both can do classification or regression). The former has the neat advantage that you can manually observe the tree and understand how the classifier works. If you don't have any labeled data, then your only option is unsupervised learning, which usually means some form of clustering: in this case there's no training stage, the algorithm is provided only with the features and it tries to group together the instances which are close to each other. A standard approach for that would be K-means, for example.
Note that in this case you are not guaranteed to obtain the groups that you expect.
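A sketch of how such a per-customer table could be built from the meeting-level data with pandas (the aggregations and column names are just examples; this assumes a reasonably recent pandas for the named-aggregation syntax):

import pandas as pd

meetings = pd.DataFrame({
    'customer_id': [1, 1, 2, 2, 2],
    'date': pd.to_datetime(['2019-01-05', '2019-02-10', '2019-01-20', '2019-02-01', '2019-03-15']),
    'status': ['done', 'missed', 'done', 'done', 'cancelled'],
})

today = pd.Timestamp('2019-04-01')
features = meetings.groupby('customer_id').agg(
    days_since_last_visit=('date', lambda d: (today - d.max()).days),
    meetings_scheduled=('status', 'size'),
    meetings_done=('status', lambda s: (s == 'done').sum()),
).reset_index()
print(features)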
H: Auto-scheduling with Machine Learning I am very new to Data Science, but I have a use case which I want to solve. I want to build a data synchronization scheduler which keeps track of the amount of data synced after every scheduled trigger and auto-adjusts the next schedule. For example: let us suppose I have 3 jobs to execute. Currently, we keep each of them at a 5 minute interval (say), but this needs to be auto-scheduled. So let's say at 10 AM Job 1 got executed and got 10 entries. At 10 AM job 2 got executed and got 100 entries. At 10 AM job 3 got executed and got 200 entries. In such a scenario, job 1 got a smaller stream of data than job 2 and job 3. The auto-scheduler in such a case will auto-adjust the interval and recommend changing the next execution to: Job 1 - maybe a 10 min interval; Job 2 - maybe a 5 min interval; Job 3 - maybe a 2 min interval. The scheduler will train itself based on time-based historical data as well; for instance, if the stream of data is larger at 10 AM for job 3, it might be smaller at 1 PM, when Job 1 might have more data. The scheduler would automatically adjust the timing and schedule the 1 PM run at a shorter interval for job 1 than for job 3. Can you suggest any algorithm which I can follow to support this case? Or even if you can help me with how to proceed in ML, it would help me a lot. AI: I want to build a data synchronization scheduler which keeps track of the amount of data synced after every scheduled trigger and auto-adjusts the next schedule. This doesn't immediately strike me as a "use machine learning" problem to be honest. If you just want the scheduler to schedule the next run to be after an amount of time that's determined by the number of records processed in the current batch then that's quite a simple and deterministic formula. You could have the default gap be 20 mins and then do something like: next gap = 20 * (20 / n) Where n is the number of records processed in the last run. That would mean in this case: So let's say at 10 AM Job 1 got executed and got 10 entries. At 10 AM job 2 got executed and got 100 entries. At 10 AM job 3 got executed and got 200 entries. The next Job 1 would be scheduled after 20 * (20 / 10) = 40 minutes, the next job 2 would be after 20 * (20 / 100) = 4 minutes and the next job 3 would be after 20 * (20 / 200) = 2 minutes. If you really wanted to use ML for it, I guess I'd suggest using a time series forecasting algorithm like ARIMA or Prophet to predict the number of samples each job will have to process over the next hour or something, and set the next run-time appropriately based on that.
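The deterministic formula above is simple enough to code directly; a sketch (the default gap, reference count, and clamping bounds are arbitrary choices, not requirements):

def next_gap_minutes(records_processed, default_gap=20, reference_count=20,
                     min_gap=2, max_gap=60):
    # more records in the last run -> schedule the next run sooner
    if records_processed <= 0:
        return max_gap
    gap = default_gap * (reference_count / records_processed)
    return max(min_gap, min(max_gap, gap))

print(next_gap_minutes(10))    # 40.0
print(next_gap_minutes(100))   # 4.0
print(next_gap_minutes(200))   # 2.0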
H: Are batch iteration and epochs different in reinforcement learning compared to supervised learning? I'm following the Udacity "AWS DeepRacer" course about self driving cars with reinforcement learning. In one lesson, they says this: Batch size - This determines how many images, randomly sampled from the most recent episode, are used for training before updating the model. Epochs - Determines how many times you will loop through the batched data before updating the training weights. A larger number of epochs is likely needed if your model is still improving but training has otherwise ceased. Learning rate - Controls the speed at which your algorithm learns (it enlarges or shrinks the weight update after each epoch). Using Tensorflow and Keras with Adam, I learned that the weights are updated after each batch iteration. But in this course, they says that the weights are updated only after each epoch, and the batch size determine how much of the training data to loop through before updating the model. So, what's the difference between updating the model and updating the weights? In tensorflow with keras, the weights are updated after each batch iteration and also after each epoch? Is it different compared to RL? AI: Your initial statements are correct. Epoch is one-single pass over the full training set. Most people use the word Batch as the number of samples used for one update of the weights. (The back-propagation process calculates the gradients for every single sample in the batch, but the weights update is performed a single time for the mean of the gradients over the batch). However, originally the word batch was used to mean a training process where the batch size is equal to the overall number of samples in the training set (meaning that the weights update is performed only once per epoch). What we now refer to as "batch", was originally named "mini-batch". Over the years, the word batch became synonymous with mini-batch (batch training is very rare these days). I think that this is the source to your confusion.
H: Appending the values of multiple columns into one whilst retaining UIDs I have a table structured as below: UID|ColA|ColB|ColC|ColD I want to restructure so that all columns are combined (appended) into one, retaining the UID of each as below: UID|ColA UID|ColB UID|ColC UID|ColD I'm very new to Pandas and was wondering how this could be done? Thank you! AI: Assuming you already have a dataframe df, created as below: In [1] : df = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), columns=['a', 'b', 'c']) In [2] : df Out[2] : a b c 0 1 2 3 1 4 5 6 2 7 8 9 you could try doing this: In [3] : df.melt("a").sort_values("a") Out[3] : a variable value 0 1 b 2 3 1 c 3 1 4 b 5 4 4 c 6 2 7 b 8 5 7 c 9 where the sorting might be optional in your case. Also, you should swap "a" for the name of your column containing the UID values. You can additionally drop the autogenerated variable column too, so the final solution would be: In [4] : df.melt("a").sort_values("a").drop("variable", axis=1) Out[4] : a value 0 1 2 3 1 3 1 4 5 4 4 6 2 7 8 5 7 9 From what I remember, the melt functionality is inspired on the melt function in the R programming language, where DataFrames are one of the core built-in data-types. It is really useful for plotting using the ggplot library (in R).
H: Feature selection before or after applying filter in Time-series forecasting I'm predicting ozone concentration based on meteorological variables and ozone value of the previous day. I applied savitzky golay filter to get rid of noise in the time-series dataset. My question is, if I want to perform feature selection, do I do it before or after applying the filter? What is the logical order? Because the feature importance is different before and after applying the filter. Using XGBOOST, this is the feature importance before the filter: And this is after the filter: I'd really be grateful for any help or information. AI: I find your question confusing (this might be my fault). Let's see if I understand you correctly. Feature selection is not applied before or after the filter. You choose to include a feature, or not. There are many ways to make a selection of features. If you use decision trees (or algorithms based on decision trees like XGBoost or random forest), then you can calculate the feature importance and use that to select features. Your set-up to predict ozone concentration contains feature processing (the filter) and then an algorithm (boosted trees) that makes a prediction. You use a set of features, run them through a filter, you train the trees. Now you can use the filter + the trained trees to get your prediction (ozone concentration). You can also look at the feature importance of all the features that you used, make a selection, and retrain with a reduced set of features. What makes no sense to me is to use a different set-up without the filter, to train the trees on the raw unfiltered features, and then look at the feature importance, because you have trained your trees on input data that the trees will never see in the set-up you're actually trying to use. So I think the answer to your question is "after". :-)
H: Why should I know C++ ,if I am a machine learning engineer? I see there's a lot of machine learning job openings with skills requirements, python ,R, keras,tensorflow,pytorch,spark, etc.which are completely fine & reasonable, but why many of the recruiters include C++ ,like what is use of C++ in ML research, or even creating ML pipelines ? How much C++ should I know if I'm good at rest of ML skills ? AI: C++ is often listed not because you will necessarily be coding in C++, but rather fundamental understanding of memory allocations, object-oriented design patterns, and other CS fundamentals that python (and other similar) languages abstract a bit. Plus, there's Cython and other c-derivative frameworks that may be necessary to understand in a real-time environment.
H: Why doesn't the binary classification log loss formula make it explicit that natural log is being used? I'm completing a DataCamp course where we are introduced to the log loss formula for binary classification: Two scenarios are given to show how the formula is used. One with p=0.1 and one with p=0.5. The answers the instructor displayed were 2.3 and .69, respectively. However, using a calculator, the answers for log(0.1) and log(0.5) are -1 and -0.30, respectively. I later tried using natural log instead and got the same answers as the instructor, except negative. Specifically, the calculator returned -2.3 for ln(0.1) and -0.69 for ln(0.5). Is it common in math for log to be understood as "ln" or "log e" without stating it explicitly in the formula? Also, is there something about the binary classification log loss formula that suggests that the absolute value of the result should be taken? AI: This comes down to the change-of-base formula. For any two numbers $a$ and $b$, the following equation is true. $$ \log_a(x) = \frac{\log_b(x)}{\log_b(a)}. $$ What this means is that the errors are proportional. So if you wanted to change to using $\log_{10}$, you would simply end up multiplying by a constant factor, and model selection would be the same. Explicitly, $$ \mathrm{logloss}(N=1) = -\,\frac{y \log_{10}(p) + (1 - y) \log_{10}(1-p)}{\log_{10}(e)} $$ Or, equivalently, $$ \mathrm{logloss}(N=1) \cdot \log_{10}(e) = -\left[ y \log_{10}(p) + (1 - y) \log_{10}(1-p) \right] $$ In other words: the base of the logarithm doesn't matter because everything ends up being proportional.
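A quick numerical check of this proportionality with plain numpy (not tied to any particular library's loss implementation):

import numpy as np

def log_loss_single(y, p, log=np.log):
    # single-sample binary log loss with a pluggable logarithm base
    return -(y * log(p) + (1 - y) * log(1 - p))

p = 0.1
print(log_loss_single(1, p))                              # ~2.303 (natural log)
print(log_loss_single(1, p, np.log10))                    # ~1.0   (base 10)
print(log_loss_single(1, p, np.log10) / np.log10(np.e))   # ~2.303 again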
H: Confusion about Decoder labels for training seq-to-seq models So in seq-to-seq models for say NMT, the decoder is a sequence model for the right-shifted intended output. My question is, during training, are the inputs and outputs of the decoder supposed to be the desired labels? Or are just the outputs the desired labels and the inputs are the actual predicted output from the last timestep? AI: It can be both. If you input the desired label and predict the next desired label, it's called teacher forcing. But using only this technique might hurt the performance at test time. So using the actual predicted output from the last time step is also a good idea. It's possible to do both : For each batch, with X% chance you use teacher forcing, otherwise you don't.
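A sketch of mixing the two strategies at the level of the training loop; decoder_step is a stand-in for whatever your decoder does at one time step, and the 0.5 ratio is an arbitrary choice:

import random

teacher_forcing_ratio = 0.5

def decode_sequence(decoder_step, initial_input, target_tokens):
    decoder_input = initial_input
    outputs = []
    for target in target_tokens:
        prediction = decoder_step(decoder_input)
        outputs.append(prediction)
        if random.random() < teacher_forcing_ratio:
            decoder_input = target        # teacher forcing: feed the gold token
        else:
            decoder_input = prediction    # feed the model's own prediction
    return outputs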
H: Bi-directionality in BERT model I am reading the paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding that can be found here. It looks to me that the crux of the paper is using masked inputs to achieve bidirectionally. This is an excerpt from the Google AI blog here which states: "However, it is not possible to train bidirectional models by simply conditioning each word on its previous and next words, since this would allow the word that’s being predicted to indirectly “see itself” in a multi-layer model. To solve this problem, we use the straightforward technique of masking out some of the words in the input and then condition each word bidirectionally to predict the masked words." Can someone please help me understand how does bidirectionally allow the words to see themselves and how masking solves this problem? Thanks. AI: Let's take an example : "I went to the shop." Let's say you want to predict "to" and "the". With bidirectionality, you will predict : p(to | I, went, the, shop) : No problem here. p(the | I, went, to, shop) : Here we have a problem, because we already saw the word 'the' while predicting 'to'. It's trivial for the model to predict 'the'. With MLM, if we take the same example : p(to | I, went, [MASK], [MASK], shop) p(the | I, went, [MASK], [MASK], shop) There is no more problem because the word cannot see itself in other predictions. It's more difficult to predict, but anyway only 15% of the time the word is masked. Edit In MLM, BERT will see all the words of the sentence, including the word to predict itself. This is because BERT will create representations not only for the [MASK] token, but also for the other tokens of the sentence (but in this case we are interested only in [MASK]). Note : It is also because BERT will create representations of all tokens that we need to replace [MASK] sometimes by the original token, sometimes by a random token. You can find more detailed information in this blog in the "Pretraining" section.
H: Is it feasible to train a single model on 150+ classes? I have classes for different regions. (Let say 80 classes for 3 regions each). Will it be ok if I train my CNN model with 240 classes or should I create 3 models for each region? The classes for each region are similar to the classes of other region with same name but there is some difference in the elements of each class from each region and hence given some other class name. I currently have one trained model which gives good accuracy for one region with 80 categories. Now that I have to move with remaining classes, it will be time-saving if I know beforehand to train one single model with 240 categories or three different models with 80 categories each. I have 300 images for each class on average. AI: A Classifier's quality depends from the number of available observations for each class. If you have 300 images for each class, as you said above, it might be enough (it depends from the quality of your data). In any case, it would require a massive amount of data augmentation. Training a Network on 240 classes could be very challenging, if you don't have a proper infrastructure. That all depends on training times, but I'd suggest you to break down the problem in smaller subproblems. That would also make debugging and improvements easier to implement.
H: How to predict whether the client will renew the subscription or not based on given data structure I have a requirement where I want to predict whether the client will renew the subscription or not. And the data is something like below. Basically client's subscription end date can be anything. And I want to predict whether client A will renew the subscription in Oct or not. And Similarly client B will renew it in December or not. And I want to run the model every day because every client's subscription end date can be anything. Along with this I have various features of the client i.e. Feature 1, 2, 3 etc which can be used to train the model. My question is, how do I structure the data so that I can run the model everyday and what would be my output variable. What kind of model I can fit. I was thinking of survival analysis but not sure if the data structure I have can be used for survival analysis or not. Need your suggestion to approach this problem. AI: That data structure is fine. You need to have a dataset of historic subscriptions (I.E. subscriptions that have now finished), where the output variable is "Renewed/ Not renewed". You can train your model on that dataset and then make predictions each day as to whether or not each "Active" subscription is likely to be renewed when it reaches the end date. As to what kind of model you can train, the "usual suspects" should be fine for this. LogisticRegression, XGBoost, RandomForests and so on can all handle this kind of problem. EDIT So lets say for example I have a 3 month subscription that runs from 1st Jan to 31st March. I decide to renew for another 3 Months until 30th June. I then renew again until 30th September. YOU have the same subscription from 1st Jan to 31st March, and you also renew until 30th June. You then decide NOT to renew. In that case, your data should look like this: The Status column shows subscriptions which are Finished, and which are active. You should train the model against Finished subscriptions only and make predictions against Active ones. The target variable is the "Renewed" feature. For testing purposes you should split the Finished subscriptions into a 75/25; train the model on the 75% and test it against the 25% Hope that clarifies.
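A minimal sketch of the train-on-finished / predict-on-active split with pandas and scikit-learn (the toy table and feature names below are illustrative, not from your data):

import pandas as pd
from sklearn.linear_model import LogisticRegression

# toy version of the table above: one row per subscription period
subs = pd.DataFrame({
    'status':  ['Finished', 'Finished', 'Finished', 'Finished', 'Active', 'Active'],
    'renewed': [1, 1, 1, 0, None, None],
    'feat_1':  [3.1, 2.7, 2.9, 0.8, 2.5, 1.0],
    'feat_2':  [0, 1, 1, 0, 1, 0],
})

finished = subs[subs['status'] == 'Finished']
active = subs[subs['status'] == 'Active']

model = LogisticRegression(solver='lbfgs')
model.fit(finished[['feat_1', 'feat_2']], finished['renewed'].astype(int))

# probability that each active subscription will be renewed at its end date
print(model.predict_proba(active[['feat_1', 'feat_2']])[:, 1])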
H: Why TREC set two task: document ranking and passage ranking TREC is https://microsoft.github.io/TREC-2019-Deep-Learning/ I am new to text retrieval. Still can not understand why set the two similar task. Thank you very much. AI: The idea behind the two tasks is to explore how document length affects the effectiveness of the different retrieval models. Ellen Voorhees TREC project manager NIST
H: Understanding the multidimensional-nature of the data being fed to a RNN and its output Assuming we have a time-series dataset whose window_size = 30 and the batch_size = 4, which makes the overall input = 4*30 (2D). But as RNN expects 3D input, tf.expand_dims is used to make it a 3D input (as per the lecture, new inut becomes 4*30*1, where the last dimension is 1 as the example deals with a univariate time-series). What I don't get is that what does adding a dimension mean? Eg. what will be the element [0,0,0] of the input? Also in keras, the typical format for fitting is model.fit(input, output, epochs=400) But in an RNN sample code for time-series data, I found model.fit(dataset, epochs=400) where dataset is a tf object containing the time-series data. Why is the input and output not given explicit for the model to train in case of the first code? The timestamp is already included in the input in a way(in the 4*30*1 input, the 2nd dimension is supposed to be time-stamps), but how does the keras know against what output labels the input has to be trained? AI: I think you’re confused about what a “batch” is. A batch has a very specific definition in machine learning. In my experience: Dimension 1 = number of bins or number of data points per each time step Dimension 2 = window size, the number of time steps Dimension 3 = batch size, the total number of examples which you’ll feed the network per training batch Looks like this for [4,3,2]: [[[0,1,4,3],[0,8,6,9],[9,6,7,4]],[[6,8,7,0],[1,7,7,9],[1,3,5,8]]] 4 values per time step, 3 total time steps per example, 2 examples in the batch. Where [0,0,0] returns 0, [3,1,1] returns 9. Also, one friendly piece of advice - don’t think in terms of time stamps. You’re not dealing with time in an RNN. You’re dealing with steps in a sequence. The sequence can be related to time. Edit: With respect to the tensorflow stuff, a tf.dataset object can contain both the input data and the labels for each example. The point is to make writing code easier, managing data easier etc. The process for training/testing is the same.
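For reference, here is one common way the (window, label) pairs end up inside a single tf.data object for a univariate series, which is why model.fit(dataset, ...) needs no separate labels; this is a sketch along the lines of such course code, not the exact code from it:

import numpy as np
import tensorflow as tf

series = np.arange(100, dtype=np.float32)      # toy univariate series
window_size, batch_size = 30, 4

ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size + 1))
# each element becomes an ((30, 1) input, scalar label) pair
ds = ds.map(lambda w: (tf.expand_dims(w[:-1], axis=-1), w[-1]))
ds = ds.shuffle(1000).batch(batch_size).prefetch(1)

# model.fit(ds, epochs=400) then works without passing labels separately,
# because every element of ds is already an (input, label) pair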
H: What machine learning algorithm should I use for specific user configuration? I have a data-set that contains thousands of employee data, including their role, department (Applications Developer, IT Support, Network Management etc.), and using one-hot encoding all of the hardware or software they have been given. I want to use this data so that when I input a new user with the data - role, department etc. their account will be compared to the model accounts and a prediction of what software or hardware they will need will be output. What machine learning algorithm should I use for this? I was thinking an unsupervised approach would be appropriate but I am new to machine learning and data science so I could be going about this completely the wrong way. Thanks for your help :) AI: Unsupervised Approach Unsupervised learning can be a good starting point. You can do a clustering (k-means/hierarchical) of existing user-base to find patterns in it (example: cluster 1 contains employees with Windows laptop with 4 GB RAM and photoshop pre-installed , cluster 2 is Mac with 16 GB memory). Then for a new employee, predict the nearest matching cluster and use the pre-dominant setup of that cluster. The problem is there may be lot of lot of variation in the same cluster, especially on softwares installed. Supervised Approach Other alternative is to convert it to a supervised learning problem with hand-rolled categories as outcome (example: 'laptops with lot of RAM and graphics software', 'laptops for development') and then predict that category. You will need upfront work and domain knowledge to create the categories. But it will reduce the predicted variable space by a large margin. But How to Create Categories? Maths can help us a little bit here. Let us assume that you have an N dimensional one-hot encoded vector representing the hardware and software setup. Let us make the assumption that there are a handful of latent K categories of the setup encoded by a large dimensional vector of size N. To go about finding those, you can take a low dimensional projection of the encoding vector. This is probably a mathematical way of finding the categories instead of what you'd do manually on the output of clustering/unsupervised approach. Radical Alternative: Recommendations If you have enough data, you can pose it as 'recommending right setup to a new employee' problem. Take any standard movie recommendation kind of setting with users and movie ratings. Replace users -> employees and movies with a unique hardware + software setup. Use standard recommendation algorithm applied to binary data. All these approaches are actually variant of low dimensional projection of the encoding vector + similarity calculation with existing users. Hope this helps.
H: Hybrid classification neural network I have product data and I need to classify products into categories (for example a Lenovo laptop into the Laptops category, etc.). Each product has properties such as: description, a list with image URLs (typically 4 photos), product-specific properties (watches have a mechanism type attribute, etc.), manufacturer, category ID. Category ID is my target variable. Do you know of any resources (articles/books) where someone did something similar? I heard about transfer learning (answer to this question: Hybrid Convolutional and Conventional Neural Networks, is it a good approach?). My biggest problem is that I don't know how to connect a CNN for image similarity with a conventional neural network. Thanks for the help. AI: Basically, you need to create a whole system which contains multiple ML algorithms. We can go feature-wise and see how the system could work. Description: This will consist of text which could be easily vectorized using Doc2Vec. This vector will act as a feature for the final model. Product images: You will need to create a model which performs some sort of classification. Once the model is trained, remove the last layer (usually the Softmax layer). This gives us an encoded representation of the image. You may use an AutoEncoder for this. Manufacturer and other properties: I think you may need to omit these features, as they will be product-specific and create complexities while training. Once we have all the above models trained, we can feed the features (generated from these models) to a neural network, which will produce the final result.
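The connection itself is usually just a concatenation of the two feature vectors followed by a few dense layers; a rough sketch with the Keras functional API (the vector sizes and number of categories are placeholders):

from keras.layers import Input, Dense, concatenate
from keras.models import Model

num_categories = 50                  # placeholder for your number of category IDs

text_vec = Input(shape=(300,))       # Doc2Vec vector of the description
image_vec = Input(shape=(512,))      # encoded representation from the image model

merged = concatenate([text_vec, image_vec])
x = Dense(256, activation='relu')(merged)
x = Dense(128, activation='relu')(x)
output = Dense(num_categories, activation='softmax')(x)

model = Model(inputs=[text_vec, image_vec], outputs=output)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])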
H: Select samples from a dataframe in python I have a data set (pandas dataframe) with a variable that corresponds to the country for each sample. I have to take the samples that corresponds with the countries that appears the most. thanks AI: Let's say you have a dataframe df: import pandas as pd from faker import Faker import random fake = Faker() n = 10000 names = [fake.name() for i in range(n)] countries = [fake.country() for i in range(n)] ages = [random.randint(18,99) for i in range(n)] df = pd.DataFrame({'name':names, 'age':ages, 'country':countries}) If you want to extract the top 5 countries, you can simply use value_counts on you Series: df.country.value_counts()[0:5] Then extracting a sample of data for the top 5 countries becomes as simple as making a call to the pandas built-in sample function after having filtered to keep the countries you wanted: def is_top5_country(x, top5): if x in top5: return True return False mask = df.country.apply(lambda x: is_top5_country(x, list(df.country.value_counts()[0:5].index))) df[mask].sample(frac=0.5)
H: In elbow curve how to find the point from where the curve starts to rise? I am computing a distance metric on my data. The result is then being sorted in ascending order. The samples having distance more than a specific threshold are to be marked as outliers and will be discarded. Below is a plot of all distance values. As evident from the graph, after a certain point, the graph rises quite rapidly and even the datapoints get sparse. I need to calculate that point from where this happens and mark that point as threshold value. AI: TL;DR Use the two functions from below to get the index of the elbow: elbow_index = find_elbow(data, get_data_radiant(data)) **Edit:** I put all of the code below into a python package called [kneebow][1]. Now, you can simply do it like this: from kneebow.rotor import Rotor rotor = Rotor() rotor.fit_rotate(data) elbow_index = rotor.get_elbow_index() Long Answer If this curve is representative for all of the curves (e.g. unimodal and monotonic) then a quick and dirty method is to rotate it to some degree and simply take the minimum value. The rotation can be done by multiplication with the rotation matrix $$\left( \begin{array}{cc} \cos\theta&-\sin\theta\\ \sin\theta&\cos\theta \end{array} \right)$$ where $\theta$ is the desired angle in radians. In python, you can do this with the following function: def find_elbow(data, theta): # make rotation matrix co = np.cos(theta) si = np.sin(theta) rotation_matrix = np.array(((co, -si), (si, co))) # rotate data vector rotated_vector = data.dot(rotation_matrix) # return index of elbow return np.where(rotated_vector == rotated_vector[:, 1].min())[0][0] Note that theta is the angle in radians. You can caculate that by np.radians(angle). Important: One thing to remember is that the x- and the y-axes may have different scales. So on your plot, it may look like a 45° rotation would be enough, while actually, it is not. Therefore, you can use the following function to calculate which radiant you should use. It takes the slope from the minimum to the maximum values in your data and converts it to radians: def get_data_radiant(data): return np.arctan2(data[:, 1].max() - data[:, 1].min(), data[:, 0].max() - data[:, 0].min()) If you want to get the angle, run np.rad2deg(get_data_radiant(data)). Example How to use Let's test the approach on a sample data similar to yours: # Let's define our sample data: data = np.array([ [1, 1], [2, 2], [3, 3], [4, 4], [5, 5], [6, 6], [7, 7], [8, 8], [9, 16], [10, 32], [11, 64], [12, 128], [13, 256], [14, 512] ]) # y is linear until (8,8) and increases exponentially afterwards plt.scatter(data[:, 0], data[:, 1]) Plotting the data gives us the following figure: Now, let's try to find the elbow by combining the functions from above: elbow_index = find_elbow(data, get_data_radiant(data)) print(elbow_index) # 10 print(data[elbow_index]) # array([11, 64]) In Detail To sum it all up, we just calculated the slope from the min to the max value and then rotated the plot in such a way that the slope was zero. Subsequently, we took the min value of the data to get the elbow. We can get the angle of rotation the following way: angle = np.rad2deg(get_data_radiant(data)) print(angle) # 88.543 The plot on the left has the slope included as an orange line. The scales of the axes make it look like it would have an angle of 45° while in reality, it has an angle of 88.5°! After vector rotation, the data looks like the plot on the right. From this data, we took the minimum value which was the 11th data point. 
Drawbacks Note that this method has a drawback: the more unequal the scales of your axes are, the more it will choose points in favor of the larger axis. You can try to use the scikit-learn MinMaxScaler to scale your data before you use this method in order to reduce this effect. If you use the kneebow package, the data gets scaled by default.
H: How do you integrate nlp into an existing website? Let's say I have a db driven app which has two tables: people and organizations. I run my documents through a named entity recognition program. Now - what do I do with this additional ner information? Does it go in my existing db tables? Do I have to rewrite my existing tables to accommodate the ner data? Do I make new tables for the ner data? Do I connect them to my older tables with foreign keys? Do I need new functions that pull response data from these new ner objects instead of, or with, my original ones? Are these new entities "objects" in the traditional OOP sense? But they are different objects from the ones in my original db tables, so do they replace the old, non-ner objects? Does it matter what kind of datastore the new ner objects go into in order to be fully utilized? rmdbs? key/value? document based? The only thing I’ve been able to find online about this is a Medium post that pickles the model - it says nothing about any other kind of datastore - and the Flask website just displays the information by accessing it through an api. A search here on datascience.stackexchange.com got this: We couldn't find anything for integrate with existing website And now I get a warning that my question "appears to be subjective and is likely to be closed". I'm sure that's machine learning at work. Granted, the step by step details might be different if your site is php vs python, but the issue is a major one I assume a lot of people are going to run into sooner if not later. My site is in Django. How do you integrate the product of nlp into an existing, complex website with many apps and functions? AI: The answer is it depends on you architecture. Some things to consider: The tags are attributes of words in a (presumably) text field in a database. So Boyce Codd would prescribe putting it in a seperate table referencing the primary key (and possibly the position in the text) The tags are just a function over your data, so storing it at all should be for caching purposes. Which you might not want to do in your database, but at the presentation level, or perhaps in a separate database, or schema. The tags are in fact the product of an approximation of a function over your data. There is no such thing as a perfect NER-tagger. This means that you might want to switch the function later buy something more suited. It could make sense to extract the information on display time from a microservice (perhaps hosted by you), and cache the resulting presentation for better performance.
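To make the first point concrete for a Django site, the tags could live in their own model that points back at whatever model holds your documents; everything below (model and field names) is hypothetical, not an existing schema:

from django.db import models

class Document(models.Model):        # stands in for whatever model holds your text
    body = models.TextField()

class EntityMention(models.Model):
    document = models.ForeignKey(Document, on_delete=models.CASCADE,
                                 related_name='entity_mentions')
    text = models.CharField(max_length=255)       # the surface string, e.g. "Jane Doe"
    label = models.CharField(max_length=32)       # e.g. "PERSON", "ORG"
    start_char = models.IntegerField()            # position within the document text
    end_char = models.IntegerField()
    tagger_version = models.CharField(max_length=64)   # lets you re-tag later with a better model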
H: Probability calibration is worsening my model performance I'm using RandomForest and XGBoost for binary classification, and my task is to predict probabilities for each class. Since tree-based models are bad with outputting usable probabilities, i imported the sklearn.calibration CalibratedClassifierCV, trained RF on 40k, then trained CCV with a separate 10k samples ( with cv="prefit" option ), my metric ( Area Under ROC ) is showing a huge drop in performance. Is it normal for probability calibration to alter the base estimator's behavior? Edit : Since i'm minimizing logloss on my XGBClassifier, the output probabilities aren't that bad compared to RF's outputs. AI: The probability calibration is just stacking a logistic or isotonic regression on top of the base classifier. The default is logistic, and since the sigmoid is a strictly increasing function, the rank-ordering of samples will be unaffected, and so AUC should not change at all. (With isotonic regression, it's actually piecewise-constant, so within a span where the function is constant, all samples will have their scores made equal, and so your ROC curve will become more coarse, which would affect the AUC; but these effects ought to be small, as long as the isotonic fit produces sufficiently many/short constant segments.) Also, gradient boosting like XGBoost produces scores biased toward the extremes, not away from them like random forests, so the logistic calibration is unlikely to work out well.
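If you want to sanity-check this, the sigmoid calibration can be fit on held-out data and compared directly; a sketch on synthetic data (note that argument names differ slightly across scikit-learn versions, e.g. base_estimator vs. estimator):

from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
cal = CalibratedClassifierCV(rf, cv='prefit', method='sigmoid').fit(X_cal, y_cal)

# the sigmoid is monotonic, so the two AUCs should be (almost) identical
print(roc_auc_score(y_test, rf.predict_proba(X_test)[:, 1]))
print(roc_auc_score(y_test, cal.predict_proba(X_test)[:, 1]))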
H: Splitting training and test set with financial data I am using trees algorithms (decision tree, random forest and XGBoost) to forecast the sign of the returns in the stock market (classification). I am using this article as a reference: http://rpubs.com/raaraa/412512 When splitting the training and the test set, the author uses the following R code: index <- sample(1:nrow(stock_indicators)) size=0.2*nrow(stock_indicators) test <- stock_indicators[index, ] train <- stock_indicators[-index, ] Is it correct to use a random set of dates to split the test and the training set with time series financial data? It looks like look-ahead bias to me. This would be the alternative: train <-head(stock_indicators,round(0.70 * nrow(stock_indicators))) test <- tail(stock_indicators,round(0.30 * nrow(stock_indicators))) But with this split I cannot get significant accuracies even trying with different datasets. Do you have any suggestions? AI: I think what you're doing is correct, in fact it would be even more correct to introduce a gap between your test and train set, i.e. train <-head(stock_indicators,round(0.65*nrow(stock_indicators))) test <- tail(stock_indicators,round(0.25*nrow(stock_indicators))) The reason for this is time series data (usually) exhibits strong serial auto-correlation, so if you put the price for one day in your training set, and the next days price your test set they're a long way from independent and your test error is biased. The reason you're not getting a good fit is that the data is probably not stationary, i.e. what's driving stock prices is changing over time and it's hard to forecast - if it wasn't everyone would do it perfectly. Another way of doing the split is to split the data into blocks (say weeks) and then take 80% of the blocks for the training set and 20% for the test set. This is useful if say your data is seasonal (i.e. electricity demand), in which case using the last 20% isn't really representative of the full future you want to forecast. I'd argue stock prices aren't overly seasonal.
H: Unbalanced discounted reward in reinforcement learning : is it a problem? Discounted rewards seems unbalanced to me. If we take as example an episode with 4 actions, where each action receive a reward of +1 : +1 -> +1 -> +1 -> +1 The discounted reward for the last action is : 1 The discounted reward for the first action (considering gamma = 1 for simplicity) is : 4 Intuitively both action are as good as the other, because both received same reward. But their total reward is different, unbalanced. So when we will backpropagate, first action will be favored over last action ? AI: Most of the RL problems are sequential decision making processes. That is the action taken at time $t$, can influence the future rewards in a sequential way. From this perspective, the first action lead to a sequence of states/actions which produced positive rewards, which means it deserves more positive feedback than the last action. It only seems natural to prefer the first action in the first state unless we know some other action yields better cumulative rewards. Rewards are gains in short-term, an action might be good in the short term but not in the long run, which is why we use cumulative rewards called returns to estimate the value of an action in a given state.
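To make the arithmetic concrete, here is a small sketch computing the discounted return for each step of the 4-step episode above; with gamma = 1 it gives 4, 3, 2, 1, and with gamma < 1 the gap between early and late steps shrinks:

def discounted_returns(rewards, gamma=1.0):
    # G_t = r_t + gamma * r_{t+1} + gamma^2 * r_{t+2} + ...
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))

print(discounted_returns([1, 1, 1, 1], gamma=1.0))   # [4.0, 3.0, 2.0, 1.0]
print(discounted_returns([1, 1, 1, 1], gamma=0.9))   # [3.439, 2.71, 1.9, 1.0]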
H: Light GBM Regressor, L1 & L2 Regularization and Feature Importances I want to know how L1 & L2 regularization works in Light GBM and how to interpret the feature importances. Scenario is: I used LGBM Regressor with RandomizedSearchCV (cv=3, iterations=50) on a dataset of 400000 observations & 160 variables. In order to avoid overfitting/reguralize I provided below ranges for alpha/L1 & lambda/L2 parameters and the best values as per Randomized search are 1 & 0.5 respectively. 'reg_lambda': [0.5, 1, 3, 5, 10] 'reg_alpha': [0.5, 1, 3, 5, 10] Now my question is about: Feature importance values with optimized values of reg_lambda=1 & reg_alpha=0.5 are very different from that without providing any input for reg_lambda & alpha. The regularized model considers only top 5-6 features important and makes importance values of other features as good as zero (Refer images). Is that a normal behaviour of L1/L2 regularization in LGBM? Further explaining the LGBM output with L1/L2: The top 5 important features are same in both the cases (with/without regularization), however importance values after top 2 features has been shrunk significantly by the L1/L2 regularized model and after top 5 features the regularized model makes importance values as good as zero (Refer images of feature importance values in both cases). Another related question I have is: How to interpret the importance values and when I run the LGBM model with Randomized search cv best parameters do I need to remove the features with low importance values & then run the model? OR should I run with all the features & the LGBM algorithm (with L1 & L2 regularization) will take care of low importance features and won't give them any weight or may be give minute weight when it makes predictions. Any help will be highly appreciated. Regards Vikrant AI: With regularization, LightGBM "shrinks" features which are not "helpful". So it is in fact normal, that feature importance is quite different with/without regularization. You don't need to exclude any features since the purpose of shrinking is to use features according to their importance (this happens automatically). In your case the top two features seem to have good explanatory power, so that they are used as "most important" features. Other features are less important and are therefore "shrunken" by the model. You may also find that different features pop up as top of the list (the list may look different in general) when you run the model multiple times. This is because (if you don't fix a seed), the model will take different pathes to obtain a best fit (so the whole thing is not deterministic). Overall you should get a better fit with regularization (otherwise there is little need for it). I wonder if it makes sense to use both (l1 and l2)!? L1 (aka reg_alpha) can shrink features to zero while l2 (aka reg_lambda) does not. I usually only use one of the parameters. Unfortunately, the documentation does not provide too much details here.
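A minimal sketch of the setup described above, assuming lightgbm is installed, X_train is a pandas DataFrame with named columns, and using the regularization values mentioned in the question (reg_alpha=0.5, reg_lambda=1):

from lightgbm import LGBMRegressor
import pandas as pd

model = LGBMRegressor(reg_alpha=0.5, reg_lambda=1.0)   # L1 and L2 penalties
model.fit(X_train, y_train)

# rank features by importance; with regularization many importances shrink towards zero
importances = pd.Series(model.feature_importances_, index=X_train.columns)
print(importances.sort_values(ascending=False).head(10))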
H: How to rotate the plot and find minimum point? I want to rotate the below curve to 45 degree and then find the minimum point. For this, I have tried with below code: def rotate_vector(data, angle): theta = np.radians(angle) co = np.cos(theta) si = np.sin(theta) rotation_matrix = np.array(((co,-si), (si, co))) return np.matmul(rotation_matrix, data) rotated_vector = rotate_vector(data, -45) elbow = rotated_vector.min() But what I get is this curve: AI: Please refer to the original post. The problem was the np.radians(angle). I corrected that. Edit: The problem was not the radian conversion but the different scales of the x- and y-axis. 45 degrees rotation was simply not enough. Let's use a modified version of the code you posted: def rotate_vector(data, angle): # make rotation matrix theta = np.radians(angle) co = np.cos(theta) si = np.sin(theta) rotation_matrix = np.array(((co, -si), (si, co))) # rotate data vector rotated_vector = data.dot(rotation_matrix) # return index of elbow return rotated_vector Now, we define a square as test-data: data = np.array([ [1, 0], [2, 0], [3, 0], [4, 0], [5, 0], [0, 0], [0, 1], [0, 2], [0, 3], [0, 4], [5, 1], [5, 2], [5, 3], [5, 4], [5, 5], [0, 5], [1, 5], [2, 5], [3, 5], [4, 5], ]) plt.scatter(data[:, 0], data[:, 1]) Plotting the data gives us the following figure: Now, we rotate that vector using the function from above: rotated_data = rotate_vector(data, 45) plt.scatter(rotated_data[:, 0], rotated_data[:, 1]) If we plot the rotated data, we get the following figure: So it does work. However, if we want to rotate the graph that you posted to find the minimum, we need to remember that the scales of the two axes are different. That means the graph may look like a 45° rotation would be enough while actually, it is not. I corrected the original post again. Also, I added a method to find an approximate rotation angle to account for the different axis scales.
H: Unable to interpret this 100% stacked area chart I suffer from "graphical dyslexia". I have trouble interpreting charts. So I pore over them for long periods of time. For the life of me, I can't figure out how the conclusion from this chart is: the bottom 80% of men are fighting over the bottom 22% of women and the top 78% of women are fighting over the top 20% of men (Source) Please help me understand. AI: To be fair the concept of the graph is a bit special, I also struggled to get it. As one can read in the source, the author takes number of "likes" on Tinder as a proxy for attractiveness. This way they can rank men and women separately by how attractive they are: that's what the two axes represent. If one accepts a few questionable assumptions such as: equating "relationship wealth" with "proportion of people of the opposite gender attracted" the more a person is attractive the more they go for the top attractive people of the opposite gender, Then for each gender the average number of likes received by level of attractiveness is calculated, for example: In average a woman who has say 50% of attractiveness "likes" only 10% of the men, and we assume that she's going to like only the top 10% attractive men. Since this implies that the top 50% women are interested only in the top 10% men, it can be deduced by contrapositive that only the other 50% least attractive women can be interested by the 90% remaining (least attractive) men. Another example from the other side: a man who has a level of 50% attractiveness likes all the women with more than 5% attractiveness in average. Again it is assumed that a woman who can attract a man in the top 50% will not be interested in the top bottom 50%, therefore the bottom 50% men can only "access" the bottom 5% of the women. The whole graph is based on this assumption: if somebody at a particular level of attractiveness can attract the N% most attractive of the opposite gender, then anybody below this level of attractiveness is stuck with the remaining proportion. In other words it's not about how many people of the opposite gender one likes, it's about which level of attractiveness one can "afford" given their own level of attractiveness. That's what the graph shows: for example the 80% least attractive men can only afford the 22% least attractive women. By contrast the 78% most attractive women can afford to take their pick among the 20% top attractive men. Beyond the caveats, my personal conclusion is: all you need is love... but can you afford it?
H: Decision Trees - how does split for categorical features happen? A decision tree, while performing recursive binary splitting, selects an independent variable (say $X_j$) and a threshold (say $t$) such that the predictor space is split into regions {$X|X_j < t$} and {$X|X_j >= t$}, and which leads to greatest reduction in cost function. Now let us suppose that we have a variable with categorical values in {$X$}. Suppose we have label-encoded it and its values are in the range 0 to 9 (10 categories). If DT splits a node with the above algorithm and treat those 10 values are true numeric values, will it not lead to wrong/misinterpreted splits? Should it rather perform the split based on == and != for this variable? But then, how will the algorithm know that it is a categorical feature? Also, will one-hot encoded values make more sense in this case? AI: You are right on all counts: If DT splits a node with the above algorithm and treat those 10 values are true numeric values, will it not lead to wrong/misinterpreted splits? Yes absolutely, for exactly the reason you mention below: Should it rather perform the split based on == and != for this variable? But then, how will the algorithm know that it is a categorical feature? Yes, as you correctly assume a (true) categorical variable should be compared only for equality, not order. In general the algorithm cannot guess the nature of the feature, there has to be some parameters in the implementation which provide it with this information. Some implementations allow this, for example with Weka the features are typed with either a "numeric" or "nominal" (categorical) type. Also, will one-hot encoded values make more sense in this case? Correct again, that's what should be done for a categorical feature in case the implementation treats all the features as numeric values.
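As a small illustration of the last point (a hedged sketch: scikit-learn's trees, for example, treat every feature as numeric, so a label-encoded categorical column would be split with < / >=, whereas one-hot encoding turns each category into its own 0/1 column and the split effectively becomes an equality test):

import pandas as pd
from sklearn.tree import DecisionTreeClassifier

df = pd.DataFrame({'color': ['red', 'blue', 'green', 'blue', 'red'],
                   'y':     [1, 0, 1, 0, 1]})

X = pd.get_dummies(df[['color']])      # columns: color_blue, color_green, color_red
clf = DecisionTreeClassifier().fit(X, df['y'])

# a split such as "color_blue <= 0.5" now means color != 'blue' vs. color == 'blue'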
H: Why do we divide the regularization term by the number of examples in regularized logistic regression? So this is the formula for the regularized logistic regression cost function: $x^{(i)}$ - the $i$'th training example $\theta_j$ - the parameter of the $j$'th feature $m$ - the number of training examples $n$ - the number of features $y_{(i)}$ - the actual outcome of the $i$'th training example, such that $y \in \{0, 1\}$ $\lambda$ - the regularization term $h_\theta$ - the hypothesis function that produces a prediction in the interval $(0, 1)$ $h_\theta(x^{(i)})$ - predicted value of the $i$'th training example, such that: $h_\theta(x^{(i)}) = \sigma(\theta^{T}x^{(i)})$, where $\sigma(z) = \frac{1}{1+e^{-z}}$ (the sigmoid/logistic function) My question is about that last term: From my understanding, in order to do regularization, we need to find the sum of all the squares of the parameters $\theta$. That is clear enough. Also we multiply this sum by the regularization term $\lambda$, which we can change if we think the data is overfit/underfit. Good. Then, for convenience we divide this term by $2$ such that when we take the derivative we will get rid of that $2$ that would come from the exponent of $\theta$. All clear so far. BUT, why in the world do we also divide by $m$ (the number of training examples)? We divide the left term by $m$ in order to find the average error, and that makes sense since we have $m$ examples, therefore $m$ errors and after we find the sum of these $m$ errors, we would need to divide by $m$ to get the average error. But in this right term that I am confused about, we find the sum of the squares of the features, and the number of features is $n$. If we want to find the average of all the $\theta_j^2$ wouldn't we need to divide by $n$ instead of $m$, since we have $n$ features. Why does it make sense to divide that sum by $m$ and shouldn't we divide it by $n$? AI: There are several possible intuitions behind that as explained here, but the strongest argument for me is the following one. The „best“ form of the loss function is the one that requires the least amount of tweaking, e.g. for different dataset sizes. Dividing the regularization term by the number of samples reduces its significance for larger datasets. And, indeed, since regularization is needed to prevent overfitting, its impact should be reduced (in favor of the impact of the data itself) if a larger amount of data is available. This way the same $\lambda$ value has better chances for working with the whole dataset as well as with a small part of it without adjusting.
H: help understanding nested cross validation From what I read online, nested CV works as follows: I divide my whole data in k folds, where k-1 folds are the train set and one fold is the test set. There is the inner CV loop, where we may conduct a grid search using the train set to find the best hyperparameters for a model. There is the outer CV loop, where we measure the performance of the model that won in the inner fold, on the test set. We repeat the above procedure for different test and train sets until, at some point, all folds got his place as a test set once. what I cannot understand is that because we find the hyperparameters in each outer loop we run, we might have models with different hyperparameters being tested in the test set in each loop, so can we use this nested cross validation to find the best hyperparameters for a model? or the goal of this evaluation is to find the best algorithm, like SVM or Naive Bayes, for a data set? because, if we are getting different hyperparameters in each loop, we cannot say which one is the best one. English is not my first language, if it my text is hard to understand, please tell me, so I can fix it. AI: Generally, you consider the outer CV as just estimating the performance of the method of fitting. You don't expect a final model out at this point. You can then apply the same method of fitting a model (what was happening in the inner CV) to the entire dataset to produce a model, and you say that you expect the performance estimate to apply to this new model. https://stats.stackexchange.com/a/65158/232706 https://datascience.stackexchange.com/a/16856/55122 https://stats.stackexchange.com/q/11602/232706 https://stats.stackexchange.com/q/52274/232706
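A hedged sketch of the standard scikit-learn pattern (inner loop = GridSearchCV for the hyper-parameters, outer loop = cross_val_score for the performance estimate; X and y are assumed to be defined, and the SVM/grid are just placeholders):

from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

param_grid = {'C': [0.1, 1, 10], 'gamma': [0.01, 0.1]}
inner = GridSearchCV(SVC(), param_grid, cv=5)        # inner loop: picks hyper-parameters

scores = cross_val_score(inner, X, y, cv=5)          # outer loop: estimates performance of the whole procedure
print(scores.mean())

final_model = inner.fit(X, y).best_estimator_        # refit the same procedure on all data to get the model you keep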
H: What is a color blob? Is it possible to use clustering algorithm to color blob detection problem? Wiki gives this definition of blob detection In computer vision, blob detection methods are aimed at detecting regions in a digital image that differ in properties, such as brightness or color, compared to surrounding regions. Informally, a blob is a region of an image in which some properties are constant or approximately constant; all the points in a blob can be considered in some sense to be similar to each other. The most common method for blob detection is convolution. based on which, is there 8 separate color blobs in this figure? Is it possible to use clustering algorithm to color blob detection problem? AI: You certainly can use DBSCAN to solve this trivial toy example. Because it can do connected-components, and this image is trivial to threshold. It will just be much much slower than the usual convolution-based methods.
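A rough sketch of the clustering route, assuming the image is easy to threshold (colored blobs on a near-white background) and that clustering is done on pixel coordinates plus color; the threshold, eps/min_samples and the color weighting would all need tuning for a real image:

import numpy as np
from sklearn.cluster import DBSCAN

# img: H x W x 3 RGB array with values in [0, 255]
mask = img.sum(axis=2) < 700                               # crude threshold: anything not (almost) white
ys, xs = np.nonzero(mask)
features = np.column_stack([xs, ys, img[ys, xs] / 5.0])    # position + (down-weighted) color

labels = DBSCAN(eps=3, min_samples=10).fit_predict(features)
print(len(set(labels)) - (1 if -1 in labels else 0))       # number of blobs found (-1 = noise)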
H: what are effects of working with categorical dataset I am working on classification problem where the dataset contains 90% of features as categorical. It is binary classification problem, and the class is heavily imbalanced. I performed Smote over sample and created a model. I also tried similar approach with undersampling. Both the method with logistic regression performs mediocre. I want to know how having too many categorical variables impacts the model and possible efficient way to approach the problem feature1:1-3 feature2:0-1 feature3:0-3 feature4: 1-4 feature5: 0-2 feature6: 0-5 feature7: 1-4 feature8: continuous( max 10) feature9 continuous( max 10) class: 0-1 AI: You need to distinguish a few problems here. How well can your data classify some outcome, what features should be used, how to deal with imbalanced classes. Categorical features are not a problem per se. In the current stage, feature selection might be an issue. Thus, use logit with lasso or ridge to shrink features which are not too helpful (happens automatically). Also dummy/one-hot encoding would be worth a try (jointly with lasso). https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeClassifier.html
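A minimal sketch of that suggestion, assuming the features have already been one-hot encoded into X and the binary target is y; the liblinear solver supports the L1 penalty, and class_weight='balanced' is one simple way to address the imbalance without resampling:

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

logit_l1 = LogisticRegression(penalty='l1', solver='liblinear', class_weight='balanced')
grid = GridSearchCV(logit_l1, {'C': [0.01, 0.1, 1, 10]}, scoring='f1', cv=5)
grid.fit(X, y)

print(grid.best_params_)
print(grid.best_estimator_.coef_)   # many coefficients will be exactly zero -> implicit feature selection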
H: Solving Bayes Theorem equation -> I can't calculate proper result I am solving questions for an edx course on Machine Learning. One particular question is giving me a problem: Assume a patient comes into the doctor’s office to test whether they have a particular disease. The test is positive 85% of the time when tested on a patient with the disease (high sensitivity): P(test+|disease)=0.85. The test is negative 90% of the time when tested on a healthy patient (high specificity): P(test−|healthy)=0.90. The disease is prevalent in about 2% of the community: P(disease)=0.02. Using Bayes' theorem, calculate the probability that you have the disease if the test is positive.
My solution: I have created a table

         sick  | healthy
         2%    | 98%
    +    90%   | 15%
    -    10%   | 85%

From this I calculated Bayes' theorem like this:

                       (0.02 * 0.9)
P(A|B) = -------------------------------
          (0.02 * 0.9) + (0.15 * 0.98)

I get P(A|B) = 0.109, however this answer is wrong. Where did I make a mistake?
AI: By the looks of it you just flipped the conditional probabilities when building the first table. P(test+|sick) = 0.85 according to the description. In your table and equation, however, you take it to be 0.90. Because this felt too short for an answer, I reworked it out:

                 P(test+|sick) * P(sick)
P(sick|test+) = -------------------------
                        P(test+)

If you plug in the axioms:

                 0.85 * 0.02
P(sick|test+) = -------------
                   P(test+)

With P(test+) = P(test+|sick) * P(sick) + P(test+|healthy) * P(healthy):

                                    0.85 * 0.02
P(sick|test+) = ---------------------------------------------------------
                 P(test+|sick) * P(sick) + P(test+|healthy) * P(healthy)

                       0.85 * 0.02
P(sick|test+) = --------------------------
                 0.85 * 0.02 + 0.1 * 0.98
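Evaluating that last expression: the numerator is 0.85 * 0.02 = 0.017 and the denominator is 0.017 + 0.1 * 0.98 = 0.017 + 0.098 = 0.115, so P(sick|test+) = 0.017 / 0.115 ≈ 0.148. In other words, even with a positive test the probability of actually having the disease is only about 15%, because the disease is rare (the prior is only 2%).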
H: Metrics for unsupervised doc2vec model I have just built a simple doc2vec model using the gensim library, pretty much followed the tutorial located here. The methods provided for checking the quality of the model are very manual and require reading the similar docs, is there a way to calculate some other metrics from the model to try and improve its performance? AI: Evaluation is based on the task, not the type of model used for it. In the tutorial that you link the task would be simple document similarity. Afaik a more common variant is the information retrieval setting, where the goal is to rank documents according to their similarity against a query document. The tutorial mentions that they use a test set for which a link to a paper is provided. This paper explains that human annotators rated the similarity level between pairs of documents. That would be the standard way to obtain annotated data, but of course it takes a lot of manpower. It doesn't seem that the ground truth information is provided with the test set used in the tutorial though, I assume that this is why the author only proposes a manual inspection of the results. In general there are probably many benchmark datasets (with ground truth annotations) publicly available for similar tasks, but they are not always easy to find and I'm not very knowledgeable about these. I think a good place to start would be the SemEval series of competitions, every year they release various datasets related to this kind of task.
H: Best approach for a simple self driving car I'm planning to build a small car with autonomous driving (maybe modifying my current rc car or using a robot car kit, using arduino and raspberry). I'll use a CNN, and I'm thinking how to collect data (I want to try a similar approach to the Udacity simulator). My doubt is if is better to aim at supervised learning or reinforcement learning. I'm more inclined to the supervised learning, but I don't know the best way to record data. If I'll go with the robot car kit, the front wheel doesn't steer (it use different wheels speed to turn left or right). So I could create 3 function for each direction, like "soft left", "medium left" and "hard left" to control the car when I train it, and then predict from 7 possibility: - soft left, medium left, hard left, center, soft right, medium right, hard right So 7 possible output. Otherwise if I use a car with the front wheel that are able to steer, I have 3 possible class, like -1 for left button, 0 for center (any button pressed) and 1 for right button. So 3 possible output, without any gradient for the steering angle. With the reinforcement learning honestly I don't know where to start. How can I tell to the car when it does something wrong and give her a reward? With a simulator is simple, but with a real car how can I assing a reward score for each frame that I collect? If you have any paper about this I appreciate it. What's the best approach in your opinion? AI: ~ If we consider the problem as an image classification task, Basically, we can use a CNN trained on images belonging to 7 classes ( which represent the steering levels ). We give an image to the CNN and it outputs a steering level. This is a classification problem which will take each frame ( image ) of the video and output the steering level. So as the car steers the probability for the centre steering level will increase assuming that the road is straight after taking the turn. See multiclass image classification. ~ else, if we train an agent for the problem ( reinforcement learning ), Reinforcement learning requires an agent ( in our case, the car ) which produces some action ( steering levels ) in an environment ( street, road etc. ) and gets an reward accordingly. The rewards need to be maximized. We will have a CNN which encodes an image to a vector representation. So, consider two cases. We have an image which needs to the car to steer hard left. In the first case, the car turns soft left and loses its path. Here, the reward for this action becomes -100. In the other case, the car turns hard left, stays on the road and hence gets a reward of 100. Basically, the agent gets a positive reward if it stays on the road doing some action ( steering level ) else it gets a negative reward showing that this action should not be made on such an image ( road ). See here for learning RL. Tip: You can try combining both the above methods. Generate class probabilities ( steering levels ) from CNN. If the car does not steer well, increase the loss by a certain factor. Try thinking from a human perspective. Imagine driving a car and the cautions you take to avoid going out the road.
H: Regression methods I want to understand what regression methods exist and their purpose. I know the least squares method with which you can build a linear and non-linear model and make predictions. The ARMA model is used to predict stationary time series, and the ARIMA model is used for non-stationary time series. When many data and signs are used neural networks. For time series, LSTMs are popular. What methods are applicable for solving multiple regression problems? What methods should I use in what cases? AI: All regression methods have the same purpose, but some methods are better suited to a given problem than others. For example, algorithms based on decision trees, like random forest or XGBoost, do not extrapolate outside the area that is covered by the input features. On the other hand, they can handle categorical features. SVMs are not very popular because they are computationally expensive but if you don’t have too much data, then that’s not a problem. Some algorithms, like deep networks, require a lot of data to train, so you can’t use them if you have little data. Linear regression only models a linear relationship between the input features (x) and the output, but you could add x squared as an input feature to model also quadratic relations. To model complex function shapes though, that quickly becomes unmanageable. If a fairly simple function shape is fine, linear regression generally allows you to understand/explain your model, so that can be a plus. In this way, you need to understand the pros and cons of each to know in advance which algorithm is going to work well, but how do you find out, eh? It’s like saying, you need experience. So, first, be aware that choosing the best algorithm is normally far less important than making good use of the algorithm you’ve chosen. Second, try it out, get your hands dirty, and you will find what works for what. Finally, start with the simplest option that may work. Eventually, you’ll have “Experience”.
H: some algorithms to identify of a pattern in text data I have few sentences like below in my project(around 25000), sentence1 = 'Must be able to multi-task in a fast-paced, deadline-driven environment' sentence2 = 'Strong organisational skills and proactive approach' sentence3 = 'in this role you will design develop and revise application simulations in alignment with product implementation timelines' sentence4= 'ensures appropriate cross-referencing between documents within and across qms chapters/topics' From these sentences, I am trying to determine what functionality is being expected from the person or like what person has to do. I need to identify those keywords. I can use any ML techniques or any other method to determine the keywords, for example in sentence1 = [multi-task, fast paced, deadline-driven environment] senetence3 = [design develop and revise application simulations] I tried to identify those words with help of POS but that wasn't very useful. Are there any algorithms or ideas or methods that can detect such keywords ? AI: Since you don't seem to have any annotated data, the best you can do is probably this: Optional first step: remove stop words (there are many such lists available, for example https://pythonspot.com/nltk-stop-words/) Calculate the Inverse Document Frequency for every word in the vocabulary. This is intended to measure the importance of a word based on how often it's used, in the sense that a word used in fewer sentences makes it more important for these sentences. Calculate TF-IDF for every word in a sentence, then rank the words by their TF-IDF weights. In general the top ones are supposed to be the most relevant. Optionally you can decide to select the top N words as keywords.
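A hedged sketch of steps 2-3 with scikit-learn, assuming sentences is your list of ~25000 strings; TfidfVectorizer can also drop English stop words for step 1, and older scikit-learn versions use get_feature_names instead of get_feature_names_out:

from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(stop_words='english', ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(sentences)
vocab = vectorizer.get_feature_names_out()

def top_keywords(i, n=5):
    row = tfidf[i].toarray().ravel()
    return [vocab[j] for j in row.argsort()[::-1][:n] if row[j] > 0]

print(top_keywords(0))   # e.g. terms like 'multi task', 'fast paced', 'deadline driven'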
H: How to execute a git project with different input data? I would like to run this project and receive result from this project However I have problem to execute the step mailcorpus.json created by executing the sql script data/get_mail_corpus.sql on the Apache database because this is the sql query https://github.com/collab-uniba/Personality-Recognition-in-SD/blob/master/data/get_mail_corpus.sql SET group_concat_max_len=18446744073709547520; SELECT CONCAT( '[', GROUP_CONCAT( JSON_OBJECT( 'id',messages.message_id, 'mailing_list_url',messages.mailing_list_url, 'type_of_recipient',messages_people.type_of_recipient, 'email_address',messages_people.email_address, 'message_body',messages.message_body, 'is_response_of',messages.is_response_of ) SEPARATOR ',\r'), ']') AS list FROM messages LEFT JOIN messages_people ON messages.message_id = messages_people.message_id AND messages.mailing_list_url = messages_people.mailing_list_url WHERE email_address IN ('[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]'); but the database is not available to make the query. How is it possible to use my own input and take the rinal results? AI: Apparently the program expects a JSON file (probably) like this: [ { "id": ".......", "mailing_list_url": "......", "type_of_recipient": "......", "email_address": ".......", "message_body": "......", "is_response_of": "......" }, { "id": ".......", "mailing_list_url": "......", "type_of_recipient": "......", "email_address": ".......", "message_body": "......", "is_response_of": "......" }, ... { "id": ".......", "mailing_list_url": "......", "type_of_recipient": "......", "email_address": ".......", "message_body": "......", "is_response_of": "......" } ] So if your data can be somehow converted to this that would be ideal, I guess. Otherwise you might have to play with the code. Alternatively you could contact the authors by opening an issue.
H: ROC curve and optimal threshold I am doing a practice problem predicting a binary outcome. I have plotted an ROC curve and found the optimal threshold percentage to call future predicted observations a 1. I see that this threshold always matches the percentage of observations equal to 1 in my original data. Is there any conceptual explanation for this? AI: I see that this threshold always matches the percentage of observations equal to 1 in my original data. Is there any conceptual explanation for this? Yes, although the fact it always matches exactly is probably a coincidence or maybe due to a small sample. The training data contains a proportion $p$ of instances labelled 1. From the ROC plot you can see all the possible values for setting the threshold at a certain level and the resulting performance; for every possible level you can calculate the corresponding proportion $q$ of instances predicted as 1: if $q$ is much lower than $p$, then the system predicts many 0s, so there are many false negative errors and that makes the recall lower. Precision is high in this case. if $q$ is much higher than $p$, then the system predicts many 1s, so there are many false positive errors and that makes the precision lower. Recall is high in this case. I assume that you optimize on the F1-score right? The fact that the F1-score is based on the product of the precision and recall means that both values need to be reasonably high, otherwise the F1-score drops. As seen above, having very different values for $p$ and $q$ will cause either the precision or recall to be low. Therefore the optimal F1-score is achieved when $q$ is close to $p$.
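For what it's worth, here is a small sketch of how such an optimal threshold is typically found from predicted probabilities (assuming y_true and y_prob are defined and F1 is the criterion); you can then check that the resulting proportion of predicted 1s is close to the positive rate in the data:

import numpy as np
from sklearn.metrics import precision_recall_curve

precision, recall, thresholds = precision_recall_curve(y_true, y_prob)
f1 = 2 * precision * recall / (precision + recall + 1e-12)
best = thresholds[np.argmax(f1[:-1])]          # precision/recall arrays have one extra element

q = (y_prob >= best).mean()                    # proportion predicted as 1
p = np.mean(y_true)                            # proportion of actual 1s
print(best, q, p)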
H: Why do we determine the values of λ in regularization as ln λ, such as ln λ=-18 instead of for example λ=0.3? I'm studying Pattern Recognition and Machine Learning by Christopher Bishop. What I realized is, he defines values of λ as ln λ. For example: We see that, for a value of lnλ = −18, the over-fitting has been suppressed and we now obtain a much closer representation of the underlying function sin(2πx). If, however, we use too large a value for λ then we again obtain a poor fit, as shown in Figure 1.7 for lnλ = 0 What is the reason for this? Why he doesn't just use λ? AI: I can just suppose, that it's because then we could consider regularization together with a Log Likelihood function, for instance, and that is why it's more convenient to have such a representation. It's easier to make computations, using log, for example, when we want to minimize some functions (ln a + ln b = ln(a*b), (ln a)' = 1/a etc)
H: Is hyperparameter tuning more affected by the input data, or by the task? I'm working on optimizing the hyperparameters for several ML models (FFN, CNN, LSTM, BiLSTM, CNN-LSTM) at the moment, and running this alongside another experiment examining which word embeddings are best to use on the task of binary text classification. My question is: should I decide on which embeddings to use before I tune the hyperparameters, or can I decide on the best hyperparameters and then experiment with the embeddings? The task remains the same in both cases. In other words, is hyperparameter tuning more affected by the task (which is constant) or by the input data? AI: In other words, is hyperparameter tuning more affected by the task (which is constant) or by the input data? It's correct that the task is constant, but hyper-parameters are usually considered specific to a particular learning algorithm, or to a method in general. In a broad sense the method may include what type of algorithm, its hyper-parameters, which features are used (in your case which embeddings), etc. The performance depends on both the data and the method (in a broad sense), and since hyper-parameters are parts of the method, there's no guarantee that the optimal parameters stay the same when any part of the method is changed even if the data doesn't change. So for optimal results it's better to tune the hyper-parameters for every possible pair of ML model and words embeddings. You can confirm this experimentally: it's very likely that the selected hyper-parameters will be different when you change any part of the method.
H: multiplying two arrays in python 3.7 I am trying to multiply two arrays in Python 3.7 using numpy with the following syntax:

array1 = np.array([{1,2,3,4},{5,6,7,8}])
print (array1)
array2=array1*array1
print(array2)

but this error arises:

TypeError: unsupported operand type(s) for *: 'set' and 'set'

AI: You are simply defining your array so that it is made of Python sets (the curly braces create sets). A set is a different data structure which cannot be multiplied, unlike an array. Just change your code to this:

array1 = np.array([[1,2,3,4],[5,6,7,8]])

The only difference is using square brackets instead of curly ones, so the rows are Python lists and numpy builds a proper 2-D numeric array. The element-wise multiplication then works:

array2=array1*array1
print(array2)

[[ 1  4  9 16]
 [25 36 49 64]]
H: Using random forest for selecting variables returns the entire dataframe I am in the process of dimensionality reduction. I am using Random Forest to find the columns with the highest level of correlation with the target SalePrice column. The problem is that the output is too large. Definitely not what I want from it. It is returning 259 columns. Some of these columns are a result of one-hot-encoding the categorical variables and adding them back into the dataframe, which logically increases the dimension of the dataset. However, I only wanted to return the columns with the highest correlation to the target variable 'SalePrice'. Not the whole damn dataframe. Here is the output: 0 1 2 3 4 5 6 ... 252 253 254 255 256 257 258 0 1 RL 65.0 8450 Pave NaN Reg ... 0 1 0 0 1 0 1 1 2 RL 80.0 9600 Pave NaN Reg ... 0 1 0 0 1 0 1 2 3 RL 68.0 11250 Pave NaN IR1 ... 0 1 0 0 1 0 1 3 4 RL 60.0 9550 Pave NaN IR1 ... 0 0 0 0 1 0 1 4 5 RL 84.0 14260 Pave NaN IR1 ... 0 1 0 0 1 0 1 ... ... .. ... ... ... ... ... ... .. .. .. .. .. .. .. 1455 1456 RL 62.0 7917 Pave NaN Reg ... 0 1 0 0 1 0 1 1456 1457 RL 85.0 13175 Pave NaN Reg ... 0 1 0 0 1 0 1 1457 1458 RL 66.0 9042 Pave NaN Reg ... 0 1 0 0 1 0 1 1458 1459 RL 68.0 9717 Pave NaN Reg ... 0 1 0 0 1 0 1 1459 1460 RL 75.0 9937 Pave NaN Reg ... 0 1 0 0 1 0 1 [1460 rows x 259 columns] Here is my code: import pandas as pd from sklearn.ensemble import RandomForestClassifier from sklearn.feature_selection import SelectFromModel from sklearn.model_selection import train_test_split train = pd.read_csv("https://raw.githubusercontent.com/oo92/Boston-Kaggle/master/train.csv") test = pd.read_csv("https://raw.githubusercontent.com/oo92/Boston-Kaggle/master/test.csv") categorical_columns = ['MSSubClass', 'MSZoning', 'LotShape', 'LandContour', 'LotConfig', 'Neighborhood', 'Condition1', 'Condition2', 'BldgType', 'HouseStyle', 'RoofStyle', 'RoofMatl', 'Exterior1st', 'Exterior2nd', 'Foundation', 'Heating', 'Electrical', 'Functional', 'GarageType', 'PavedDrive', 'Fence', 'MiscFeature', 'SaleType', 'SaleCondition', 'Street', 'CentralAir'] ranked_columns = ['Utilities', 'LandSlope', 'ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2', 'HeatingQC', 'KitchenQual', 'FireplaceQu', 'GarageQual', 'GarageCond', 'PoolQC', 'OverallQual', 'OverallCond'] numerical_columns = ['LotArea', 'LotFrontage', 'YearBuilt', 'YearRemodAdd', 'MasVnrArea', 'BsmtFinSF1', 'BsmtFinSF2', 'BsmtUnfSF','TotalBsmtSF', '1stFlrSF', '2ndFlrSf', 'LowQualFinSF', 'GrLivArea', 'BsmtFullBath', 'BsmtHalfBath', 'FullBath', 'HalfBath', 'Bedroom', 'Kitchen', 'TotRmsAbvGrd', 'Fireplaces', 'GarageYrBlt', 'GarageCars', 'GarageArea', 'WoodDeckSF', 'OpenPorchSF', 'EnclosedPorch', '3SsnPorch', 'ScreenPorch', 'PoolArea', 'MiscVal', 'MoSold', 'YrSold'] def feature_encoding(df, categorical_list): # take one-hot encoding OHE_sdf = pd.get_dummies(df[categorical_list]) # drop the old categorical column from original df df.drop(columns = categorical_list, inplace = True) # attach one-hot encoded columns to original dataframe df = pd.concat([df, OHE_sdf], axis = 1, ignore_index = True) # Integer Encoding df['Utilities'] = df['Utilities'].replace(['AllPub', 'NoSeWa'], [2, 1]) # Utilities df['ExterQual'] = df['ExterQual'].replace(['Ex', 'Gd', 'TA', 'Fa'], [4, 3, 2, 1]) # Exterior Quality df['LandSlope'] = df['LandSlope'].replace(['Gtl', 'Mod', 'Sev'], [3, 2, 1]) # Land Slope df['ExterCond'] = df['ExterCond'].replace(['Ex', 'Gd', 'TA', 'Fa', 'Po'], [4, 3, 2, 1, 0]) # Exterior Condition 
df['HeatingQC'] = df['HeatingQC'].replace(['Ex', 'Gd', 'TA', 'Fa', 'Po'], [4, 3, 2, 1, 0]) # Heating Quality and Condition df['KitchenQual'] = df['KitchenQual'].replace(['Ex', 'Gd', 'TA', 'Fa'], [3, 2, 1, 0]) # Kitchen Quality # Replacing the NA values of each column with XX to avoid pandas from listing them as NaN na_data = ['Alley', 'BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2', 'FireplaceQu', 'GarageType', 'GarageFinish', 'GarageQual', 'GarageCond', 'PoolQC', 'Fence', 'MiscFeature'] for i in na_data: df[i] = df[i].fillna('XX') # Replaced the NaN values of LotFrontage and MasVnrArea with the mean of their column df['LotFrontage'] = df['LotFrontage'].fillna(df['LotFrontage'].mean()) df['MasVnrArea'] = df['MasVnrArea'].fillna(df['MasVnrArea'].mean()) x_train, x_test, y_train, y_test = train_test_split(df, df['SalePrice'], test_size = 0.3, random_state = 42) sel = SelectFromModel(RandomForestClassifier(n_estimators = 100), threshold = 300 * "mean") sel.fit(x_train, y_train) sel.get_support() selected_feat = x_train.columns[sel.get_support()] return selected_feat print(feature_encoding(train, categorical_columns)) The code for Random Forest is right after the train-test-split. Update After changing the code to the above version, I am getting the following error: Traceback (most recent call last): File "C:\Users\security\AppData\Roaming\Python\Python37\site-packages\pandas\core\indexes\base.py", line 2657, in get_loc return self._engine.get_loc(key) File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc File "pandas\_libs\index.pyx", line 129, in pandas._libs.index.IndexEngine.get_loc File "pandas\_libs\index_class_helper.pxi", line 91, in pandas._libs.index.Int64Engine._check_type KeyError: 'Utilities' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:/Users/security/Downloads/AP/Boston-Kaggle/Boston.py", line 66, in <module> print(feature_encoding(train, categorical_columns)) File "C:/Users/security/Downloads/AP/Boston-Kaggle/Boston.py", line 37, in feature_encoding df['Utilities'] = df['Utilities'].replace(['AllPub', 'NoSeWa'], [2, 1]) # Utilities File "C:\Users\security\AppData\Roaming\Python\Python37\site-packages\pandas\core\frame.py", line 2927, in __getitem__ indexer = self.columns.get_loc(key) File "C:\Users\security\AppData\Roaming\Python\Python37\site-packages\pandas\core\indexes\base.py", line 2659, in get_loc return self._engine.get_loc(self._maybe_cast_indexer(key)) File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc File "pandas\_libs\index.pyx", line 129, in pandas._libs.index.IndexEngine.get_loc File "pandas\_libs\index_class_helper.pxi", line 91, in pandas._libs.index.Int64Engine._check_type KeyError: 'Utilities' AI: You don’t need the for loop at all. def feature_encoding(df, categorical_list): # One Hot Encoding the columns gathered in categorical_columns # take one-hot encoding OHE_sdf = pd.get_dummies(df[categorical_list]) # drop the old categorical column from original df df.drop(columns=categorical_list, inplace = True) # attach one-hot encoded columns to original dataframe df = pd.concat([df, OHE_sdf], axis = 1, ignore_index = True)
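From there, a hedged sketch of the rest of the flow: have the function return the encoded DataFrame instead of selected_feat, then fit the selector outside of it. Note two assumptions here: SalePrice should be dropped from the feature matrix, SelectFromModel's threshold expects something like 'mean' or a number rather than 300 * "mean", and since SalePrice is continuous a regressor is likely the better fit than a classifier:

df_encoded = feature_encoding(train, categorical_columns)   # function modified to return df

X = df_encoded.drop(columns=['SalePrice'])
y = df_encoded['SalePrice']
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

from sklearn.ensemble import RandomForestRegressor
sel = SelectFromModel(RandomForestRegressor(n_estimators=100), threshold='mean')
sel.fit(x_train, y_train)

selected_feat = x_train.columns[sel.get_support()]
print(selected_feat)                                        # only the columns above mean importance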
H: Decision tree classifier prediction changes from one run of the model to the next I'm running a very basic gender ['male', 'female'] classifier using the sklearn DecisionTreeClassifier based on [height, weight, shoe size] in a Jupyter notebook. The prediction changes from male to female for the same input as I keep running the model. I don't understand how that's possible. Shouldn't the build of the model be completely deterministic and therefore output the same prediction each time for a specific input? Here's my code: X = [[190, 90, 43], [165, 65, 38], [170, 70, 39], [160, 56, 36], [190, 88, 45], [164, 63, 37]] Y = ['male', 'female', 'male', 'female', 'male', 'female'] clf = tree.DecisionTreeClassifier() clf.fit(X, Y) print(clf.predict([[200, 70, 37]])) AI: While this is a duplicate and the suggested link answers your question, for learning purposes I would like to suggest that you plot your DecisionTree every time you have a new run to see by yourself what happens behind the scenes: from sklearn.tree import DecisionTreeClassifier from sklearn.externals.six import StringIO from IPython.display import Image from sklearn.tree import export_graphviz import pydotplus dot_data = StringIO() X = [[190, 90, 43], [165, 65, 38], [170, 70, 39], [160, 56, 36], [190, 88, 45], [164, 63, 37]] Y = ['male', 'female', 'male', 'female', 'male', 'female'] clf = DecisionTreeClassifier() clf.fit(X, Y) export_graphviz(clf, out_file=dot_data, filled=True, rounded=True, special_characters=True) graph = pydotplus.graph_from_dot_data(dot_data.getvalue()) Image(graph.create_png()) Once you may get: where splitting is done on X0 feature, where as next run you may get: where splitting is done on X1 feature. If you wanna reproduce your result at every run, you can use random_state = a_random_number and then you should expect to get the same result every time as the same tree has been constructed every time!
H: Reinforcement Learning Vs Transfer Learning? I recently saw a video lecture from Jeremy Howard of fast.ai in which he states that transfer learning is better than reinforcement learning. But I was unable to understand the reasoning behind it. Can someone explain to me or point to any evidence stating which is better and why? AI: I didn't watch this lecture, but, the way I see it, reinforcement learning and transfer learning are entirely different things. Transfer learning is about fine-tuning a model that was trained on one dataset and task so that it works on different data and a different task. For example, you take the weights of a model pretrained on ImageNet and apply it to your own dataset, say a small collection of images of different bird species (which on its own might not be sufficient to train, for example, a U-Net from scratch). Reinforcement learning is about how an agent should respond to the conditions of its environment in order to receive a high reward. An illustrative example is a drone making a delivery while the environment imposes a range of restrictions. There are two links which might be useful: https://machinelearningmastery.com/transfer-learning-for-deep-learning/ https://skymind.com/wiki/deep-reinforcement-learning I guess I can't answer which approach is better, because they aim to solve different challenges.
H: Appropriate model metric for a truncated response variable? Here's a straightforward question I can't seem to find a good answer to. Let's say you're using some variables to predict age. I'm assuming a regression model is the right approach. In this case, what would be a suitable metric to evaluate the model performance? My sense is that MAE and RMSE aren't appropriate because they assume your predictions can be equally wrong in both directions, but that's not true: predictions can be larger in the positive direction than in the negative direction. For example, if the true age is 5, the prediction can be no less than 0, but could be infinitely high. Is RMSE/MAE an appropriate metric to evaluate the size of the error, and if not, what would be? Or if there isn't an appropriate "size" metric, should I use R^2 or some other "fit" metric? AI: Either RMSE or MAE will give you a sense of accuracy in prediction. The exact value of any error metric is up to your specific use case and tolerance. In other words, if you are predicting ages of 5-10 year olds and see a RMSE of 5 then you might think that you have a significant problem. But if you are predicting ages expecting them to be 40-80 year olds, then a RMSE of 5 wouldn't be a big concern right?
H: To One-Hot-Encode or not to One-Hot-Encode? I have been struggling to find proof for this but I couldn't. Every time I prepare a dataset I face the same issue when a column is a categorical code such as CountryCode or TaskType, as in this dataset:

TaskType  CountryCode  Target
1         61           Red
1         962          Yellow
2         1            Yellow
6         61           Yellow
4         81           Red
2         1            Blue
1         61           Red
2         962          Green
4         61           Blue

If I feed this dataset to different models such as linear regression, SVM, KNN, etc., will these models consider CountryCode and TaskType as numeric fields and treat them as continuous data? Shall I one-hot encode these features before using them? What is the best way to handle this scenario?
AI: An intuitive explanation of why we should encode categorical features is that otherwise the model ends up with a completely different (and meaningless) sense of "closeness" between values of the same feature. The model will treat these features as continuous, so if you have two points in your feature space (p1 with CountryCode 61 and p2 with CountryCode 962) and then add a third point p3 with CountryCode 81, the model will treat p3 as much "closer" to p1 than to p2, even though the codes are arbitrary labels and distances between 61, 962 and 81 carry no meaning.
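As a minimal sketch of the usual fix, assuming pandas is available and using the column names from the example above:

import pandas as pd

df = pd.DataFrame({
    'TaskType':    [1, 1, 2, 6, 4, 2, 1, 2, 4],
    'CountryCode': [61, 962, 1, 61, 81, 1, 61, 962, 61],
    'Target':      ['Red', 'Yellow', 'Yellow', 'Yellow', 'Red', 'Blue', 'Red', 'Green', 'Blue']
})

# treat the codes as categories, not numbers, and expand them into 0/1 columns
X = pd.get_dummies(df[['TaskType', 'CountryCode']].astype('category'))
y = df['Target']
print(X.columns.tolist())   # e.g. TaskType_1, TaskType_2, ..., CountryCode_962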
H: Question of using gradient descent instead of calculus. I checked previous questions there are still points to clarify First of all I checked http://stats.stackexchange.com/questions/23128/solving-for-regression-parameters-in-closed-form-vs-gradient-descent, http://stackoverflow.com/questions/26804656/why-do-we-use-gradient-descent-in-linear-regression, https://stats.stackexchange.com/questions/212619/why-is-gradient-descent-required but couldn't find my answer. Gradient descent is: $w_{i}:=w_{i}-\alpha \frac{\delta }{\delta w_{i}}j(w)$ where w is a vector. At his book "Pattern Recognition and Machine Learning" Bishop says: "Because the error function is a quadratic function of the coefficients w, its derivatives with respect to the coefficients will be linear in the elements of w, and so the minimization of the error function has a unique solution..." So if we take derivative of $j(w)$ with respect to $w_{i}$ and equal to zero, in the end it will give us the smallest $w$. Which is actually the first exercise. In gradient descent we take the derivative too, so the problem can't be because of derivative. Such as an equation that its derivative can't be found. If we can find the answer with one iteration(making equation equal to zero for every feature) why do we iterate over and over again instead, which is the case of gradient descent. AI: If I understand correctly, your question is Why do we solve gradient descent problems iteratively rather than setting the derivative to 0 and solving for the weights which minimize the cost function? We can't do this (in all but extremely simple problems such as linear regression) because there is no easy closed-form solution when you set the derivative equal to 0. Let's take a look at a simple example: let's say you have a 1-layer neural net with no bias and a sigmoid activation function. Your network output is $\hat{y} = \sigma(WX)$. If you use L2 loss, your cost function would be $L = (\sigma(WX) - y)^2$. (For simplicity we'll assume that we're only using a batch size of 1). Let's take the derivative with respect to our weights $W$ and use the chain rule : $\begin{eqnarray*} \frac{dL}{dW} &=& 2(\sigma(WX) - y)\frac{d}{dW}\sigma(WX)\\ &=& 2(\sigma(WX) - y)\sigma'(WX)\frac{d}{dW}(WX)\\ &=& 2(\sigma(WX) - y)\sigma(WX)(1-\sigma(WX))\frac{d}{dW}(WX)\\ &=& 2(\sigma(WX) - y)\sigma(WX)(1-\sigma(WX))X\\ \end{eqnarray*}$ At this point, we get a bit stuck. If we set this derivative equal to 0, it still won't be easy to solve for $W$ since $W$ appears 3 times in this equation and can't easily be isolated by itself on one side of the equation. Since we cannot solve for the cost-minimizing weights directly, instead we use gradient descent. We take the derivative of the cost function to tell us which direction we should change each weight in order to most decrease the cost. The actual size of the update is determined by the learning rate $\alpha$.
H: Unable to exceute mean of list of lists I have a list of lists of the following type a=[[0. , 0.03846154, 0.34615385, 0.34615385, 0.42307692, 0.42307692, 0.53846154, 0.53846154, 0.61538462, 0.61538462, 0.65384615, 0.65384615, 0.73076923, 0.73076923, 0.76923077, 0.76923077, 0.80769231, 0.80769231, 0.88461538, 0.88461538, 0.92307692, 0.92307692, 0.96153846, 0.96153846, 1. , 1. ], [0. , 0.03846154, 0.61538462, 0.61538462, 0.69230769, 0.69230769, 0.73076923, 0.73076923, 0.76923077, 0.76923077, 0.80769231, 0.80769231, 0.84615385, 0.84615385, 0.88461538, 0.88461538, 0.92307692, 0.92307692, 0.96153846, 0.96153846, 1. , 1. ], [0. , 0.03846154, 0.34615385, 0.34615385, 0.42307692, 0.42307692, 0.61538462, 0.61538462, 0.69230769, 0.69230769, 0.73076923, 0.73076923, 0.76923077, 0.76923077, 0.80769231, 0.80769231, 0.84615385, 0.84615385, 0.88461538, 0.88461538, 0.92307692, 0.92307692, 0.96153846, 0.96153846, 1. , 1. ], [0. , 0.03846154, 0.42307692, 0.42307692, 0.57692308, 0.57692308, 0.76923077, 0.76923077, 0.84615385, 0.84615385, 0.88461538, 0.88461538, 0.92307692, 0.92307692, 0.96153846, 0.96153846, 1. , 1. ]] I am trying to compute the mean of this list of list. I am using the following code. mean_a=[float(sum(col))/len(col) for col in zip(*a)] But I am getting the following output. 0.03846154, 0.43269231, 0.43269231, 0.5288461525, 0.5288461525, 0.66346154, 0.66346154, 0.7307692325, 0.7307692325, 0.7692307675, 0.7692307675, 0.8173076925, 0.8173076925, 0.8557692299999999, 0.8557692299999999, 0.89423077, 0.89423077] As you can see the output is wrong as the all the last elements of each list is 1 and so the mean should be 1. But I am getting a different mean for the last element. AI: I think the number of elements in each lists are different. (26 , 22 , 26 , 18) Zip() function uniformly maps all elements of lists of same sizes. (And that's why the 1's are missing and would round off to 18 elements) Try maintaining the same number of elements in each List and try to find the mean again.
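If the intent really is a positional mean over lists of unequal length (so that the trailing 1.0 values still average to 1.0), a small sketch using zip_longest and a NaN-aware mean does the job; whether aligning by position is meaningful is another question, since these look like ROC-curve coordinates, which may be better interpolated onto a common grid first:

import numpy as np
from itertools import zip_longest

mean_a = [np.nanmean(col) for col in zip_longest(*a, fillvalue=np.nan)]
print(mean_a[-1])   # 1.0 -- shorter lists simply don't contribute to the later positions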
H: pandas - under a column, count the total number of a specific value, instead of using value_counts() value_counts() function outputs the number of all unique values in a column, for example

apple     3
orange    2
banana    1

I want to search the total number of (value = 'apple') only, which function can replace value_counts()?
AI: You have plenty of ways to do it. You won't see a big difference in performance. My suggestion is to use whatever feels more convenient for you or your team.

import pandas as pd
import numpy as np

# Let's create a dataframe with 10 million integers from 0 to 100
df = pd.DataFrame(np.random.randint(0,100,size=(10000000, 1)), columns=list('A'))

# And now count the value 5 with 4 different ways
%timeit df[df.A == 5].shape[0]
10 loops, best of 3: 25.4 ms per loop

%timeit len(df[df.A == 5])
10 loops, best of 3: 25.4 ms per loop

%timeit len(df[df.A == 5].index)
10 loops, best of 3: 25.6 ms per loop

%timeit df['A'].value_counts()[5]
10 loops, best of 3: 149 ms per loop

As you can see, only the last one takes more time to run.
EDIT: Addition to your comment, you could try this (keep the group as a DataFrame, then filter on the suffix column):

df = data.groupby('a_1').get_group(a_2)
len(df[df.suffix == 'a_3'])
H: How to build a hybrid graph with CNN and RNN Architecture? For an instance, we have M matrix of size n*k, I would like to treat each matrix independently and perform the convolution operation at each one in order to introduce to an LSTM layer the outputs of the convolution layer. can I treat each matrix independently or do I have to use the channel technique and how I can provide each LSTM cell? you find the architecture link below, thank you in advance!! AI: I write an answer here since I don't have enough reputation to write a comment. What about treating your M matrices as sequence, feeding them to a TimeDistributed(Conv2D) layer(s) and then to an LSTM layer? In this way the Conv layer should extract feature maps that are treated by the LSTM in a sequence-like fashion, updating/accumulating the states up to the M matrix. Afterwards you should put a Dense layer with sigmoid/softmax activation and get class probabilities.
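A hedged Keras sketch of that idea, assuming the input is a sequence of M matrices of size n x k treated as single-channel "images" (the layer sizes and M, n, k values are arbitrary placeholders):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import TimeDistributed, Conv2D, MaxPooling2D, Flatten, LSTM, Dense

M, n, k = 10, 32, 32    # sequence length and matrix size -- adjust to your data

model = Sequential([
    TimeDistributed(Conv2D(16, (3, 3), activation='relu'), input_shape=(M, n, k, 1)),
    TimeDistributed(MaxPooling2D((2, 2))),
    TimeDistributed(Flatten()),          # one feature vector per matrix
    LSTM(64),                            # consumes the M feature vectors as a sequence
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()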
H: Can I make a model that takes an input of variable shape? As far as I can tell, all the data a model takes has a fixed size or fixed shape. My problem is that the data I give to the model has no fixed shape, and I cannot cut parts of it because all of it is important. So can I build a model that takes variable-shaped data as input and also predicts from variable-shaped data? For example, a problem whose size is described by some N, like the famous travelling salesman problem, where N refers to the number of cities.
AI: The first thing which comes to mind for variable-size input is a recurrent neural network. Such a model addresses dynamic behavior. Maybe it's possible to represent your data as a sequence? I don't know the nature and specifics of your data, but if each instance in the dataset consists of a different number of some "similar events", it might be reasonable to think in this direction. The other approach which might help is to compute statistics over the data. That gives you a different feature space, but in some cases it can be enough to achieve high prediction accuracy. For example, if there are k features of the same nature (say, temperature on different days), then the average, standard deviation, min, max, etc. could be used as features. Such statistics don't depend on the length of each instance in the dataset, so the number k can differ from instance to instance.
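A minimal Keras sketch of the sequence route, assuming each instance is a list of fixed-width "event" vectors whose count varies per instance (the layer sizes are arbitrary); padding plus a Masking layer lets the recurrent network ignore the filler timesteps:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Masking, LSTM, Dense
from tensorflow.keras.preprocessing.sequence import pad_sequences

# three instances with 2, 4 and 3 "events" of 5 features each
sequences = [np.random.rand(2, 5), np.random.rand(4, 5), np.random.rand(3, 5)]
X = pad_sequences(sequences, padding='post', dtype='float32')   # shape (3, 4, 5)

model = Sequential([
    Masking(mask_value=0.0, input_shape=(None, 5)),   # ignore the padded timesteps
    LSTM(32),
    Dense(1)
])
model.compile(optimizer='adam', loss='mse')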
H: Conditional Function in R
I have an R for loop, for (i in 1:10), that reads in data and performs some cleaning. Is it possible to write a flexible statement to execute certain parts of the code conditionally, for example have different steps for i = 2 and 8?
AI: You can add conditions inside the loop if you want:
for (i in 1:10){
  if (i<=2){print("1-2")}
  if (i>2){print(" >2")}
}
H: Clustering analysis for observations with lists as data So I have several samples analyzed for their chemical composition. After data analysis, for each sample, I have a list of compounds found and their corresponding relative abundance. Some compounds are unique but most are actually found in most samples. I want to do clustering analysis based on these list of compounds. How do I go about this? Specifically how to vectorize my dataset since each observation is actually an array with both numerical (abundance) and categorical (compound label) variables. AI: K-means would be a fine clustering method for you to start with, though you will have to provide the number of clusters you wish for it to return (not sure if you know that/can figure it out). Otherwise check out DBSCAN. As for the mix of numerical vs categorical data types, all you will need to do is one-hot encoding on your categorical variables. What that does is that it will take all of the known possibilities for a category and it creates new features out of them. A 1 is assigned if the sample is part of that category, and a 0 if it is not. In this way you can use numerical and categorical at the same time, just make sure to normalize your numerical data!
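As a concrete illustration of the vectorisation step, you can pivot the (sample, compound, abundance) records into one row per sample with one column per compound (0 where the compound is absent), scale, and then cluster. The column names, toy values and choice of two clusters below are made up for the example:

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# assumed long format: one row per (sample, compound) pair
records = pd.DataFrame({
    'sample':    ['s1', 's1', 's2', 's2', 's3'],
    'compound':  ['glucose', 'citrate', 'glucose', 'lactate', 'citrate'],
    'abundance': [0.8, 0.2, 0.5, 0.5, 1.0],
})

# one row per sample, one column per compound, 0 when the compound was not detected
X = records.pivot_table(index='sample', columns='compound',
                        values='abundance', fill_value=0)

X_scaled = StandardScaler().fit_transform(X)     # normalise the numeric columns
labels = KMeans(n_clusters=2, random_state=0).fit_predict(X_scaled)
print(dict(zip(X.index, labels)))

The resulting wide matrix plays the role of the one-hot encoding described above, with the abundance values filling in the non-zero entries.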
H: Which are the appropriate parameters for LDA modeling?
I am trying to test, in R, which parameters are appropriate for LDA. Here is the way I use LDA:

require(quanteda)
require(quanteda.corpora)
require(lubridate)
require(topicmodels)

dtext <- data.frame(id = c(1,2,3),
                    text = c("This dataset contains movie reviews along with their associated binary sentiment polarity labels. It is intended to serve as a benchmark for sentiment classification. This document outlines how the dataset was gathered, and how to use the files provided.",
                             "The core dataset contains 50,000 reviews split evenly into 25k train and 25k test sets. The overall distribution of labels is balanced (25k pos and 25k neg). We also include an additional 50,000 unlabeled documents for unsupervised learning.",
                             "There are two top-level directories [train/, test/] corresponding to the training and test sets. Each contains [pos/, neg/] directories for the reviews with binary labels positive and negative. Within these directories, reviews are stored in text files named following the convention [[id]_[rating].txt] where [id] is a unique id and [rating] is the star rating for that review on a 1-10 scale. For example, the file [test/pos/200_8.txt] is the text for a positive-labeled test set example with unique id 200 and star rating 8/10 from IMDb. The [train/unsup/] directory has 0 for all ratings because the ratings are omitted for this portion of the dataset."),
                    stringsAsFactors = F)

corp_news <- corpus(dtext)

dfmat_news <- dfm(corp_news, remove_punct = TRUE, remove = stopwords('en')) %>%
  dfm_remove(c('*-time', '*-timeUpdated', 'GMT', 'BST')) %>%
  dfm_trim(min_termfreq = 0.95, termfreq_type = "quantile",
           max_docfreq = 0.1, docfreq_type = "prop")

dfmat_news <- dfmat_news[ntoken(dfmat_news) > 0,]

dtm <- convert(dfmat_news, to = "topicmodels")
lda <- LDA(dtm, k = 10)

Which parameters do I have to tune to make the performance of LDA better? For example, there are methods for selecting the number of topics, and estimation algorithms such as Gibbs sampling, recover and recoverL2. What other tuning does it need?
AI: The main and by far most impactful parameter you have to search over is k, the number of topics you assume. Conceptually, this is similar to k-means clustering, in which you have to pre-specify the number of clusters you are searching for and then judge the resulting clusters by their properties. The other parameters, including the method used to solve the underlying estimation problem, may produce slightly different results, but none will be nearly as impactful as k.
On a more general note, you can almost always find the answer to this question for an implemented algorithm from the API documentation of the package in question - the topicmodels documentation containing the LDA interface can be found here. Often, if you don't understand the properties of the algorithm well, the documentation will also point you to resources which thoroughly describe the algorithm in question. It is this understanding of the algorithm and what it is doing that will enable you to answer questions like "What parameters should I tune, over what values, and what is likely to change about the outputs when I do so?"
H: Feature addition/subtraction and SVM model accuracy
I am working on a text classification problem where I would like to improve the accuracy of my model. Presently, I am using SVM with LinearSVC and OneVsRestClassifier. The model should correctly predict all of the subcategories for a parent category: for an entered account name, I should get the right description for the test data file that I will use to test the model later. If I add another feature alongside my Account_Name field, +1 or -1 for debit and credit, will there be any improvement in accuracy? Or is there anything else that will help improve the accuracy of the model? It is currently at 74%, which is low.
AI: It is hard to say without trying it, but in general you should add any feature that could plausibly carry signal. Once you have many features, you can do feature selection / regularization, guided by your success metric, both to increase accuracy and to speed up the convergence of gradient descent. After adding more features, you can also try other classification algorithms such as RandomForest, GBM, XGBoost, etc. - tree-based models have proven very accurate in recent years. Hope this is helpful!
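As an illustration of how an extra +1/-1 debit-credit column could be combined with the text feature, here is a hedged scikit-learn sketch; the column names, toy data and exact pipeline are assumptions for the example, not your current setup:

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier

# assumed toy data: account text plus a +1 (debit) / -1 (credit) flag
df = pd.DataFrame({
    'Account_Name': ['office rent paid', 'client invoice 42', 'travel expenses', 'refund received'],
    'debit_credit': [1, -1, 1, -1],
    'label':        ['expense', 'income', 'expense', 'income'],
})

features = ColumnTransformer([
    ('text', TfidfVectorizer(), 'Account_Name'),     # text column -> tf-idf
    ('num', 'passthrough', ['debit_credit']),        # numeric flag used as-is
])

clf = Pipeline([
    ('features', features),
    ('svm', OneVsRestClassifier(LinearSVC())),
])
clf.fit(df[['Account_Name', 'debit_credit']], df['label'])
print(clf.predict(df[['Account_Name', 'debit_credit']]))

Whether the extra flag actually improves the 74% can only be answered by comparing cross-validated scores with and without it.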
H: How to normalize a position in a 3d game?
I'm pretty new to anything and everything related to this kind of stuff. I was wondering how I would normalize the coordinates of an entity in my game for a neural network. Would it be the same as normalizing anything else? I don't assume it'd be the same, just because of how big these numbers are - you'd have to have a BUNCH of data (I think?).
Edit: I'd also like to know whether this would be the proper way of giving a neural network an entity's position in a 3d game. Here's an example of the coordinates: 235.56, -2332.93, -8.57
AI: Assuming each coordinate axis has a fixed max and min value, and $|min_{x}| = max_{x}$, $|min_{y}| = max_{y}$, $|min_{z}| = max_{z}$, the standard normalisation procedure would be: $$ (x,y,z) \mapsto (\frac{x}{max_{x}},\frac{y}{max_{y}},\frac{z}{max_{z}})$$ I.e. mapping the values to within the range $(-1,-1,-1)$ to $(1,1,1)$. E.g. if your coordinate system was fixed between $-100$ and $100$ for each axis, the coordinate $(10,-52,-8)$ would normalise to: $$ (10,-52,-8) \mapsto (\frac{10}{100},\frac{-52}{100},\frac{-8}{100}) = (0.1, -0.52, -0.08)$$
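In code, assuming the world is bounded by known per-axis limits (the 3000 used below is just a placeholder for whatever your map size actually is):

def normalize_position(x, y, z, max_x=3000.0, max_y=3000.0, max_z=3000.0):
    """Map a world position into the range [-1, 1] on each axis."""
    return (x / max_x, y / max_y, z / max_z)

print(normalize_position(235.56, -2332.93, -8.57))
# -> approximately (0.0785, -0.7776, -0.0029)

If an axis is not symmetric around zero, the usual alternative is (x - min) / (max - min), which maps each axis to [0, 1] instead.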
H: Is there any way to define custom entities in Spacy
1) I have just started working on NLP. The basic idea is to extract meaningful information from text. For this I am using "spaCy". As far as I have studied, spaCy has the following entities: ORG, PERSON, DATE, MONEY, CARDINAL, etc. But I want to add custom entities, for example: Nokia-3310 should be labeled as Mobile and XBOX should be labeled as Games.
2) Can I find some already trained models in spaCy to work with?
AI: For pretrained models, spaCy has a few in different languages. You can find them in their official documentation https://spacy.io/models The available models are: English, German, French, Spanish, Portuguese, Italian, Dutch, Greek and Multi-language.
If you want support for extra labels in NER, you could train a model on your own dataset. Again, this is possible in spaCy, and adapted from their official documentation https://spacy.io/usage/training#ner, here is an example (imports and an n_iter setting added so it runs as-is):

import random
import spacy
from spacy.util import minibatch, compounding

n_iter = 30  # number of training iterations

LABEL = "ANIMAL"
TRAIN_DATA = [
    (
        "Horses are too tall and they pretend to care about your feelings",
        {"entities": [(0, 6, LABEL)]},
    ),
    ("Do they bite?", {"entities": []}),
    (
        "horses are too tall and they pretend to care about your feelings",
        {"entities": [(0, 6, LABEL)]},
    ),
    ("horses pretend to care about your feelings", {"entities": [(0, 6, LABEL)]}),
    (
        "they pretend to care about your feelings, those horses",
        {"entities": [(48, 54, LABEL)]},
    ),
    ("horses?", {"entities": [(0, 6, LABEL)]}),
]

nlp = spacy.blank("en")  # create blank Language class
ner = nlp.create_pipe("ner")
nlp.add_pipe(ner)
ner.add_label(LABEL)  # add new entity label to entity recognizer

optimizer = nlp.begin_training()
move_names = list(ner.move_names)
# get names of other pipes to disable them during training
other_pipes = [pipe for pipe in nlp.pipe_names if pipe != "ner"]
with nlp.disable_pipes(*other_pipes):  # only train NER
    sizes = compounding(1.0, 4.0, 1.001)
    # batch up the examples using spaCy's minibatch
    for itn in range(n_iter):
        random.shuffle(TRAIN_DATA)
        batches = minibatch(TRAIN_DATA, size=sizes)
        losses = {}
        for batch in batches:
            texts, annotations = zip(*batch)
            nlp.update(texts, annotations, sgd=optimizer, drop=0.35, losses=losses)
        print("Losses", losses)

If you want to use an existing model and also add a new custom label, you can read the linked article in their documentation where they describe the process in detail. Actually, it is quite similar to the code above.
H: Should I perform cross validation only on the training set? I am working with a dataset that I downloaded from Kaggle. The data set is already divided into two CSVs for Train and Test. I built a model using the training set because I imported the train CSV into a Jupyter Notebook. I predicted using the Train CSV itself. I would like to perform cross validation. Should I perform cross validation on the train CSV and split it into two parts again Train and Test? Or, should I import a new CSV file Test and combine both CSVs into ONE? AI: You shouldn't touch test data until you finish. To do cross validation you have to split the training dataset into train and validation sets or you can do k-fold cross validation (or any other method). Here is some information: https://en.wikipedia.org/wiki/Cross-validation_(statistics) But, never use test data to train or adjust your model, because if you do, your model will be trained for your test data and then your results won't be valid.
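In scikit-learn terms, a minimal sketch of k-fold cross validation on the training CSV only; the file name, the 'target' column and the choice of classifier are placeholders, and the features are assumed to be numeric:

import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

train = pd.read_csv('train.csv')                  # the Kaggle training file
X = train.drop(columns=['target'])                # 'target' is a placeholder label name
y = train['target']

model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=5)       # 5-fold CV on the training data only
print(scores.mean(), scores.std())

# Only after the model and its hyperparameters are fixed do you touch test.csv,
# once, to report the final score.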
H: Decision Trees Should We Discard Low Importance Features?
I just started to work with feature selection. Let's say I have a decision tree model. I get its feature importances from tree.feature_importances_. In my model, out of around 30 features, 20 of them have an importance value of zero. Does that mean I should discard those low-importance features from my model? As I understood it the answer is no, but I don't know the reason behind it. Can anyone explain?
AI: As for many questions, the answer is "it depends": features which have a low individual importance may still add predictive power to your model, because the model benefits from combining their information with the information of other features. However, they may also introduce noise into the model and cause overfitting, thus decreasing its performance.
The best way to answer your question is to experiment:
- order the features by decreasing importance
- loop from, say, 5 features to 30, each time selecting the top N features by importance and training/testing a model based on this subset of features
- plot the performance
You're likely to observe that the performance increases quite a lot at the start for each "important" feature added, then slows down as feature importance decreases, and probably doesn't increase at all at some point, possibly even decreasing a bit.
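A hedged sketch of that experiment with scikit-learn; the synthetic dataset stands in for your 30-feature data and the metric is plain accuracy:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=30, n_informative=8, random_state=0)

# Rank features once, using the importances of a tree trained on everything
full_tree = DecisionTreeClassifier(random_state=0).fit(X, y)
order = np.argsort(full_tree.feature_importances_)[::-1]   # most important first

for n in range(5, 31, 5):
    top_n = order[:n]
    score = cross_val_score(DecisionTreeClassifier(random_state=0),
                            X[:, top_n], y, cv=5).mean()
    print(f"top {n:2d} features -> CV accuracy {score:.3f}")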
H: Docker for data science
I recently started to read articles about Docker. To me, in data science, Docker is useful because:
1) You have a totally different environment, which protects you against library and dependency problems.
2) If your application modifies, for example, the database of your company, you want to first make sure that the code works fine and won't have bad consequences on the database. Thus, you use Docker to test your code first.
My questions: Am I right if I say that only the second reason is about sandboxing? The first reason has nothing to do with sandboxing, right? Are there other reasons why Docker is useful in data science? I don't find a lot of interesting research papers about Docker for data science. Do you know some famous ones?
AI: Instead of focusing on the technology term, I will give a generalized answer and use the term "containers".
Containers run only what they are supposed to run, and anything unknown is treated as untrusted, so whatever lives in a container exists only for the life of that container. Modifying code against a database to test it is the same approach with VMs (sandboxing) as with containers (Docker); the biggest differences are in resource consumption and in the time needed to provision a VM versus spinning up a container/pod in a couple of seconds.
More details:
Part 1 - From an application point of view
Containers are very important in the data science world for the following reasons:
- Minimizing conflicting library versions across development projects (each project runs in its own container/pod).
- Consistency across environments and developers against specific design criteria.
- Rapid prototyping with an edge node on demand.
- Avoiding the need to reinstall everything on new hardware after a refresh or failure.
- Portability of your compute environment: develop on your machine, and if you need more resources such as CPU, RAM or GPU, you can port your environment to the cloud, to powerful clusters or machines, or to a group of containers orchestrated with Kubernetes or a Dockerized cluster (Swarm).
- Maximizing collaboration clarity across groups: consistent libraries, result sets, etc.
- Extending on-premises autonomy/agility to cloud scale and reach.
- Project workspaces powered by Docker containers give you control over the environment configuration; you can install new packages or run command-line scripts directly from the built-in terminal.
- Containers use very few resources compared with VMs or bare metal: you can run multiple containers on a single VM or bare-metal host, which greatly reduces OS licensing costs, and projects/apps scale based on the resources they actually need rather than claiming the full resources of the host. Without containers you dedicate a whole machine to a single app/model, which is a waste of resources; in a cluster you can spin up multiple pods/containers (apps/models) across the cluster at essentially zero marginal cost per pod, whereas with VMs you pay for the OS of each host and dedicate all of its resources to that one task.
- It is easy to build a new image with new packages installed and share it with everyone, instead of dealing with virtual environments inside VMs, package conflicts, and so on. (I would like to see virtual environments move from the traditional approach to containers; it would help data scientists skip configuring each project individually, with all the conflicts, activate/deactivate steps and hunting for requirements when putting things into production, whereas with containers all you need is a config file. This is my point of view from what I see daily in the data science world.)
- An infrastructure-independent platform lets data scientists run analytics applications on the infrastructure best optimized for them, and empowers them to build models with the tools and packages best suited for the research, without worrying about application and environment conflicts.
- Most important, research reproducibility: containers scale easily and are identical across all environments regardless of the host OS, ensuring repeatability of analysis and research with immutable containers and eliminating issues caused by differing environments.
- Secure collaboration: security integrated into the platform and lifecycle allows collaboration without risk of tampering or loss of data integrity (more details in Parts 2 and 3).
Here Cloudera (with Cloudera Data Science Workbench) and Anaconda (with Anaconda Enterprise) come into play: both use containers, so they can bring quick results for the business and deploy models easily from dev to QA and ultimately to production. Why does this last point matter? Because it gives you portability from dev to prod without any changes to the environments and without the DevOps cost of operationalizing.
Part 2 - From a security point of view
- One notable security advantage is that host OS security is separate from the container: they are patched separately, so patching the host OS won't impact your containerized application. (How many times have we patched the OS and broken an app or service on it - paths, ports, services, etc.?)
- For example, if you have a malicious library in your app/code, the malware resides only in that container for the lifetime of the container; containers run as endpoints, and I have not seen a case where it spread the malware into the network.
- Monitoring a container node is robust: you can add a service to monitor the behavior of an application or node, and new nodes/containers are replicated only from the config file.
- Comparing VMs with containers: with containers you take care of the OS separately from the containers (container security is a separate task). Docker security provides detailed information on the major security points, and Docker standards and compliances provides a full list of security compliances and standards available for containers. "Docker container with a well crafted seccomp profile (which blocks unexpected system calls) provides roughly equivalent security to a hypervisor."
- Folder sharing: with containers you can share a folder by setting up a shared mount, and since Docker/the kernel enforces the file permissions used by containers, the guest system can't bypass those restrictions.
This matters a lot for data science applications and users, because corporate work usually involves sensitive/restricted data with multiple layers of security and restrictions in place. One traditional way to solve this is VDI or VMs with AD groups to restrict/share data access, which becomes problematic and costly to maintain and to allocate resources for. When it comes to debugging applications or the OS, with all the services and logs generated, I think containers are winning and are now even evolving towards an NLP approach: an experimental natural language processing (NLP) utility is also included, for proofreading security narratives.
Here I will quote Jianing Guo from Google:
"A properly secured and updated VM provides process level isolation that applies to both regular applications as well as container workloads, and customers can use Linux security modules to further restrict a container's attack surface. For example, Kubernetes, an open source production-grade container orchestration system, supports native integration with AppArmor, Seccomp and SELinux to impose restrictions on syscalls that are exposed to containers. Kubernetes also provides additional tooling to further support container isolation. PodSecurityPolicy allows customers to impose restriction on what a workload can do or access at the Node level. For particularly sensitive workloads that require VM level isolation, customers can use taint and toleration to help ensure only workloads that trust each other are scheduled on the same VM."
On top of that, a Kubernetes cluster has the following additional security features: TLS for all API traffic, API authentication, and API authorization. Containers also allow easy remediation of security flaws with minimal impact on the application/environment compared with VMs or dedicated bare metal.
Part 3 - CIS benchmarks for hardening containers: Docker & Kubernetes
1. The first step in making containers more secure is to prepare the host machine that is planned to execute containerized workloads. Securing the container host and following infrastructure security best practices builds a solid and secure foundation for executing containerized workloads. Stay up to date on Docker updates and on vulnerabilities in the software.
2. Have a specific dedicated directory for Docker-related files and allocate just enough space for the containers to run (the default path is /var/lib/docker, but change it to another mount point and monitor it at the OS level using auditd or aide for any changes, size growth or unwanted workloads; keep the logs and configure them according to your needs). Note: the best part about this step is that with VMs you need to monitor many more locations for your data science project (libraries in different locations, multiple package versions, even the path to Python, cron jobs or systemd units to keep processes running, log collection, etc.), whereas with containers there is a single place where all these jobs run, so you monitor one path instead of many.
3. Verify the users in the docker group regularly so you prevent unauthorized elevated access to the system (Docker allows sharing a directory between the Docker host and a guest container without limiting the access rights of the container), so remove any untrusted users from the docker group and do not map sensitive host directories into container volumes.
Here I would add: use a separate user for the installation and for specific container tasks - never run containers as root; dedicate a separate user/ID only for that task (it may have elevated access, but only for the duration of the task; I use Gravitational for the cluster, and I never install as root).
4. Audit all Docker daemon activities (be aware that logs take space in the containerized world, so prepare a separate partition with enough space to hold them, and configure rotation and retention periods).
5. Audit all Docker files and docker.service, in /etc, /var and wherever else applicable.
6. Restrict all inter-container communication; link together only the specific containers that actually need to communicate (ideally, create a custom network and join only those containers to it). This hardening approach prevents unintended and unwanted disclosure of information to other containers.
7. All applications in a containerized infrastructure should be configured to, or at least have the option to, encrypt all sensitive information. This is very important for data scientists, because we often log in to platforms to fetch data, including data that is sensitive for the company.
8. Have the option to encrypt all sensitive information in transit.
9. Use only specific, approved ports, protocols and services. VMs expose a larger surface when running an app/project; with containers you specify only what will be used, instead of worrying about all the other listening ports, protocols and services running at the OS level, which minimizes the attack surface.
10. Sensitive information stored on the system should be encrypted at rest and should require a secondary authentication mechanism, not integrated into the operating system, in order to be accessed.
11. Allow operating system anti-exploitation features to be enabled / deploy anti-exploit technologies such as Data Execution Prevention (DEP) or Address Space Layout Randomization (ASLR).
The simplest security difference between VMs and containers: when updating or running a project you don't need elevated access over the entire VM or network; you run as a defined user, and if elevated access exists, it exists only for the lifetime of the container and is not shared across the host (this is where installing and updating data science libraries and executing project code come in).
Part 4 - More resources (on top of the links in Parts 1-3) on containers for data science:
- Data Science on docker
- Conda, Docker, and Kubernetes: The cloud-native future of data science (sponsored by Anaconda)
- https://devblogs.nvidia.com/making-data-science-teams-productive-kubernetes-rapids/
- Why Data Scientists Love Kubernetes
- A very good book to understand Docker usage for data science: Docker for Data Science: Building Scalable and Extensible Data Infrastructure Around the Jupyter Notebook Server.
H: Gradient boosting: how can accuracy increase when we lower the depth of the tree?
What I don't understand about gradient boosting is: doesn't lowering the depth of the trees mean we use fewer features in our model? From my model I get the highest accuracy when the depth is one, meaning each tree is just a root node split that uses one feature. How can a model that uses one feature give such accuracy?
AI: Tree-based boosting can be hard to picture at first. I think there are two important components:
Boosting can be seen as an ensemble method, so in essence it is not one "small" tree, but many (in fact a huge number of) small trees which learn together.
During learning, there is feedback on the learning process ("adaptive boosting"), so observations which are "hard to predict" receive a higher weight (more attention, if you like). This allows the ensemble to learn "deeper" than other methods.
The xgboost docs come with a nice introduction to boosted trees. Chapter 10 of "Elements of Statistical Learning" covers boosting, including the AdaBoost routine, and is worth a look. There are also other boosting methods which do not work with trees, for instance simple L2 boosting routines implemented in R.
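To see the effect directly, here is a small scikit-learn sketch comparing boosted stumps (max_depth=1) against deeper trees; the synthetic dataset and hyperparameters are illustrative only:

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, n_informative=6, random_state=0)

for depth in (1, 3, 6):
    gbm = GradientBoostingClassifier(n_estimators=300, max_depth=depth,
                                     learning_rate=0.1, random_state=0)
    score = cross_val_score(gbm, X, y, cv=5).mean()
    print(f"max_depth={depth}: CV accuracy {score:.3f}")

# Even with depth 1, each of the 300 stumps may split on a *different* feature,
# so the ensemble as a whole still uses many features, just one per tree.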
H: Transfer learning on new image size
Transfer learning: take a trained neural network and use it for a new classification task. When we want to use transfer learning with a convolutional neural network, we don't have to use the same input image size as the one used during training. But if we change the input size we will have to re-train the fully connected layers. See this post on Stackoverflow. I don't understand why changing the input shape will not affect the convolutional layer weights but will affect the fully connected layer weights. Tell me if my question is not understandable.
AI: To understand this you need some basic intuition about how the two kinds of layers operate. Say you are using a ResNet-34 architecture trained on ImageNet with 1000 classes. When you use transfer learning you load the model architecture and its weights. A model like ResNet-34 has two parts: the backbone, which is the convolutional part, and the fully connected layers (FCLs), the "neck" of the network. When you fine-tune such a model, typically only the neck is re-trained.
Now consider the convolutions. A convolutional layer is a set of n x n filters that slide over the whole image; the filter sizes stay the same no matter what the input is - what changes is the size of the feature maps they produce. Applying an n x n filter to a d x k image gives a feature map of size (d - n + 1) x (k - n + 1) for that filter. The other convolutional units work the same way, producing their own feature maps, and a different image size does not change how the convolutions themselves are performed. That is why a change of input shape does not require any change to the convolutional weights.
The fully connected layers are different. Just before the FCLs there is a flattening operation that turns the final feature maps into one long vector, which becomes the input to the FCLs. If the image size changes, the final feature maps (after convolution and pooling) have a different size, so the flattened vector has a different length from the one the ImageNet model was built for. The number of input neurons of the first FCL therefore changes, and the output layer is also modified to match the number of classes you are predicting. Since this part of the architecture changes, its weights have to be re-initialised and learned again through backpropagation.
In short: the image size and your specific problem do not affect the convolutions (the filter weights are reused; only the feature-map sizes change), but they do affect the fully connected layers, whose weights must be trained anew.
P.S. - You can also update the weights of the convolutions by unfreezing the whole model and re-training the entire network, including the backbone. fastai provides great support for this; their documentation and site are quite helpful.
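A minimal Keras sketch of the usual workflow; the 256x256 input size, the ResNet50 backbone and the 5-class head are placeholder assumptions:

from keras.applications.resnet50 import ResNet50
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

# Load only the convolutional backbone (include_top=False drops the old FC layers),
# with a new input size that differs from the 224x224 used for ImageNet training.
backbone = ResNet50(weights='imagenet', include_top=False, input_shape=(256, 256, 3))
for layer in backbone.layers:
    layer.trainable = False                  # freeze the pretrained filters

# New head: GlobalAveragePooling makes the FC part independent of the feature-map size
x = GlobalAveragePooling2D()(backbone.output)
outputs = Dense(5, activation='softmax')(x)  # assumed: 5 target classes

model = Model(backbone.input, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()

Note that using Flatten instead of GlobalAveragePooling2D would tie the Dense weights to one specific input size, which is exactly the dependence described above.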
H: Make column as dictionary key and row as value in pandas dataframe
I have a DataFrame as below.
name value id
A 123 1
B 345 5
C 567 4
D 789 2
I need to create a structure of the form below:
[{"name":"A", "value":123, "id":1}, {"name":"B", "value":345, "id":5}, {"name":"C", "value":567, "id":4}, {"name":"D", "value":789, "id":2}]
I tried the code below, but it makes 'name' the key.
data_set.set_index('name').to_dict(orient='list')
How can I get the dictionaries in the form I have mentioned? Do I need to iterate through the rows?
AI: No iteration is needed - use the 'records' orientation, which returns a list of dictionaries, one per row:
data_set.to_dict(orient = 'records')
H: Explain forward filling and backward filling (data filling)
Can I understand it in this way? Let me know if any statement is wrong or not accurate.
Reason for data filling: assume I have consecutive data (e.g., daily log data) and part of it is missing. In order to make some calculation (e.g., a mean value), we first need to assign values to the missing parts (e.g., based on the existing data).
Forward filling and backward filling are two data-filling methods, and the difference is the filling direction? E.g., setting Tuesday's data (missing) equal to Monday's data (existing) is forward filling. The opposite is backward filling.
AI: Yes, you are correct. Forward filling and backward filling are two approaches to filling missing values. Forward filling fills a missing value with the previous observed value; backward filling fills it with the next observed value.
You can refer to the URLs below for more on missing values:
https://www.kaggle.com/juejuewang/handle-missing-values-in-time-series-for-beginners
https://www.kaggle.com/dansbecker/handling-missing-values
https://www.kaggle.com/meikegw/filling-up-missing-values
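In pandas this is one line each; a small sketch with a made-up daily series (2019-07-01 is a Monday):

import pandas as pd
import numpy as np

s = pd.Series([1.0, np.nan, np.nan, 4.0],
              index=pd.date_range('2019-07-01', periods=4, freq='D'))

print(s.fillna(method='ffill'))   # forward fill: Tue and Wed take Monday's value 1.0
print(s.fillna(method='bfill'))   # backward fill: Tue and Wed take Thursday's value 4.0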
H: How can I have the same initialization for all my networks?
I want to have the same weights for the layer initializations in all my networks, so that when I'm comparing their first-epoch loss they all start from the same value. Is there a way in Keras to do this? I have set the random seed for numpy and tensorflow, but I still get different initializations.
AI: You need to specify the seed in the initializer of each layer, e.g. (imports and model definition added so the snippet runs on its own):
from keras.models import Sequential
from keras.layers import Dense
from keras.initializers import RandomUniform
seed = 0
model = Sequential()
model.add(Dense(64, kernel_initializer = RandomUniform(minval = -0.05, maxval = 0.05, seed = seed)))
Do this for every layer that has weights, and all your networks will start from identical values.
H: How to convert trained data (feature extracted) into a prediction model?
Background: I am analyzing and labeling some log data (already parsed; sample data below). I have extracted the major features of the data. For example, the classification result ("1 - normal" or "0 - anomaly") largely depends on the columns "duration", "mean" and "std": records/sec is usually <= 10, std is usually < 0.5, etc.
Now I want to convert the current result into a prediction model, in order to classify future log data.
Sample data:
| ID | datetime | No. of Records | duration(s) | mean | std | labels |
| 1 | 26/7/2019 8:06:00 PM | 5 | 1.0 | 0.33 | 0.47 | normal |
| 2 | ... | anomaly | ... | 1,000,000 | ... | normal |
Question: How do I convert such parsed, feature-extracted and labelled data into a prediction model (or prediction function)?
I am NOT asking about the theory. I have read some articles and projects on machine learning models, but many of them are too high-level. I still don't understand how to implement those models - I need to turn them into a real prediction function running on a computer. I am asking for the detailed, hands-on implementation steps. Maybe it becomes a function call f(x1, x2, x3, ...) or an API in the code? The new data would be the input of the function call, and the output would be normal or anomaly in this case. In other words, how do I express a prediction model in the form of code?
AI: This question is very broad - you would be better served by an API documentation or tutorial. Python is usually the preferred programming language, and two major APIs are used: scikit-learn for machine learning and Keras/TensorFlow for deep learning (but there are quite a few more). I'll give you some links; you should first consult these kinds of sources and come back later with more specific and detailed issues.
Scikit-learn examples:
https://scikit-learn.org/0.20/auto_examples/plot_anomaly_comparison.html (for your problem)
https://towardsdatascience.com/5-powerful-scikit-learn-models-e9b12375320d (general)
Really good blog with machine/deep learning examples:
https://machinelearningmastery.com/blog/
https://machinelearningmastery.com/introduction-python-deep-learning-library-keras/
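To make it concrete, here is a hedged end-to-end sketch for log data like yours: train once, save the fitted model, and expose a plain predict function. The file name, the exact column names and the choice of RandomForest are assumptions for illustration, not a prescription:

import pandas as pd
from joblib import dump, load
from sklearn.ensemble import RandomForestClassifier

FEATURES = ['No. of Records', 'duration(s)', 'mean', 'std']   # assumed column names

# --- training (run once on your labelled, feature-extracted logs) ---
df = pd.read_csv('parsed_logs.csv')                # placeholder file name
X, y = df[FEATURES], df['labels']                  # labels: 'normal' / 'anomaly'
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
dump(model, 'log_classifier.joblib')               # persist the fitted model to disk

# --- prediction (this is all your application or API needs to call) ---
clf = load('log_classifier.joblib')                # reload the persisted model

def classify_log(n_records, duration, mean, std):
    """Return 'normal' or 'anomaly' for one new log record."""
    row = pd.DataFrame([[n_records, duration, mean, std]], columns=FEATURES)
    return clf.predict(row)[0]

print(classify_log(5, 1.0, 0.33, 0.47))

The classify_log function is your f(x1, x2, x3, ...); wrapping it behind an HTTP endpoint (e.g. with Flask) would turn it into an API.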
H: Is DQN limited to working with only image frames?
I have a few questions about Deep Q Networks.
Does a DQN only accept image frames as input? I have never read a paper where it doesn't use image frames.
If the answer to the first question is no, is image-frame input faster to train than other options? For example, in an Atari game, is frame input faster to train while keeping the same performance quality?
I know an image frame is just numbers too, like other data, but from what I know even a low-quality frame is considered huge.
AI: DQNs don't only accept image frames as input. For instance, this DQN for the CartPole environment takes in a state with only 4 elements (the position and velocity of the cart and the angle and velocity of the pole).
Images aren't necessarily more efficient representations than other options. For instance, here is another DQN for the same CartPole environment which uses an image representation and is much less efficient.
A couple of reasons people often use image representations include:
It's the only available representation for the task.
Other models use image representations and you want to compare.
You want to pass the network the same information available to humans.
If none of these apply, though, and you have a representation which gives you more insight into the position or trajectory of elements on the screen, it's probably best to use that rather than a raw image.