H: Group neural network outputs using Keras/TensorFlow I am trying to group the outputs of my neural network, in order to have them perform separate classifications. Let's take the example where the groups consist of two nodes and we previously have four output nodes; then the network should look like this: How can I achieve this using Keras or TensorFlow? I was thinking that I need to implement a custom layer and do the operation inside of it, but I wonder if there is an easier solution using the Keras Functional API (or lower-level TensorFlow?) :) AI: Using the Keras functional API will get you what you need. I'm assuming that you are currently using the standard Keras sequential model API, which is simpler but also restricts you to a single pipeline. When using the functional API, you do need to keep track of inputs and outputs, instead of just defining layers. For the example in your question:

    from keras.layers import Input, Dense, concatenate
    from keras.models import Model

    # Left side sub-model:
    L1 = Input(shape=(2,))
    L2 = Dense(2, activation='softmax')(L1)

    # Right side sub-model:
    R1 = Input(shape=(2,))
    R2 = Dense(2, activation='softmax')(R1)

    # Combining them together:
    merge = concatenate([L2, R2])

    # Some additional layers working on the combined layer:
    merged_layer_1 = Dense(4, activation='relu')(merge)

    # Output layer:
    output = Dense(2, activation='softmax')(merged_layer_1)

    # Defining the model:
    my_model = Model(inputs=[L1, R1], outputs=output)

The layer sizes are made up, but you should get the general idea.
H: Is my model overfitting? The validation loss keeps on fluctuating I have trained a 4-layer neural network:

    model = Sequential()
    # get number of columns in training data
    n_cols = X_train.shape[1]
    # add model layers
    model.add(Dense(8, activation='relu', input_shape=(n_cols,)))
    model.add(Dense(8, activation='relu'))
    model.add(Dropout(rate=0.05))
    model.add(Dense(8, activation='relu'))
    model.add(Dense(1))
    #adam = optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
    model.compile(optimizer='adam', loss='mae')
    history = model.fit(X_train, y_train, epochs=200, validation_split=0.2, batch_size=128)

When I plot the train and validation loss, the graph looks like this: The validation loss is fluctuating. Am I doing it right? AI: Overfitting is a situation in which the model starts to perform noticeably better on the training set than on the validation set (a typical sign of such behaviour: the two loss curves start moving in different directions). According to your plot, the model hasn't overfitted. The validation loss seems to fluctuate more than the training loss because the training set has many more points; the validation set is smaller, so individual errors have a larger influence when its loss is calculated.
H: Why does my svm.SVC.fit() (linear kernel) run for so long? I am using sklearn.svm.SVC() to train & test my dataset. 80% is used for training, 20% for testing. Here is my Python code:

    data = pd.read_csv(trainPath, header=0)
    X = data.iloc[:, 5:17].values
    y = data.iloc[:, 17:18].values
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    print(X_train.dtype, y_train.dtype)  # float64 int64
    clf = svm.SVC(kernel='linear').fit(X_train, y_train.ravel())
    print('done')
    y_pred = clf.predict(X_test)
    print("Accuracy:", metrics.accuracy_score(y_test, y_pred))
    print("Precision:", metrics.precision_score(y_test, y_pred))
    print("Recall:", metrics.recall_score(y_test, y_pred))
    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
    print(tn, fp, fn, tp)

For data.shape = 30,000 x 13, it runs for around 15 minutes. For data.shape = 130,000 x 13, it runs for more than an hour. Why does it take so long? I don't think it is normal. (i5, 2.8 GHz, 16.0 GB memory) AI: From the scikit-learn documentation: The implementation is based on libsvm. The fit time scales at least quadratically with the number of samples and may be impractical beyond tens of thousands of samples. For large datasets consider using sklearn.linear_model.LinearSVC or sklearn.linear_model.SGDClassifier instead, possibly after a sklearn.kernel_approximation.Nystroem transformer. You can replace

    clf = svm.SVC(kernel='linear').fit(X_train, y_train.ravel())

with

    from sklearn.svm import LinearSVC
    clf = LinearSVC(random_state=0, tol=1e-5)
    clf.fit(X_train, y_train.ravel())
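If a purely linear decision boundary turns out to be too restrictive, the kernel-approximation route from the quoted docs is also worth a try. Here is a rough sketch, assuming the same X_train, y_train and X_test as above; the n_components value is an arbitrary choice, not a recommendation:

    from sklearn.kernel_approximation import Nystroem
    from sklearn.linear_model import SGDClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Approximate an RBF kernel with 300 landmark points, then fit a linear model
    # on the transformed features; both steps scale far better than kernelized SVC.
    clf = make_pipeline(
        StandardScaler(),
        Nystroem(kernel='rbf', n_components=300, random_state=0),
        SGDClassifier(loss='hinge', max_iter=1000, tol=1e-3, random_state=0),
    )
    clf.fit(X_train, y_train.ravel())
    print(clf.score(X_test, y_test))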
H: Why is data science not yet widely applied to Law? Law (judiciary) contains such a huge corpus to apply NLP to, but yet there are only search engines designed for Law. Why is NLP not yet extensively applied? Is it because of dimensionality? AI: Welcome to the site and thanks for the great question! I recently led an NLP project that dealt with a lot of laws. While I have to obfuscate my actual work, here's a general view: The laws themselves may not be the best source data. It would take a massively transformed recordset in order to make most laws actionable for modeling. I'm talking about big rooms, full of lawyers providing an annotated version of laws in order to create a recordset that can actually be useful The above assumes that the laws have been digitized in some easy to digest format. That may not always be the case. In a lot of instances, you are referring back to classic OCR approaches as part of your data prep and I don't know anyone that likes working with OCR :-) The human-in-the-loop requirements are very high. So you have an algorithm, now what? That's not something you can just put out on Mechanical Turk for the layman to verify. You need more lawyers to help with the verification of your approach and correct mistakes that are happening Finally, you must get very sophisticated with your embedding layers in how you create and apply them. That's not an easy thing to do and very processor intensive - a GPU is highly recommended and not a lot of grassroot efforts are going to have this processing power Good luck!
H: Gradient Boosted Decision Trees How to Find Prediction of Each Tree? I'm doing a project. I have a classification problem that I should solve using gradient boosted decision trees. What I want to do is create a matrix that gives prediction of each decision tree for each sample. For example if I have 100 samples and 100 trees, I should have 100x100 matrix. i, j th entry gives the prediction of jth tree for ith sample. I'm using sklearn and problem is I can't get prediction by each tree. So far I tried: newgb=gb.estimators_[0][0].fit(X_train, y_train) print(newgb.score(X_train, y_train)) where gb is already a fitted model. What I understood from documentation of sklearn https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingRegressor.html#sklearn.ensemble.GradientBoostingRegressor.staged_predict .estimators_ should return (number-of-trees x 1) matrix, each entry contains a tree that used by our model. By gb.estimators_[0][0] I tried to access to the first tree, and predict it with score. What I get as output is: [0.12048193 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.12048193 0.95 0.95 0.95 0.12048193 0.12048193 0.12048193 0.12048193 ...] None of them are 1 or 0, like it should be(it is binary classification) and values repeat themselves like 0.95 and 0.12. I didn't use any likelihood function either so .score() supposed to give me only 1's and 0's. I don't know how to get predictions for each individual tree. I don't know what I do wrong either. AI: Sklearn's GradientBoostingClassifier is not implemented using trees of DecisionTreeClassifiers. It uses regressors for both classification and regression. You can read it here: GB builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions. In each stage n_classes_ regression trees are fit on the negative gradient of the binomial or multinomial deviance loss function. Binary classification is a special case where only a single regression tree is induced. This means it will not be as simple as calling predict on the tree estimators. I previously suggested it could be implemented using sklearn's private methods, but as BenReiniger pointed out sklearn has already implemented this for us in the method staged_predict.
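For what it's worth, here is a minimal sketch of how the samples-by-trees matrix from the question could be assembled with the staged methods; gb and X_train are assumed to be the fitted GradientBoostingClassifier and data from the question, binary case:

    import numpy as np

    # Column j holds the ensemble's predicted label using only the first j+1 trees.
    staged_pred = np.column_stack(list(gb.staged_predict(X_train)))  # shape: (n_samples, n_estimators)
    print(staged_pred.shape)

    # For the raw contribution of each individual tree, difference the staged scores
    # (these are margins, so they include the learning-rate scaling):
    scores = np.column_stack([s.ravel() for s in gb.staged_decision_function(X_train)])
    per_tree = np.diff(scores, axis=1)  # column j is what tree j+1 added to the score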
H: Resampling for imbalanced datasets: should the testing set also be resampled? Apologies for what is probably a basic question, but I have not been able to find a definitive answer either in the literature or on the Internet. When dealing with an imbalanced dataset, one possible strategy is to resample either the minority or the majority class to artificially generate a balanced training set that can be used to train a machine learning model. My doubt stems from the fact that a testing set is supposed to be a representation of what real-world data is going to look like. Under this assumption, my understanding is that the testing set is not to be resampled, unlike the training set, since the imbalance we are trying to deal with is present in the live data in the first place. Could someone please clarify if this intuition is right? AI: The resampling of the training data is meant to better represent the minority class, so your classifier has more samples to learn from (oversampling) or fewer majority samples, to better differentiate your minority class samples from the rest (undersampling). Not only must your test data stay untouched during oversampling or undersampling, but so must your validation data. One logical argument that prevents you from touching your test data is that in a real-world scenario you wouldn't have access to the target variable (that's what you want to predict), and in order to perform resampling you need to know which class a sample belongs to, either to remove it (undersampling) or to find its nearest neighbor(s) (oversampling). An example of oversampling during cross-validation is just below. What I'm basically doing here, to avoid leaking information from the train set to the test set (and validation set), is: at each fold, I oversample the remaining folds, train a model with the oversampled new train set, get my predictions, and iterate over and over again. Each time I get a new fold for validation, I oversample all the others, and get predictions for that validation fold.

    for ind, (ind_train, ind_val) in enumerate(kfolds.split(X, y)):  # Stratified KFold
        X_train, X_val = X.iloc[ind_train], X.iloc[ind_val]
        y_train, y_val = y.iloc[ind_train], y.iloc[ind_val]

        sm = SMOTE(random_state=12, ratio=1.0)
        X_train_res, y_train_res = sm.fit_sample(X_train, y_train)  # oversampled train set

        xgb = XGBClassifier(max_depth=5, colsample_bytree=0.9, min_child_weight=2,
                            learning_rate=0.09, objective="binary:logistic", n_estimators=148)
        xgb.fit(X_train_res, y_train_res)

        val_pred = xgb.predict(X_val)      # out-of-fold predictions on my validation fold
        train_pred = xgb.predict(X_train)  # predictions on my (non-oversampled) train set
        test_pred = xgb.predict(X_test)    # predictions on my whole test set
H: Oversampling only balances the training set, what about the testing set? In a case of imbalanced data classification, I know that we only oversample the training set (to prevent data leakage from training to testing subsets), but what if there are no positive data points in my testing set? The testing set is still highly skewed and has only 1% of my positive class. I am using XGBoost, Random Forest, Logistic Regression, and KNN for the classification task. Also, I have tried SMOTE, SMOTE-NC, and Class_weight to oversample my training set. To increase the chance of having more data from the minority class, I changed the 10-fold to 5-fold cross-validation (when developing the models), no improvement! PS: I have >100K data points in my dataset. AI: Use a stratified split (as pointed out by louic in the comments). It will distribute your classes evenly across all folds. It can be done using sklearn's StratifiedKFold.
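A minimal sketch of what that looks like (X and y are placeholders for your own data; the oversampling inside the loop would follow the same pattern as in the previous question). The stratified splitter keeps the ~1% positive class represented in every fold, and train_test_split offers the same behaviour through its stratify argument:

    from sklearn.model_selection import StratifiedKFold, train_test_split

    # Hold-out split that preserves the 1% / 99% class ratio on both sides:
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)

    # Cross-validation folds that each contain roughly the same class proportions:
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    for train_idx, val_idx in skf.split(X_train, y_train):
        # oversample only the training indices here, never the validation indices
        ...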
H: Why is my Python Altair chart printing a blank line? Wondering if anyone can help me understand why python altair chart is not printing... I’m only seeing this when I run the code: It literally isn’t outputting anything. This is what my chart looks like converted to_dict; recommended for troubleshooting to ensure you have data. Any ideas about why it’s not displaying a graph/chart/etc, and only printing a blank line? Adding the head of the csv in pandas form. AI: The issue was enabling the vega module/extension in the notebook. Installing it is only half the battle, then you must enable it. :) sudo jupyter-nbextension enable vega --py --user
H: What are the factors to consider when setting the depth of a decision tree? In scikit learn, one of the parameters to set when instantiating a decision tree is the maximum depth. What are the factors to consider when setting the depth of a decision tree? Does larger depth usually lead to higher accuracy? AI: Yes, but it also means you're likely to overfit to the training data, so you need to find the value that strikes a balance between accuracy and properly fitting the data. Deciding on the proper setting of the max_depth parameter is the task of the tuning process, via either Grid Search or Randomised Search with cross-validation. This page from the scikit-learn documentation explains the process well: https://scikit-learn.org/stable/modules/grid_search.html
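As a concrete illustration of that tuning process, here is a minimal sketch (X and y are placeholders for your own data, and the candidate depths are arbitrary) that searches over max_depth with 5-fold cross-validation and reports the depth that generalises best:

    from sklearn.model_selection import GridSearchCV
    from sklearn.tree import DecisionTreeClassifier

    param_grid = {'max_depth': [2, 3, 5, 8, 12, None]}  # None lets the tree grow fully
    search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                          param_grid, cv=5, scoring='accuracy')
    search.fit(X, y)
    print(search.best_params_, search.best_score_)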
H: How will a decision tree cope if it cannot find a suitable feature to choose as root node from which to split further? When I look at decision trees, they start with a root node choosing the most suitable feature from which to split further. What if the decision tree is unable to find the most suitable feature from the data as root node? How does the decision tree cope in that situation? AI: In scikitlearn at least the code is in this file: https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tree/_splitter.pyx. Basically the determination of the root node is the same as the determination of any other node, it's just the first one that's done. What I think it's doing is either random splits (in which case your question is irrelevant) or "best" split, where best is determined by the evaluation of some criterion like decrease of gini impurity for each feature. In the case of a "Best" split it's evaluating the criterion for each feature in turn and keeping track of the best one, only overwriting that where a new feature's criterion evaluation is strictly better than the currently stored best feature. That strictly better evaluation means that, where two features have the same value for the criterion, the model will select the first feature it encounters as the one to split. In other words, if Feature A and Feature B would result in an equally good split, it will set the root node to be Feature A purely because it evaluated that first.
H: Why are bigger embedding vectors not necessarily better? I'm wondering why increasing the dimension of a word embedding vector in NLP doesn't necessarily lead to a better result. For instance, on examples I run, I sometimes see that a pre-trained 100d GloVe vector performs better than a 300d one. Why is this the case? Intuitively, a larger dimension should become almost like a one-hot encoding and be in some sense more "accurate", no? AI: You can think about phenomena related to the curse of dimensionality. Embedding words in a high-dimensional space requires more data to enforce density and significance of the representation. A good embedding space (when aiming at unsupervised semantic learning) is characterized by orthogonal projections of unrelated words and near directions of related ones. For neural models like word2vec, the optimization problem (maximizing the log-likelihood of conditional probabilities of words) might become hard to compute and slow to converge in high-dimensional spaces. You'll often have to find the right balance between data amount/variety and representation space size.
H: Difference between sklearn make_pipeline and imblearn make_pipeline Can anybody please explain the difference between sklearn.pipeline.make_pipeline and imblearn.pipeline.make_pipeline? AI: The imblearn package contains a lot of different samplers for easy over- or under-sampling of data. These samplers cannot be placed in a standard sklearn pipeline. To allow for using a pipeline with these samplers, the imblearn package also implements an extended pipeline. This pipeline is very similar to the sklearn one, with the addition of allowing samplers. If you want to include samplers in the pipeline, use the imblearn pipeline. Otherwise, use the sklearn one. The code for the imblearn pipeline can be seen here and the sklearn pipeline code here. Note that make_pipeline is just a convenience function for creating a new pipeline, and the difference here is actually with the pipelines themselves.
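A small sketch of the difference in practice (X, y, SMOTE and the classifier are just example placeholders): the sampler below only works inside the imblearn pipeline, which applies it during fit on the training folds and skips it at predict time.

    from imblearn.pipeline import make_pipeline as make_imb_pipeline
    from imblearn.over_sampling import SMOTE
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    pipe = make_imb_pipeline(StandardScaler(), SMOTE(random_state=0),
                             LogisticRegression(max_iter=1000))
    # The sampler is applied only to the training part of each CV fold:
    print(cross_val_score(pipe, X, y, cv=5, scoring='roc_auc').mean())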
H: sklearn.model_selection: GridSearchCV vs. KFold Here is the explain of cv parameter in the sklearn.model_selection.GridSearchCV: cv : int, cross-validation generator or an iterable, optional Determines the cross-validation splitting strategy. Possible inputs for cv are: integer, to specify the number of folds in a (Stratified)KFold For example, can I replace CV = 5 to CV = KFold(n_splits=5, random_state=None, shuffle=False) or replace in the opposite way. CV is called in the function GridSearchCV(estimator=XXX, ... , cv = CV) I am not sure whether any difference? AI: Yes, you can replace the cv=5 with cv=KFold(n_splits=5, random_state=None, shuffle=False). Leaving it set to an integer, like 5, is the equivalent of setting it to either KFold(n_splits=5) or StratifiedKFold(n_splits=5), depending on the model you pass to the estimator parameter of GridSearchCV()
H: Track underlying observation when using GridSearchCV and make_scorer I'm doing a GridSearchCV, and I've defined a custom function (called custom_scorer below) to optimize for. So the setup is like this:

    gs = GridSearchCV(estimator=some_classifier,
                      param_grid=some_grid,
                      cv=5,  # for concreteness
                      scoring=make_scorer(custom_scorer))
    gs.fit(training_data, training_y)

This is a binary classification. So during the grid search, for each permutation of hyperparameters, the custom score value is computed on each of the 5 left-out folds after training on the other 4 folds. custom_scorer is a scalar-valued function with 2 inputs: an array $y$ containing ground truths (i.e., 0's and 1's), and an array $y_{pred}$ containing predicted probabilities (of being 1, the "positive" class):

    def custom_scorer(y, y_pred):
        """
        (1) y contains ground truths, but only for the left-out fold
        (2) Similarly, y_pred contains predicted probabilities, but only for the left-out fold
        (3) So y, y_pred is each of length ~len(training_y)/5
        """
        return scalar_value

But suppose the scalar_value returned by custom_scorer depends not only on $y$ and $y_{pred}$, but also on knowledge of which observations were assigned to the left-out fold. If I have only $y$ and $y_{pred}$ (again: the ground truths and predicted probabilities for the left-out fold, respectively) when the custom_scorer method is called, I don't know which rows belong to this fold. I need a way to track which rows of training_data get assigned to the left-out fold at the point when custom_scorer is called, e.g. the indices of the rows. Any ideas on the easiest way to do this? Please let me know if clarification is needed. Thank you! AI: Firstly, this is a really clear, well written question. Kudos! I think the answer is to take the folding out of the CV and do this manually. You can generate the indices of the training and testing data using KFold().split(), and iterate over them in this manner:

    from sklearn.model_selection import KFold, GridSearchCV
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    iris = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv')
    kf = KFold(n_splits=3)
    for train_idx, test_idx in kf.split(iris):
        print(train_idx, test_idx)

And what you'll get is three sets of 2 arrays, the first being the indices of the training samples for this fold and the second being the indices of the testing samples for this fold. Using that, you could manually cross-validate like this:

    kf = KFold(n_splits=3)
    x = iris.drop('species', axis=1)
    y = iris.species
    max_depths = [5, 10, 15]
    scores = []

    for i in range(len(max_depths)):
        rfc = RandomForestClassifier(max_depth=max_depths[i])
        scores.append({'max_depth': max_depths[i], 'scores': []})
        for train_idx, test_idx in kf.split(iris):
            rfc.fit(x.iloc[train_idx], y.iloc[train_idx])
            scores[i]['scores'].append(custom_scorer(y.iloc[test_idx],
                                                     rfc.predict(x.iloc[test_idx]),
                                                     train_idx, test_idx))

So that's running once per value in max_depths, setting that parameter to the appropriate value in a RandomForestClassifier. It's then fitting 3 times, once per fold defined in KFold(), and passing several things to the call to custom_scorer():

- y.iloc[test_idx], which is our y_true
- rfc.predict(x.iloc[test_idx]), which is our y_pred
- train_idx, which is the indices of the samples of our training data
- test_idx, which is the indices of the samples of our testing data

Hope that helps. Out of interest: why do you need to know which observations are left out?
H: What is the purpose of standardization in machine learning? I'm just getting started with learning about K-nearest neighbor and am having a hard time understanding why standardization is required. Reading through, I came across a section saying When independent variables in training data are measured in different units, it is important to standardize variables before calculating distance. For example, if one variable is based on height in cms, and the other is based on weight in kgs then height will influence more on the distance calculation. Since K nearest neighbor is just a comparison of distances apart, why does it matter if one of the variables has values of a larger range since it is what it is. Considering 3 points A,B & C with x,y co-ordinates (x in cm, y in grams) A(2,2000), B(8,9000) and C(10,20000), the ranking of the points as distance from origin for example (or any other point), will be the same whether the y values are in grams,pounds, tonnes or any other combinations of units for both x and y so where's the need to standardise. Every example or QA i see brushes through with the same statement of 'one variable influencing the other' without a real example of how this might occur. Again, how does one know when this influence is too much as to call for standardization. Also,what exactly does standardization do to the values? One of the formulas does it by Xs = (X-mean)/(max-min) Where does such a formula come from and what is it really doing? Hopefully someone can offer me a simplified explanation or give me a link to a site or book that explains this in simple terms for beginners. AI: Considering 3 points A,B & C with x,y co-ordinates (x in cm, y in grams) A(2,2000), B(8,9000) and C(10,20000), the ranking of the points as distance from origin for example (or any other point), will be the same whether the y values are in grams,pounds, tonnes or any other combinations of units for both x and y so where's the need to standardise. This is true for the example you provided, but not for euclidean distance between points in general. Look at this example: def euclidian_distance(a, b): return ((a[0] - b[0])**2 + (a[1] - b[1])**2)**0.5 a1 = 10 #10 grams a2 = 10 #10 cm b1 = 10 #10 gram b2 = 100 #100 cm c1 = 100 #100 gram c2 = 10 #10 cm # using (grams, cm) A_g_cm = [a1, a2] B_g_cm = [b1, b2] C_g_cm = [c1, c2] print('[g, cm] A-B:', euclidian_distance(A_g_cm, B_g_cm)) print('[g, cm] A-C:', euclidian_distance(A_g_cm, C_g_cm)) # using (kg, cm) A_kg_cm = [a1/1000, a2] B_kg_cm = [b1/1000, b2] C_kg_cm = [c1/1000, c2] print('[kg, cm] A-B:', euclidian_distance(A_kg_cm, B_kg_cm)) print('[kg, cm] A-C:', euclidian_distance(A_kg_cm, C_kg_cm)) # using (grams, m) A_g_m = [a1, a2/100] B_g_m = [b1, b2/100] C_g_m = [c1, c2/100] print('[g, m] A-B:', euclidian_distance(A_g_m, B_g_m)) print('[g, m] A-C:', euclidian_distance(A_g_m, C_g_m)) # using (kilo, m) A_kg_m = [a1/1000, a2/100] B_kg_m = [b1/1000, b2/100] C_kg_m = [c1/1000, c2/100] print('[kg, m] A-B:', euclidian_distance(A_kg_m, B_kg_m)) print('[kg, m] A-C:', euclidian_distance(A_kg_m, C_kg_m)) Output: [g, cm] A-B: 90.0 [g, cm] A-C: 90.0 [kg, cm] A-B: 90.0 [kg, cm] A-C: 0.09000000000000001 [g, m] A-B: 0.9 [g, m] A-C: 90.0 [kg, m] A-B: 0.9 [kg, m] A-C: 0.09000000000000001 Here you can clearly see that the choice of units influence the ranking, thus the need for standardization.
H: How to extract trees in XGBoost? I want to extract each tree so that I can feed it with any data, and see the output. dump_list=xg_clas.get_booster().get_dump() num_t=len(dump_list) print("Number of Trees=",num_t) I can find number of trees like this, xgb.plot_tree(xg_clas, num_trees=0) plt.rcParams['figure.figsize']=[50, 10] plt.show() graph each tree like this. When I do something like: dump_list[0] it gives me the tree as a text. But I couldn't find any way to extract a tree as an object, and use it. https://github.com/dmlc/xgboost/issues/117#ref-commit-3f6ff43 I found this but didn't really understand what is suggested. Progress: I tried to somehow turn dump_list[0] string object into a sklearn DecisionTreeClassifier object. Still no luck. I uploaded my notebook if you want to check it out: https://github.com/sciencelove11/Question AI: This is an open feature request (at time of writing): https://github.com/dmlc/xgboost/issues/2175 https://github.com/dmlc/xgboost/issues/3439 There, a very wasteful but working solution is mentioned: predict using ntree_limit for each number of trees of interest. I've put together a quick demonstration Colab notebook here. It also has been asked several times over at SO, see e.g. https://stackoverflow.com/questions/51681714/extract-trees-and-weights-from-trained-xgboost-model https://stackoverflow.com/questions/37677496/how-to-get-access-of-individual-trees-of-a-xgboost-model-in-python-r and their Related questions. In the first link, another workaround is mentioned: by dumping to text/PMML, you should be able to reload each individual tree (or subsets thereof) and make the predictions. It's not clear how to make this work though: XGB itself doesn't have an easy way to load a model except from its own binary format. You might be able to do it by parsing the output (JSON seems most promising) into another library with tree models.
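For reference, here is a minimal sketch of the "wasteful but working" truncation approach mentioned above, assuming xg_clas is the fitted binary classifier from the question and X is some feature matrix; depending on your XGBoost version the tree limit is passed as iteration_range or ntree_limit:

    import numpy as np

    n_trees = len(xg_clas.get_booster().get_dump())

    # Margin (raw score) of the ensemble truncated to its first k trees;
    # on older XGBoost versions use predict(X, output_margin=True, ntree_limit=k) instead.
    margins = np.column_stack([
        xg_clas.predict(X, output_margin=True, iteration_range=(0, k))
        for k in range(1, n_trees + 1)
    ])
    per_tree = np.diff(margins, axis=1)  # column j ~ what tree j+1 added to the raw score
    print(margins.shape, per_tree.shape)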
H: What's a good book (or other resource) to learn Imitation Learning? I must learn and apply imitation learning on a robot for my thesis. I'm looking for decent sources of information on this topic. AI: I found a great book, it's just what I was looking for. In case someone finds it useful: An Algorithmic Perspective on Imitation Learning: T. Osa, J. Pajarinen, G. Neumann, J. A. Bagnell, P. Abbel, & J Peters.
H: Fit to a power law I often encounter data which I hypothesize to be from a shifted power law, $ y(x) = A x^k + B$. I have in mind samples from an unknown deterministic function here, but you can think about a probability distribution if you prefer. What is the best way to fit such data using Python? The shift means fitting a straight line in log-log space doesn't work. Ideally I would prefer something a bit more convenient than using least squares directly, but if that is the best way, it is what it is. In that case, is a particular algorithm especially useful? AI: I would just spell out y(x) in pytorch and use autograd with squared loss together with one of the built in gradient descent algorithms on A, B and k. Unfortunately I don't know any clever tricks to do it in a very simple way, if that was what you're after. I had a go at it, here's the code:

    import torch
    from torch.nn.functional import mse_loss
    from collections import namedtuple
    from itertools import count
    from random import gauss, seed

    seed(49)

    # Our model
    def f(x, A, B, k):
        return A*(x**k)+B

    # Generate sample data
    A = gauss(0, 10)
    B = gauss(0, 10)
    k = gauss(0, 10)
    x = torch.arange(1, 10, dtype=torch.float)
    y = f(x, A, B, k)
    print(f"A={A}, B={B}, k={k}")
    print(x)
    print(y)

    # Define the parameters
    Params = namedtuple("Params", "A B k")
    params = Params(A=torch.tensor([1.0], requires_grad=True),
                    B=torch.tensor([1.0], requires_grad=True),
                    k=torch.tensor([1.0], requires_grad=True))

    # Fit the parameters
    optimizer = torch.optim.Adam(params, lr=1)
    tol = 1e-6  # MSE loss error tolerance
    for i in count():
        optimizer.zero_grad()
        y_hat = f(x, params.A, params.B, params.k)
        # Take the logarithm for improved numerical stability
        L = mse_loss(torch.log(y_hat), torch.log(y))
        if L < tol:
            break
        if i % 1000 == 0:
            print(f"Loss ({i}): {L.item()}")
        L.backward()
        optimizer.step()

    print(params.A.item(), A)
    print(params.B.item(), B)
    print(params.k.item(), k)

Output:

    A=9.42764034916727, B=4.212889473371394, k=12.835248103593624
    tensor([1., 2., 3., 4., 5., 6., 7., 8., 9.])
    tensor([1.3641e+01, 6.8901e+04, 1.2542e+07, 5.0349e+08, 8.8279e+09, 9.1657e+10, 6.6290e+11, 3.6795e+12, 1.6686e+13])
    Loss (0): 421.6722717285156
    Loss (1000): 0.0005562781007029116
    Loss (2000): 9.59674798650667e-06
    9.461631774902344 9.42764034916727
    4.17124080657959 4.212889473371394
    12.833184242248535 12.835248103593624
H: Is gradient descent also used during feed-forward propagation in a neural network? To the best of my understanding, weights are updated during backpropagation only, using gradient descent, and no gradient descent is used during feed-forward propagation. Is that correct? AI: The feed-forward pass computes the outputs of each layer, and hence the output of the network, given a current state, i.e. the values of all the parameters for this pass or iteration. The parameters are the weights and biases of the neurons. Once the feed-forward pass has completed, the next step is to update all the parameters so that in the next feed-forward pass the cost function is smaller. To update the parameters, they are moved in the opposite direction of the gradients of the cost function with respect to them. This is called the gradient descent algorithm: since the gradients indicate the direction of steepest increase of the cost function, moving the parameters in the opposite direction should make the cost function smaller at the next step. The problem with neural networks is that the gradients of the cost function are not straightforward to compute, because the number of parameters is massive (billions) and the cost functions themselves are quite complex (not convex, with pathological curvatures...). To overcome this issue, backpropagation was invented as a means of efficiently computing the gradients of the cost function (it uses the chain rule and propagates the errors from the last layer back to the first one). To the best of my understanding, weights are updated during backpropagation only, using gradient descent, and no gradient descent is used during feed-forward propagation. Is that correct? Yes and no... Yes, if you think of gradient descent as only the parameter-updating stage. No, if you understand gradient descent as a whole, in which case you need the results obtained in the forward pass to compute the values of the gradients in the backward pass. I'm personally inclined towards the second option, in which gradient descent comprises two stages: gradient computation and parameter updating.
H: Suggestions for Matchmaking Algorithm I run a heterosexual matching making service. I have my male clients and my female clients. I need to pair each of my clients with their "soul mate" based on several attributes (age, interests, personality types, race, height,horoscope, etc.) After I create all my pairings, there will be some sort of score to grade the quality of my matches. I can't match a man with multiple women or vice versa. I also want to minimize the number of unmatched clients. What's the best algorithm to come up with a way to match my clients based on their attributes? Again, this is a toy example. My actual use case is completely different. Edit: The score is computed at the pair level and then summed. I can calculate how the score changes when I swap partners by looking at the new scores of two pairs. I do have access to the internals of the metric, but it's complicated. I don't have any constraints, other than I'd prefer it to be fast and simple for my own sanity. AI: Although you might find a way to apply machine learning (ML) to this optimisation problem, it does not look necessary, and is probably a distraction. ML might help if the scoring system was complex or if you had incomplete data for most matches, and needed to compute matching score estimates from some more limited set of attributes. Instead here you seem to have a combinatorial optimisation problem. A well known example of this is the Travelling Salesman Problem. There are many possible algorithms to attack these kinds of problem. Which to choose may depend on other traits of the data, such as how quickly you can calculate the scores - both for the whole set and for individual changes. If calculating for changes is fast enough, you can use optimisers that work from a complete (but not yet optimal) solution and make changes. There is a free PDF/book called Clever Algorithms (Nature-Inspired Programming Recipes) covering selection choice amongst all the varied optimisers. This may allow you to find something optimal in terms of speed and reliability of algorithm. Here's a simple thing you could try though Create a "greedy" solution Shuffle one set of items that need to be paired For each item in turn, pair it with the best scoring partner Calculate the score for this solution Refine the solution Sample some subset of pairs (e.g. 2, 3, 4, 5 pairs). You could do this deterministically for small enough dataset, or stochastically, or use some algorithm to filter to "at least has some chance of improving". Find the best score amongst all permutations in this small subset (i.e. brute force all 24 pairings amongst 4 couples) Put the best subset back into the solution Repeat until no improvement found after some number of tests This routine can be altered in various ways to take advantage of aspects of your problem. A nice thing in your case, making this easier that the Travelling Salesman Problem, is that you don't have any constraints on valid pairings. In TSP you care about making a single circuit, and not multiple separate loops which limits how you can make changes, whilst in your case each of your pairs is entirely separate. One example of possible algorithm improvement: You could pre-calculate the scores for the top N matches for each person, and only search amongst those when considering local changes.
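For illustration, here is a rough sketch of the greedy start plus the two-pair swap version of the refinement step described above; the score function, client lists and iteration count are all placeholders for your real metric and data:

    import random

    def greedy_pairs(men, women, score):
        """Pair each man with the best-scoring still-available woman."""
        available = set(women)
        pairs = []
        for m in random.sample(men, len(men)):  # shuffle so ties are broken randomly
            if not available:
                break
            best = max(available, key=lambda w: score(m, w))
            available.remove(best)
            pairs.append((m, best))
        return pairs

    def refine(pairs, score, iterations=10000):
        """Randomly try swapping partners between two pairs; keep improvements."""
        pairs = list(pairs)
        for _ in range(iterations):
            i, j = random.sample(range(len(pairs)), 2)
            (m1, w1), (m2, w2) = pairs[i], pairs[j]
            if score(m1, w2) + score(m2, w1) > score(m1, w1) + score(m2, w2):
                pairs[i], pairs[j] = (m1, w2), (m2, w1)
        return pairs

The same refine idea extends to subsets of 3-5 pairs by brute-forcing all permutations within the subset, as described above.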
H: Can we make a word2vec NN of more than 3 layers using TensorFlow? To the best of my understanding, word2vec created using gensim has only 3 layers. I was wondering whether we can customize the word2vec NN and create one with more than 3 layers to experiment with, using TensorFlow? AI: Technically, any word2vec is based on an encoder-decoder neural network architecture with a hidden layer that learns the word embedding. In theory it is perfectly feasible to implement a "deeper" word2vec model in TensorFlow. You are going to find some practical problems, though. Word vectors are super long (for a small corpus, we are on the order of thousands). Training a deep network with, say, 10,000 input nodes is going to be practically infeasible without an appropriate infrastructure. To overcome this problem, word2vec libraries (such as gensim) resort to negative sampling. The inventors of word2vec (Mikolov et al.) found that the results from training a simple word2vec model can be approximated by a sequence of logistic regressions. You can find more on negative sampling here and on Google. But if you make the network deeper, logistic models (and therefore negative sampling) would no longer work, making this task too computationally intensive for normal computers. In other words: yes, you can create a word2vec neural network with more than 3 layers, but the computational power required makes it almost impossible for individual researchers with common hardware.
H: Populate free space between two dates I'm new to data science. I'm trying to increase the time-series length for a special calculation. In the original time-series I have 20 weekly reports and I want to increase the amount of occurrences to 200. Is it ok just to use the range from the first date value to second date value between two neighbor dates? For example if I have 1 for the first date and I have 5 for the second one, is it fine to populate the empty space between them with 2, 3, 4. Or do I need to use more advanced techniques here? AI: It depends on the further analysis that you want to perform. Increasing amount of occurrences can significantly effect the properties of your time series data, because you have only 10% of the data available. In general, this is an interpolation question. There are fundamentally different statistical models that you can use to find the missing data. The Pandas dataframe.interpolate() is a good way to start.
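For the simple linear case you describe, something along these lines would work (the dates and weekly frequency here are made-up placeholders); pandas also offers other interpolate methods such as 'time', 'polynomial' or 'spline' if a straight line between neighbours turns out to be too crude:

    import pandas as pd

    # Weekly reports reindexed to a finer (daily) grid, then linearly interpolated.
    s = pd.Series([1, 5, 2], index=pd.date_range('2019-01-07', periods=3, freq='W'))
    upsampled = s.resample('D').interpolate(method='linear')
    print(upsampled.head(10))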
H: What do I need to learn in order to really understand Machine Learning? I've graduated with a math major in undergrad, but mostly focused on algebra (Galois Theory, Knot theory, etc.). I work in something unrelated right now, but now I want to study machine learning. The question is, what kind of knowledge should I have if I want to really understand machine learning? Say, here are some things I can think of, but obviously I'm missing a lot, I'm assuming. "Fundamentals" (Calculus, Linear Algebra, Discrete Mathematics, Coding, etc.) Probability (but what specific areas?) Statistics (but what kind?) Algorithms Differential Equations But what else? Or what subfields of what I've mentioned above are particularly important (i.e. Bayesian statistics)? Edit: I am currently considering a graduate program in ML, and wanted to know if this is something I really want to do / know more about it / prepare myself. AI: From somebody with a PhD in Probability working with AI/ML for a living. Basics of Probability theory, maybe wikipedia/cousera/... for the very basics if you never had a class in it, followed by e.g. “Probability with Martingales” by Williams. The papers here will also give you a good feel. As for books, this one on “classical” machine learning is pretty good and free For Deep Learning https://www.deeplearningbook.org/, this one also has the basics of probability theory. For reinforcement learning http://incompleteideas.net/book/the-book-2nd.html. For the applied side, slides from https://web.stanford.edu/class/cs224n/ and http://cs231n.stanford.edu/. As for statistics don’t bother, I’ve never seen any machine learning work reference a theorem in “pure” statistics, if there is such a thing. Of course standard undergraduate calculus and linear algebra. And learn some Python while you’re at it
H: Generate streets from GPS data I am working on GPS tracking with a huge amount of data from vehicles. The dataset has: vehicleId, speed, orientation (0-360), coordinate (x, y) and timestamp. Can you recommend how to clean the data and which model to use to generate streets (routes) from it? Not just 1000 GPS points - I have about 500k GPS points. Thank you so much. AI: If you want to use ML or DL (as you've chosen such tags for the question), you should clarify what the final target is. Is it a correct representation of the streets? If so, I guess it would be enough to develop an algorithm without any ML, which uses the input data to calculate the length and angle of each street. Since the sampling interval of GPS data isn't large (less than half a minute, if I understand correctly, based on this article https://towardsdatascience.com/how-tracking-apps-analyse-your-gps-data-a-hands-on-tutorial-in-python-756d4db6715d), I think that will be enough for such an algorithm. If you have some specific noise or gaps in the data, then it's reasonable to add ML. As for the street topic: a street network can usually be represented as a graph, and maybe the problem can be considered as traversing that graph (just a hint for thinking) in order to handle the large amount of data efficiently (not including the same route twice).
H: Appraise the statement: "For the model $y = \beta_0 + \beta_1 x_1 + u$, $\beta_1$ reflects the causal effect of $x_1$ on $y$." I'm not sure if this was the right place to ask my question, but I saw some questions regarding linear regression so I thought I would try to get some answers here. I just started learning about linear regression, and this is the homework posed to me. I assume that the statement is true since $\beta_1$ is the coefficient for $x_1$, and it's the coefficient that determines whether the slope (i.e. the relationship) is positive or negative. Am I missing anything, or what should I expound on? Thanks for reading and for the guides and opinions. AI: Hi and welcome to the forum. Homework is not so well received here in the forum, but it is still a fair question in my view. You have two aspects here. In principle you are right that $\beta_1$ is the slope with respect to $x_1$ (you can say the marginal effect of $x_1$ on $y$) and $\beta_0$ is the intercept. This is simply a linear function of the form $f(x)=\beta_0 + \beta_1 x$. However, to claim "causality", a few more things are required. First, you need to make the assumption that there is a causal relation between $x$ and $y$, and $x$ must be exogenous. Another important aspect is that if there are additional variables with a causal influence on $y$, say $x_2$, you cannot really claim that $\beta_1$ is the causal effect of $x_1$ on $y$, because you omitted $x_2$, so that your model suffers from omitted variable bias. To claim causality you need to make sure that your model reflects the data generating process in a proper way.
H: What are the best practises to decide whether a variable is categorical? What are some of the systematic ways to categorise variables into categorical or numeric? I believe using only intuition in such scenarios can many-a-times lead to major irreversible errors. What are the best strategies when categorising variables? For example, the dataframe I'm working has several categorical variables such as is_holiday that has labels for several holidays. However certain variables like visibility_in_miles suggest that those too need to be treated as categorical. part of the reason is that while most variables have hundreds of unique values, some have only 9 points. AI: The number of categories within a variable does not matter whether a variable is categorical or not. Categorical variables are mutually exclusive, unordered groups. For example, Christmas and Halloween are different holidays but have no order in the concept of "holiday-ness". Ordinal variables which are mutually exclusive, ordered groups with no consistent measure of the distance between the ordering. For example, ranking items (e.g., first, second, third, …). There could be big (or small) differences between each of the places. Numeric variables have consistent differences between values. The difference between 10 and 11 miles is the same as the difference 20 and 21 miles because "mile-ness" is a consistent measure.
H: Logistic regression with polynomial features vs neural networks for classification I am taking Andrew Ng's Coursera class on machine learning. He mentions that training a logistic regression model with polynomial features would be very expensive for certain tasks compared to training a neural network. Why is that though? I mean, when we talk about neural networks we're usually looking at a model with a very large number of parameters, so why would logistic regression be more computationally expensive? PS: Here's some context (at the beginning of an exercise on neural networks): In the previous part of this exercise, you implemented multi-class logistic regression to recognize handwritten digits. However, logistic regression cannot form more complex hypotheses as it is only a linear classifier (You could add more features, such as polynomial features, to logistic regression, but that can be very expensive to train). AI: I expect what he's referring to is the combinatorial explosion in the number of terms (features) as the degree of the polynomial increases. Let's say you have $N$ measurements/variables you're using to predict some other variable. A $k$th degree polynomial of those $N$ variables has $\binom{N+k}{k}$ terms (see here). This increases very quickly with $k$.

Example: Say we have N = 100 variables and we choose a third-degree polynomial; we'll have $\binom{103}{3}$ = 176,851 features. For a fifth-degree polynomial it goes to $\binom{105}{5}$ = ~96 million features. You'll then need to learn as many parameters as you have features.

Compare to a NN: Compare this to using a fully connected NN: say we choose $K$ fully connected hidden layers with $M$ units each. That gives $(NM) + (K-1)(MM) + M$ parameters. This is linear in $K$ (though the $MM$ term attached to it might be big). For N = 100 variables again, two hidden layers, and 350 nodes per layer, we get 157,850 parameters - less than we'd need for logistic regression with third-degree polynomial features.

Representational power (added this section after seanv507's great comment; see caveat below): The argument above was just a numbers game - how big of a NN can you get while still having the same parameter count as logistic regression with polynomial features. But you're getting at something when you say in your question: "I mean, when we talk about neural networks we're usually looking at a model with a very large number of parameters so why would logistic regression be more computationally expensive?" Well said.

Efficiency: Neural nets are universal function approximators, and we know that polynomials can approximate functions too. Which is better? What should one use? I'd bet that, for a given parameter "budget", a NN could better approximate "more" functions than a polynomial with the same number of parameters. It wouldn't surprise me if there was theory to back it up, but I don't know it off hand.

seanv507's statement "One possibility is to assume that the nonlinear activation function is quadratic...and identify what range of polynomials the NN could represent." is an interesting idea. Empirically, NNs have done better on many hard tasks than polynomial representations, and that's pretty strong evidence.

Caveat: As seanv507 says - this is the hard part. The above statements won't always be true - I'd argue they're probably mostly true. If a low-degree polynomial basis nicely and reliably separates your classes, then it's probably worth using / trying polynomial features.
H: It seems that the output of sklearn.metrics.pairwise.euclidean_distances is different to the formula on doc The doc of sklearn.metrics.pairwise.euclidean_distances() gives this formula dist(x, y) = sqrt(dot(x, x) - 2 * dot(x, y) + dot(y, y)). Apply this formula to this example X = [[0, 1], [2, 3]] Y = [[1, 2], [3, 4]] np.dot(X,X) - 2*np.dot(X,Y) + np.dot(Y,Y) gives this result array([[ 3, 5], [-1, 1]]) whilst calling sklearn.metrics.pairwise.euclidean_distances() euclidean_distances(X , Y, squared = True) gives array([[ 2., 18.], [ 2., 2.]]) It seems that the output of euclidean_distances() is not consistent to the formula from the doc. AI: The sklearn docs' formula says it is applying to row vectors $x$ and $y$. When you call np.dot on the matrices $X$ and $Y$ it takes the matrix product. EDIT (responding to question in comments): It's not straightforward, as the row-vs-row operations needed aren't quite the usual matrix operations. The source code for euclidean_distances does it this way (except that it does lots of input checks, operates on sparse inputs when possible, etc.): (X*X).sum(axis=1)[:, np.newaxis] - 2*np.dot(X,Y.T) + (Y*Y).sum(axis=1)[np.newaxis, :] That's not exactly straightforward itself, so I'll say a little more. Say $X$ has $m$ rows and $Y$ has $n$ rows. The middle term, by taking $Y^T$, gives us a $m\times n$ matrix whose $(i,j)$-entry is the dot product of the $i$th row of $X$ with the $j$th row of $Y$. In the other terms, * on numpy arrays is the coordinate-wise product; summing along rows gives us the rows' squared-norms. The newaxis is a nice trick: casting the first term to now be a $m\times 1$ matrix, adding it to the middle term's $m\times n$ matrix actually adds it to every column of that matrix (without needing to actually build the matrix of repeated columns of $X$'s squared-norms). And of course similarly for the last term: casting to a $1\times n$ matrix makes it add to every row of the result.
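A quick numerical check of that row-wise version against the built-in function, using the X and Y from the question:

    import numpy as np
    from sklearn.metrics.pairwise import euclidean_distances

    X = np.array([[0, 1], [2, 3]], dtype=float)
    Y = np.array([[1, 2], [3, 4]], dtype=float)

    # Squared norms of X's rows (as a column) + squared norms of Y's rows (as a row) - 2 * row-wise dot products
    squared = (X * X).sum(axis=1)[:, np.newaxis] - 2 * np.dot(X, Y.T) + (Y * Y).sum(axis=1)[np.newaxis, :]
    print(squared)                                  # [[ 2. 18.] [ 2.  2.]]
    print(euclidean_distances(X, Y, squared=True))  # same values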
H: Face detection for different poses more robust than MTCNN? I am using the MTCNN model described on machinelearningmastery here: MTCNN ipazc But it won't detect certain orientations, ie. somebody lying on the ground so the top of the head points to the right of the frame and their chin to the left. Thus, I am going to cv2.flip on the y-axis and also rotate using the method described on pyimagesearch here: rotation of images to try the detect() method on at least 8 different orientations of the same frame and then output the one with the most detections. This may or may not work very well on video when faces and objects need to be detected on every frame. Thus, I am looking for links to pretrained model zoos for MTCNN and other face detection, object detection and face recognition algorithms. AI: If anyone is looking for something that is more robust than MTCNN, try this but you need GPU with Cuda I believe. It says Tesla P40 on requirements but that is for training only. I could infer on new images with the provided parameters on a GTX 1080 with Cuda. It's called DSFD from Tencent in China. Their results on some well known benchmark datasets surpass MTCNN. I can concur that it detected the above-mentioned side faces (not profile but lying sideways). It is on Github: DSFD Tencent There is a demo.py where you can designate your own image as an arg on command line. If you figure out how to use this with CPU only, please let me know.
H: Extracting tokens from a document: applying deep learning or classification? I have a legal document from Law. The document is 4 pages of evidence from the plaintiff. I want to identify the dates, addresses and financial transactions in that document. Can I apply deep learning, given that the data I have is very small (just one 4-page document), or should I apply text classification to solve my problem? AI: Trying to find certain things in text can be done quite easily if you have the real text, and not a scanned document that is a PDF or even an image. This is actually quite a big topic and can become quite difficult.

Pure text: If you have pure text, you can parse out the parts you need using custom regular expressions, e.g. to find a date, you might use this: ^(19|20)\d\d[- /.](0[1-9]|1[012])[- /.](0[1-9]|[12][0-9]|3[01])$ matches a date in yyyy-mm-dd format from 1900-01-01 through 2099-12-31, with a choice of four separators (source). I believe there are even a few libraries that specifically find dates for you within text.

PDF: There are actually many types of PDF, i.e. there are many ways a PDF can be encoded behind the scenes. Some types are easier to parse than others, but luckily there are libraries that can help with that. For example, check out PDFMiner. After using such a library, you will hopefully be left with the pure text, and can go back to using the methods from the previous section.

Images: If you are unlucky enough to have an image as a starting point, then you are now in the realm of OCR - Optical Character Recognition. I would recommend reading this blog post for a more complete description of possible methods, but in a nutshell, you can try using either:

- a traditional algorithm from computer vision (applying filters and looking for edges etc.)
- a trained model specialised for text (e.g. EAST: an Efficient and Accurate Scene Text Detector)
- a general model

A nice tool to help out with OCR is the Tesseract library. You said you are learning NLP, so actually extracting tokens from a PDF might not be the best example with which to start. I would recommend first deciding exactly what you really want to learn, and then following a course or a tutorial on that topic area.
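As a tiny illustration of the pure-text route, here is a hedged sketch that pulls yyyy-mm-dd dates out of a string with the regex quoted above (anchors dropped so it can match inside running text); the dollar-amount pattern is just a naive example of my own, and addresses would need their own pattern or an NER model:

    import re

    date_pattern = r"(?:19|20)\d\d[- /.](?:0[1-9]|1[012])[- /.](?:0[1-9]|[12][0-9]|3[01])"
    money_pattern = r"\$\s?\d[\d,]*(?:\.\d{2})?"  # naive dollar-amount pattern, illustrative only

    text = "On 2018-03-14 the plaintiff transferred $12,500.00 to the defendant."
    print(re.findall(date_pattern, text))   # ['2018-03-14']
    print(re.findall(money_pattern, text))  # ['$12,500.00']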
H: Overfitting question: Would you consider that overfitting? AI: No, it's not an example of overfitting! It would be overfitting if the validation loss started to increase while the training loss kept decreasing. Edit: the answer to the second question. It's worth considering how AUC is calculated. We have the predicted probability of each instance belonging to the positive class. Then we sort these probabilities. If all positive instances appear in the first part of the sorted list and all the negative ones are in the second, then AUC is 1 ("perfect performance" as far as AUC is concerned). Now let's consider the loss computation, for example binary cross-entropy. The formula is $\text{loss} = -\frac{1}{N} \sum_i \left[ y_i \log(p(y_i)) + (1 - y_i)\log(1 - p(y_i)) \right]$, where $y_i$ is the true label and $p(y_i)$ is the predicted probability that instance $i$ belongs to the positive class. If we predict a probability of 0.998 for every negative observation, the loss will be huge. But if the predicted probabilities for the positive observations are 0.999 (higher than for the negatives), then in terms of AUC we will have perfect performance. That is why, I guess, we have to evaluate the loss as well.
H: how to handle values that only appear once in a column? Counting the values of a column using pandas I got the following result: Human 195 Mutant 62 God / Eternal 14 Cyborg 11 Human / Radiation 11 Android 9 Symbiote 8 Kryptonian 7 Alien 7 Demon 6 Atlantean 5 Alpha 5 Asgardian 5 Cosmic Entity 4 Inhuman 4 Human / Altered 3 New God 3 Animal 3 Saiyan 2 Eternal 2 Frost Giant 2 Human-Kree 2 Demi-God 2 Human / Cosmic 2 Vampire 2 Metahuman 2 Amazon 2 Icthyo Sapien 1 Czarnian 1 Rodian 1 Martian 1 Clone 1 Zombie 1 Maiar 1 Yoda's species 1 Human-Vulcan 1 Zen-Whoberian 1 Mutant / Clone 1 Korugaran 1 Dathomirian Zabrak 1 Parademon 1 Kaiju 1 Flora Colossus 1 Human-Spartoi 1 Yautja 1 Ungaran 1 Human-Vuldarian 1 Neyaphem 1 Xenomorph XX121 1 Bizarro 1 Human / Clone 1 Gungan 1 Bolovaxian 1 Talokite 1 Luphomoid 1 Tamaranean 1 Kakarantharaian 1 Spartoi 1 Strontian 1 Gorilla 1 Name: Race, dtype: int64 I am new to data science, but I think that all those value appearing only once in the dataset is not going to help the classifier, so is there a good way to handle those values? I was thinking about grouping all those values that appear less than 5 times, or maybe I should remove the lines. By the way, I do not know if it is important to know, but I want to apply the gaussian naive bayes, knn and logistic regression to this dataset. This column is a feature to predict a binary value. AI: Probably the best thing to do is use domain knowledge to relabel those into the larger categories. You may be able to replace domain knowledge with an imputation method: remove the rare labels, then fill the newly missing data using the other columns. Finally, the quickest sound idea, which you and Brian have both mentioned: just lump them into an "other" category. I wouldn't just drop them, or predicting on future examples outside the surviving categories will be harmed (both in that the model won't understand them, and that your package will have to know how to even pass them to the model).
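A small pandas sketch of the "other" bucket option (df is assumed to be your dataframe with the Race column shown above; the threshold of 5 is an arbitrary choice):

    import pandas as pd

    counts = df['Race'].value_counts()
    rare = counts[counts < 5].index               # labels seen fewer than 5 times
    df['Race_grouped'] = df['Race'].where(~df['Race'].isin(rare), other='Other')
    print(df['Race_grouped'].value_counts())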
H: Use correlation matrix scores as starting weights and bias inputs for a neural network? I have a neural net that's generating an average 15% error across the three outputs it gives. My problem is that two of the three really have about a 2% error while the third has around 40%. I was wondering if anyone has used results from a correlation matrix on the raw data to set starting parameters for which features are valued more relative to the labels. If this is possible, or logical, how would you initialize this for two input features if your model's first layer has, say, eight nodes - or at least more than two? I'm using Keras btw, but even a theoretical explanation would be helpful. If it doesn't make sense to follow this path, does anyone have recommendations for correcting the imbalance? AI: The way I see it, a NN can learn nonlinear features, so I wonder: does your correlation matrix on the raw data take into account nonlinear connections between features to see the full picture? Probably it doesn't. Since this is not about a single-layer NN (which would be an example of linear regression), the final connections (the values of each weight) are created by the contribution of many layers. Therefore it seems to me that setting the starting weights in the proposed way wouldn't be very relevant. Speaking about the imbalance, I'd recommend experimenting with loss functions. You can set a higher penalty for mistakes on a certain class.
H: How to find feature importance with multiple XGBoost models My problem statement : Time Series forecasting(Month wise data), training on 96 months of data and predicting next 12 months with a 3 months empty window in between. Example : Batch 1 ***Training Data index*** <2010-01-01 -----------------------------2017-12-01> ***Unused Month Window*** <2018-01-01---2018-03-01> ***Test Month*** <2018-04-01> [Trained model with Batch 1 training data can ONLY be used for predicting this month, not any other] Batch 2 ***Training Data index*** <2010-01-01 -----------------------------2018-01-01> ***Unused Month Window*** <2018-02-01---2018-04-01> ***Test Month*** <2018-05-01> [Trained model with Batch 1 training data can ONLY be used for predicting this month, not any other] and so on till Batch 12... I am training 12 XGBoost models to get predictions for each of the 12 months of FY 18, hence getting 12 different feature importances against each model for the predictors used. But i want to report the feature importance of entire FY 18, instead of giving 12 different set of feature importances against each month. How would I approach that?? Evaluating a single model on the entire test dataset is not an option. Any help is appreciated. Thanks. AI: I suspect you may be confusing the terms "test set" and "training set". Normally, the model is trained on the training set, and evaluated on the test set. The feature importance is independent of the test set, it's a property of the model that you trained. The obvious answer is to combine all months into a single training dataset, and train a model on that, but you say it’s not an option. Then I think your only option is to combine the results from individual months, for example by summing or averaging them and presenting the final ranking. Whether this is valid depends on the criteria used to train the trees (how the splits are determined). If there is a valid way to combine them, I still recommend against it, because I suspect it will obscure reality and discussion over the methods may distract from the result. So I would sidestep the problem by presenting a graph like is sometimes shown for sports leagues or polls before elections. On the horizontal axis are the months, on the vertical axis the rankings. This way you visualize all info easy to grasp and you also show any trends and seasonal effects.
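If you do decide to combine them anyway, the mechanics are simple; here is a minimal sketch assuming models is a list of your 12 fitted XGBoost models (one per month) and feature_names is the shared list of predictors:

    import pandas as pd

    # One column of importances per monthly model, then an average across months.
    imp = pd.DataFrame(
        {f"month_{i+1}": m.feature_importances_ for i, m in enumerate(models)},
        index=feature_names,
    )
    imp["mean_importance"] = imp.mean(axis=1)
    print(imp.sort_values("mean_importance", ascending=False))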
H: Intuition behind the number of output neurons for a neural network I am reading Michael Nielsen's book on deep learning. In the first chapter, he gives the classic example of classifying 10 handwritten digits, and uses it to explain the intuition behind choosing the number of output neurons. Initially, before reading the book, I thought it was intuitive that you would want 10 output neurons, each representing a given digit, but I'm starting to question why I thought so. Anyways, here's what Nielsen explains: You might wonder why we use 10 output neurons. After all, the goal of the network is to tell us which digit (0,1,2,…,9) corresponds to the input image. A seemingly natural way of doing that is to use just 4 output neurons, treating each neuron as taking on a binary value, depending on whether the neuron's output is closer to 0 or to 1. Four neurons are enough to encode the answer, since 2^4=16 is more than the 10 possible values for the input digit. Why should our network use 10 neurons instead? Isn't that inefficient? The ultimate justification is empirical: we can try out both network designs, and it turns out that, for this particular problem, the network with 10 output neurons learns to recognize digits better than the network with 4 output neurons. But that leaves us wondering why using 10 output neurons works better. Is there some heuristic that would tell us in advance that we should use the 10-output encoding instead of the 4 -output encoding? To understand why we do this, it helps to think about what the neural network is doing from first principles. Consider first the case where we use 10 output neurons. Let's concentrate on the first output neuron, the one that's trying to decide whether or not the digit is a 0. It does this by weighing up evidence from the hidden layer of neurons. What are those hidden neurons doing? Well, just suppose for the sake of argument that the first neuron in the hidden layer detects whether or not an image like the following is present: It can do this by heavily weighting input pixels which overlap with the image, and only lightly weighting the other inputs. In a similar way, let's suppose for the sake of argument that the second, third, and fourth neurons in the hidden layer detect whether or not the following images are present: As you may have guessed, these four images together make up the 0 image that we saw in the line of digits shown earlier: So if all four of these hidden neurons are firing then we can conclude that the digit is a 0. Of course, that's not the only sort of evidence we can use to conclude that the image was a 0 - we could legitimately get a 0 in many other ways (say, through translations of the above images, or slight distortions). But it seems safe to say that at least in this case we'd conclude that the input was a 0. Supposing the neural network functions in this way, we can give a plausible explanation for why it's better to have 10 outputs from the network, rather than 4. If we had 4 outputs, then the first output neuron would be trying to decide what the most significant bit of the digit was. And there's no easy way to relate that most significant bit to simple shapes like those shown above. It's hard to imagine that there's any good historical reason the component shapes of the digit will be closely related to (say) the most significant bit in the output. Now, with all that said, this is all just a heuristic. 
Nothing says that the three-layer neural network has to operate in the way I described, with the hidden neurons detecting simple component shapes. Maybe a clever learning algorithm will find some assignment of weights that lets us use only 4 output neurons. But as a heuristic the way of thinking I've described works pretty well, and can save you a lot of time in designing good neural network architectures. I don't understand what he means in the second to last paragraph. I included the rest for context, but it's this second to last paragraph that's confusing me. Could someone clarify what he's talking about? AI: The reason 10 neurons work better than 4 is that they allow the network to encode all possible answers independently. You could have 4 neurons and train the first one to encode the most significant bit, the second neuron the next bit, and so on. It would work. Images of a 1 (bits: 0001) are quite like images of a 7 (bits: 0111). Imagine an image where we know it's a 1 or a 7, but we have no idea which one. With 4 outputs (as described) you would output (0, 0.5, 0.5, 1), meaning the correct answer is either 1, 3, 5, or 7. With 10 outputs, all other classes would have 0 probability while 1 and 7 would each have 50%, which conveys more accurately what can be interpreted from the image. You could also have a single output that just outputs the value of the digit directly. But 1 and 7 are now far apart in the output node (the whole range runs from 0 to 9), so it's impossible for this network to say "1 or 7". It will probably say: 4 (the midpoint between 1 and 7). That's why it doesn't work well. Very poorly explained in Nielsen's book, IMHO.
H: How does an encoder-decoder network work? Let's say I trained an encoder-decoder network on a cat dataset using reconstruction error as loss function. The network is fully trained and the decoder is able to reconstruct good cat images. Now what if I use the same network and input a dog image. Will the network be able to reconstruct dog image or not? AI: It probably won't. The whole point of the training was to encode cat images and thus the network has tried to learn what information is the most necessary to keep to ensure a low reconstruction error (i.e. what separates one cat from another) and what information can it throw away (i.e. what characteristics appear in all cat images and can be discarded). That being said, a dog image would produce a fairly decent reconstruction because most features are shared between both animals. If you try, however, to reconstruct something completely different (e.g. a car) then it would probably fail.
H: On design of the training set: conceptual question I am curious to know how training data should be constructed so that it scales to examples that are not a part of the training data. For example, the problem that I am facing right now is in the application of identifying or distinguishing the frequency response of time series that are generated from different distributions. So I constructed $p$ examples each from Gaussian, Uniform and Poisson distributions, and a kind of colored noise, say pink. The white noise examples (Gaussian, Uniform and Poisson) are labelled as 1 and the colored noise as 0. Using a neural network the classification works fine. Then I wanted to do a sensitivity analysis by checking if the trained network can classify white noise from another distribution, and also another colored noise, say red. Both tests failed: the NN failed to classify them. But as soon as I included the red noise and the new kind of white noise in the training data and tested on a different trial (time series), the NN could classify it. QUESTION: This behavior makes me wonder if machine learning algorithms are incapable of distinguishing examples from different systems even though the examples in testing have similar properties to those used in training. In this case, even though white noise appears similar, since the examples are generated from different distributions (or systems), the training data must include examples from every generating mechanism, otherwise the ML model fails to recognize them at test time. Is this the usual behavior? AI: One of the basic assumptions governing machine learning is that samples from the training set must follow the same underlying distribution as samples from the test set (and so must any other sample you feed to your model)! This is why, usually, we randomly partition the same dataset into training and test sets. This is actually one of the main reasons ML models underperform in some real-world applications. You might have trained your model on a specific dataset, but over time the data slightly changed its characteristics and new data differ from the old ones that were used to train the deployed model. In this case you need to retrain your model on the new data.
H: XGBoost: # rounds is equal to n_estimators? I'm running a regression XGBoost model and trying to prevent over-fitting by watching the train and test error using this code: eval_set = [(X_train, y_train), (X_test, y_test)] xg_reg = xgb.XGBRegressor(booster='gbtree', objective ='reg:squarederror', max_depth = 6, n_estimators = 100, min_child_weight = 1, learning_rate = 0.05, seed = 1,early_stopping_rounds = 10) xg_reg.fit(X_train,y_train,eval_metric="rmse", eval_set = eval_set, verbose = True) This prints out as follows: [93] validation_0-rmse:0.233752 validation_1-rmse:0.373165 [94] validation_0-rmse:0.2334 validation_1-rmse:0.37314 [95] validation_0-rmse:0.232194 validation_1-rmse:0.372643 [96] validation_0-rmse:0.231809 validation_1-rmse:0.372675 [97] validation_0-rmse:0.231392 validation_1-rmse:0.372702 [98] validation_0-rmse:0.230033 validation_1-rmse:0.372244 [99] validation_0-rmse:0.228548 validation_1-rmse:0.372253 However, I've noticed the number of training rounds printed out and in the evals_results always equals the n_estimators. In [92]: len(results['validation_0']['rmse']) Out[92]: 100 If I change the number of trees to 600, the # of rounds goes up to 600, etc. I was under the impression that what's being printed is the metric result from each round of training, which includes training all the trees at once. What is going on here? Is each layer of trees considered a separate training round? AI: For gradient boosting, there really is no concept of a "layer of trees" which I think is where the confusion is happening. Each iteration (round) of boosting fits a single tree to the negative gradient of some loss function (calculated using the predictions from the updated model in all prior iterations), in your case, root mean squared error. The algorithm does not fit an ensemble of trees at each iteration. Each tree is then added (with optimal weighting) to all prior fitted trees + optimal weights in previous iterations to come up with final predictions. The validation scores you see are, as a result, the score of your complete model up to that iteration of tree fitted. So this line here for instance: [93] validation_0-rmse:0.233752 validation_1-rmse:0.373165 is the performance of your model on validation_ 0 and validation_1 for a model that has fit 93 trees on past gradients generated in prior iterations.
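To make the "one tree per round" point concrete, here is a small hand-rolled sketch of gradient boosting for squared error (where the negative gradient is simply the residual). It is an illustration of the idea, not XGBoost's exact algorithm:

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, 200)

learning_rate = 0.1
prediction = np.zeros_like(y)       # start from a constant prediction (here 0)
trees = []

for round_ in range(100):           # 100 rounds -> 100 trees, one tree per round
    residual = y - prediction                        # negative gradient of squared error
    tree = DecisionTreeRegressor(max_depth=3).fit(X, residual)
    prediction += learning_rate * tree.predict(X)    # add the new tree's contribution
    trees.append(tree)
    # the RMSE printed each round is the score of the *whole* model built so far
    print(round_, np.sqrt(np.mean((y - prediction) ** 2)))

This mirrors what you see in the verbose output: one line per round, each line evaluating the cumulative model after adding that round's single tree.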
H: forecasting - likelihood of customers participating in next month sales I have historical transaction information of customers for the last 2 years and other information about the customers like what type of card (gold/platinum) they used for transactions etc. is also there in the dataset. Using this dataset, I will have to forecast the likelihood of each customers transacting next month. What are the different approaches that I can analyze before I choose one? EDIT More info on the problem: I have customer credit information and transaction history of the cards for an online ticket booking portal. From this portal customers can book flights, cruises and cars. I have to predict the customers who are most likely to book something (flights, cruises and cars) in the next month. Now what possible approaches can I take for this? AI: Forecasting is about predicting a variable as a function of time, such as the number of sales during coming months). Your problem is a regression problem, where the output is between 0 (no chance) and 1 (certain). Any method to solve a regression problem is valid, but I would recommend starting with the simplest thing you think may work, to establish a baseline performance that you can try to improve. This problem is often phrased as churn prediction, which is just the opposite chance from what you need (although definitions of churn vary), so you can google and see how people do that. One easy approach that allows you to calculate this is to use the RFM model, where R, F and M stand for recency (time since last purchase), frequency (of purchases) and monetary (total money spent). You could use these features and a logistic regression as your baseline model.
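A rough sketch of that RFM-plus-logistic-regression baseline; the DataFrame name (transactions) and column names (customer_id, booking_date, amount) are assumptions about your data, and the cutoff date is arbitrary:

import pandas as pd
from sklearn.linear_model import LogisticRegression

# transactions: one row per booking, with columns
# ['customer_id', 'booking_date', 'amount']  (assumed)
transactions['booking_date'] = pd.to_datetime(transactions['booking_date'])
cutoff = pd.Timestamp('2019-09-01')             # features use history strictly before this date
history = transactions[transactions['booking_date'] < cutoff]

rfm = history.groupby('customer_id').agg(
    recency=('booking_date', lambda d: (cutoff - d.max()).days),
    frequency=('booking_date', 'count'),
    monetary=('amount', 'sum'),
)

# label: did the customer book anything in the month after the cutoff?
next_month = transactions[(transactions['booking_date'] >= cutoff) &
                          (transactions['booking_date'] < cutoff + pd.offsets.MonthBegin(1))]
rfm['booked_next_month'] = rfm.index.isin(next_month['customer_id']).astype(int)

model = LogisticRegression()
model.fit(rfm[['recency', 'frequency', 'monetary']], rfm['booked_next_month'])
likelihood = model.predict_proba(rfm[['recency', 'frequency', 'monetary']])[:, 1]

In a real setup you would build the features from an earlier window and validate on a later one, but this should be enough to establish the baseline.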
H: Conditional Statement to update columns based on range I have a dataframe called clients that has 5000+ rows with the following: ID Ordered Days 1 101565 131 2 202546 122 3 459863 78 4 328453 327 5 458975 -27 I'm trying to create a loop that looks at the number of days and fills in a new column when it meets the days criteria below: Days NEW_COLUMNS 0-119 0-3 Months 120-209 4-6 Months 210-299 7-9 Months 300+ 10+ Months -280 to -196 Reach out clients -195 to -104 Send promotion -103 to -1 Close case < -280 Plan I have the following code but it hasn't worked so far: if(days <-280)(NEW_COLUMNS ="plan") else if (days>-280 && days <-196)(NEW_COLUMNS=" Reach out clients";) else if (days>-195 && days<-104)(NEW_COLUMNS =" Send promotion";) else if (days>-103 && days <-1)(NEW_COLUMNS="Close case";) else if(days> 0 && days <119)(NEW_COLUMNS="0-3 Months";) else if(days > 120 && days <209)(NEW_COLUMNS="4-6 Mos";) else if(days > 210 && day s<299)(NEW_COLUMNS="7-9 Mos";) else if(days > 300)(NEW_COLUMNS="10+ Mos";) Eventually I want a table that looks like this: ID Ordered Days New_Columns 1 101565 131 4-6 Months 2 202546 122 4-6 Months 3 459863 78 0-3 Months 4 328453 327 10+ Months 5 458975 -27 Close case AI: Check out the code below (note the inclusive <= comparisons, so that boundary values such as 120 or -196 also get a label): from pandas import DataFrame clients = {'ID': [1, 2, 3, 4, 5], 'Ordered': [101565,202546,459863,328453,458975], 'Days': [131, 122, 78, 327, -27] } df = DataFrame(clients, columns=['ID', 'Ordered', 'Days']) if "New_Columns" not in df: df["New_Columns"] = "" for index, row in df.iterrows(): days = row['Days'] val = '' if days < -280: val = 'Plan' elif -280 <= days <= -196: val = 'Reach out clients' elif -195 <= days <= -104: val = 'Send promotion' elif -103 <= days <= -1: val = 'Close case' elif 0 <= days <= 119: val = '0-3 Months' elif 120 <= days <= 209: val = '4-6 Months' elif 210 <= days <= 299: val = '7-9 Months' elif days >= 300: val = '10+ Months' df.at[index, 'New_Columns'] = val print(df) Output : ID Ordered Days New_Columns 0 1 101565 131 4-6 Months 1 2 202546 122 4-6 Months 2 3 459863 78 0-3 Months 3 4 328453 327 10+ Months 4 5 458975 -27 Close case
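A more compact, loop-free alternative with pd.cut (a sketch that assumes Days only holds integer values, since the bins below are right-inclusive):

import numpy as np
import pandas as pd

df = pd.DataFrame({'ID': [1, 2, 3, 4, 5],
                   'Ordered': [101565, 202546, 459863, 328453, 458975],
                   'Days': [131, 122, 78, 327, -27]})

bins = [-np.inf, -281, -196, -104, -1, 119, 209, 299, np.inf]
labels = ['Plan', 'Reach out clients', 'Send promotion', 'Close case',
          '0-3 Months', '4-6 Months', '7-9 Months', '10+ Months']

# each bin is right-inclusive, e.g. (-281, -196] covers -280 to -196 for whole days
df['New_Columns'] = pd.cut(df['Days'], bins=bins, labels=labels)
print(df)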
H: How to shuffle only a fraction of a column in a Pandas dataframe? I would like to shuffle a fraction (for example 40%) of the values of a specific column in a Pandas dataframe. How would you do it? Is there a simple idiomatic way to do that, maybe using np.random, or sklearn.utils.shuffle? I have searched and only found answers related to shuffling the whole column, or shuffling complete rows in the df, but none related to shuffling only a fraction of a column. I have actually managed to do it, apparently, but I get a warning, so I figure even if in this simple example it seems to work, that is probably not the way to do it. Here's what I've done: import pandas as pd import numpy as np df = pd.DataFrame({'i':range(20), 'L':[chr(97+i) for i in range(20)] }) df['L2'] = df['L'] df.T 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 i 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 L a b c d e f g h i j k l m n o p q r s t L2 a b c d e f g h i j k l m n o p q r s t For now, L2 is simply a copy of column L. I keep L as the original, and I want to shuffle L2, so I can visually compare both. The i column is simply a dummy column. It's there to show that I want to keep all my columns intact, except for a fraction of L2 that I want to shuffle. n_rows=len(df) n_shuffle=int(n_rows*0.4) n_rows, n_shuffle (20, 8) pick_rows=np.random.permutation(list(range(n_rows)))[0:n_shuffle] pick_rows array([ 3, 0, 11, 16, 14, 4, 8, 12]) shuffled_values=np.random.permutation(df['L2'][pick_rows]) shuffled_values array(['l', 'e', 'd', 'q', 'o', 'i', 'm', 'a'], dtype=object) df['L2'][pick_rows]=shuffled_values I get this warning: C:\Users\adumont\.conda\envs\fastai-cpu\lib\site-packages\ipykernel_launcher.py:1: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy """Entry point for launching an IPython kernel. df.T I get the following, which is what I expected (40% of the values of L2 are now shuffled): 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 i 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 L a b c d e f g h i j k l m n o p q r s t L2 e b c l i f g h m j k d a n o p q r s t You can see the notebook here (it's rendered better on nbviewer than here): https://nbviewer.jupyter.org/gist/adumont/bc2bac1b6cf7ba547e7ba6a19c01adb6 Thanks in advance. AI: I don't think there is any idiomatic way of doing this since it's quite an unusual operation; normally the whole row or column would be shuffled. What you are doing looks like a good approach. The SettingWithCopyWarning you get is a common warning that you could be operating on a copy of the original data and not a view (the original). For more information I would recommend checking the answers here: https://stackoverflow.com/questions/20625582/how-to-deal-with-settingwithcopywarning-in-pandas. To avoid the warning and make the code more compact you could do it as follows: import random import numpy as np fraction = 0.4 n_rows = len(df) n_shuffle = int(n_rows*fraction) pick_rows = random.sample(range(n_rows), n_shuffle) df.loc[pick_rows, 'L2'] = np.random.permutation(df.loc[pick_rows, 'L2']) Note that random.sample draws from range(n_rows) so that row 0 can also be picked, and the use of loc makes sure that everything is done on the original dataframe (i.e. this will not give the SettingWithCopyWarning warning).
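An even more compact variant, sketched with df.sample so that the fraction is picked in one step:

import numpy as np
import pandas as pd

df = pd.DataFrame({'i': range(20), 'L': [chr(97 + i) for i in range(20)]})
df['L2'] = df['L']

# pick 40% of the row labels at random, then shuffle L2 only on those rows
idx = df.sample(frac=0.4, random_state=0).index
df.loc[idx, 'L2'] = np.random.permutation(df.loc[idx, 'L2'].values)
print(df.T)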
H: Getting a transition matrix from a Adjacency matrix in python I have a 3*3 Adjacency matrix and I'm trying to sum the elements of each column and divide each column element by that sum to get the transition matrix. Can you please help me code this part? Thanks in advance. AI: Considering a is your adjacency matrix 2D numpy array : a / a.sum(axis=0) Should do the trick (divide all elements by columns sum)
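For instance, a quick check on a made-up 3x3 adjacency matrix — each column of the result sums to 1, as expected for a column-stochastic transition matrix:

import numpy as np

a = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

t = a / a.sum(axis=0)
print(t)
print(t.sum(axis=0))   # [1. 1. 1.]

If some node has no edges at all, its column sum is 0 and the division produces NaNs, so such nodes need special handling.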
H: How does one decide when to use boosting over bagging algorithm? What kind of problem, circumstances and data makes it more suitable to apply boosting instead of bagging methods? AI: Bagging and boosting are two methods of implementing ensemble models. Bagging: each model is trained independently on a bootstrap resample (a random sample with replacement) of the training data, and their predictions are combined by averaging or voting. Boosting: the first model trains on the training data and then checks which observations it struggled with most; it passes this information to the next model, which assigns greater weight to the misclassified data. Bagging mainly reduces variance, while boosting mainly reduces bias (and can reduce variance as well). In practice, boosting is often better at improving accuracy over a single model, whilst bagging is better at reducing overfitting. I would advise training a single base learner first (e.g. one decision tree, in the random forest case) and then seeing where improvements can be made.
H: Is loss the same thing as variance? In Keras, the first object returned in the score list is loss. score = model.evaluate(X_test, y_test,verbose=1) print(score) [1] [0.025217213829228164, 0.99487179487179489] Is this the same thing as variance? AI: No. Loss measures the error between your predicted values and the true values on a given set of data. Variance, on the other hand, measures how your model's performance (i.e. its loss) changes across different training sets. Also, variance is an attribute of a given model, whereas loss depends on your dataset. In other words, we could say that the loss measured on different sets can be used to estimate the variance of a given model.
H: What does high variance mean in a binary classification machine learning model? My understanding of high variance is that the targets are spread widely around. The output values are "all over the place". In a binary classification model, there can only be 2 outcomes. I am at a loss when visualizing what high variance mean in a binary classification model. AI: You have correctly intuited that variance isn't as useful a concept in this case. Statisticians typically look at the binomial deviance instead (see here for a thorough technical development). If you do want to think about variance, we can recognize that many binary classifiers output an estimate of the label $\hat{Y}$. It helps here to think of a randomized classifier which estimates $P(Y=1|X=x) = E[Y|X=x]$ (call this estimate $\hat{p}(x)$) and generates $\hat{Y} = 1$ with probability $\hat{p}(x)$, $\hat{Y} = 0$ otherwise. We have $$var(\hat{Y}|X=x) = E[(\hat{Y} - E[\hat{Y}|X=x])^2|X=x].$$ This will be smaller when the classifier is more certain about the classification once it sees the covariates. I.e., if $\hat{p}$ is always either 1 or 0, then the variance is 0 (but of course the bias will be huge). If there are equal numbers of each class in the training data and the classifier is able to learn nothing from the covariates, then $\hat{p} = \frac{1}{2}$ for any $x$ and so the variance is $\frac{1}{4}$--a coin flip, as we'd expect.
H: Error using decision tree regressor I'm new to data science , while i'm implementing decision tree. I'm facing the following error. Where i went wrong; Sample data in csv is: x=dataset.iloc[:,:-1].values y=dataset.iloc[:,:2].values from sklearn.preprocessing import LabelEncoder,OneHotEncoder from sklearn.compose import ColumnTransformer labelencoder_x=LabelEncoder() x[:,0]=labelencoder_x.fit_transform(x[:,0]) onehotencoder=OneHotEncoder() columntransformer=ColumnTransformer([('dummy cols',onehotencoder,[0])],remainder='passthrough') x=columntransformer.fit_transform(x) from sklearn.model_selection import train_test_split x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.2,random_state=1) from sklearn.tree import DecisionTreeRegressor regressor=DecisionTreeRegressor(random_state=0) regressor.fit(x,y) #Error occurance Error: ''' ValueError: could not convert string to float: 'Business Analyst' ''' AI: The problem is with the y value. x=dataset.iloc[:,:-1].values -> selecting all columns but last #this one is fine y=dataset.iloc[:,:2].values -> selecting all columns till 2nd #this one is wrong Change this to : y=dataset.iloc[:,2].values #selecting only 2nd column.
H: What is "data scaling" regarding StandardScaler()? I'm trying to figure out the purpose of StandardScaler() in sklearn. The tutorial I am following says "Remember that you also need to perform the scaling again because you had a lot of differences in some of the values for your red and white [wines]" So I looked up the function in the sklearn docs. "Standardize features by removing the mean and scaling to unit variance" https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html What good would removing the mean do? What is scaling the data? Hard to google that. # Scale the data with `StandardScaler` X = StandardScaler().fit_transform(X) AI: I will use k-Nearest Neighbor algorithm to explain why we must do scaling as a preprocessing step in most machine learning algorithms. Let's say you are trying to predict if I transaction is fraudulent or not, that is, you have a classification problem and you only have two features: value of transaction and time of the day. Both variables have different magnitudes, while transactions can vary from 0 to 100000000 (It is just an example), and time of the day between 0 to 24 (let's use only hours). So, while we are computing the nearest neighbor, using euclidean distance, we will do distance_class = sqrt( (new_value_transaction - old_value_transaction)**2) + (new_time_of_day - old_time_of_day)**2) ) Where old is the reference to our train data and new is related to a new transactions we want to predict the class. So now you can see that transactions will have a huge impact, for example, new_value_transaction = $100 new_time_of_day = 10 old_value_transaction = $150 new_time_of_day = 11 class_distance = sqrt(($50)**2) + (1)**2) Now, you have no indication that transaction value is more important than time of the day, that is why we will scale our data. Between the alternatives, we can have a lot of different, such as MinMaxScaler, StandardScaler, RobustScaler, etc. Each of them will treat the problem different. To be honest? Always try to use at least two of them to compare results. You can read more about in the sklearn documentation: https://scikit-learn.org/stable/modules/preprocessing.html#preprocessing I hope you got the feeling why we should use standardization techniques. Let me know if you have any further questions. To complement, here is a visual explanation of what I explained above. Credit: https://www.youtube.com/watch?v=d80UD99d4-M&list=PLpQWTe-45nxL3bhyAJMEs90KF_gZmuqtm&index=10 In the video they give a better explanation. Also, I highly recommend this course, the guys are amazing.
H: normal conda install vs forge install In order to install some libraries, we just use: conda install pandas While in other cases we use: conda install -c conda-forge html5lib What is the difference between them? AI: conda-forge is just an alternative channel where you can upload to or download packages from. Usually it makes no difference where you download the package from but sometimes conda-forge has the latest version.
H: Bagging with Neural Networks Best practices I am trying to build a majority vote system for 3 Neural Networks, and I came across the concept of Bagging method. Actually, I want to use neural networks as weak learners (I know it's debatable, but some papers have tried it and I want to try it too). For more information about the voting system I tried to construct/constructed, please read the following thread (The softmax Layer is better the the function in the thread, because the majority function only gives equivalent accuracies of the 3 NNs, but doesn't improve the overall accuracy). I read that bagging can improve the overall accuracy of the weak learners, but as you can see I only have 3 learners and there aren't any clear information about bagging with Neural Networks. I only found after some reading that I can use ensemble learning with Neural networks by using the output of the trained NNs in a linear fashion. If I want to detail this into steps, I would write: Divide Dataset into training and validation sets From the Training sat, construct 3 bootstraps samples ?, Or I need more? Train/Develop the 3 neural Networks on the 3 bootstraps samples? Test the Neural Networks on the validation/test set How can I join decisions of the NNs by bagging? If possible, I need your insight on these implementation steps and to know if they follow the best practices of bagging with NNs. PS: I just started reading about bagging and boosting, so I apologize for any conceptual mistakes and contradictions I might have said. Best regards, AI: I think you have the right general idea. Divide the dataset into training and test. Divide the training dataset into k folds, call this M. For each iteration of hyperparameter tuning (whatever you choose to tune); For each fold i in M; Set the validation set to be equal to fold i, and the (inner) training set to be equal to all folds that are not i. Using the inner training set, generate N bootstrap resamples. Fit a single neural network model to each bootstrap resample, generating N models. Predict the validation set using each of the N neural network models. Count the votes of each neural network model. Set the final prediction for an observation in the validation set to whatever has the most votes as predicted by your N models. You should have a matrix of m observations by N predictions. Since you are actually classifying objects, try to keep N odd so that there are no ties. Alternatively, don't classify objects, and instead generate probabilities from your N models for each observation in the validation set. Then, calculate the mean predicted probability for each observation in your validation set, and choose a threshold for classification based off the costs of false positives/false negatives (probably superior, but I won't discuss this). Calculate your error using your final predictions from step 6 and the actual ground truth labels from the validation set. Save these error scores. end Calculate the average error from all of the folds. If this average error is lower than a previous set of hyperparameters error, then save these hyperparameters from this iteration of tuning. end Generate N bootstrap resamples of the entire training set (the training set created in step 1) and fit a single neural network model to each of the N bootstrap resamples, using the optimal set of hyperparameters that you found above for each neural network model. Generate predictions on the test set (created in step 1) using each of your N models. 
Set final predictions for the test set using the exact same methodology you used in step 6. Calculate the error using your final predictions from step 10 and the ground truth labels found in the test set. This is your final, unbiased estimate of model performance. In general, the more bootstrap resamples you can generate, the better. However, since you are fitting neural networks you might need to be conservative with how many models you decide to bag due to computational costs. I recommend setting the number of bootstrap samples, N, to be as large as possible while being mindful of your own computational/time constraints. EDIT: In response to the comments in which we require a faster method, one can replace step 2 (and delete step 3) and instead divide the training dataset into an inner training set and a validation one. We then validate our model to a single validation set rather than k sets, which has the potential to increase the likelihood of overfitting to a specific validation set (though since we are presumably dealing with a large dataset, it probably will be fine). All other steps should remain the same, though there is clearly no loop over folds anymore. In step 8, there will also be no need to calculate the average error since we will only have one estimate of model error from the single validation set.
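Putting the steps above into code, here is a minimal sketch of bagging N small Keras networks by bootstrap resampling and averaging the predicted probabilities; the synthetic data is only there to make the example self-contained:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

rng = np.random.RandomState(0)
X_train = rng.normal(size=(500, 10))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
X_test = rng.normal(size=(100, 10))

def make_model():
    model = Sequential([Dense(16, activation='relu', input_shape=(10,)),
                        Dense(1, activation='sigmoid')])
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model

N = 5   # number of bagged networks (odd, so majority votes cannot tie)
probs = []
for _ in range(N):
    idx = rng.choice(len(X_train), size=len(X_train), replace=True)   # bootstrap resample
    model = make_model()
    model.fit(X_train[idx], y_train[idx], epochs=20, verbose=0)
    probs.append(model.predict(X_test).ravel())

# average the predicted probabilities and apply a threshold for the final class
final_prob = np.mean(probs, axis=0)
final_pred = (final_prob >= 0.5).astype(int)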
H: How are Linear SVM Regression and Multiple Linear Regression different in terms of the regression result? They start from the same equation, shown below. y = w*x + b But they solve it differently: MLR finds w and b by minimizing the squared error, whereas SVM finds w and b by minimizing the loss function defined by C and epsilon. I am wondering if the regression results are significantly different. I guess that if the given data set is clean and well-explained by the input features, the resulting w and b of SVM and MLR will be close. Putting my original question differently, I don't find any reason to use linear SVM regression over multiple linear regression. AI: They could potentially be very different, because linear regression penalizes using squared error while SVR penalizes the ε-insensitive absolute error (errors smaller than ε are ignored, larger ones are penalized linearly). You might have learned that "outliers" can have an outsized influence on the regression line because of the squared error--this won't be the case with SVR, however. Furthermore, for the standard ε-insensitive SVR formulation, choosing a larger $\epsilon$ will yield a flatter regression line/hyperplane, whereas you don't have such a tuning parameter for regular ol' linear regression. This parameter enables you to try to fight overfitting by flattening the regression line.
H: How to retrieve images from a url in a pandas dataframe and store them as PIL object in a new column I'm trying to store as a PIL object in a new column of a dataframe pictures that are located in a column of the same dataframe in the form of URL's. I've tried the following code: import pandas as pd from PIL import Image import requests from io import BytesIO pictures = [None] * 2 df = pd.DataFrame({'project_id':["1", "2"], 'image_url':['http://www.personal.psu.edu/dqc5255/gl-29.jpg', 'https://www.iprotego.com/wp-content/uploads/google.jpg']}) df.insert(2, "pictures", pictures, True) for i in range(2): r = requests.get(df.iloc[i,1]) df.iloc[i,2] = Image.open(BytesIO(r.content)) df I expected to get a dataframe with this structure but including both training examples: project_id image_url pictures 0 1 http://www.personal.psu.edu/dqc5255/gl-29.jpg <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=400x300 at 0x116EF9AC8> But instead got the following error: OSError: cannot identify image file <_io.BytesIO object at 0x116ec2f10> ``` AI: I just changed the User-agent in the for loop so that now the request line in the loop is: r = requests.get(df.iloc[i,1], headers=headers) with headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/XXX.XX (KHTML, like Gecko) Chrome/XX.X.XXXX.XXX Safari/XXX.XX'} and this solved the error. I also added a r.raise_for_status() to check the status before using the r.content Final code: import pandas as pd from PIL import Image import requests from io import BytesIO df = pd.DataFrame({'project_id':["1", "2"], 'image_url':['http://www.personal.psu.edu/dqc5255/gl-29.jpg', 'https://www.iprotego.com/wp-content/uploads/google.jpg']}) pictures = [None] * 2 df.insert(2, "pictures", pictures, True) headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/xxx.xx (KHTML, like Gecko) Chrome/xx.x.xxxx.xxx Safari/xxx.xx'} for i in range(2): r = requests.get(df.iloc[i,1], headers=headers) r.raise_for_status() df.iloc[i,2] = Image.open(BytesIO(r.content)) df ````
H: Using a LinearSVC() for multilabel classification with MultiOutputClassifier() in a pipeline in sci-kit learn My input data is a (23948,) pandas.Series of strings containing newspaper headlines. My target are 20 labels of the headline (e.g. 'crime', 'politics') each binarily encoded with [0, 1]. The labels are not exclusive, a headline could be about crime and politics at the same time. I would like to compare three algorithms for this problem. I use the following pipeline to predict the labels: pipeline = Pipeline([ ('vect', CountVectorizer(tokenizer=tokenize)), ('tfidf', TfidfTransformer()), ('clf', MultiOutputClassifier(RandomForestClassifier())) ]) parameters = [ {"clf": [RandomForestClassifier()], "clf__n_estimators": [10, 100, 250], "clf__max_depth":[8], "clf__random_state":[42]}, {"clf": [LinearSVC()], "clf__C": [1.0, 10.0, 100.0, 1000.0], "clf__random_state":[42]} {"clf": [GaussianNB()]} ] rkf = RepeatedKFold( n_splits=10, n_repeats=2, random_state=42 ) cv = GridSearchCV( pipeline, parameters, cv=rkf, scoring='accuracy', n_jobs=-1) The pipeline works fine for the random forest, but breaks at the LinearSVC() with the following error: ValueError: bad input shape (20972, 20) If I remove the LinearSVC(), it stops at: A sparse matrix was passed, but dense data is required. Use X.toarray() to convert to a dense numpy array. There is a list "Support multilabel:" in the sci-kit learn documentation on multilabel classification (https://scikit-learn.org/stable/modules/multiclass.html) and only the random forest is included. However, the documentation states that "Multioutput classification support can be added to any classifier with MultiOutputClassifier." I am bit confused, do LinearSVC() and GaussianNB() support multilabel classification when wrapped in MultiOutputClassifier()? If not, is there a workaround? AI: When you run your grid search, the clf step of the pipeline is replaced by each of RandomForestClassifier, LinearSVC, GaussianNB; you never actually use the MultiOutputClassifier. You should be able to just wrap the two offending classifiers with a MultiOutputClassifier. You'll need to prefix your hyperparameters with estimator__ to get through the MOC into the underlying classifier: clf__estimator__C.
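Concretely, the offending grid entries could be wrapped like this (a sketch — note the extra estimator__ level needed to reach the wrapped classifier's parameters):

from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import GaussianNB

parameters = [
    {"clf": [RandomForestClassifier()],
     "clf__n_estimators": [10, 100, 250],
     "clf__max_depth": [8],
     "clf__random_state": [42]},
    {"clf": [MultiOutputClassifier(LinearSVC())],
     "clf__estimator__C": [1.0, 10.0, 100.0, 1000.0],
     "clf__estimator__random_state": [42]},
    {"clf": [MultiOutputClassifier(GaussianNB())]},
]

# GaussianNB still requires dense input, so the tf-idf output would additionally
# need a densifying step (e.g. a FunctionTransformer calling .toarray()) before it.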
H: How to convert sequence of words in to numbers which are input to RNN/LSTM? I am watching online videos and tutorials about the use of RNN/LSTM for NLP but none of them explain how to convert the sequences of words into digitized input for the neural networks. I am looking for intuitive understanding but answers with python code are also welcome. For example, how do I input "grass is green and sun is hot" to my RNN/LSTM? AI: from keras.preprocessing.text import Tokenizer samples = ["grass is green and sun is hot"] tokenizer = Tokenizer(num_words=1000) tokenizer.fit_on_texts(samples) sequences = tokenizer.texts_to_sequences(samples) # e.g. [[2, 1, 3, 4, 5, 1, 6]] -- one integer index per word The Keras library provides this Tokenizer class, but you also have other well-known libraries like nltk and gensim to convert text into sequences and pass it to your neural network. There are other ways like TF/IDF and CountVectorizer in Sklearn for simpler algorithms. num_words keeps only the 1000 most frequent words when tokenizing. Link : Keras text processing
H: What variance threshold to consider in feature selection? Consider a numerical dataset with continuous variables, that has been scaled to end up with values in the [0,1] range. How can I compute a reasonable variance threshold for all the variables? AI: Usually 3 is taken as a threshold Value by standard deviation Using z-score we can look for value above or below threshold like in code below: threshold=3 mean = np.mean(data) std =np.std(data) for x in data: z_score= (x - mean)/std if np.abs(z_score) > threshold: ...
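If the goal is specifically to drop low-variance features after the [0,1] scaling, scikit-learn's VarianceThreshold applies exactly such a cut-off directly; the 0.01 value below is an arbitrary placeholder you would tune for your data:

import numpy as np
from sklearn.feature_selection import VarianceThreshold

X = np.random.rand(100, 5)
X[:, 0] = 0.5                        # a constant feature with zero variance

selector = VarianceThreshold(threshold=0.01)
X_reduced = selector.fit_transform(X)
print(selector.variances_)           # per-feature variances
print(X_reduced.shape)               # the constant column has been dropped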
H: Extract imperative sentences from a document(English) using NLP in python I am very new to NLP, hence require some help on extracting imperative sentences from a document. I am working on a project where I need to get all the imperative sentences from the entire document(English documents). I understand I need to use POS tagging. But how do I proceed further. Thanks. AI: In order to maximize accuracy you would need to use not only a POS tagger but also a syntactic parser. Nevertheless for this task POS tags can probably give you reasonable results indeed, here is a general method: Segment the data into sentences and tokens Apply the POS tagger (it predicts a POS tag for every token) A sentence is (likely) imperative if the following conditions are satisfied: the sentence ends with a full stop or exclamation mark the POS for the first token corresponds to a verb This heuristic is probably all you need, but if you want to go further you could generate instances containing these features (and possibly add a few others) for every sentence, annotate a training set and train a supervised model.
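A rough sketch of that heuristic with NLTK; the base-form-verb check (tag VB) and the end-punctuation check are simplifications, so expect both misses and false positives:

import nltk
# nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')  # first run only

def is_imperative(sentence):
    tokens = nltk.word_tokenize(sentence)
    if not tokens or tokens[-1] not in {'.', '!'}:
        return False
    tags = nltk.pos_tag(tokens)
    return tags[0][1] == 'VB'        # imperatives usually start with a base-form verb

document = "Turn off the lights before leaving. The lights were already off."
for sent in nltk.sent_tokenize(document):
    print(sent, '->', is_imperative(sent))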
H: How does the forward method get called in this pyTorch conv net? In this example network from pyTorch tutorial import torch import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() # 1 input image channel, 6 output channels, 3x3 square convolution # kernel self.conv1 = nn.Conv2d(1, 6, 3) self.conv2 = nn.Conv2d(6, 16, 3) # an affine operation: y = Wx + b self.fc1 = nn.Linear(16 * 6 * 6, 120) # 6*6 from image dimension self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): # Max pooling over a (2, 2) window x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) # If the size is a square you can only specify a single number x = F.max_pool2d(F.relu(self.conv2(x)), 2) x = x.view(-1, self.num_flat_features(x)) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x def num_flat_features(self, x): size = x.size()[1:] # all dimensions except the batch dimension num_features = 1 for s in size: num_features *= s return num_features net = Net() print(net) net = Net() input = torch.randn(1, 1, 32, 32) out = net(input) print(out) Why is the method forward() not explicitely called? I mean how does just calling net(output) calls forward() ? (which is what happens as far as I understand) By the way I dont understand what this line means: super(Net, self).__init__() I can imagine super() is calling the constructor of a parent class but …? AI: If you look at the Module implementation of pyTorch, you'll see that forward is a method called in the special method __call__ : class Module(object): ... def __call__(self, *input, **kwargs): ... result = self.forward(*input, **kwargs) As you construct a Net class by inheriting from the Module class and you override the default behavior of the __init__ constructor, you also need to explicitly call the parent's one with super(Net, self).__init__().
H: Layer shape computation in convolutional neural net (pyTorch) How can you know the expected input size (image input size (tensor size)), for example for this network (cf. pyTorch tutorial example ): import torch import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() # 1 input image channel, 6 output channels, 3x3 square convolution # kernel self.conv1 = nn.Conv2d(1, 6, 3) self.conv2 = nn.Conv2d(6, 16, 3) # an affine operation: y = Wx + b self.fc1 = nn.Linear(16 * 6 * 6, 120) # 6*6 from image dimension self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): # Max pooling over a (2, 2) window x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) # If the size is a square you can only specify a single number x = F.max_pool2d(F.relu(self.conv2(x)), 2) x = x.view(-1, self.num_flat_features(x)) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x def num_flat_features(self, x): size = x.size()[1:] # all dimensions except the batch dimension num_features = 1 for s in size: num_features *= s return num_features net = Net() print(net) Since it is nowhere explicitely stated. Moreover the comment self.fc1 = nn.Linear(16 * 6 * 6, 120) # 6*6 from image dimension is unclear. What does this shape: (16 * 6 * 6, 120) have to do with image size (e.g. 32x32 as claimed by the authors of the tutorial) ? I cannot find a way, by looking at the code, to know what input size is expected by the net? AI: Well, with conv layers in pyTorch, you don't need to specify the input size except the number of channels/depth. However, you need to specify it for fully connected layers. So, when defining the input dimension of the first linear layer, you have to know what is the size of the images you feed. You can find information on the output size calculation of conv layers and pooling layers here and here or here If you feed images of size 32x32, the outputs layer by layer of this model are : conv1 : $6$ feature maps of size $\left \lfloor{\frac{32 + 2\times0 -1\times(3-1)-1}{1}+1}\right \rfloor = 30$ max_pool2d: $6$ feature maps of size $\left \lfloor{\frac{30 + 2\times0 -1\times(2-1)-1}{2}+1}\right \rfloor = 15$ conv2 : $16$ feature maps of size $\left \lfloor{\frac{15+ 2\times0 -1\times(3-1)-1}{1}+1}\right \rfloor = 13$ max_pool2d: $16$ feature maps of size $\left \lfloor{\frac{13+ 2\times0 -1\times(2-1)-1}{2}+1}\right \rfloor = 6$ Therefore, for the flattened size of the output before the first linear layer is $16\times6\times6$ : self.fc1 = nn.Linear(16 * 6 * 6, 120) By doing all sizes calculations in reverse, you could have find that the input size must be $32\times32$.
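A small helper that reproduces those calculations, so you can check which input size a given stack of layers expects (a sketch of the standard formula above, with stride 1 and no padding for the conv layers, as in this model):

def out_size(n, kernel, stride=1, padding=0):
    return (n + 2 * padding - kernel) // stride + 1

n = 32                          # candidate input height/width
n = out_size(n, 3)              # conv1, 3x3        -> 30
n = out_size(n, 2, stride=2)    # max pool, 2x2     -> 15
n = out_size(n, 3)              # conv2, 3x3        -> 13
n = out_size(n, 2, stride=2)    # max pool, 2x2     -> 6
print(16 * n * n)               # 576 = 16 * 6 * 6, matching nn.Linear(16 * 6 * 6, 120)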
H: Subtract Rows of Matrix from rows of another matrix numpy I have two matrix V_r of shape(19, 300) and vecs of shape(100000, 300). I would like to subtract rows of V_r from from rows of vecs. Currently, I am achieving this with the following code. Is there a way to do it using broadcasting? list=[] for v in V_r: a=np.linalg.norm(vecs-v,axis=1) list.append(a) M=np.vstack(list) AI: This is what you're looking for : np.linalg.norm(vecs-V_r[:,np.newaxis], axis = 2)
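To see why this works, here is the shape bookkeeping spelled out on smaller random arrays; the result matches the loop-and-vstack version:

import numpy as np

V_r = np.random.rand(19, 300)
vecs = np.random.rand(1000, 300)            # smaller than 100000, just for the demo

diff = vecs - V_r[:, np.newaxis]            # (19, 1, 300) broadcast against (1000, 300) -> (19, 1000, 300)
M_broadcast = np.linalg.norm(diff, axis=2)  # (19, 1000)

M_loop = np.vstack([np.linalg.norm(vecs - v, axis=1) for v in V_r])
print(np.allclose(M_broadcast, M_loop))     # True

Note that the intermediate array has shape (19, 1000, 300); with the full 100000 rows this costs a few gigabytes of memory, so the loop may still be preferable if memory is tight.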
H: Why my Keras CNN model isn't learning My project have to decide if a image is 'pdr' or 'nonPdr', and I have 391 images (22 of PDR class, and the 369 of nonPdr).. In my first model i was trying this: https://stackoverflow.com/questions/57663233/my-keras-cnn-return-the-same-output-value-how-can-i-fix-improve-my-code .. and my return was always the same... Now I made some changes in my model file: TRAIN_DIR = 'train_data/' #TEST_DIR = 'test_data/' def ReadImages(Path): LabelList = list() ImageCV = list() # Get all subdirectories FolderList = os.listdir(Path) # Loop over each directory for File in FolderList: if(os.path.isdir(os.path.join(Path, File))): for Image in os.listdir(os.path.join(Path, File)): # Convert the path into a file ImageCV.append(cv2.imread(os.path.join(Path, File) + os.path.sep + Image)) # Add a label for each image and remove the file extension classes = ["nonPdr", "pdr"] LabelList.append(classes.index(os.path.splitext(File)[0])) else: ImageCV.append(cv2.imread(os.path.join(Path, File) + os.path.sep + Image)) # Add a label for each image and remove the file extension classes = ["nonPdr", "pdr"] LabelList.append(classes.index(os.path.splitext(File)[0])) return ImageCV, LabelList model = Sequential() model.add(Conv2D(64, kernel_size=(3,3), padding="same",activation="relu", input_shape=(605,700,3))) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(128, kernel_size=(4,4), padding="same",activation="relu")) model.add(MaxPooling2D((2, 2))) model.add(Flatten()) model.add(Dense(2, activation='sigmoid')) model.compile(optimizer='RMSprop', loss='binary_crossentropy', metrics=['accuracy']) data, labels = ReadImages(TRAIN_DIR) data = np.array(data, dtype="float") / 255.0 le = LabelEncoder() labels = le.fit_transform(labels) labels = np_utils.to_categorical(labels, 2) model.fit(data, labels, epochs=8, batch_size=20) model.save('model.h5') ... but running this code give me a Loss = 8.0 and a acc = 0.50 What can I do? I appreciate any answer.. UPDATE I forgot that I reduce my train imgs to 20/20 AI: It seems that you have an output size of 2 in your final layer, while you should rather have size 1 (because of your sigmoid output and binary cross entropy loss). Also, don’t use the to_categorical transformation as you only have two classes so no need to one-hot encode. Try to change this and see if training improves.
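In code, the suggested change would look roughly like this (a sketch, assuming the rest of the script from the question stays the same; labels remain a plain 0/1 vector and to_categorical is dropped):

model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))    # single sigmoid unit for the binary decision

model.compile(optimizer='RMSprop', loss='binary_crossentropy', metrics=['accuracy'])

le = LabelEncoder()
labels = le.fit_transform(labels)            # 0 = nonPdr, 1 = pdr; no np_utils.to_categorical
model.fit(data, labels, epochs=8, batch_size=20)

With only about 20 images per class, data augmentation and/or a pretrained backbone will likely matter even more than these details, though.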
H: Sanity check: low PPV but high AUC scores? I have two algorithms running on a piece of data, both of which perform differently. One of them (call it A) consistently gets a positive predictive value of about 0.75-0.78. Looking at the AUC of the receiver operating characteristic, it has a score of about 0.82. The second algorithm (let's call it B) consistently gets a lower positive predictive value of about 0.72-0.75. However, it gets a higher AUC of the receiver operating characteristic, of about 0.85. Does this definitively indicate an error of some kind, as AUC is so tightly associated with the positive predictive value? Or is this entirely reasonable, subject to other factors? AI: The ROC AUC is not affected by PPV (precision); it depends on recall (true positive rate) and fall-out (false positive rate). As per your results, model A most likely achieves its better PPV at the cost of a lower recall, hence the lower ROC AUC. Conversely, model B achieves a better recall, hence the higher ROC AUC, at the cost of a lower PPV. So this is entirely reasonable given the definitions of the two metrics.
H: What do we learn from training a dataset for logistic regression What do we learn from training our dataset in Logistic Regression? Like in Linear Regression, with the help of the training set we are able to generate a best fit line (y = mx+c) where m and c come from training our dataset. Similarly, once we train our model for logistic regression, what is it that the model learns which is then used to predict a class for a particular input? AI: I hope this is what you are looking for: In linear regression, the parameters (e.g. theta1, theta2, etc.) are learned and then used to predict a real value (as it is a regression problem). In the case of logistic regression (which is used for classification purposes), the same kind of parameters are learned, but a different activation function is applied, the sigmoid, whose output lies in the range 0 to 1 (and can be read as the probability of an outcome). So, based on your requirement, if the result of the sigmoid (or any other activation function) crosses a certain threshold value, which can be anywhere between 0 and 1, then the input is classified as one class, otherwise as the other class. For example, if after multiplying all the parameters with the input values and applying the activation function to the result, the output value is 0.7, which is, let's say, greater than our threshold value (e.g. 0.5), then it is classified as YES, otherwise NO.
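A tiny sketch of what the learned quantities look like in practice: after fitting, the model is just the coefficients and the intercept, and a prediction is the sigmoid of their dot product with the input:

import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = LogisticRegression().fit(X, y)
print(clf.coef_, clf.intercept_)           # these are what training "learns"

x_new = np.array([[3.5]])
z = x_new @ clf.coef_.T + clf.intercept_
print(1 / (1 + np.exp(-z)))                # identical to clf.predict_proba(x_new)[:, 1]
print(clf.predict(x_new))                  # class label after applying the 0.5 threshold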
H: Explained definition of the norm in Ordinary Least Squares I have recently started learning Scikit-learn and I am not able to understand the below equation. Could anybody please explain? AI: The norm indicates the Euclidean norm, which gives the ordinary distance between points. https://en.wikipedia.org/wiki/Norm_(mathematics) The subscript 2 on the norm sign indicates the 2-norm, i.e. the "square root of the sum of squares". The superscript 2 indicates that this norm is then squared.
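For reference, the ordinary least squares objective as written in scikit-learn's linear model documentation (presumably the equation in question) is $\min_w \|Xw - y\|_2^2$, and expanding the squared 2-norm gives $\|Xw - y\|_2^2 = \sum_i \left(x_i^\top w - y_i\right)^2$, i.e. the familiar sum of squared residuals: the square root from the norm and the outer square cancel each other.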
H: Normal distribution and QQ plot I have data that I plotted with a normal distribution and a QQ plot. Especially in the QQ plot, it seems that 95.4% of the data is normally distributed. My question is: what do the points beyond 2 sigmas in the QQ plot mean? Should I remove them, or do I need to transform this variable to make it more normally distributed? AI: The normal distribution is a theoretical model of data. Empirical data can be distributed more similarly or more dissimilarly to a normal distribution. That empirical data has a couple of notable divergences from a theoretical normal distribution: Presence of outliers Not symmetric Relative "peakedness" Depending on your goal, you can pick a better model or transform the data to fit the normal distribution model. The power-law distribution might be a better model for the data. If you want to transform the data to better fit a normal distribution, you can drop the outliers and then apply a log transformation.
H: Are DBSCAN and dbscan from the sklearn.cluster package different? I'm new to DBSCAN. I was looking at a few examples online and came across a few instances where the following lines were used while importing the dbscan module: from sklearn.cluster.dbscan_ import DBSCAN from sklearn.cluster.dbscan_ import dbscan I would like to know if there is anything different between them? Or is it necessary for me to import both DBSCAN and dbscan? Here's one link where both lines are used in the import: https://gemfury.com/stream/python:scikit-learn/-/content/cluster/tests/test_dbscan.py AI: Such questions are easily answered if you check the source code yourself. https://github.com/scikit-learn/scikit-learn/blob/1495f69242646d239d89a5713982946b8ffcf9d9/sklearn/cluster/dbscan_.py#L350 One is an object-oriented wrapper with a more common API, the other the underlying function that does the actual work. Since the wrapper just calls the function, the results are not substantially different, only the presentation. Hence, usually you won't need both.
H: How to create dictionaries out of pandas dataframes? I import three csv-files with characteristics of countries (rows) and years (columns): country_data_m = 'country_data_m.csv' m_year = pd.read_csv(country_data_m, nrows=161, index_col=0, header=0, sep=';', na_values=[""]) country_data_e = 'country_data_e.csv' e_year = pd.read_csv(country_data_e, nrows=161, index_col=0, header=0, sep=';', na_values=[""]) country_data_i = 'country_data_i.csv' i_year = pd.read_csv(country_data_i, nrows=161, index_col=0, header=0, sep=';', na_values=[""]) The datasets look like this: 1995 1996 1997 \ Afghanistan NaN NaN NaN Albania NaN NaN NaN Angola 5.538749e+09 7.526447e+09 7.648377e+09 Antigua and Barbuda 5.772807e+08 6.337306e+08 6.806171e+08 1995 1996 1997 1998 1999 \ Afghanistan NaN NaN NaN NaN NaN Albania NaN NaN NaN NaN NaN Angola 0.8565 0.8369 0.8173 0.7976 0.7777 Antigua and Barbuda 0.6957 0.6352 0.6513 0.6401 0.6171 1995 1996 1997 1998 1999 \ Afghanistan NaN NaN NaN NaN NaN Albania NaN NaN NaN NaN NaN Angola 0.0612 0.0626 0.0641 0.0655 0.0670 Antigua and Barbuda 0.1852 0.2264 0.2147 0.2147 0.2030 For each country, I need a dictionary, where the key is the year, and where the values are the variables from the three different datasets. So far, I tried this code: afghanistan = {m_year.loc["Afghanistan", (year)],e_year.loc["Afghanistan", (year)], i_year.loc["Afghanistan", (year)]): year for year in range(1995, 2017)} albania = {m_year.loc["Albania", (year)],e_year.loc["Albania", (year)], i_year.loc["Albania", (year)]): year for year in range(1995, 2017)} ... zimbabwe = {m_year.loc["Zimbabwe", (year)],e_year.loc["Zimbabwe", (year)], i_year.loc["Zimbabwe", (year)]): year for year in range(1995, 2017)} However, the code cannot find the year in the dataframes and gives me the following error: TypeError: cannot do label indexing on <class 'pandas.core.indexes.base.Index'> with these indexers [1995] of <class 'int'> Any help would be very much appreciated! AI: I have one solution to your problem. please be careful with the names of the columns for every step. 
You may have different ones: Concat all transposed data frames z=pd.concat([m_year.T,i_year.T,e_year.T]) Then melt them: z=z.reset_index().melt(id_vars='index') print(z.head()) index country value 0 1995 Afghanistan nan 1 1996 Afghanistan nan 2 1997 Afghanistan nan 3 1995 Afghanistan nan 4 1996 Afghanistan nan then do this: z=pd.DataFrame(z.groupby(['country','index'])['value'].apply(lambda x: [i for i in x])).reset_index() print(z.head) country index value 0 Afghanistan 1995 [nan, nan, nan] 1 Afghanistan 1996 [nan, nan, nan] 2 Afghanistan 1997 [nan, nan, nan] 3 Afghanistan 1998 [nan, nan] 4 Afghanistan 1999 [nan, nan] Fianly, a for loop: final_dict={} for i in z.country.unique(): final_dict[i]=z[z['country']==i].drop('country',axis=1).set_index('index').T.to_dict() it will return this: {'Afghanistan': {1995: {'value': [nan, nan, nan]}, 1996: {'value': [nan, nan, nan]}, 1997: {'value': [nan, nan, nan]}, 1998: {'value': [nan, nan]}, 1999: {'value': [nan, nan]}}, 'Albania': {1995: {'value': [nan, nan, nan]}, 1996: {'value': [nan, nan, nan]}, 1997: {'value': [nan, nan, nan]}, 1998: {'value': [nan, nan]}, 1999: {'value': [nan, nan]}}, 'Angola': {1995: {'value': [5538749000.0, 0.0612, 0.8565]}, 1996: {'value': [7526447000.0, 0.0626, 0.8369]}, 1997: {'value': [7648377000.0, 0.0641, 0.8173]}, 1998: {'value': [0.0655, 0.7976]}, 1999: {'value': [0.067, 0.7777]}}} And then with list comprehetion we can transform it into your final dictionary : final={i:{i:j['value'] for i,j in final_dict[i].items()} for i in final_dict.keys()} final {'Afghanistan': {1995: [nan, nan, nan], 1996: [nan, nan, nan], 1997: [nan, nan, nan], 1998: [nan, nan], 1999: [nan, nan]}, 'Albania': {1995: [nan, nan, nan], 1996: [nan, nan, nan], 1997: [nan, nan, nan], 1998: [nan, nan], 1999: [nan, nan]}, 'Angola': {1995: [5538749000.0, 0.0612, 0.8565], 1996: [7526447000.0, 0.0626, 0.8369], 1997: [7648377000.0, 0.0641, 0.8173], 1998: [0.0655, 0.7976], 1999: [0.067, 0.7777]}}
H: Interpreting a curve val_loss and loss in keras after training a model I am having trouble understanding the curve val_loss and loss in keras after training my model. Can anyone help me understand it? Also, is my model overfitting or underfitting? AI: The loss curve shows what the model is trying to reduce. The training procedure tried to achieve the lowest loss possible. The loss is calculated using the number of training examples that the models gets right, versus the ones it gets wrong. Or how close it gets to the right answer for regression problems. The loss curves are going smoothly down, meaning your model improves as it is training, which is good. Your test loss is slightly higher than your training loss, meaning your model is slightly overfitting the training data, but that’s inevitable, it doesn’t seem problematic. All seems ok from this plot. Now your model is getting an accuracy of 30% or so. Unless you tell us what the model is doing and how you define accuracy, there’s no way of knowing if that’s ok or not.
H: Combining 'class_weight' with SMOTE This might sound a weird question, but I could not find enough details in sklearn documentation about 'class_weight'. Can we first oversample the dataset using SMOTE and then call the classifier with the 'class_weight' option? As my testing set is highly imbalanced, I want to penalize misclassifications for minority classes. Thank you! AI: I tried different classifiers using a combination of SMOTE and class_weight, the results are almost the same as using only the SMOTE approach, and this new config made almost no difference (which could be expected, following the logic behind the class_weigh approach). PS: I have a pretty large dataset with multiple classes. This might result in different performance in different contexts.
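For reference, combining the two is straightforward with imbalanced-learn's pipeline (a sketch; X and y are assumed to be your prepared features and labels, and — as noted above — whether the combination helps over SMOTE alone is something to verify empirically):

from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

pipe = Pipeline([
    ('smote', SMOTE(random_state=42)),                         # oversampling applied only to the training folds
    ('clf', RandomForestClassifier(class_weight='balanced')),  # extra penalty on minority-class mistakes
])

scores = cross_val_score(pipe, X, y, cv=5, scoring='f1_macro')
print(scores.mean())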
H: Calculate coefficient w*
I'm learning ML from Bishop's book, but I don't know how I should calculate w* in the picture below.

AI: I just looked into the book. It is an example based on the data presented in Fig 1.4. NumPy is a good package for deriving the sum-of-squares fit to the data; this is an optimization problem. Look at the notes section of numpy.polynomial.polynomial.polyfit. I think the exact data points are not given (you could digitize them from the figure!). The book is showing qualitative behavior to build intuition.
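As a small illustration of that least-squares fit, here is a sketch on synthetic data of the same flavour as the book's example (noisy samples of sin(2*pi*x)), since the exact points are not available:
import numpy as np
from numpy.polynomial import polynomial as P

# synthetic stand-in for the book's data: noisy samples of sin(2*pi*x)
rng = np.random.RandomState(0)
x = np.linspace(0, 1, 10)
t = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.shape)

M = 3                                  # polynomial order
w_star = P.polyfit(x, t, M)            # coefficients minimizing the sum-of-squares error E(w)
print(w_star)

# evaluate the fitted polynomial y(x, w*) on a fine grid
x_grid = np.linspace(0, 1, 100)
y_fit = P.polyval(x_grid, w_star)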
H: How to change a Pandas dataframe into feature vector?
I have a Pandas dataframe with 10 columns, 9 of which are features to be used to predict the 10th column. How is it possible to convert this Pandas dataframe into X and y vectors to use in a linear regression problem?

AI: If you have your dataframe loaded as the variable df, you can simply use
X = df[['A','B','C']]
y = df['Z']
where A, B and C are your independent variables and Z is your dependent variable.
H: How to create correlation matrix but with only part of the rows? I would like to have correlation matrix like this one, but with only 3 bottom rows but all the columns. How can I generate it? corr = corrdata.corr() sns.heatmap(corr, mask=np.zeros_like(corr), annot=True, cmap=sns.diverging_palette(220, 10, as_cmap=True), square=True, ax=ax, fmt='.2f' , ) AI: You can use a slice of the correlation matrix dataframe. import pandas as pd import seaborn as sns df = pd.DataFrame({'A':[0,1,2,3,],'B':[4,5,6,7],'C':[8,9,10,11]}) sns.heatmap(df.iloc[2:3, 1:3], annot=True, cmap=sns.diverging_palette(220, 10, as_cmap=True),square=True, )
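If you want the bottom three rows of the correlation matrix itself (with all columns), you can also slice corr directly before passing it to the heatmap; a sketch, assuming corr is the full correlation matrix from corrdata.corr():
import seaborn as sns

corr = corrdata.corr()
bottom = corr.iloc[-3:, :]          # last 3 rows, every column
sns.heatmap(bottom, annot=True,
            cmap=sns.diverging_palette(220, 10, as_cmap=True),
            fmt='.2f')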
H: How to compare a sentence with a paragraph and get its probability in terms of correctness? This is my first post on stackoverflow network and I am dealing with a machine learning. Lets say I have a paragraph describing a rabbit and tortoise story. The story concludes that tortoise is a winner. Now I want to train ML engine based on this paragraph. If I provide a statement as "Rabbit won the race", how can it show me the correctness of my statement in terms of probability of correctness? Need help. AI: This is a complex problem related to Natural Language Understanding (NLU). The key part in such a system is certainly textual entailment, but it could also use techniques such as Question Answering and summarization. I'm not aware of any direct model or tool to carry out this task exactly, so I think you will have to study the literature in these fields.
H: ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all() Firstly I have a pandas series of recommended product (recmd_prdt_list). In this series there is a possibility of presence of deleted products. So as to remove deleted products from the recommended products, I did the following : recmd_prdt_list = user_lookup['Recommended items'] recmd_prdt_list 0 PLV08, PLPD04, PBC07, 555, PLF02, 963, PLF07, ... 1 123, 345, R922, Asus009, AIMAC, Th001, SAM S9,... 2 LGRFG, LG, 1025, COFMH, 8048, BY7, PLHL4, 569,... 3 COFMH, 5454, 8048, 1025, LG, len123, Th001, PL... 4 LGRFG, AIM-Pro, 569, Asus009, PLHL3, PL04, PLH... 5 PLV08, PLF09, PLF02, PBC04, PLF07, AIM-Pro, PL... type(recmd_prdt_list) pandas.core.series.Series DataFrame of product status product_status ItemCode Status DeletedStatus AIMAC 2 True AIM-Pro 2 True SAM S9 2 True SH MV 2 True COFMH 2 True LGRFG 2 True type(product_status) pandas.core.frame.DataFrame first_row = user_lookup['Recommended items'][0] first_row 'PLV08, PLPD04, PBC07, 555, PLF02, 963, PLF07, HG8, jealous21, 4' type(first_row) str Converting the str to list first_row_list = list(first_row .split(",")) first_row_list ['PLV08', ' PLPD04', ' PBC07', ' 555', ' PLF02', ' 963', ' PLF07', ' HG8', ' jealous21', ' 4'] From the first row i took first itemcode to check the deleted status : product_details = product_status.loc[product_status['ItemCode'] == 'PLV08'] product_details ItemCode Status DeletedStatus PLV08 2 False type(product_details) pandas.core.frame.DataFrame product_details['DeletedStatus'] 693 False Name: DeletedStatus, dtype: bool So as to check the deleted status of each product in the respective row and save to a new list. I wrote the following code : itemcode = 'PLV08' activ_product = [] if itemcode in product_status['ItemCode'].values: print(itemcode) product_details = product_status.loc[product_status['ItemCode'] == itemcode] print(product_details) if product_details['Status'] == 2 & product_details['DeletedStatus'] == 'False': activ_product.append(itemcode) Error : PLV08 ClientId ItemCode Status DeletedStatus 499 2213 PLV08 2 False --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-35-9507e1ada5f7> in <module>() 5 product_details = product_status.loc[product_status['ItemCode'] == itemcode] 6 print(product_details) ----> 7 if product_details['Status'] == 2 & product_details['DeletedStatus'] == 'False': 8 activ_product.append(itemcode) ~/.virtualenvs/sysg_python3/lib/python3.5/site-packages/pandas/core/generic.py in __nonzero__(self) 951 raise ValueError("The truth value of a {0} is ambiguous. " 952 "Use a.empty, a.bool(), a.item(), a.any() or a.all()." --> 953 .format(self.__class__.__name__)) 954 955 __bool__ = __nonzero__ ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). How to get solve of this error? AI: First of all, to make logical test in Python, you should not use & for a single values equalities (see this) and you should not use question marks around boolean values False and True. Now, concerning you specific error : When writing product_details['Status'] and product_details['DeletedStatus'] you get each time a Series, which you cannot test for a logical and between them. 
Also note that the comparison should be made against the boolean False, not the string 'False'. If your item codes are unique, you can use:
if product_details.iloc[0]['Status'] == 2 and product_details.iloc[0]['DeletedStatus'] == False:
    activ_product.append(itemcode)
It simply selects the first row of product_details and subsets the desired column, so that the result is a single value you can compare.
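As an alternative, here is a vectorized sketch that filters all recommended codes of a row at once (it assumes DeletedStatus really holds booleans, as shown in your product_status table):
# keep only the active, non-deleted item codes from one row of recommendations
codes = [c.strip() for c in first_row.split(",")]
mask = (product_status['ItemCode'].isin(codes)
        & (product_status['Status'] == 2)
        & (~product_status['DeletedStatus']))
activ_product = product_status.loc[mask, 'ItemCode'].tolist()
print(activ_product)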
H: Cross validation while preserving a column (not the target) distribution
I'm doing cross validation and then predicting on a test set (a hold-out set) with a model trained on all the data. My hold-out set has the same ratio on a particular column as the train set (it seems that is how the test set was generated: a function sampled it and tried to preserve the ratio of the target classes and of that particular column). My local CV score is a bit lower than my score on the test set, and I think the problem stems from the fact that I'm using stratification only for 'y'. Can the lack of stratification on that feature be the reason the CV and test scores aren't really close? And if so, how can I perform stratification on both the target and a feature? Thanks.
Edit: I'm already doing stratification on the target since my data is imbalanced.

AI: One idea would be to combine the two columns (one predictor and the target) and then stratify using the combined column. Example: say for some observations, the target and the column take the following values: target = [0,1,0] and column = [A,A,B]. Then the combined column could look something like [A0,A1,B0] and could be used for stratification. I'm obviously assuming that the predictor column is categorical - you might have to take a different approach (e.g. binning) for a continuous variable.
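A minimal sketch of that idea with scikit-learn, where 'group_col' is a placeholder for your categorical feature (bin it first, e.g. with pd.qcut, if it is continuous) and X, y are assumed to be a pandas DataFrame and Series:
from sklearn.model_selection import StratifiedKFold

# combined label: feature value + target value
strat_labels = X['group_col'].astype(str) + '_' + y.astype(str)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for train_idx, val_idx in skf.split(X, strat_labels):
    X_tr, X_val = X.iloc[train_idx], X.iloc[val_idx]
    y_tr, y_val = y.iloc[train_idx], y.iloc[val_idx]
    # fit and evaluate the model on each fold here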
H: Conditional entropy calculation in python, H(Y|X)
Input
X: A numpy array whose size gives the number of instances. X contains each instance's attribute value.
Y: A numpy array which contains each instance's corresponding target label.
Output: Conditional Entropy
Can you please help me code the conditional entropy calculation dynamically? It will further be subtracted from the total entropy of the given population to find the information gain. I tried something like the code example below, but the only input data I have are the two numpy arrays. Can you please help me correct this?
[code]
def gain(data, attr, target_attr):
    val_freq = {}
    subset_entropy = 0.0
    for record in data:
        if (val_freq.has_key(record[attr])):
            val_freq[record[attr]] += 1.0
        else:
            val_freq[record[attr]] = 1.0
    for val in val_freq.keys():
        val_prob = val_freq[val] / sum(val_freq.values())
        data_subset = [record for record in data if record[attr] == val]
        conditional_entropy += val_prob * entropy(data_subset, target_attr)

AI: The formula for conditional entropy is:
$H(X|Y)=\sum_{v \in values(Y)} P(Y=v)\, H(X|Y=v)$
for X given Y. Mutual information of X and Y:
$I(X,Y)=H(X)-H(X|Y)=H(Y)-H(Y|X)$
I assume you already know the formula for H(X), the entropy. For more information I would suggest: http://www.cs.cmu.edu/~venkatg/teaching/ITCS-spr2013/notes/lect-jan17.pdf
Once you know these formulas, the coding part shouldn't be that hard. Python takes care of most things for you: log(X), when X is a matrix, simply takes the log of every element, and for the sum you can use an iterative approach or np.sum(). If you have code, consider posting it so we can review it and tell you what is wrong, what is right, and how to improve it.
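To make this concrete, here is a sketch of H(Y|X) and information gain computed directly from two numpy arrays; the function names are mine, not from the original post:
import numpy as np

def entropy(labels):
    # H(L) for a 1-D array of discrete labels
    _, counts = np.unique(labels, return_counts=True)
    probs = counts / counts.sum()
    return -np.sum(probs * np.log2(probs))

def conditional_entropy(X, Y):
    # H(Y|X) = sum over values v of X: P(X=v) * H(Y | X=v)
    values, counts = np.unique(X, return_counts=True)
    probs = counts / counts.sum()
    return sum(p * entropy(Y[X == v]) for v, p in zip(values, probs))

def information_gain(X, Y):
    # IG = H(Y) - H(Y|X)
    return entropy(Y) - conditional_entropy(X, Y)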
H: Which classifier performs better when using 'class_weight'?
I have used the 'class_weight' method to balance my multi-class classification problem, using Logistic Regression, Random Forest, and XGBoost classifiers. Among these three methods, logistic regression's performance on the minority classes is substantially higher than that of the other two models. Could someone please explain why LR beats the decision-tree-based classifiers in this scenario? Thank you.

AI: It seems I found some reasonable justification for this behavior!
class_weight in the LogisticRegression approach is applied to sample_weight, which is used in a few internal functions such as _logistic_loss_and_grad, _logistic_loss, etc.:
# Logistic loss is the negative of the log of the logistic function.
out = -np.sum(sample_weight * log_logistic(yz)) + .5 * alpha * np.dot(w, w)
Note the sample_weight factor inside the sum: the weights multiply the per-sample loss directly.
Likewise, in decision-tree based approaches like RandomForest and XGBoost, the class_weight is applied to the gini or entropy impurity function, where it affects both the numerator and the denominator, so it has less impact on the purity function!
Source code
H: Can I arbitrarily eliminate 20% of my training data if doing so significantly improves model accuracy?
My dataset contains 2000 records with 125 meaningful fields, 5 of which follow a highly skewed lognormal distribution. I've found that if I eliminate all records below some threshold of this lognormal behavior (by combining the fields together and then filtering at the Nth percentile), my model improves in accuracy from ~78% to ~86%, using a highly tuned random forests classifier. This filter is only applied after splitting my data into train and test (which is done after SMOTE). What makes this particularly odd is that the filter improves results across multiple sampling methods. Is this filtering acceptable behavior? Why might it be resulting in better predictions?

AI: One flaw in your procedure is the use of SMOTE before splitting into train/test. This should be avoided because you may end up with synthetic examples in the test data whose generation depends on the training data and which will be very close to that data in your feature space (SMOTE interpolates using Euclidean distance). Moreover, if most of the minority data belongs to the non-skewed region of those specific variables, these points will also be oversampled, so this reduction of the variable space will produce an overly optimistic performance estimate that does not reflect the real distribution of the data.
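The safer ordering is to split first and oversample only the training part; a sketch, assuming the imbalanced-learn package:
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# split on the original data first, stratifying on the class labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# oversample only the training set; the test set stays untouched
X_train_res, y_train_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

# any filtering/thresholding should likewise be defined on the training data
# and then applied identically to the test data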
H: How to create a parquet file from a query to a mysql table
I'm updating a legacy ~ETL; at its core it exports some tables of the prod DB to S3, and the export contains a query. The export process generates a CSV file using the following logic:
res = sh.sed(
    sh.mysql(
        '-u', settings_dict['USER'],
        '--password={0}'.format(settings_dict['PASSWORD']),
        '-D', settings_dict['NAME'],
        '-h', settings_dict['HOST'],
        '--port={0}'.format(settings_dict['PORT']),
        '--batch', '--quick', '--max_allowed_packet=512M',
        '-e', '{0}'.format(query)
    ),
    r's/"/\\"/g;s/\t/","/g;s/^/"/;s/$/"/;s/\n//g',
    _out=filename
)
The mid-term solution with more traction is AWS Glue, but if I could have a similar function that generates parquet files instead of CSV files there would be much-needed short-term gains.

AI: It seems there is no direct way to do it other than through Spark / PySpark; as long as that holds true, the answer is on SO: https://stackoverflow.com/questions/27718382/how-to-work-with-mysql-and-apache-spark
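If pulling the result set through Python is acceptable, a pandas-based sketch can also do it without Spark; the connection string and chunk size below are placeholders, and pyarrow (or fastparquet) must be installed for to_parquet:
import pandas as pd
from sqlalchemy import create_engine

# hypothetical connection string - fill in the real credentials/settings
engine = create_engine("mysql+pymysql://user:password@host:3306/dbname")

# stream the query in chunks so large tables do not have to fit in memory
for i, chunk in enumerate(pd.read_sql(query, engine, chunksize=500000)):
    chunk.to_parquet("export_part_{}.parquet".format(i), index=False)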
H: General equation for getting an idea of the scale of a machine learning project
I'm writing an application for a project where we intend to teach a model to predict one aspect of an environment (traffic safety) using a database with 10 images (about 300x300px and, say, 256 colors) for each of either 100 000 or 15 million locations. I must get a grasp of whether both, one or none of these projects are feasible with our hardware constraints. What can I expect? Is there some formula or benchmark one can refer to? Will one be able to do this on a laptop with a decent GPU, a dedicated ML computer, or does it require the level of infrastructure that Google and Amazon use?

AI: Interesting but difficult question! It depends on the efficiency of your algorithm, both in terms of training and scoring/predicting, but to get a first idea I would go by the amount of data that we're talking about. 256 colors are 8 bits (1 byte) per pixel, times 300x300 = 90,000 pixels. Uncompressed, that's about 90 kB per image. 10 images per location: roughly 0.9 MB of data per location. 100k locations: about 90 GB of uncompressed input data; 15M locations: about 13.5 TB. If you compress your data, meaning you store the images as JPG or something similar, I would expect you to need about a factor 10 less storage (it depends on how easily the images can be compressed and how well JPG compression works). Given unlimited time (and storage), any amount of data can be crunched on a laptop, although my laptop doesn't have 13 TB of storage. I would expect 90 GB of uncompressed data to already be painful to iterate over on a single computer, and 13.5 TB to be impractical, unless you have an exceptionally efficient algorithm. That would be an algorithm specifically designed so that it can be trained on all the data on a single computer, not a CNN with a bunch of layers, which I expect is what you have in mind. Using distributed computing in the cloud or on-premise has its cost too, in terms of infrastructure cost, complexity to debug, etc. But I would expect that with this kind of dataset, it's worth it, at least for the larger case. That's something different from "the level of infrastructure that Google uses", which is millions of servers. I would start processing with, for example, 100 cores, and see how it goes. Don't forget to turn them off when you're done!
H: Why models performs better If normalize test data and train data separately? Many threads (and courses) such as this and this one suggest that you should apply normalization to the test data using the parameters used in the training set. But other some discussions I've found like this one and this one that suggest that applying the normalization to the test set is not really required and it might depends on many factors such as the model used for training or the nature of the test data. Now, personally, I am more inclined towards applying the normalization on test data as well. But the problem is this: I am working on a neural network model where: If I apply normalization using the recommended way I get 79% accuracy, (and to be honest it's not interesting for me) If apply normalization on training and testing in a separate way, I get really good results 85% (and sometimes more) and the further steps I try to do next work better as well. So, I don't know what my neural network performs better on test unseen data if I use the second method. I really I want continue using the second method for this particular model, but I don't feel good about it and feels like it's wrong or cheating. Now, I have one last argument. The last link I provided, have one answer that says this: "..This is all dependent on size of data sets & whether both train and test are equally representative of the domain you are trying to model. If you have thousands of data points and the test set is fully representative of the training set (hard to prove) then either method will be fine..." The dataset I use is a refined version of its predecessor (NSL-KDD dataset). The authors said "There is no duplicate records in the proposed test sets" and that they have removed any redundant values. So I feel, this dataset is uniform and the test set actually representative according to the authors. So can I use the second approach? Ps: Sorry if this is long, it's a research ethics thing. I will follow the approach you guys recommend. AI: If apply normalization on training and testing in a separate way, I get really good results 85% (and sometimes more) and the further steps I try to do next work better as well. The problem with applying normalization across instances on the test set separately is that the test set represents any new data. So in principle the model should be able to give a prediction for a single instance independently from any other instances, in which case there is no set of instances to obtain the mean/std dev from. More importantly, the prediction of the model for a given instance should always be the same. Normalizing on the test set breaches this principle, because it makes the prediction for a particular instance depends on the other instances in the test set. I don't think "separate normalization" is unethical strictly speaking, because it doesn't imply using any of the test data at training stage (whereas normalizing before splitting the train/test sets would). However it's theoretically incorrect for the reasons I mentioned above. The fact that you obtain such a big difference in performance by normalizing "separately" points to a very different distribution of the data between training and test set (or a bug somewhere along the process). I'd suggest investigating that, maybe there's some error in the data?
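The standard pattern looks like this: the scaler's statistics come from the training set only and are then reused, unchanged, for any test data or single new instance (a sketch with scikit-learn):
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler().fit(X_train)    # mean/std estimated on training data only
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)  # same parameters reused, never refit on the test set

# a single new instance gets exactly the same transformation,
# so the model's prediction for it never depends on other test instances
x_new_scaled = scaler.transform(x_new.reshape(1, -1))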
H: My CNN model Accuracy doesn't increase (high loss and low acc) Well, I need to do a CNN to classify if a Image is from one or another class. But my model return high losses (6.~8.) and low accuracies (0.50 on max). I tried to include more layers, change my activation functions, and nothing works. My database is 142 .jpg imgs (71 for each class) This is my code: OLD CODE def ReadImages(Path): LabelList = list() ImageCV = list() classes = ["nonPdr", "pdr"] # Get all subdirectories FolderList = [f for f in os.listdir(Path) if not f.startswith('.')] print(FolderList) # Loop over each directory for File in FolderList: for index, Image in enumerate(os.listdir(os.path.join(Path, File))): # Convert the path into a file ImageCV.append(cv2.resize(cv2.imread(os.path.join(Path, File) + os.path.sep + Image), (600,700))) LabelList.append(classes.index(os.path.splitext(File)[0])) return ImageCV, LabelList model = Sequential() model.add(Conv2D(64, kernel_size=(3,3), padding="same",activation="relu", input_shape=(700,600,3))) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(128, kernel_size=(4,4), padding="same",activation="relu")) model.add(MaxPooling2D((2, 2))) model.add(Flatten()) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer='RMSprop', loss='binary_crossentropy', metrics=['accuracy']) data, labels = ReadImages(TRAIN_DIR) model.fit(np.array(data), np.array(labels), epochs=10, batch_size=20) model.save('model.h5') What can I do to improve my model? I appreciate your help! UPDATE I tried to do what Shubham Panchal said but isn't resolve the problem: THINGS THAT I TRIED - Reduce Img size - lr=0.0001 - optimizers: adam, sgd, rmsprop - put more layers - put dropout layer - normalize the data with np.array(data) / 255.0 - Increase the data (1400 total, 700 each class) My code: model = Sequential() model.add(Conv2D(64, kernel_size=(3,3), padding="same",activation="relu", input_shape=(150,150,3))) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(64, kernel_size=(3,3), padding="same",activation="relu")) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(128, kernel_size=(3,3), padding="same",activation="relu")) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(128, kernel_size=(3,3), padding="same",activation="relu")) model.add(MaxPooling2D((2, 2))) model.add(Flatten()) model.add(Dense(1, activation='softmax')) opt = SGD(lr=0.0001, momentum=0.9) model.compile(optimizer = opt, loss="binary_crossentropy", metrics=['accuracy']) #model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) data, labels = ReadImages(TRAIN_DIR) model.fit(np.array(data) / 255.0, np.array(labels), epochs=10, batch_size=16) My console: Epoch 1/10 1400/1400 [==============================] - 58s 42ms/step - loss: 7.9712 - acc: 0.5000 Epoch 2/10 1400/1400 [==============================] - 59s 42ms/step - loss: 7.9712 - acc: 0.5000 Epoch 3/10 1400/1400 [==============================] - 59s 42ms/step - loss: 7.9712 - acc: 0.5000 ... Anyone have any ideia what can I do?? AI: Here are some hacks which you can use to improve then model. The dataset seems to be inadequate. Try image augmentation. Image augmentation basically applies to different transformations to your images. Like the rotation, scale, color, whitening etc. are changed. It helps the model to generalise better on the image. See here. Use softmax classification function. Instead of sigmoid, try softmax activation function since you are working on a classification task. Sigmoid is mostly reserved for binary classification tasks. 
model.add(Dense(2, activation='softmax'))
(Note that softmax needs at least two output units and a categorical loss such as sparse_categorical_crossentropy; with a single Dense(1) unit, softmax always outputs 1.0 and the accuracy gets stuck at 0.5, so if you keep Dense(1), stay with sigmoid and binary_crossentropy.)
Add more convolution layers. Consider adding more Conv2D layers to the model because the image size is quite large. The more layers, the better the feature extraction. With too few layers, your model cannot extract the smaller features that may be required for proper classification.
Tips:
Try the adam optimizer instead of rmsprop.
Restrict the kernel size to (3, 3).
Use a smaller batch size. Your batch size is 20 for 142 images, which makes only ~7 batches. Lower it to a number like 6 or 10.
Use Dropout layers in between the Dense layers.
A smaller learning rate always helps, e.g. 0.001 or 0.0001.
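Since the dataset is small, the augmentation suggestion above can be sketched with Keras' ImageDataGenerator; this assumes the usual one-subfolder-per-class layout under TRAIN_DIR (nonPdr/, pdr/), which matches the folder structure in the question:
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1./255,           # normalize pixel values
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
)

train_gen = train_datagen.flow_from_directory(
    TRAIN_DIR,
    target_size=(150, 150),
    batch_size=16,
    class_mode='binary',      # 0/1 labels, matching a Dense(1, activation='sigmoid') head
)

model.fit_generator(train_gen, steps_per_epoch=len(train_gen), epochs=10)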
H: Evaluating pairwise distances between the output of a tf.keras.model I am trying to create a custom loss function in tensorflow. I am using tensorflow v2.0.rc0 for running the code. Following is the code and the function min_dist_loss computes the pairwise loss between the output of the neural network. Here's the code def min_dist_loss(_, y_pred): distances = [] for i in range(0, 16): for j in range(i + 1, 16): distances.append(tf.linalg.norm(y_pred[i] - y_pred[j])) return -tf.reduce_min(distances) and the module is being initialized and compiled as follows import tensorflow as tf from tensorboard.plugins.hparams import api as hp HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([6, 7])) HP_OPTIMIZER = hp.HParam('optimizer', hp.Discrete(['adam', 'sgd'])) METRIC_ACCURACY = 'accuracy' with tf.summary.create_file_writer('logs\hparam_tuning').as_default(): hp.hparams_config( hparams=[HP_NUM_UNITS, HP_OPTIMIZER], metrics=[hp.Metric(METRIC_ACCURACY, display_name='Accuracy')] ) def train_test_model(logdir, hparams): weight1 = np.random.normal(loc=0.0, scale=0.01, size=[4, hparams[HP_NUM_UNITS]]) init1 = tf.constant_initializer(weight1) weight2 = np.random.normal(loc=0.0, scale=0.01, size=[hparams[HP_NUM_UNITS], 7]) init2 = tf.constant_initializer(weight2) model = tf.keras.models.Sequential([ # tf.keras.layers.Flatten(), tf.keras.layers.Dense(hparams[HP_NUM_UNITS], activation=tf.nn.sigmoid, kernel_initializer=init1), tf.keras.layers.Dense(7, activation=tf.nn.sigmoid, kernel_initializer=init2) if hparams[HP_NUM_UNITS] == 6 else None, ]) model.compile( optimizer=hparams[HP_OPTIMIZER], loss=min_dist_loss, # metrics=['accuracy'], ) x_train = [list(k) for k in itertools.product([0, 1], repeat=4)] shuffle(x_train) x_train = 2 * np.array(x_train) - 1 model.fit( x_train, epochs=1, batch_size=16, callbacks=[ tf.keras.callbacks.TensorBoard(logdir), hp.KerasCallback(logdir, hparams) ], ) Now since the tensor object y_pred in the min_dist_loss is an object of shape [?, 7], indexing with i is throwing the following error: Traceback (most recent call last): File "/home/pc/Documents/user/code/keras_tensorflow/src/try1.py", line 95, in <module> run('logs\hparam_tuning' + run_name, hparams) File "/home/pc/Documents/user/code/keras_tensorflow/src/try1.py", line 78, in run accuracy = train_test_model(run_dir, hparams) File "/home/pc/Documents/user/code/keras_tensorflow/src/try1.py", line 66, in train_test_model hp.KerasCallback(logdir, hparams) File "/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 734, in fit use_multiprocessing=use_multiprocessing) File "/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 324, in fit total_epochs=epochs) File "/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 123, in run_one_epoch batch_outs = execution_function(iterator) File "/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 86, in execution_function distributed_function(input_fn)) File "/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py", line 427, in __call__ self._initialize(args, kwds, add_initializers_to=initializer_map) File 
"/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py", line 370, in _initialize *args, **kwds)) File "/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 1847, in _get_concrete_function_internal_garbage_collected graph_function, _, _ = self._maybe_define_function(args, kwargs) File "/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 2147, in _maybe_define_function graph_function = self._create_graph_function(args, kwargs) File "/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 2038, in _create_graph_function capture_by_value=self._capture_by_value), File "/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/framework/func_graph.py", line 915, in func_graph_from_py_func func_outputs = python_func(*func_args, **func_kwargs) File "/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py", line 320, in wrapped_fn return weak_wrapped_fn().__wrapped__(*args, **kwds) File "/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 73, in distributed_function per_replica_function, args=(model, x, y, sample_weights)) File "/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/distribute/distribute_lib.py", line 760, in experimental_run_v2 return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) File "/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/distribute/distribute_lib.py", line 1787, in call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) File "/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/distribute/distribute_lib.py", line 2132, in _call_for_each_replica return fn(*args, **kwargs) File "/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/autograph/impl/api.py", line 292, in wrapper return func(*args, **kwargs) File "/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 264, in train_on_batch output_loss_metrics=model._output_loss_metrics) File "/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_eager.py", line 311, in train_on_batch output_loss_metrics=output_loss_metrics)) File "/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_eager.py", line 252, in _process_single_batch training=training)) File "/home/pc/Documents/user/code/keras_tensorflow/venv/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_eager.py", line 166, in _model_loss per_sample_losses = loss_fn.call(targets[i], outs[i]) IndexError: list index out of range How do I compute the minimum distance in this setting? Any help is appreciated. Also, if there are any errors in other parts of the code, please feel free to point it out. I am new to using keras on tensorflow. AI: Keras is expecting you to provide the true labels as well. 
Since you're defining your own loss function and you're not using the true labels, you can pass any labels like np.arange(16). Change your model.fit as below and it should work model.fit( x_train, np.arange(x_train.shape[0]), epochs=1, batch_size=16, callbacks=[ tf.keras.callbacks.TensorBoard(logdir), hp.KerasCallback(logdir, hparams) ], )
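Separately, the Python-level loop with hard-coded indices 0..15 in min_dist_loss can be fragile if the last batch is smaller than 16; here is a vectorized sketch of the same loss that works for any batch size (my formulation, not from the original post):
import tensorflow as tf

def min_dist_loss(_, y_pred):
    # pairwise squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq_norms = tf.reduce_sum(tf.square(y_pred), axis=1, keepdims=True)  # shape (batch, 1)
    sq_dists = sq_norms - 2.0 * tf.matmul(y_pred, y_pred, transpose_b=True) + tf.transpose(sq_norms)
    # push the diagonal (distance of each point to itself) out of the minimum
    batch_size = tf.shape(y_pred)[0]
    masked = sq_dists + tf.eye(batch_size) * 1e9
    return -tf.sqrt(tf.reduce_min(masked) + 1e-12)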
H: ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (142, 1) I'm trying to pass all my images (71 for each class) from folder 'train' to model.fit. The method ReadImages get these images and resize them (because are too big 4288x2848).... But when i run my code throws this error: ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (142, 1) This is my code: def ReadImages(Path): LabelList = list() ImageCV = list() classes = ["nonPdr", "pdr"] # Get all subdirectories FolderList = os.listdir(Path) # Loop over each directory for File in FolderList: if(os.path.isdir(os.path.join(Path, File))): for index, Image in enumerate(os.listdir(os.path.join(Path, File))): # Convert the path into a file ImageCV.append(cv2.imread(os.path.join(Path, File) + os.path.sep + Image)) ImageCV[index] = cv2.resize(ImageCV[index], (700, 600)) # Add a label for each image and remove the file extension LabelList.append(classes.index(os.path.splitext(File)[0])) else: ImageCV.append(cv2.imread(os.path.join(Path, File) + os.path.sep + Image)) # Add a label for each image and remove the file extension LabelList.append(classes.index(os.path.splitext(File)[0])) return ImageCV, LabelList model = Sequential() model.add(Conv2D(64, kernel_size=(3,3), padding="same",activation="relu", input_shape=(605,700,3))) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(128, kernel_size=(4,4), padding="same",activation="relu")) model.add(MaxPooling2D((2, 2))) model.add(Flatten()) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer='RMSprop', loss='binary_crossentropy', metrics=['accuracy']) data, labels = ReadImages(TRAIN_DIR) print(data[0]) model.fit(np.array(data), np.array(labels), epochs=10, batch_size=20) model.save('model.h5') detail: when I print (data[0]) it return: [0 0 0] [0 0 0] ... [1 0 0] [2 2 2] [0 0 0]] [[3 0 0] [3 0 0] [3 0 0] ... [2 0 0] [2 2 2] [0 0 0]] [[8 0 0] [8 0 0] [8 0 0] ... [2 0 0] [0 0 0] [0 0 0]] ... [[0 0 0] [0 0 0] [0 0 0] ... [0 0 0] [0 0 0] [0 0 0]] [[0 0 0] [0 0 0] [0 0 0] ... [0 0 0] [0 0 0] [0 0 0]] [[0 0 0] [0 0 0] [0 0 0] ... [0 0 0] [0 0 0] [0 0 0]]] What should I do to fix it? I appreciate any help AI: def ReadImages(Path): LabelList = list() ImageCV = list() classes = ["nonPdr", "pdr"] # Get all subdirectories FolderList = [f for f in os.listdir(Path) if not f.startswith('.')] print(FolderList) # Loop over each directory for File in FolderList: for index, Image in enumerate(os.listdir(os.path.join(Path, File))): # Convert the path into a file ImageCV.append(cv2.resize(cv2.imread(os.path.join(Path, File) + os.path.sep + Image), (600,700))) LabelList.append(classes.index(os.path.splitext(File)[0])) return ImageCV, LabelList A bit of change to your code after discussion in the chat room.
H: Precision Vs Recall Curve analysis
I have the following averaged precision-recall curves for 4 models. Which one is the best?

AI: It depends. The problem you are trying to solve decides which among the curves is the best.
For example, if you are trying to solve a problem like identifying cancer where the cost of a false positive is extremely high, the first blue plot (model 3) is the best. However, you are compromising on recall, i.e. the number of positive examples captured. It means that not all patients who have cancer will be identified as having cancer by model 3, but whoever the model says has cancer is most likely to have it, compared with all the other models plotted.
If your objective is a simple marketing campaign where you want most responders to be targeted but you are fine contacting people who are not likely to respond, choose model 2, which has the maximum recall. When targeting a non-responder is also expensive, as in a direct mail campaign, model 4 is the best because it balances both precision and recall.
Having said this, the response rate and sample size also need to be taken into account before deciding which model is good: if figures like 94% and 87% are based on just 2 correct samples (a very low response rate), they might not mean much on their own.
H: How to implement Classification and Anomaly detection (C++)
I am creating a system using C++ (DX11) and I'm reading raw data into my program. I want to classify what the 3D dataset I'm reading in is, and detect any anomalies it may have when compared to a database of the same types of item. I've not really done much machine learning aside from simple K-means and linear regression solutions. What would be the best approach to implementing something like classification and anomaly detection in my program, and what background reading/research might be required? Currently, the program can display the raw data as a 3D image.

AI: You can try using an SVM if your dataset is medium-sized or small (500-10000 samples). You train the model with two classes, normal or anomaly (your output would be the probability of belonging to one class or the other). A great library in C++ is libSVM (https://www.csie.ntu.edu.tw/~cjlin/libsvm/), which allows you to perform classification or regression tasks. There is a great section of Dr Ng's Coursera ML class that focuses on anomaly detection. I would expect your classes to be imbalanced (you have many more normal cases than anomalies), so you need to think about how to deal with this issue, and also about how you choose the probability threshold to classify data as an anomaly or not. There are other choices of course: K-means can be used plus some rule, or another classifier. I would suggest you watch the Andrew Ng class, try his method (probability density estimation), and once you get the hang of it, try another model (such as an SVM).
H: Efficient environment for machine or deep learning in Python
I am getting very frustrated working with either Google Colab or Azure notebooks (they are very slow and glitchy). Usually, I work with Jupyter notebooks to perform Machine Learning or Deep Learning tasks in Python. Does anyone have any recommendations for a truly well-performing alternative? It does not matter if it is free or not.

AI: Try Kaggle: they have kernels with GPUs (free, but with a limited runtime of about 9 hours, I think).
Google Cloud is currently beta testing Jupyter notebooks on their infrastructure. You do not need to spawn your own compute instance, or even start a Docker container: you have your Jupyter kernel in your browser in a matter of minutes (https://cloud.google.com/ai-platform-notebooks/). Their runtime is unlimited, but if your connection is glitchy the notebook might disconnect and you can lose your data. As a rule of thumb, if your runtime is several hours, it is better to prototype on a subset in a notebook and then train on your full dataset from the command line.
H: Regression Trees - Splitting and decision rules
I understand that a regression tree is built by splitting a node such that the MSE of the label/output variable is minimized in each of the two resulting nodes. I have two questions about this:
1.) Does the search for the optimal split even depend on the input variables? The MSE could be minimized through an exhaustive search for two subsets which minimize the MSE of the label in each subset; no knowledge of the input variables is needed for that. If this is the case, how are the decision rules set for upcoming instances for which an output should be predicted? How is it decided which feature to split, and at what point, in order to obtain the two subsets?
2.) Or does the algorithm run through all possible splits (a split at each value of each feature) and then choose the one with the minimal MSE? This way the decision rule would be clear.

AI: Short answer:
1. Yes, feature variables are needed for the split. No, the best 2 subsets for MSE reduction are not created directly.
2. Yes.
Long answer: Decision trees are greedy algorithms that, at each node, choose one feature and one cut point and use them to divide the data. So, as you mentioned in point 2, the tree starts with all y values in the first node, iterates over all candidate cut points for each feature, and selects the best feature/value split to divide the data into 2 groups, which are then split further. The termination conditions are typically the number of observations in a leaf node and/or a threshold on the decrease in MSE achieved by the split.
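A bare-bones sketch of that exhaustive split search (illustrative only; real implementations such as CART are heavily optimized):
import numpy as np

def best_split(X, y):
    """Greedy search over every feature and every candidate threshold,
    minimizing the weighted MSE of the two resulting child nodes."""
    best_feature, best_threshold, best_score = None, None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            # var() is the MSE around each child's mean prediction
            score = (len(left) * left.var() + len(right) * right.var()) / len(y)
            if score < best_score:
                best_feature, best_threshold, best_score = j, t, score
    return best_feature, best_threshold, best_score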
H: How to choose the model parameters (RandomizedSearchCV, GridSearchCV) or manually
Faced with the task of selecting parameters for the lightgbm model, the question accordingly arises: what is the best way to select them? I used the RandomizedSearchCV method; within 10 hours the parameters were selected, but there was no sense in it, the accuracy was the same as when entering the parameters manually at random. The meaning of the parameters is more or less clear - which ones are responsible for overfitting, which ones for the accuracy and speed of training - but it's not entirely clear whether to select them manually one at a time, in pairs, or even more at once. Below is an example of how I implemented the selection of parameters:
SEED = 4
NFOLDS = 2
kf = KFold(n_splits= NFOLDS, shuffle=False)

parameters = {
    'num_leaves': np.arange(100,500,100),
    'min_child_weight': np.arange(0.01,1,0.01),
    'feature_fraction': np.arange(0.1,0.4,0.01),
    'bagging_fraction':np.arange(0.3,0.5,0.01),
    'min_data_in_leaf': np.arange(100,1500,10),
    'objective': ['binary'],
    'max_depth': [-1],
    'learning_rate':np.arange(0.001,0.02,0.001),
    "boosting_type": ['gbdt'],
    "bagging_seed": np.arange(10,42,5),
    "metric": ['auc'],
    "verbosity": [1],
    'reg_alpha': np.arange(0.3,1,0.2),
    'reg_lambda': np.arange(0.37,0.39,0.001),
    'random_state': [425],
    'n_estimators': [500]}

model = lightgbm.LGBMClassifier()
RSCV = RandomizedSearchCV(model,parameters,scoring='roc_auc',cv=kf.split(train),n_iter=30,verbose=50)
RSCV.fit(train,label)

AI: Thanks for the clarification. You can tune the parameters one or two at a time by re-instantiating the RSCV object each time, passing a different parameter dictionary each time. For example:
SEED = 4
NFOLDS = 2
kf = KFold(n_splits= NFOLDS, shuffle=False)

parameters = {
    'num_leaves': np.arange(100,500,100),
    'min_child_weight': np.arange(0.01,1,0.01),
}
model = lightgbm.LGBMClassifier()
RSCV = RandomizedSearchCV(model,parameters,scoring='roc_auc',cv=kf.split(train),n_iter=30,verbose=50)
RSCV.fit(train,label)

parameters = {
    'feature_fraction': np.arange(0.1,0.4,0.01),
    'bagging_fraction':np.arange(0.3,0.5,0.01),
    'min_data_in_leaf': np.arange(100,1500,10),
}
RSCV = RandomizedSearchCV(RSCV.best_estimator_,parameters,scoring='roc_auc',cv=kf.split(train),n_iter=30,verbose=50)
RSCV.fit(train,label)

By passing RSCV.best_estimator_ instead of model the second time, it will automatically use the best values for num_leaves and min_child_weight that it identified in the first run and effectively "freeze" those as they are, finding the best combination of feature_fraction, bagging_fraction and min_data_in_leaf under those constraints.
My understanding, however, is that the best approach is to do as you are currently doing and simply include all the parameters in one search. I generally set n_estimators to some lowish value (200 or so) and learning_rate to some reasonably high value whilst doing the tuning, and then beef them up later. This will generally drastically reduce the time taken to perform the tuning. It does necessitate a refit afterwards to take account of the new values.
model = lightgbm.LGBMClassifier(n_estimators=100, learning_rate=0.2)
params = {
    'num_leaves':np.arange(100, 500 ,100)
    ...
}
RSCV = RandomizedSearchCV(model, params, scoring='roc_auc', cv=kf.split(train), n_iter=30, verbose=50)
RSCV.fit(train, label)

model = RSCV.best_estimator_
model.set_params(n_estimators=1000, learning_rate=0.001)
model.fit(train, label)

The literature on Randomised Search is usually quoted as being this paper, which I recommend reading: http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf.
One key takeaway is the conclusion that roughly 60 trials are needed to find the optimal region of the parameter space with reasonably high probability, so I'd recommend increasing the n_iter parameter of RSCV to 60.
Finally, the use of np.arange() results in each of your lists being sampled uniformly: for example, the tuning algorithm will select values for feature_fraction between 0.1 and 0.4 with equal likelihood. You can also use SciPy's distribution functions to define other distributions, normal and log-normal for example, to target the search on a particular area (for instance, you could make the optimiser more likely to try values closer to 0 than values closer to 1 for the feature_fraction parameter). Hope all that helps.
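As a small sketch of that last point, scipy.stats distributions can be passed directly in the parameter dictionary (the ranges here are illustrative, reusing model, kf, train and label from above):
from scipy.stats import randint, uniform

parameters = {
    'num_leaves': randint(100, 500),            # uniform over integers [100, 500)
    'min_data_in_leaf': randint(100, 1500),
    'learning_rate': uniform(0.001, 0.019),     # uniform on [0.001, 0.02)
    'feature_fraction': uniform(0.1, 0.3),      # uniform on [0.1, 0.4)
}

RSCV = RandomizedSearchCV(model, parameters, scoring='roc_auc',
                          cv=kf.split(train), n_iter=60, verbose=50)
RSCV.fit(train, label)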
H: What is the accuracy majority class classifier?
I have an SFrame and a model:
train_data,test_data = products.random_split(.8, seed=0)
selected_words_model = graphlab.logistic_classifier.create(train_data, target='sentiment', features=selected_words, validation_set=test_data)
After computing the accuracy of the model with selected_words_model.evaluate(test_data), I'm asked "What is the accuracy majority class classifier on this task?" Yet I don't even know what "accuracy majority class classifier" means - shouldn't it be "accuracy of the majority class classifier"? Here is my attempt. All these materials come from this Coursera ML foundations course exercise.

AI: I suspect you are right that there is a missing "of the," and that the "majority class classifier" is the classifier that predicts the majority class for every input. Such a classifier is useful as a baseline model, and is particularly important when using accuracy as your metric. This matches what your notebook comments in the next bullet, so that's likely what was intended.
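Conceptually, the baseline is just the frequency of the most common class in the evaluation data; a small sketch of the idea, assuming the sentiment column can be pulled out as a 1-D array (the exact SFrame calls may differ):
import numpy as np

y_test = np.asarray(test_data['sentiment'])      # true labels of the test set
classes, counts = np.unique(y_test, return_counts=True)
majority_class = classes[np.argmax(counts)]

baseline_accuracy = np.mean(y_test == majority_class)
print(majority_class, baseline_accuracy)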
H: AWS : Workflow for deep learning
I am using my company computer since I don't have another one, or Linux. Therefore, I am starting to use cloud resources to perform some tasks. I have a very simple question: since most cloud resources don't have a GUI, how can I perform simple checks, e.g. visualizing the bounding boxes my algorithm has found on a picture? How do people performing such tasks usually accomplish this? Is there an easy fix, or do I have to go through the installation of a GUI on the cloud instance? Or do you usually just download the results and view them locally?

AI: My simple suggestion is to install Python via Anaconda on the Linux machine from the command line, since Jupyter Notebook gets installed with the Anaconda package. Then run the command
jupyter notebook
Now you can connect to the notebook server running on the virtual machine, using the token generated by the above command, from a browser on any other machine.
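For the specific bounding-box check, another option is to render the boxes headlessly and save the result to an image file that you then download or open through the notebook; a sketch with matplotlib, where the image path and box coordinates are placeholders:
import matplotlib
matplotlib.use("Agg")                      # no display/GUI needed on the cloud instance
import matplotlib.pyplot as plt
import matplotlib.patches as patches

img = plt.imread("sample.jpg")             # hypothetical input image
boxes = [(50, 80, 120, 60)]                # hypothetical detections as (x, y, width, height)

fig, ax = plt.subplots()
ax.imshow(img)
for x, y, w, h in boxes:
    ax.add_patch(patches.Rectangle((x, y), w, h, fill=False, edgecolor="red", linewidth=2))
fig.savefig("detections.png", dpi=150)     # copy this file locally (e.g. with scp) or open it in Jupyter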
H: Do timesteps must have the same temporal distance in training a RNN? I have a recurrent neural network with LSTM units that I want to train with batches of 6 timesteps. Each timestep is a record of a dataset and represents the temporal aggregation over 5 minutes of data taken every 10 seconds. Unfortunately, the dataset has temporal interruptions, so sometimes two successive records in the dataset can be temporally separated by 10 minutes to a few weeks. I was wondering if the timesteps I give to the network in each batch must all be separated by the same temporal distance, that is in my case they must all be within 5 minutes of each other or it is sufficient that they are subsequent and therefore I could also give the network for example a batch with 4 records at a distance of 5 minutes from each other and then 2 taken for example two days later. UPDATED: The data comes from an electronic system so the time interruptions are due to the device being turned off. Rarely it is turned off only for a few minutes while most of the time it remains off for longer periods, like hours or sometimes days. Among other elements, the device includes a battery, therefore many of the features present in the dataset records are temperature, current and voltage readings which change value during temporal interruptions (little when turned off for a few minutes and significantly when turned off for hours or days). The goal of the network is to predict the value of some features two steps ahead. AI: As always in ML modelling problems: it depends. The critical factor here is that you are predicting based on properties of a sequence. The sequence does not need to be sampled at fixed time steps, or even be time based at all. E.g. in natural language processing the sequence of letters or words is only very loosely associated with the timing of the same items when spoken or read. The sequence does need to be drawn from a self-consistent source where moving from one item to next item in the sequence has the same meaning with respect to what you are trying to predict. If the interruptions you see would cause large differences in the expected distribution of your input vector X or output Y, then you may not be able to use them as-is. You may still be able to rescue them as input data for a RNN, and make predictions based on them, if you add whether or not there has been an interruption - and if it might be relevant to your purpose, for how long - as additional features of input items in X. Another option is to follow the time sequence strictly but add placeholder value of X (with a different flag set) for "unkown", maybe along with some rough guess for the missing features (or even a ML prediction from a different model), which you might do if the quantity being aggregated over is still active and important where there are missing records, but the aggregation service you are using as input has failed. Depending on how much data you have available, and whether you need to make predictions at times when your aggregation service has failed, you could do anything from discarding incomplete data to ignoring the time differences, to interpolating missing records. The first question you need to answer is "what does it mean for a sequence item to be missing": If the whole system that provided the aggregation data is switched off, and there is no concept of pending items that will enter the system the moment it is switched on, then you can probably safely ignore the missing records and ignore the precise timing. 
If your prediction target is sensitive to elapsed time, and you need to make predictions even when there have been interruptions to the aggregation system, then you will want to augment the sequence data somehow to account for the missing records, and train/predict with that added feature engineering.
If only the aggregation is switched off (or has a chance of failure causing gaps in the record), and you don't need to run any predictions when it is not available, then you may be able to make the rule that the prediction system only functions when there are 6 recent consecutive records available, and discard the data that is causing you to worry. This makes sense if interruptions are rare or the predictive model is not critical.
If you are not sure which of these to do, then the next approach is to try all of the options that seem intuitively correct, tune each model separately, and see how each approach performs on your test set.
H: What's the principal difference between ANN,RNN,DNN and CNN? I'm newer to deep learning domain. I would like to know what is the principal difference between RNN,ANN,DNN and CNN? How to implement those neural networks using the TensorFlow library? AI: Welcome to DS StackExchange. I'll go through your list: ANN (Artificial Neural Network): it's a very broad term that encompasses any form of Deep Learning model. All the others you listed are some forms of ANN. ANNs can be either shallow or deep. They are called shallow when they have only one hidden layer (i.e. one layer between input and output). They are called deep when hidden layers are more than one (what people implement most of the time). This is where the expression DNN (Deep Neural Network) comes. CNN (Convolutional Neural Network): they are designed specifically for computer vision (they are sometimes applied elsewhere though). Their name come from convolutional layers: they are different from standard (dense) layers of canonical ANNs, and they have been invented to receive and process pixel data. RNN (Recurrent Neural Network): they are the "time series version" of ANNs. They are meant to process sequences of data. They are at the basis of forecast models and language models. The most common kind of recurrent layers are called LSTM (Long Short Term Memory) and GRU (Gated Recurrent Units): their cells contain small, in-scale ANNs that choose how much past information they want to let flow through the model. That's how they modeled "memory". If you want to learn how to implement all these forms of ANN in TensorFlow, I suggest you this wonderful book: Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd Edition by Aurélien Géron this is the best book on ML and DL, IMHO.
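To get a first feel for how these look in code, here are three minimal tf.keras sketches (the input shapes and layer sizes are just placeholders):
import tensorflow as tf
from tensorflow.keras import layers, models

# plain (deep) ANN / DNN: stacked dense layers on tabular input
dnn = models.Sequential([
    layers.Dense(64, activation='relu', input_shape=(20,)),
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])

# CNN: convolution + pooling over image-like input
cnn = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])

# RNN: an LSTM over sequences of feature vectors, shape (timesteps, features)
rnn = models.Sequential([
    layers.LSTM(32, input_shape=(None, 8)),
    layers.Dense(1),
])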
H: Weighted Binary Cross Entropy Loss -- Keras Implementation I have a binary segmentation problem with highly imbalanced data such that there are almost 60 class zero samples for every class one sample. To address this issue, I coded a simple weighted binary cross entropy loss function in Keras with Tensorflow as the backend. def weighted_bce(y_true, y_pred): weights = (y_true * 59.) + 1. bce = K.binary_crossentropy(y_true, y_pred) weighted_bce = K.mean(bce * weights) return weighted_bce I wanted to ask if this implementation is correct because I am new to Keras/Tensorflow and the optimizer is having a hard time optimizing this. The loss goes from something like 1.5 to 0.4 and doesn't go down further. Normal binary cross entropy performs better if I train it for a long time to the point of over-fitting. Before anyone asks, I cannot use class_weight because I am training a fully convolutional network. AI: The code is correct. The reason, why normal binary cross entropy performs better, is that it doesn't penalize for mistakes on the smaller class so drastically as in weighted case. To be sure, that this approach is suitable for you, it's reasonable to evaluate f1 metrics both for the smaller and the larger classes on the validation data. It might show that performance on the smaller class becomes better. And training time can increase, because the model is forced to discriminate objects of different classes and to learn important patterns to do that.
H: Evaluate imbalanced classification model on balanced testing sample
Why would it be too optimistic to compute precision, recall and f1-score on a balanced testing sample to evaluate a model trained for imbalanced classification?

AI: The test set should represent what your model will encounter in practice when you use it to make predictions. It serves as a resource to demonstrate that what you did will actually work. If your test set does not represent what the true problem is, or will be in the future, you can't say much about how well your model works. If you test on a balanced set while you have trained on an unbalanced set (which is harder to tackle), your test metrics will likely look better or worse than what you could reasonably expect on the "real" unbalanced problem. So you cannot say much with this test about the quality of your model.
H: Where to find height dataset, or datasets in General Hi there smart people, I am new to data Science and wanted to take my first few steps. Unfortunately I struggle to find datasets or any data at all regarding my topics of interest. For example, I wanted to build a simple program that takes a person's height and predicts the likely height of his/her child. (greetings from Galton) But even for this relatively famous topic I am unable to find data. Are there any good websited or something that list a lot of datasets for specific topics? A database of datasets, so to speak. I mean, I am on the Internet, it should not be that difficult to find some data ;) AI: The Galton Height dataset seems to be exactly what you're looking for. There are a ton of repositories of open datasets online. For beginners, I often see the UCI Machine Learning Repository. It has quite a few datasets which are easy to work with. Some other resources for datasets: Kaggle - hosts data science competitions, but you can download all the datasets without participating in a competition. Data.gov - You can freely download data from US govt agencies AWS Open Data - You need a AWS account, but these datasets are free-to-download
H: How to use a trained model
I just trained my first model in Python 3.7/scikit-learn (Linear Regression) (well, I copied most of the code, but it's something ^^). Now I want to actually use the model. Specifically, it's about sons' heights in relation to their fathers'. So I now want to enter a new father height and get a prediction for his son's height. How could something like this look? I read about "pickle" to save a model and use it later, which seems awesome, but how would I use such a saved model? A simple answer, or even just a link to a tutorial, would be great. Below is a piece of "my" code just for some context.
#Spliting the data into test and train data
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=0)
#Doing a linear regression
lm=LinearRegression()
lm.fit(X_train,y_train)
#Predicting the height of Sons
y_test=lm.predict(X_test)
print(y_test)

AI: You have your model saved as the variable lm. You can use lm.predict(X_test) for any other test scenario. Note that your X_test should be similar to your X_train, meaning that if you applied transformations to X_train, you need to apply the same transformations to X_test too. You can use pickle in the following way:
import pickle
filename = 'model.pckl'
pickle.dump(lm, open(filename, 'wb'))

#To load the model from disk, use this
model = pickle.load(open(filename, 'rb'))
prediction = model.predict(X_test)
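Once the model is loaded, predicting for a brand-new father height is just a call to predict with a 2-D input of shape (1, 1); the value 70 below is only an example and should be in the same units as your training data:
new_father_height = [[70.0]]                       # one sample, one feature
predicted_son_height = model.predict(new_father_height)
print(predicted_son_height[0])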
H: How to create a plot for the accuracy of a model
I am pretty new to the whole topic, so please don't be harsh. I know these may be simple questions, but everybody has to start somewhere ^^
So I created (or rather copied) my first little model, which predicts sons' heights based on their fathers'.
#Father Data
X=data['Father'].values[:,None]
X.shape
#According sons data
y=data.iloc[:,1].values
y.shape
#Spliting the data into test and train data
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=0)
#Doing a linear regression
lm=LinearRegression()
lm.fit(X_train,y_train)
# save the model to disk
filename = 'Father_Son_Height_Model.pckl'
pickle.dump(lm, open(filename, 'wb'))
#Predicting the height of Sons
y_test=lm.predict(X_test)
print(y_test)
Now I want to create a plot that displays the accuracy of my model, or how "good" it is. Something like what is done here: https://machinelearningmastery.com/how-to-calculate-precision-recall-f1-and-more-for-deep-learning-models/
But I can't quite get it to work. This here seems like it would work, but what should I store in "model_history"?
plt.subplot(212)
plt.title('Accuracy')
plt.plot(model_history.history['acc'], label='train')
plt.plot(model_history.history['val_acc'], label='test')
plt.legend()
plt.show()
An easy-to-adapt tutorial link would already be a great help. Keras seems to be a thing, but I would like to avoid yet another library if possible and sensible.

AI: Your context is different from the one provided in the link. There, the author has built a neural network in Keras and has plotted the accuracy against the number of epochs. One epoch is when the entire dataset is passed both forward and backward through the neural network once. So he is calculating accuracy after every epoch, while the weights vary to fit the data based on the loss function. (Thus, the accuracy increases as the number of epochs increases.)
In your case, you are performing a linear regression, which fits the data and generates an equation; there is no such feedback loop over epochs. Accuracy here can be defined based on your need. In this case (predicting sons' heights based on their fathers'), you can define accuracy as how close your predictions are. Take an error function like MAE (mean absolute error): the lower the error, the more accurate your model. MAE is the accuracy measure here. For general classification tasks, accuracy is the number of instances you predicted correctly divided by the total number of instances. (In the link, the author used the default Keras accuracy metric, which is defined somewhat like this.)
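A small sketch of how to visualise this for the regression case: compute the MAE and plot predicted against actual heights (note that the snippet keeps the predictions in a separate variable y_pred instead of overwriting y_test as in the question):
import matplotlib.pyplot as plt
from sklearn.metrics import mean_absolute_error

y_pred = lm.predict(X_test)
print("MAE:", mean_absolute_error(y_test, y_pred))

plt.scatter(y_test, y_pred, alpha=0.5)
# the dashed line marks perfect predictions (predicted == actual)
plt.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], 'r--')
plt.xlabel("Actual son height")
plt.ylabel("Predicted son height")
plt.show()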
H: Data Visualisation Techniques for Multi-Labelled Data I am new to data science and am trying to figure out how to visualize my multi-labelled data using graphs. I am using a dataset to classify music by emotion based on acoustic features (such as pitch, amplitude, etc.), and some tracks have multiple emotion labels. This is a snapshot of my dataset: Please tell me about any visualization techniques for multi-label classification. I searched all over the internet, but everything I found relates to single-label classification.
AI: Are you looking for a Python or R package? Or would a tool like Tableau suffice? If you feel comfortable using a Python package, seaborn has the ability to visualize multi-label classes. Take a look at this link: https://towardsdatascience.com/journey-to-the-center-of-multi-label-classification-384c40229bff There is a good explanation there of how to get this done. If you would rather use a tool to get your data visualized, look no further than Tableau. If you wish to do a lot of machine learning, it wouldn't be the right fit; otherwise, Tableau can visualize multi-label classification data. All you have to do is plug in your dataset as a data source and navigate to whichever chart preview suits your dataset. Also, Tableau neatly sorts your attributes into dimensions and measures, sometimes erroneously might I add. In that case, all you have to do is a simple right-click to convert a field to a dimension or vice versa.
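To make the seaborn suggestion concrete, here is a small sketch of one common multi-label visualization: a co-occurrence heatmap showing how often pairs of emotion labels appear together. The column names ('happy', 'sad', 'calm', 'angry') are made-up placeholders; in practice you would use the binary indicator columns of your own label set:
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Assume df contains one binary (0/1) indicator column per emotion label
label_cols = ['happy', 'sad', 'calm', 'angry']   # hypothetical label columns
labels = df[label_cols]

# Co-occurrence matrix: entry (i, j) counts songs tagged with both label i and label j
co_occurrence = labels.T.dot(labels)

# Heatmap of the co-occurrence counts
sns.heatmap(co_occurrence, annot=True, cmap='Blues')
plt.title('Emotion label co-occurrence')
plt.show()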
H: Modelling of an environment that is stochastic in nature I have started learning reinforcement learning and have a few doubts regarding model-based and model-free methods. Is it possible to model an environment that is stochastic in nature? Is it because it is difficult to model such environments that we use model-free methods?
AI: Is it possible to model an environment that is stochastic in nature?
Yes, a model of a stochastic environment can be one of:
A distribution model, which outputs a probability distribution over next state and reward, given the current state and action. If you are reading Sutton & Barto, or similar work which uses the function $p(s',r|s,a)$, then if you can implement this function for the whole environment, you have access to a distribution model.
A sampling model, which outputs a single reward and next state, given the current state and action, with the same probability of any outcome as the real environment. If you can implement an accurate simulation of an environment, then you have a sampling model.
If you want to use approaches such as Dynamic Programming, which work with expected values, then this is much easier with a distribution model, and in that case you need an accurate model from the start (otherwise Dynamic Programming may converge to a non-optimal policy).
Is it because it is difficult to model such environments that we use model-free methods?
Not really, there is no special difficulty in modelling stochastic environments. For instance, if your environment is a dice game, it is just a matter of implementing the rules and a random number generator for the dice to create a sampling model. A distribution model is usually straightforward for basic dice rules, such as rolling a die to see how many steps you can take.
However, independently of how random the environment is, complex environments can become hard to model. Distribution models may require a lot of maths to calculate all possibilities, so sampling models (simulations) are easier. For instance, a card game is relatively easy to implement in simulation when you track the deck contents. But a distribution model for it is more complex, as you have to track what has already been played or can be figured out from other observations.
Many environments are too complex to model. For example, they may involve real-world physics but don't include enough measurements to establish the full state. For instance, when an agent flies a drone it will be affected by air turbulence but cannot directly observe it. Chaotic effects such as turbulence are very hard to model, and gaining real-world experience is likely going to be more accurate than any model based on a physics engine, no matter how hard you try to code one. Similarly, visualising the real world, or navigating human social environments, can be very hard to model accurately.
When deciding whether or not to use a model-free method, there needs to be a cost analysis. Even in complex environments you may prefer to use a model-based method:
The advantage of a model is that it allows you to safely explore without taking a real action, and in some cases it may be a lot faster to query the model than to take an action and wait for a result in the real world. In the time it takes for a real action to resolve, a computer may be able to check 10, 100, 1000 or more simulated actions from its model.
The disadvantages of a model are:
It might not be as accurate as you would like, meaning that basing decisions on it pushes the agent's policy too far from optimal.
Using a model adds complexity to the agent.
In some cases, the real environment is fast, safe and reliable enough that there is not much to gain by using a model. These cases include a lot of the toy problems used to study learning, where the environment is actually simulated (which is a form of model) but kept separate from the agent (so despite this, the agent is still technically model-free).
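As a small illustration of the difference between the two kinds of model described above, here is a sketch of a sampling model versus a distribution model for a toy environment where the agent rolls a fair six-sided die and moves forward that many steps (a made-up example, not from the original answer):
import random

def sampling_model(state, action):
    """Sampling model: return one possible (next_state, reward),
    drawn with the same probabilities as the real environment."""
    roll = random.randint(1, 6)        # simulate the die
    next_state = state + roll          # move forward by the rolled amount
    reward = 1.0                       # e.g. a fixed reward per move
    return next_state, reward

def distribution_model(state, action):
    """Distribution model: return the full distribution p(s', r | s, a)
    as a dict mapping (next_state, reward) -> probability."""
    return {(state + roll, 1.0): 1.0 / 6.0 for roll in range(1, 7)}

# Example usage
print(sampling_model(0, 'move'))       # one random outcome, e.g. (4, 1.0)
print(distribution_model(0, 'move'))   # all six outcomes, each with probability 1/6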
H: Simplest way to build a semantic analyzer I want to build a semantic analyzer, i.e., to find how similar the meanings of two sentences are. For example:
English: Birdie is washing itself in the water basin.
English Paraphrase: The bird is bathing in the sink.
Similarity Score: 5 (The two sentences are completely equivalent, as they mean the same thing.)
I have to find the similarity between the meanings of those sentences. Here is a github repo of what I want to implement: https://github.com/anantm95/Semantic-Textual-Similarity Is there any simpler approach?
AI: Is there any simpler approach? Very unlikely: semantic similarity is a very complex problem related to Natural Language Understanding (NLU). You could look at the techniques used for textual entailment, Question Answering and summarization. Simple methods exist, like the baseline system proposed in the github repo, but they don't really try to analyze the semantics.
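If a rough, off-the-shelf approximation is acceptable despite the caveat above, one commonly used shortcut is to encode each sentence with a pretrained sentence-embedding model and compare the vectors with cosine similarity. The sketch below assumes the sentence-transformers package and the 'all-MiniLM-L6-v2' model are available; the resulting cosine score would still need to be mapped onto the 0-5 scale used in the task:
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Pretrained sentence encoder (assumed to be downloadable in this environment)
model = SentenceTransformer('all-MiniLM-L6-v2')

sentences = [
    "Birdie is washing itself in the water basin.",
    "The bird is bathing in the sink.",
]

# Encode both sentences into dense vectors
embeddings = model.encode(sentences)

# Cosine similarity between the two embeddings (roughly: higher = closer in meaning)
score = cosine_similarity([embeddings[0]], [embeddings[1]])[0][0]
print(score)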
H: Validity of cross-validation for model performance estimation When applying cross-validation to estimate the performance of a predictive model, the reported performance is usually the average performance over all the validation folds. Since several models are created during this procedure, one model has to be chosen as the model which is actually used for prediction on real-world samples (e.g. in a product). I am curious whether it is really valid to report the validation performance as the estimated performance of the final (selected) model, as the performance was derived using all the other models which were created during the validation procedure but are not considered when using the final model for predictions. I would expect that the selected model's performance might deviate drastically from the average performance of all models (depending on several factors such as the algorithm used and the validation scheme). Why is cross-validation used to estimate the performance of a predictive model despite this issue (e.g. in many peer-reviewed scientific publications)? Wouldn't it always be better to conduct an additional performance evaluation with the selected model on an independent test set and report the resulting performance alongside the validation performance?
AI: Cross-validation is used to estimate the performance of a certain type of model on a specific dataset.
"one model has to be chosen as the model which is actually used for prediction on real-world samples (e.g. in a product)"
Selecting one of the models obtained during cross-validation is not appropriate, and proceeding in this way would indeed cause the problem that you mention. The correct methodology is to train the final model on the full training data after cross-validation (i.e. independently from the models trained during CV). This way the performance obtained through CV is representative of the expected performance of the final model.
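As a minimal sketch of this methodology in scikit-learn (the estimator and the 5-fold setup are placeholder assumptions), the cross-validation scores are used only for performance estimation, and the model that is actually deployed is retrained on the full training data afterwards:
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

# X_train, y_train are assumed to be the full training data
estimator = LogisticRegression(max_iter=1000)

# Step 1: estimate the expected performance of this model type via cross-validation
scores = cross_val_score(estimator, X_train, y_train, cv=5)
print("Estimated accuracy: %.3f (+/- %.3f)" % (scores.mean(), scores.std()))

# Step 2: train the final model on ALL of the training data
# (none of the five fold-specific models is kept)
final_model = estimator.fit(X_train, y_train)

# Optional: an additional evaluation on an independent held-out test set
# test_accuracy = final_model.score(X_test, y_test)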