H: What is the difference between classification and regression? I understand classification: it predicts a discrete response or category, like whether an animal is a dog or a cat. The author says "Regression techniques predict continuous changes such as the change in temperature, power demand, or stock market prices." I can't wrap my head around what he means. Thanks. AI: For the sake of illustration, let's imagine that you're trying to predict the amount of gas in the tank of your car. A classification statement of this problem would be whether you have gas in your car (yes or no). A regression statement of this problem would predict the level of gas in your car (anywhere between completely empty and completely full) and could take any value in that range. The output of a classification model is one of n options, where n is the number of classes (and/or the probability associated with each class). The output of a regression model is a (possibly bounded) continuous value.
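A minimal sketch of the two output types on made-up "fuel level" data (the feature, targets and models are placeholders chosen only to illustrate the contrast, not part of the original answer):

    # Hedged sketch: a classifier returns a discrete class (plus class probabilities),
    # a regressor returns a continuous value.
    import numpy as np
    from sklearn.linear_model import LogisticRegression, LinearRegression

    rng = np.random.RandomState(0)
    X = rng.uniform(0, 100, size=(200, 1))            # e.g. km driven since last refuel
    fuel_level = np.clip(1.0 - X[:, 0] / 100, 0, 1)   # continuous target in [0, 1]
    has_gas = (fuel_level > 0.05).astype(int)         # discrete target: yes/no

    clf = LogisticRegression(max_iter=1000).fit(X, has_gas)
    reg = LinearRegression().fit(X, fuel_level)

    x_new = np.array([[80.0]])
    print(clf.predict(x_new))         # one of n classes, here 0 or 1
    print(clf.predict_proba(x_new))   # probability associated with each class
    print(reg.predict(x_new))         # a continuous value, roughly 0.2 here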
H: Which model can solve the "sequence demand" problem? I have a regression problem. When a truck comes, it influences the demand of employees for the next 30 days. Additionally the demand depends on the type of truck (when the truck is big, we need more people). Which algorithm/model can help to predict a demand on employees on the defined day? Data looks like so (I cannot give the original data): |-----------|--------|----------------|-------|------------------|----------| |Transaction|Employee|Date_Transaction| Truck |Arrival_Date_Truck|Type_Truck| |-----------|--------|----------------|-------|------------------|----------| | 1 | A | 01.01.2010 |Truck_B| 07.12.2009 | Big | |-----------|--------|----------------|-------|------------------|----------| | 2 | B | 01.01.2010 |Truck_A| 05.12.2009 | Big | |-----------|--------|----------------|-------|------------------|----------| | 3 | A | 02.01.2010 |Truck_A| 05.12.2009 | Small | |-----------|--------|----------------|-------|------------------|----------| | 4 | C | 02.01.2010 |Truck_B| 07.12.2009 | Small | |-----------|--------|----------------|-------|------------------|----------| | 5 | A | 03.01.2010 |Truck_C| 12.12.2009 | Middle | |-----------|--------|----------------|-------|------------------|----------| | 6 | B | 03.01.2010 |Truck_C| 12.12.2009 | Middle | |-----------|--------|----------------|-------|------------------|----------| | 7 | C | 03.01.2010 |Truck_B| 07.12.2009 | Big | |-----------|--------|----------------|-------|------------------|----------| | 8 | D | 03.01.2010 |Truck_B| 07.12.2009 | Big | |-----------|--------|----------------|-------|------------------|----------| | 9 | B | 04.01.2010 |Truck_C| 12.12.2009 | Middle | |-----------|--------|----------------|-------|------------------|----------| I know, that a count of days (count of transactions), that truck needs depends on the type of truck. Furthermore I know, that the count of transactions looks like: distribution AI: Any regression algorithm can address this problem, to a greater or lesser degree of success- the important thing is in how you form your data. One reasonable form to use would be a structure with one row for each day and features describing the arrival of trucks over an appropriate timeframe- say, over the past 14 days. Each row would then also have the demand for the day associated with it as the regression target. You would then be able to train a model that predicts the demand for a given day provided the appropriate truck arrival history.
H: Which comes first: tuning the parameters or selecting the model? I've been reading about how we split our data into 3 parts; generally, we use the validation set to help us tune the parameters and the test set to get an unbiased estimate of how well our model performs, so we can compare models based on the test-set results. However, I've also read that model selection should be done before tuning the parameters. I'm getting confused: which one must be done before the other? Is the validation set used for tuning? If so, how are we supposed to do model selection before tuning the parameters? AI: You can tune parameters only if you have already trained a model; otherwise there is nothing to tune. Regarding "I've also read that model selection should be done before tuning the parameters": before tuning you need to do some kind of pre-processing. Usually your pipeline will consist of:
1. Get the data and clean it.
2. Do some EDA (exploratory data analysis) to get a better understanding of your data.
3. Do some feature engineering: roughly, if possible, transform the current data so it is more suitable for your ML algorithm.
4. Train and evaluate your model on validation data.
5. Tune your model's parameters to get better performance (fighting overfitting is also included here).
6. Evaluate on the test set.
Regarding "how are we supposed to do model selection before tuning the parameters": even without deep insights about your data, you can already choose some baseline models (it's a good rule to start with simple models like linear regression or k-NN to get a feel for the performance), because in most cases you know whether you are dealing with a regression, classification or clustering task, so you can already specify the set of models to try. But before trying a horde of ML models, you should get your data ready. Posts like this one can give you a feel for the available ML algorithms, but there are many more to try. In addition, I think this paper might be very useful if you are starting to work with ML.
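A compact sketch of that order of operations, comparing two simple baselines first and then tuning the chosen one (the dataset, models, grid values and split sizes are arbitrary choices for illustration only):

    # Hedged sketch: baseline model selection first, hyper-parameter tuning second,
    # final evaluation on an untouched test set last.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # 1) Model selection: compare simple baselines with cross-validation on the training data.
    baselines = {"logreg": LogisticRegression(max_iter=5000),
                 "knn": KNeighborsClassifier()}
    for name, model in baselines.items():
        print(name, cross_val_score(model, X_train, y_train, cv=5).mean())

    # 2) Hyper-parameter tuning of the chosen baseline (k-NN here, purely as an example).
    grid = GridSearchCV(KNeighborsClassifier(),
                        param_grid={"n_neighbors": [3, 5, 11, 21]}, cv=5)
    grid.fit(X_train, y_train)

    # 3) Unbiased estimate of the tuned model on the held-out test set.
    print(grid.best_params_, grid.score(X_test, y_test))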
H: How to make a Power Spectral Density plot in R. I have a time series point process representing neuron spikes. I have computed and plotted the autocovariance using acf, but now I need to plot the Power Spectral Density. The Power Spectral Density is defined as the Fourier transform of the autocovariance, so I have calculated this from my data, but I do not understand how to turn it into a frequency-vs-amplitude plot. I have used the following code:

    # X is some set of wait times between spikes; below is just an example
    X <- c(56, 3, 4, 119, 3, 4, 121, 3, 3, 121, 3, 4, 120, 3, 4, 4, 115)
    acf <- acf(X, type = "covariance")
    psd <- fft(acf$acf)

Now psd is a complex-valued array across the default 24 lags from the acf function. How do I turn this array into a PSD plot? AI: From further research I've discovered that the frequency is given by the index of the FFT multiplied by the sampling rate and divided by the size of the array, and the amplitude is the magnitude of the complex number. So the full code for such a plot would be as follows:

    library(ggplot2)

    # X is some set of wait times between spikes; below is just an example
    X <- c(56, 3, 4, 119, 3, 4, 121, 3, 3, 121, 3, 4, 120, 3, 4, 4, 115)
    acf <- acf(X, type = "covariance")
    ft <- fft(acf$acf)
    freq <- (1:nrow(ft)) * 1000 / nrow(ft)   # in my case we sample by ms, so 1000 Hz
    A <- (Re(ft)^2 + Im(ft)^2)^.5            # amplitude is the magnitude
    PSD <- cbind.data.frame(freq, A)
    ggplot(PSD, aes(x = freq, y = A)) + geom_line()
H: How well will a neural network perform on unusual data? I want to build a simulation based on a neural network that estimates a situation label (not a discrete value) from state values. Suppose I have data with 40 features/columns and one feature is limited to the range 25-50. The goal is the following: when simulating that environment, I need to test some states where the value of that limited feature is outside the specified range. I need to know how such a simulation would behave compared to reality. My initial thought was that the network finds the patterns between features and label, and that if I give it an unusual feature value it can still estimate the label pretty accurately. AI: I agree with the conclusion you included within your question. The inherent nature of neural networks, along with the fact that all features interact with each other and with the associated weights, will help you in cases where you get "outlying" data. It may even turn out that, with 40 features, this one particular feature does not matter much at all regardless of its value. So I think you are OK to proceed as you stated in your original question.
H: Multi-class text classification: Doc2Vec performing very badly compared to Hashing Vectorizer. I have a multi-class text classification problem, similar to product category mapping, where we map products to their correct category based on the text content provided. I first created a solution with a hashing vectorizer and an SGD classifier, which gave around ~84% accuracy. After going through a lot of online content I found that Doc2Vec is a current state-of-the-art numerical representation of a document or paragraph, so I changed my solution to use Doc2Vec for feature engineering, but the accuracy I get from it is only ~54%. Code for reading and cleansing the data:

    import logging
    import datetime
    import re
    import string
    import codecs
    import pandas as pd
    import numpy as np
    from gensim.models.doc2vec import Doc2Vec, TaggedDocument
    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.model_selection import train_test_split
    from sklearn.model_selection import cross_val_score
    from sklearn.utils.class_weight import compute_class_weight
    from sklearn.metrics import accuracy_score
    from sklearn.linear_model import SGDClassifier
    import warnings
    warnings.filterwarnings("ignore")

    # Reading the input dataset
    data_file = "Consolidated_input_dataset.txt"
    data = pd.read_csv(data_file, header=0, delimiter="\t", quoting=3, encoding="utf8")
    data = data.dropna()

    # Cleansing the input dataset: removing non-alphabetic characters
    data['cleansed_desc'] = data.COMMODITY_DESC.str.lower().str.replace('[^a-z]', ' ').str.replace('\s+', ' ')

    # Splitting into token lists for training Doc2Vec
    data['cleansed_desc_split'] = data.cleansed_desc.str.split()

    train_data, test_data, train_label, test_label = train_test_split(
        data[["cleansed_desc", "cleansed_desc_split"]], data[["Label"]],
        test_size=0.3, random_state=100, stratify=data.Label)

Hashing vectorizer:

    # Train model
    sgd_model = SGDClassifier(loss='modified_huber', n_jobs=-1, n_iter=8, random_state=42,
                              alpha=1e-06, class_weight="balanced", verbose=2)
    vectorizer = HashingVectorizer(n_features=90000, ngram_range=(1, 3))
    vectorizer.fit(train_data.cleansed_desc)
    data_features = vectorizer.transform(train_data.cleansed_desc)
    sgd_model.fit(data_features, train_label.Label)

    # Predict output
    test_features = vectorizer.transform(test_data.cleansed_desc)
    output_node1_predict = sgd_model.predict(test_features)
    print(accuracy_score(test_label.Label, output_node1_predict))

Output: 84%. Doc2Vec:

    # Creating Doc2Vec features
    data_tagged = train_data.apply(
        lambda r: TaggedDocument(words=r['cleansed_desc_split'],
                                 tags=[train_label.loc[r.name].Label]), axis=1)
    doc2vec_test = Doc2Vec(dm=0, vector_size=100, negative=5, hs=0, min_count=2,
                           sample=0, epochs=5, workers=8)
    doc2vec_test.build_vocab(data_tagged)
    doc2vec_test.train(data_tagged, total_examples=doc2vec_test.corpus_count, epochs=doc2vec_test.iter)
    train_data["Doc2Vec"] = train_data.cleansed_desc_split.apply(lambda x: doc2vec_test.infer_vector(x))
    test_data["Doc2Vec"] = test_data.cleansed_desc_split.apply(lambda x: doc2vec_test.infer_vector(x))

    # Train model
    sgd_model = SGDClassifier(loss='modified_huber', n_jobs=-1, n_iter=8, random_state=42,
                              alpha=1e-06, class_weight="balanced", verbose=2)
    sgd_model.fit(train_data.Doc2Vec.tolist(), train_label.Label)

    # Predict output
    output_node1_predict = sgd_model.predict(test_data.Doc2Vec.tolist())
    print(accuracy_score(test_label.Label, output_node1_predict))

Output: 54%. Parameters for Doc2Vec: vector size = 100, window = 10, epochs = 5, min_count = 2, negative = 5. Total number of documents = 3,450,000+. Vocabulary size = 46,000+. AI: If I read your code correctly, you only performed 5 epochs with the Doc2Vec model. This is probably not enough for the network to learn the word embeddings. Has your loss leveled out after 5 epochs? Try running it for 50 epochs and see if it makes any difference. Conceivably, you could need thousands of epochs to achieve a reasonable model.
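A minimal sketch of the suggested change, training the Doc2Vec model for more epochs (the toy corpus and parameter values below are placeholders, not the asker's data):

    # Hedged sketch: same Doc2Vec setup, but with a larger number of training epochs.
    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    corpus = [TaggedDocument(words=["red", "wine", "bottle"], tags=["Beverages"]),
              TaggedDocument(words=["cotton", "t", "shirt"], tags=["Apparel"])]

    model = Doc2Vec(dm=0, vector_size=100, negative=5, hs=0, min_count=1,
                    sample=0, workers=4, epochs=50)   # 50 instead of 5
    model.build_vocab(corpus)
    model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

    vec = model.infer_vector(["blue", "denim", "jeans"])   # feature vector for the classifier
    print(vec[:5])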
H: How to calculate a cumulative sum with groupby in Python? I am trying to calculate a cumulative sum with groupby using a pandas DataFrame. However, I don't get the expected output. My source code:

    import pandas as pd

    Employee = [['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'],
                ['CSE', 'CSE', 'EEE', 'EEE', 'CE', 'CE', 'ME', 'ME'],
                ['Cat-1', 'Cat-2', 'Cat-1', 'Cat-2', 'Cat-1', 'Cat-2', 'Cat-1', 'Cat-2']]
    index = pd.MultiIndex.from_arrays(Employee, names=['Name', 'Dept', 'Category'])
    Scale = [1, 2, 2, 3, 3, 1, 2, 3]
    Salary = [100, 200, 200, 300, 300, 100, 200, 300]
    df = pd.DataFrame({'scale': Scale, 'salary': Salary}, index=index)
    df1 = df.groupby(['Category', 'scale']).cumsum()
    print(df1)

Expected output:

    Cat-1  1  100
           2  500
           3  800
    Cat-2  1  100
           2  300
           3  900

Obtained result:

    Name  Dept  Category
    A     CSE   Cat-1       100
    B     CSE   Cat-2       200
    C     EEE   Cat-1       300
    D     EEE   Cat-2       500
    E     CE    Cat-1       600
    F     CE    Cat-2       600
    G     ME    Cat-1      1000
    H     ME    Cat-2      1200

Groupby doesn't seem to work. However, if I use sum() (i.e. df1 = df.groupby(['Category', 'scale']).sum()) instead of cumsum(), groupby works perfectly. AI: There are multiple entries for each group, so you need to aggregate the data twice; in other words, use groupby twice: once to get the sum for each group and once to calculate the cumulative sum of these sums. It can be done as follows:

    df.groupby(['Category', 'scale']).sum().groupby('Category').cumsum()

Note that the cumsum should be applied on groups partitioned by the Category column only, to get the desired result.
H: In a binary classification, should the test dataset be balanced? I have a dataset with 4519 samples labeled as "1", and 18921 samples labeled as "0" in a binary classification exercise. I am well aware that during the training phase of a classification algorithm (in this case, a Random Forest) the number of 0/1 samples should be balanced to prevent biasing the algorithm towards the majority class. However, should the test dataset be balanced as well? In other words, if train my model with 1000 random samples of "0" class, and 1000 random samples of "1" class, should I test the model with the remaining 3519 samples of "1" class, and randomly select another 3519 samples of the majority "0" class, or I can go with the remaining 17921? What is the impact of an imbalanced test dataset on the precision, recall, and overall accuracy metrics? Thanks AI: The answer to your first question: should the test dataset be balanced as well? is, like many answers in data science, "it depends." And really, it depends on the audience for, and interpretability of the model metrics, which is the thrust of your second question: What is the impact of an imbalanced test dataset on the precision, recall, and overall accuracy metrics? Personally, if the metrics will just be used by you to evaluate the model, I would use the sensitivity and specificity within each class to evaluate the model, in which case, I care less about the balance of the classes in the test data as long as I have enough of both to be representative. I can account for the prior probabilities of the classes to evaluate the performance of the model. On the other hand, if the metrics will be used to describe predictive power to a non-technical audience, say upper management, I would want to be able to discuss the overall accuracy, for which, I would want a reasonably balanced test set. That said, it sounds like your test set is drawn independently of the training data. If you are going to balance the training data set, why not draw one balanced data set from the raw data and then split the training and test data? This will give you very similar class populations in both data sets without necessarily having to do any extra work.
H: How to prefer no choice instead of a bad choice with an sklearn decision tree. I'm using sklearn decision trees to classify documents into two possible types, "type1" and "type2". I've isolated a few features that seem pertinent and tried to combine them manually to evaluate the result of a model. When classifying the documents manually I use the following outcomes: type 1, type 2, unknown. Then I give the same features to a decision tree. The results are worse in that case because it always tries to classify documents into one of the categories "type1" or "type2" but is unable to classify documents as "unknown". Is it possible to configure an sklearn decision tree so that, in cases of high uncertainty, the document won't be classified, instead of selecting a category that is likely to be wrong? AI: TL;DR: you can train your classifier as usual and threshold on the prediction probability. Scikit-learn: you won't get that out of the box, but you can use this simple class to do just that!

    from sklearn.tree import DecisionTreeClassifier
    from numpy import argmax, max

    class MyClassifier(DecisionTreeClassifier):
        unknown_class = 'unknown'

        def __init__(self, no_class_threshold=0.75, **kwargs):
            self.no_class_threshold = no_class_threshold
            super().__init__(**kwargs)

        def predict(self, X):
            preds = self.predict_proba(X)
            y_pred = [self.classes_[i] if v > self.no_class_threshold else self.unknown_class
                      for i, v in zip(argmax(preds, axis=1), max(preds, axis=1))]
            return y_pred

Just use this class as you were using the DecisionTreeClassifier before. You can pass all the parameters of DecisionTreeClassifier plus an extra one: no_class_threshold. The no_class_threshold works in the following way:

    IF prediction_probability > no_class_threshold THEN output predicted class ELSE output 'unknown'

Changing classifier: you might not want to stay with the DecisionTreeClassifier; instead, you might want to experiment with other classifiers. You can do this by inheriting from another scikit-learn class. Simple as that! For example:

    from sklearn.svm import SVC

    class MyClassifier(SVC):
        [...]

Choosing the no_class_threshold: you can either choose it manually, to a degree that you feel confident with, or use hyper-parameter tuning to learn it, if you have data points labelled as 'unknown'. Example: here is a script you can run to test all of the above.

    from numpy import argmax, max
    from sklearn.svm import SVC

    class MyClassifier(SVC):
        unknown_class = 'unknown'

        def __init__(self, no_class_threshold=0.75, **kwargs):
            self.no_class_threshold = no_class_threshold
            super().__init__(**kwargs)

        def predict(self, X):
            preds = self.predict_proba(X)
            y_pred = [self.classes_[i] if v > self.no_class_threshold else self.unknown_class
                      for i, v in zip(argmax(preds, axis=1), max(preds, axis=1))]
            return y_pred

    if __name__ == '__main__':
        X = [[1, 1, 1], [1, 0, 1], [0, 0, 0], [0, 1, 0], [0, 1, 1]]
        y = ['class1', 'class1', 'class0', 'class0', 'class0']

        clf = MyClassifier(no_class_threshold=0.55, probability=True, C=1)
        clf.fit(X, y)

        X2 = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]
        pred = clf.predict(X2)
        print(pred)
H: Can random forest algorithm provide customer churn prediction probability at each customer instead at class level? I have customer training data set from telecom industry along with its test data set containing churn values 0 & 1 for each customer. I also have customer data set whose churn value is to be predicted ie 0 & 1. It is also required to get the churn prediction probability at individual customer level so that they can be arranged in descending order of the propensity to churn For brevity, showing limited features cust_train.xls cust_id Account Length VMail Message Day Mins Eve Mins Night Mins Intl Mins CustServ Calls cust_train_output.xls cust_id churn (0/1) I want to know if it is possible to get the churn prediction probability at individual customer level & how by random forest algorithm rather than class level provided by: predict_proba(X) => Predict class probabilities for X. Goal is to arrange the customer in descending order of the propensity to churn. Alterntively, is this possible with Logistic Regression Model? Thanks AI: I want to know if it is possible to get the churn prediction probability at individual customer level & how by random forest algorithm rather than class level provided by: predict_proba(X) => Predict class probabilities for X. I think the predict_proba(X) is actually the correct method to accomplish your task. For each instance (person) the function will display the probability for each class label. So, if you know which class is churn, I'll just assume class 1, you just need to slice the results on that column. For example: from sklearn.ensemble import RandomForestClassifier from sklearn.datasets import make_classification from tabulate import tabulate X, y = make_classification(n_samples=1000, n_features=50, n_informative=2, n_redundant=0, random_state=1, shuffle=False) clf = RandomForestClassifier(n_estimators=100, max_depth=2, random_state=0) clf.fit(X, y) train_class_probability = clf.predict_proba(X) print(tabulate(train_class_probability[0:4], ['Churn', 'No Churn'])) NewData, y = make_classification(n_samples=10, n_features=50, n_informative=2, n_redundant=0, random_state=1000, shuffle=False) class_prediction = clf.predict(NewData) class_probability = clf.predict_proba(NewData) print(tabulate(class_prediction.reshape(-1, 1), ['0 - Churn / 1 - No-Churn'])) print(tabulate(class_probability, ['Churn', 'No Churn'])) # Just print the first column, Churn print(tabulate(class_probability[:, 0].reshape(-1, 1), ['Probability of Churn'])) Output will be: Class Probabilities of the Training Data (First four instances) Churn No Churn -------- ---------- 0.595298 0.404702 0.620975 0.379025 0.601251 0.398749 0.610663 0.389337 Class Prediction on New Data 0 - Churn / 1 - No-Churn -------------------------- 0 0 0 1 1 1 1 0 1 1 Class Probability Prediction on New Data Churn No Churn -------- ---------- 0.553698 0.446302 0.602109 0.397891 0.587715 0.412285 0.423982 0.576018 0.419588 0.580412 0.419984 0.580016 0.369798 0.630202 0.572373 0.427627 0.414734 0.585266 0.40422 0.59578 Probability of Churn on New Data Probability of Churn ---------------------- 0.553698 0.602109 0.587715 <- new customer #3 0.423982 <- new customer #4 0.419588 0.419984 0.369798 0.572373 0.414734 0.40422 So, the probability that "new customer #3" will churn is ~59% and the probability that "new customer #4" will churn is ~42%. Hopefully, I have understood your question. HTH
H: What are the best-suited books for a beginner to learn data science? I'm an undergraduate student studying data science. My goal is to understand the fundamentals of this field and then move on to complex and more advanced topics. I'm interested in knowing which books I should consider reading (from basic to advanced) that are best suited for this task. The books should build strong fundamental knowledge of the concepts involved and should gradually lead through to advanced concepts. (Multiple combinations of books can be suggested as well, as long as the order is preserved.) Some books I found, for your reference:
- Data Mining: The Textbook, by Charu C. Aggarwal
- The Elements of Statistical Learning, by Jerome H. Friedman, Robert Tibshirani, and Trevor Hastie
- Pattern Recognition and Machine Learning, by Christopher M. Bishop
- Machine Learning: A Probabilistic Perspective, by Kevin P. Murphy
- Computer Age Statistical Inference: Algorithms, Evidence, and Data Science, by Bradley Efron and Trevor Hastie
Links to other answers, blogs and articles are welcome too. Thanks. AI: Over the years I have realized that understanding statistics well enables you to solve a large number of business problems, so it is a very important first step. You could take the Khan Academy Probability and Statistics course: https://www.khanacademy.org/math/statistics-probability For machine learning you could read "An Introduction to Statistical Learning" by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani. It is a wonderful book: it feels almost like a novel and is beautifully written, with examples and code in R. I would also recommend Andrew Ng's course on machine learning. For deep learning, I took the CS231n course by Andrej Karpathy, but I liked this brilliant free online book by Michael Nielsen even more: http://neuralnetworksanddeeplearning.com/ For more insight into business problems you can take the Analytics Edge course on edX: https://www.edx.org/course/the-analytics-edge For time series forecasting, the books by Rob J. Hyndman are the best place to start: https://otexts.org/fpp2/ I also learned a lot from the blogs on Analytics Vidhya: https://www.analyticsvidhya.com/ The best way to learn is to take part in hackathons and get hands-on experience.
H: Is loss not a good indication of performance? I'm trying to segment 3D volumes using a 3D U-Net. I've reached a stage where I am getting very good validation loss using cross-entropy and BCE:

    idx: 0 of 53 - Validation Loss: 0.029650183394551277
    idx: 5 of 53 - Validation Loss: 0.009899887256324291
    idx: 10 of 53 - Validation Loss: 0.05049080401659012
    idx: 15 of 53 - Validation Loss: 0.02019292116165161
    idx: 20 of 53 - Validation Loss: 0.04724293574690819
    idx: 25 of 53 - Validation Loss: 0.02810296043753624
    idx: 30 of 53 - Validation Loss: 0.02642594277858734
    idx: 35 of 53 - Validation Loss: 0.029894422739744186
    idx: 40 of 53 - Validation Loss: 0.04158024489879608
    idx: 45 of 53 - Validation Loss: 0.04574814811348915
    idx: 50 of 53 - Validation Loss: 0.05406259000301361

I assumed my network was performing very well, so I wrote a script to visualize my network outputs against their respective targets. What I get is something very different, nothing that justifies this loss. The samples are of depth 32 and I've output each z-plane as a single image; the target and predicted-output images are not reproduced here. All samples are like this; not one of them represents the target as accurately as the reported loss suggests. So I ask: is my loss wrong? What should I look into to fix this? Thanks. AI: To answer the title of your question: is loss not a good indication of performance? It is only a relative indicator of performance over the training session. First, what is loss anyway? In general, the loss is some expression of the difference between the model's predicted output and the target output. Depending on the loss function used (e.g. whether a log is involved) or on the nominal values of the inputs themselves, the value of the loss can be very small or very large. People usually normalise data so the values are smaller, but the point is that you cannot always say that a validation loss of 0.0012345 is actually a good value, or that 12345 is definitely bad! Other possible loss functions: in other 3D segmentation models that I have seen, it is common to use the Dice coefficient for the cost/loss. Maybe give that a try. The Dice coefficient is essentially the same as the F1 score; you are really finding a trade-off in how to penalise a model for its mistakes in classifying, e.g., pixels or voxels: do you want to strongly punish bad cases, or take a more averaging approach? As that link points out, it is similar to the difference between the $L_1$ and $L_2$ losses. There is also the Jaccard index, which is closely related to the Dice coefficient. The Tversky index is a generalisation of the two; it is an asymmetric similarity measure:

$$ Tversky(A, B; \alpha, \beta) = \frac{|TruePos|}{|TruePos| + \alpha |FalsePos| + \beta |FalseNeg|} $$

The Dice coefficient is this with $\alpha = \frac{1}{2}$ and $\beta = \frac{1}{2}$, while the Jaccard index has $\alpha = 1$ and $\beta = 1$. This might be a nice approach for your problem, letting you tweak how the loss is computed. There are plenty of sources online that explain more about these well-defined measures and can perhaps help you gain an intuition for their results. Your results: with the losses you posted, it doesn't actually look as good as you explained.
I am not sure exactly what your idx values mean, but the loss values are actually going up:

    In [1]: import numpy as np
    In [2]: import pandas as pd
    In [3]: import matplotlib.pyplot as plt
    In [4]: val_loss = np.array([0.029650183394551277, 0.009899887256324291,
       ...:                      0.05049080401659012, 0.02019292116165161,
       ...:                      0.04724293574690819, 0.02810296043753624,
       ...:                      0.02642594277858734, 0.029894422739744186,
       ...:                      0.04158024489879608, 0.04574814811348913,
       ...:                      0.05406259000301361])
    In [5]: pd.Series(val_loss, index=range(0, 51, 5)).plot()
    Out[5]: <matplotlib.axes._subplots.AxesSubplot at 0x7f4405326390>
    In [6]: plt.show()

This suggests you might be overfitting. However, the differences between the predictions and ground truth in the images you show suggest there is a more fundamental problem.
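To make the suggested alternatives concrete, here is a small sketch of the Dice coefficient and the more general Tversky index from the formula above, computed with NumPy on binary masks (in a training loop you would use your framework's differentiable tensor ops instead; the mask shapes are placeholders):

    # Hedged sketch: Dice coefficient and Tversky index for binary segmentation masks.
    import numpy as np

    def tversky(pred, target, alpha=0.5, beta=0.5, eps=1e-7):
        pred, target = pred.astype(bool), target.astype(bool)
        tp = np.logical_and(pred, target).sum()
        fp = np.logical_and(pred, ~target).sum()
        fn = np.logical_and(~pred, target).sum()
        return tp / (tp + alpha * fp + beta * fn + eps)

    def dice(pred, target):
        return tversky(pred, target, alpha=0.5, beta=0.5)    # F1 score on voxels

    def jaccard(pred, target):
        return tversky(pred, target, alpha=1.0, beta=1.0)    # intersection over union

    pred = np.zeros((32, 64, 64), dtype=np.uint8); pred[:, 20:40, 20:40] = 1
    target = np.zeros_like(pred);                  target[:, 25:45, 25:45] = 1
    print(dice(pred, target), jaccard(pred, target))   # a loss would be 1 - dice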
H: Batch normalization vs batch size. I have noticed that the performance of my VGG-16 network gets better if I increase the batch size from $64$ to $256$. I have also observed that, with batch size $64$, the results with and without batch normalization differ a lot, with the batch norm results being poorer. As I increase the batch size, the performance with and without batch normalization gets closer. Something funky is going on here, so I would like to ask: does the increase in batch size have an effect on batch normalization? AI: Yes. By increasing the batch size, your gradient steps can be more accurate because each batch is a sample that lies closer to the real population. For the same reason, if you increase the batch size, batch normalization can give better results: the batch statistics (mean and variance) computed for the inner activations are estimated from a larger sample and are therefore closer to the population statistics, exactly as with the input layer.
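A quick numerical illustration of why this happens: the per-batch mean that batch normalization relies on is much noisier for small batches (the "activation" distribution below is made up for the example):

    # Hedged sketch: variability of per-batch statistics shrinks as batch size grows.
    import numpy as np

    rng = np.random.RandomState(0)
    population = rng.normal(loc=2.0, scale=3.0, size=1_000_000)   # pretend activations

    for batch_size in (16, 64, 256, 1024):
        batch_means = [rng.choice(population, batch_size).mean() for _ in range(500)]
        print(f"batch_size={batch_size:5d}  std of batch means={np.std(batch_means):.4f}")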
H: Forcing a multi-label multi-class tree-based classifier to make more label predictions per document. I've been experimenting with tree-based classifiers for multi-label document classification. All the trees I've created, however, tend to predict only one or two labels per document, whereas the training set has about 4 labels per document on average. Furthermore, in my particular application, false positives are much less costly to the business than false negatives. So, if anything, I'd like the tree to be making about 6 or 7 predictions per document. I'm not entirely sure which parameters control this. I've tried experimenting with tree size but without effect. I'd ideally like to just set a threshold for when a prediction is included, and lower it. I'm using sklearn (and playing with skmultilearn). Here's an example of a forest configuration:

    from sklearn.ensemble import RandomForestClassifier
    clf = RandomForestClassifier(
        n_estimators=20,
        criterion='gini',
        max_features=0.5,
        max_depth=68,
        min_samples_split=4,
        min_samples_leaf=2,
        n_jobs=3
    )

AI: You seem to be experiencing class imbalance: some classes dominate the others in the number of samples they have, so the algorithm finds it advantageous to predict the rare classes less often, or not at all, in order to decrease the loss. You can manually set the class weights for the random forest classifier, making the loss function weight the classes unequally per sample so that they contribute more evenly overall. For details you can refer to: https://stackoverflow.com/questions/20082674/unbalanced-classification-using-randomforestclassifier-in-sklearn Note: random forest is not robust to class imbalance; this is a known issue. You can refer to: https://stats.stackexchange.com/questions/242833/is-random-forest-a-good-option-for-unbalanced-data-classification Hope this helps; if not, I will be around for further discussion.
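A small sketch combining the class-weight suggestion with the probability threshold the asker wanted (the toy data and the 0.3 cut-off are arbitrary choices for illustration):

    # Hedged sketch: balanced class weights plus a custom per-label probability threshold.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.datasets import make_multilabel_classification

    X, Y = make_multilabel_classification(n_samples=500, n_classes=8, n_labels=4, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, class_weight="balanced", random_state=0)
    clf.fit(X, Y)

    # predict_proba returns one (n_samples, 2) array per label; take P(label == 1) for each.
    proba = np.stack([p[:, 1] for p in clf.predict_proba(X)], axis=1)

    threshold = 0.3                        # lower threshold -> more labels per document
    Y_pred = (proba >= threshold).astype(int)
    print("average labels per document:", Y_pred.sum(axis=1).mean())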
H: Caret + RStudio: Error "Please make sure `y` is a factor or numeric value" when training I'm new to Caret and I've been trying a couple things to get the hang of things. But this error happened to me and I'm not sure why. I've been trying to train a model with some data I got from "PimaIndiansDiabetes". These are the "x" and "y" I'm using: >str(pima.Datos.Train[pima.Vars.Entrada.Usadas]) 'data.frame': 615 obs. of 8 variables: \$ pregnant: num 6 1 8 1 3 10 2 8 4 10 ... \$ glucose : num 148 85 183 89 78 115 197 125 110 139 ... \$ pressure: num 72 66 64 66 50 0 70 96 92 80 ... \$ triceps : num 35 29 0 23 32 0 45 0 0 0 ... \$ insulin : num 0 0 0 94 88 0 543 0 0 0 ... \$ mass : num 33.6 26.6 23.3 28.1 31 35.3 30.5 0 37.6 27.1 ... \$ pedigree: num 0.627 0.351 0.672 0.167 0.248 ... \$ age : num 50 31 32 21 26 29 53 54 30 57 ... >str(pima.Datos.Train[pima.Var.Salida.Usada]) 'data.frame': 615 obs. of 1 variable: \$ diabetes: Factor w/ 2 levels "neg","pos": 2 1 2 1 2 1 2 2 1 1 ... >pima.modelo <- train(pima.Datos.Train[pima.Vars.Entrada.Usadas], pima.Datos.Train[pima.Var.Salida.Usada], method='mlp') train returns an Error: Please make sure y is a factor or numeric value. But as far as I know, "y" is a factor, so I'm not really sure where the Error comes from. ¿Any help with this? Thanks in advance AI: Have you done any pre-processing or data manipulation before training the model? Its hard for me to say what is the reason the error, but I have tried to do the same task and it worked whithout errors, here is my code: library(mlbench) library(caret) df <- PimaIndiansDiabetes #Using x, y arguments modelo <- train(x = df[, 1:8] , y = df$diabetes, method='mlp') #Using formula modelo <- train(diabetes ~., data = df, method='mlp') Both approaches valid, and produce result like this: > modelo <- train(x = df[, 1:8] , y = df$diabetes, method='mlp') > modelo Multi-Layer Perceptron 768 samples 8 predictor 2 classes: 'neg', 'pos' No pre-processing Resampling: Bootstrapped (25 reps) Summary of sample sizes: 768, 768, 768, 768, 768, 768, ... Resampling results across tuning parameters: size Accuracy Kappa 1 0.6545936 0.0000000000 3 0.6416382 -0.0008031577 5 0.6397277 0.0002662915 Accuracy was used to select the optimal model using the largest value. The final value used for the model was size = 1. Please note that your results might differ a little.
H: sparse_categorical_crossentropy vs categorical_crossentropy (Keras, accuracy). Which is better for accuracy, or are they the same? Of course, if you use categorical_crossentropy you use one-hot encoding, and if you use sparse_categorical_crossentropy you encode the labels as plain integers. Additionally, when is one better than the other? AI: Use sparse categorical crossentropy when your classes are mutually exclusive (i.e. each sample belongs to exactly one class) and categorical crossentropy when one sample can have multiple classes or the labels are soft probabilities (like [0.5, 0.3, 0.2]). The formula for categorical crossentropy (S = samples, C = classes, $s \in c$ = sample s belongs to class c) is:

$$ -\frac{1}{N} \sum_{s\in S} \sum_{c \in C} 1_{s\in c} \log {p(s \in c)} $$

When the classes are mutually exclusive, you don't need to sum over them: for each sample, the only non-zero term is $-\log p(s \in c)$ for the true class c. This saves time and memory. Consider the case of 10,000 mutually exclusive classes: just 1 log instead of a sum of 10,000 terms for each sample, and just one integer label instead of 10,000 floats. The formula is the same in both cases, so there should be no impact on accuracy.
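A small usage sketch of the two losses in Keras (the model and random data are placeholders; only the label format and the loss string change between the two variants):

    # Hedged sketch: the same model compiled with either loss, depending on label format.
    import numpy as np
    from tensorflow import keras

    num_classes = 5
    x = np.random.rand(100, 20).astype("float32")
    y_int = np.random.randint(0, num_classes, size=(100,))       # integer labels
    y_onehot = keras.utils.to_categorical(y_int, num_classes)    # one-hot labels

    def build():
        return keras.Sequential([
            keras.layers.Dense(32, activation="relu", input_shape=(20,)),
            keras.layers.Dense(num_classes, activation="softmax")])

    m1 = build()
    m1.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    m1.fit(x, y_int, epochs=1, verbose=0)      # mutually exclusive classes, integer labels

    m2 = build()
    m2.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    m2.fit(x, y_onehot, epochs=1, verbose=0)   # one-hot (or soft-probability) labels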
H: Cleaning a univariate dataset with high noise. I have a dataset containing the operating durations of some sensors. It can be considered univariate because it has only one dimension. For example, [1]: [10, 12, 13, 15, 16] indicates that sensor [1] has operating durations of [10, 12, 13, 15, 16]. I want to see the range of operating durations for each sensor by measuring the mean and standard deviation per sensor. But the problem is that in my dataset each sensor has a lot of noisy values. For example, [1]: [1, 1, 1, 1, 1, 2, 2, 2, 10, 12, 13, 15, 16, 200, 400, 500]. You can see that sensor [1] has many noisy values like 1, 2, 200, 400 and 500. There are many cases like this in my dataset. Without removing the noise, the standard deviation is always smaller than the mean, which makes my duration analysis meaningless. So my question is: is there any method for removing noise like this from my dataset? Thank you. AI: There are multiple algorithms for this. In one of the Kaggle competitions I saw several algorithms applied and the points then discarded based on the number of votes from all the algorithms. The algorithms that I remember:
- Remove all points above or below 2 standard deviations: X > mean + 2*std, X < mean - 2*std. The 2 is actually a parameter you need to optimize.
- Remove the 5% highest and lowest values: X > the 95% quantile, X < the 5% quantile. The 5% is a parameter you need to optimize.
- X - median > 2*1.4826*MAD (median absolute deviation; 1.4826*MAD is an estimator of the standard deviation under some assumptions), and the same for the lower bound. https://eurekastatistics.com/using-the-median-absolute-deviation-to-find-outliers/ Repeat several times until no more points are thrown away.
An example of this method with voting, from some other Kaggle winners: https://nycdatascience.com/blog/student-works/kaggle-predict-consumer-credit-default/
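A minimal sketch of the MAD-based rule (the third option) applied to the example readings above, using NumPy; the cut-off factor of 2 is the tunable parameter mentioned in the answer:

    # Hedged sketch: drop points whose distance from the median exceeds
    # 2 * 1.4826 * MAD (one pass; the answer suggests repeating until stable).
    import numpy as np

    def mad_filter(values, cutoff=2.0):
        x = np.asarray(values, dtype=float)
        med = np.median(x)
        mad = np.median(np.abs(x - med))
        if mad == 0:
            return x
        return x[np.abs(x - med) <= cutoff * 1.4826 * mad]

    sensor_1 = [1, 1, 1, 1, 1, 2, 2, 2, 10, 12, 13, 15, 16, 200, 400, 500]
    cleaned = mad_filter(sensor_1)
    print(cleaned)                        # the extreme values 200, 400, 500 are removed
    print(cleaned.mean(), cleaned.std())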
H: How to make my Neural Netwok run on GPU instead of CPU I have installed Anaconda3 and have installed latest versions of Keras and Tensorflow. Running this command : from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) I find the Notebook is running in CPU: [name: "/device:CPU:0" device_type: "CPU" memory_limit: 268435456 locality { } incarnation: 2888773992010593937 ] This is my Nvidia version: nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2018 NVIDIA Corporation Built on Sat_Aug_25_21:08:04_Central_Daylight_Time_2018 Cuda compilation tools, release 10.0, V10.0.130 running nvidia-smi, I'm getting this result: I want to make the neural network to train on GPU. Please help me in switching from CPU to GPU. AI: First some stupid sanity-check questions: do you have a GPU in your local machine? (you didn't mention that explicitly). I ask because it will not work e.g. on an integrated Intel graphics card found in some laptops. Second, you installed Keras and Tensorflow, but did you install the GPU version of Tensorflow? Using Anaconda, this would be done with the command: conda install -c anaconda tensorflow-gpu Other useful things to know: what operating system are you using? (I assume Linux e.g. Ubuntu) what GPU do you expect to be shown as available? Can you run the command: nvidia-smi in a terminal and update your question with the output? If you have installed the correct package (the above method is one of a few possible ways of doing it), and if you have an Nvidia-GPU available, Tensorflow would usually by default reserve all available memory of the GPU as soon as it starts building the static graph. If you were not already, it is probably a good idea to use a conda environment, which keeps the requirements of your model separate from whatever your system might already have installed. Have a look at this nice little walkthrough on getting started - this will likely be a good test to see if your system is able to run models on a GPU as it removes all other possible problems created components unrelated to your script. In short, create and activate a new envrinment that includes the GPU version of Tensowflow like this: conda create --name gpu_test tensorflow-gpu # creates the env and installs tf conda activate gpu_test # activate the env python test_gpu_script.py # run the script given below UPDATE I would suggest running a small script to execute a few operations in Tensorflow on a CPU and on a GPU. This will rule out the problem that you might not have enough memory for the RNN you're trying to train. I made a script to do that, so it tests the same operations on a CPU and GPU and prints a summary. You should be able to just copy-paste the code and run it: import numpy as np import tensorflow as tf from datetime import datetime # Choose which device you want to test on: either 'cpu' or 'gpu' devices = ['cpu', 'gpu'] # Choose size of the matrix to be used. # Make it bigger to see bigger benefits of parallel computation shapes = [(50, 50), (100, 100), (500, 500), (1000, 1000)] def compute_operations(device, shape): """Run a simple set of operations on a matrix of given shape on given device Parameters ---------- device : the type of device to use, either 'cpu' or 'gpu' shape : a tuple for the shape of a 2d tensor, e.g. 
(10, 10) Returns ------- out : results of the operations as the time taken """ # Define operations to be computed on selected device with tf.device(device): random_matrix = tf.random_uniform(shape=shape, minval=0, maxval=1) dot_operation = tf.matmul(random_matrix, tf.transpose(random_matrix)) sum_operation = tf.reduce_sum(dot_operation) # Time the actual runtime of the operations start_time = datetime.now() with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as session: result = session.run(sum_operation) elapsed_time = datetime.now() - start_time return result, elapsed_time if __name__ == '__main__': # Run the computations and print summary of each run for device in devices: print("--" * 20) for shape in shapes: _, time_taken = compute_operations(device, shape) # Print the result and also the time taken on the selected device print("Input shape:", shape, "using Device:", device, "took: {:.2f}".format(time_taken.seconds + time_taken.microseconds/1e6)) #print("Computation on shape:", shape, "using Device:", device, "took:") print("--" * 20) Results from running on CPU: Computation on shape: (50, 50), using Device: 'cpu' took: 0.04s Computation on shape: (500, 500), using Device: 'cpu' took: 0.05s Computation on shape: (1000, 1000), using Device: 'cpu' took: 0.09s Computation on shape: (10000, 10000), using Device: 'cpu' took: 32.81s Results from running on GPU: Computation on shape: (50, 50), using Device: 'gpu' took: 0.03s Computation on shape: (500, 500), using Device: 'gpu' took: 0.04s Computation on shape: (1000, 1000), using Device: 'gpu' took: 0.04s Computation on shape: (10000, 10000), using Device: 'gpu' took: 0.05s
H: How can I implement tangent distance for k-nearest neighbor in Python/scikit-learn? My ultimate aim is to have a function which I can feed into scikit-learn's NearestNeighbor class as a custom metric parameter. Existing packages: I have been researching existing libraries for a while. The only thing I found was this KMeans package, for Python 2 and based on wrapping a C library. I could neither load it with ctypes nor convert it into an executable with gcc. I also found this other C code and this Matlab script, but with similar results. Implementation: I also looked into a few papers to see if I could implement it myself. For instance, based on this, I understand that the main thing I need to do is calculate the tangent matrix. But I do not understand how to define $s(p, \alpha)$ and especially how to calculate the derivatives in Python. I would be glad for any help or comments. Update: As suggested, I raised the following related issues/requests: ComeOnGetMe, scikit-learn. Update 2: @ComeOnGetMe rewrote his code so it can be used according to the scikit-learn specifications (example code). Many thanks for that! Nonetheless, when I tried to use it in scikit-learn it underperformed and was very slow, so further work is needed there. Since then I have also found a more detailed explanation of the implementation, although based on the C code already mentioned. AI: I will just repeat my reply under the original issue in case anybody is looking for the answer here. The direct answer to the issue: I can't really recall how I used this code 2 years ago, but I got it working with two steps: build the shared library with gcc -fPIC -shared st.c -o ts.so, then change the .so path in tangentDistance.py to the absolute path of the ts.so file. I have just updated the code so that you can run it directly after compiling the .so file in the root directory. A bit of comment on this repo: clearly this library is not well designed and is filled with hard-coded values. You can't really use it if you are not doing exactly the same task as I was doing: k-means clustering on the MNIST dataset. If you want it to be more general and a better fit for your purposes, just let me know.
H: Classifier not predicting on real data. I'm trying to train a classifier to recognize my own signature. This is how I built it. How did I collect data? I signed on a piece of paper 50 times and created 50 images out of it. For negative cases, I downloaded the IAM Handwriting Database, which contains around 600 MB of handwriting data; this is to rule out other possible matches. How did I do feature engineering? Step 1: read and convert the image to grayscale, then perform a median blur:

    img = cv2.imread(image, 0)
    img = cv2.medianBlur(img, 5)

Step 2: perform adaptive thresholding followed by a morphological opening to reduce noise in the image:

    edges = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11, 2)
    kernel = np.ones((2, 2), np.uint8)
    dilation = cv2.morphologyEx(edges, cv2.MORPH_OPEN, kernel)

How did I do model training? I extracted ORB features from all the sample images (a matrix of size R x 32) and used a RandomForestClassifier for training. The problem: the accuracy of my classifier is a whopping 0.9874066374996978, but it fails to recognize almost all of my new signature samples, signed on the same paper under the same lighting conditions. I'm new to applied ML. What do you experts think I should check? AI: Using a conventional machine learning approach to tackle this problem will lead to lower accuracy. This could be solved more easily using deep learning, more specifically convolutional neural networks (CNNs). There is no need to feature-engineer your signature images if you use a convolutional network for feature extraction. But even when using a convolutional network, I would like you to reconsider your methods and logic, because there are a number of factors that come into play when assessing a signature: the physical act of signing requires coordinating the brain, eyes, arms, fingers, muscles and nerves, so no two signatures from the same person are exactly alike. Refer to these papers before you model your classifier: Offline Handwritten Signature Verification; Convolutional Siamese Network for Writer Independent Offline Signature Verification; Analyzing features learned for Offline Signature Verification using Deep CNNs.
H: Is it possible to train this image classifier? I'm writing a mobile app that will enable a user to scan a craft beer label from a bottle, tap, six-pack, etc. The scan will only work for my customers, who are the brewers themselves, so I will have access to all of the artwork used for their labels. My concern is around training the model. As you can imagine, there will be differences in lighting conditions at bars, among other challenges. As I am just getting started in machine learning, is this task feasible? How difficult will it be to train the model given the various conditions found at different pubs/bars? Thanks. AI: For sure, it is a really feasible application! Indeed, deep learning models for this already exist. I suggest you start by doing transfer learning on a pre-trained model like this one: https://github.com/satojkovic/DeepLogo Typically you remove the last layers of the network and add layers that adapt to your data. Then you train these new last layers with your data (freezing the layers you have kept from the original model). Another good piece of advice is to perform data augmentation on your dataset to include lighting, scale and rotation variants of your images. Summary: it's a feasible task; check out pre-trained models; learn about transfer learning on pre-trained models; and learn about data augmentation to include robustness against different scenarios and environmental conditions.
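A minimal transfer-learning sketch of the approach described above, using Keras with an ImageNet-pretrained base and simple augmentation (the base model, image size, label count and augmentation settings are placeholders, not part of the original answer):

    # Hedged sketch: freeze a pre-trained base, add a new classification head,
    # and rely on augmentation for lighting/scale/rotation robustness.
    from tensorflow import keras
    from tensorflow.keras import layers

    num_labels = 20                      # e.g. number of different beer labels
    base = keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                          include_top=False, weights="imagenet")
    base.trainable = False               # freeze the pre-trained layers

    augment = keras.Sequential([layers.RandomFlip("horizontal"),
                                layers.RandomRotation(0.1),
                                layers.RandomZoom(0.2),
                                layers.RandomContrast(0.3)])   # crude lighting variation

    inputs = keras.Input(shape=(224, 224, 3))
    x = augment(inputs)
    x = keras.applications.mobilenet_v2.preprocess_input(x)
    x = base(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(num_labels, activation="softmax")(x)

    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_dataset, validation_data=val_dataset, epochs=10)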
H: Examples of semi-structured data? I read that JSON and XML are unstructured data. Are JSON and XML data, or are they tools to tag the data? I understand from this Wikipedia page (https://en.wikipedia.org/wiki/Semi-structured_data) that semi-structured data are data without a formal database structure but that still have some tags. AI: JSON and XML are not tools; they are data formats. For example, to find out more about JSON you can look up the RFC that specifies how to format JSON files: https://www.rfc-editor.org/rfc/rfc7159 It explains how to tag the data. Then there are tools that validate JSON and XML files to see if they conform to the requirements, and tools that convert JSON into another format or read it into memory into some object or class.
H: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all() using panda python enter preformatted text hereHere I have data to import from a CSV file.I wrote an equation inside the class and to solve the equation data will import from the CSV file. When I run my code I got an error like "tuple indices must be integers or slices, not str" using panda python. Can anyone help me to solve this problem? I upload my code and CSV file here. def ph_convert(time,we,h,a,w): while time <= 30: level = 1.1 level = float(level) if w == 1: ph= ((((6*we)+(1*h))/level -(4*a)))/time else: ph= ((6+((1*we)+(3*h))/level -(6 *a)))/time break while time <=60: level = 1.25 level = float(level) if w == 1: ph= ((((6*we)+(1*h))/level -(4*a)))/time else: ph= ((6+((1*we)+(3*h))/level -(6 *a)))/time break print(ph) data = pd.read_csv('data1.csv') data['time'] = data['time'].apply(time_convert) we = data['we'].astype(float) h = data['h'].astype(float) a = data['a'].astype(float) w = data['w'].astype(float) time = data['time'].astype(float) print(ph_convert(time,we,h,a,w)) Subset of my CSV file: we h a w time 48.1 150 53 1 6:15:00 48.1 150 53 1 9:00:00 48.1 150 53 1 9:25:00 48.1 150 53 1 9:30:00 48.1 150 53 1 11:00:00 error: [![enter image description here][2]][2] AI: while time <= 30: You are trying to evaluate the vector time, but in python, boolean evaluations with that expression can only be done over scalars. The thing is, as time has multiple values, the Python interpreter can't figure out if the condition must hold over all of them, just one, etc. If you want to evaluate if any of the values of time, or all the values in time you can use methods like time.any() or time.all() You can always walk through all the items in time and perform the evaluation on them: for t in time: if t <= 30: ...
H: learning curve Sklearn I was trying Random Forest Algorithm on Boston dataset to predict the house prices medv with the help of sklearn's RandomForestRegressor. Just to evaluate how good is the model performing I tried sklearn's learning curve with below code train_sizes = [1, 25, 50, 100, 200, 390] # 390 is 80% of shape(X) from sklearn.model_selection import learning_curve def learning_curves(estimator, X, y, train_sizes, cv): train_sizes, train_scores, validation_scores = learning_curve( estimator, X, y, train_sizes = train_sizes, cv = cv, scoring = 'neg_mean_squared_error') #print('Training scores:\n\n', train_scores) #print('\n', '-' * 70) # separator to make the output easy to read #print('\nValidation scores:\n\n', validation_scores) train_scores_mean = -train_scores.mean(axis = 1) print(train_scores_mean) validation_scores_mean = -validation_scores.mean(axis = 1) print(validation_scores_mean) plt.plot(train_sizes, train_scores_mean, label = 'Training error') plt.plot(train_sizes, validation_scores_mean, label = 'Validation error') plt.ylabel('MSE', fontsize = 14) plt.xlabel('Training set size', fontsize = 14) title = 'Learning curves for a ' + str(estimator).split('(')[0] + ' model' plt.title(title, fontsize = 18, y = 1.03) plt.legend() plt.ylim(0,40) If you notice I have passed X, y and not X_train, y_train to learning_curve. I am not understanding should I pass X, y or only the training subset X_train, y_train to learning_curve. Update 1 Dimensions of my Train/Test split (75%:Train and 25%:Test) X.shape: (489, 11) X_train.shape: (366, 11) X_test.shape: (123, 11) I had few additional questions regarding the working of learning_curve Does the size of test data set varies according to the size of train dataset as mentioned in list train_sizes or it is always fixed (which would be 25% in my case according to train/test split which is 123 samples) for example When train dataset size = 1 the will the test data size be 488 or will it be 123(the size of X_test) When train dataset size = 25 the will the test data size be 464 or will it be 123(the size of X_test) When train dataset size = 50 the will the test data size be 439 or will it be 123(the size of X_test) Update 2 In the blog the dataset has 9568 observations and the blogger passes entire dataset X to learning_curve. train_sizes = [1, 100, 500, 2000, 5000, 7654] In first iteration when train_size is 1 then the test_size should be 9567 but why he say that But when tested on the validation set (which has 1914 instances), the MSE rockets up to roughly 423.4. Shouldnt the test_size be 9567 instead of 1914 for first iteration In second iteration when the train_size is 100 then the test_size should be 9468 What I meant to say is the test_size will be variable according to train_size correct me if I am wrong AI: [After Update1/2] A learning curve shows how error changes as the training set size increases. One basically change the size of training data points and measure a desired score and compare it against a fixed test set to see how it generalizes. For you the utmost important part to note is the fixed test set. If you were to change the test set, that you could theoretically, how could you evaluate the performance of the same model by changing the size of the training size (because you are changing both datasets at the same time)?! That is why you pass the the whole X,y in The Sklearn's learning curve, and it returns the tuple containing train_sizes, train_scores, test_scores. 
As far as I understand, it performs a nested cross-validation. It takes the whole X, y and splits it into train/test (keeping the test data strictly independent and fixed in size, depending on how you pass the cv parameter), then keeps increasing the training size and records the performance scores for plotting the learning curve. This ensures a proper measure of the model's performance. Concerning your Update 1: given the details just discussed, your Update 1 example scenario would be wrong. We train a model (no matter how) and would like to evaluate its performance on a hold-out dataset. If you change your hold-out dataset, you cannot know whether the change in test score came from changing the training size or from the variability of the hold-out dataset itself! Concerning your Update 2: this is exactly what we have been discussing so far. No matter what train_size is, the test_size remains 1914, as long as the same cv method is used. Example: to clarify further, imagine we have the following scenario:

    train_sizes = [1, 50, 100, 200, 400, 600]
    # X.shape: (1000, 5)
    # len(train_sizes) = 6

If we were to use the following settings for learning_curve in sklearn:

    cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
    learning_curve(estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)

then learning_curve returns the train_sizes, train_scores and test_scores for six points, as we have 6 train_sizes. For these points, the train_sizes and test_size would look like this:

    Point 1: train_size: 1,   test_size: 200
    Point 2: train_size: 50,  test_size: 200
    Point 3: train_size: 100, test_size: 200
    Point 4: train_size: 200, test_size: 200
    Point 5: train_size: 400, test_size: 200
    Point 6: train_size: 600, test_size: 200

I think the train_sizes are clear (taken from the list we provided). The test_size remains fixed, and it is determined by our choice of test_size=0.2 in the cv (recall that we have 1000 data points, so 20% of them is 1000 * 0.2 = 200); it has nothing to do with our train_sizes! This is the hold-out set that we keep fixed during the experiment. Hope this last example clarifies it altogether. ;-) I would still keep that blog post here as a reference for others.
H: Merging 3 CSV files. I have 3 CSV files that I would like to combine in RStudio, but I get an error. How can I resolve it? Code:

    # Load the files 'city.csv', 'country.csv' and 'countrylanguage.csv' into R as data frames.
    # Note that these files have no headers.
    city <- read.csv("city.csv", header = FALSE)
    country <- read.csv("country.csv", header = FALSE)
    countrylanguages <- read.csv("countrylanguages.csv", header = FALSE)

    # Give names to the variables/columns of each data frame, using the names from the database in Take Home 2.
    colnames(city) = c("id", "name", "code", "original-name", "population")
    colnames(country) = c("code", "name", "continent", "region", "surfacearea", "indepYear",
                          "population", "lifeExpectancy", "GNP", "gnpOld", "localName",
                          "governmentForm", "headOfState", "capital", "code2")
    colnames(countrylanguages) = c("code", "language", "isOfficial", "percentage")

    # Create a new data frame by joining the previous 3 data frames into one.
    World = merge(city, country, countrylanguages, by = `code`)

Error:

    Error in fix.by(by.x, x) : 'by' must specify one or more columns as numbers, names or logical

AI: merge takes only two data frames as arguments to join together. If you want to join 3, you have to do it one pair at a time. Something like this should work:

    World = merge(city, country, by = "code")
    World = merge(World, countrylanguages, by = "code")
H: Converting lower case to upper case I am trying to convert a column from lower case to upper case, but it is not working. My code: #Change all city name in city to uppercase. library(magrittr) city %>% `colnames<-`(tolower(names(name))) %>% head AI: I do not understand your question completely, however if you are trying to convert text to upper case you should not use the tolower() function rather you should use the toupper() function. For example: toupper("A") and if you want to make the column names uppercase I guess the syntax would be: toupper(colnames(dataframe))
H: Finding outliers from multiple files I am dealing with a very strange problem. I have a lot of files. I need to show which files are similar and which ones have exceptions/outliers in their data. I can do this with unsupervised learning using KMeans / DBSCAN or similar ML algorithms for each file. But what would be the approach for such a case? How can I show that a record has outliers and then group them together? My datasets are multivariate time series.
AI: If you are looking to find outlier records based on their values and then identify the file containing those records, K-means may be as good a place to start as any, but perhaps you need to join all the files together to train it first. One challenge with K-means and other clustering algorithms is that you have to start with $k$, but sometimes the value of $k$ is hard to determine because the good data can continue to break down into more discrete clusters as $k$ increases. This becomes a problem if your outliers have a high degree of uniqueness, because you may need to increase $k$ to catch the outliers, but all you achieve is a greater breakdown of the good records. If you can identify the outliers, you should be able to create a training data set that could be used in a supervised technique like a decision tree. It may take a little longer because you have to label your records, but this may give you a better result in the end. If you are looking for combinations of records that should not exist in the same file or group of records, and assuming you can identify records by some categorical identifier, e.g. transaction type, I would start with a frequent-itemset approach like apriori, but you may want to check out a Naive Bayes based approach. There are some differences in these approaches, but either one may produce decent results. More generally, I think the analysis you want to perform is $Association\ Rule\ Learning$. This type of technique can be used to derive a statistical probability for the combinations, and then you can identify files that contain combinations that have a low probability of co-occurrence. HTH
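To make the K-means idea above concrete, here is a minimal Python sketch (not a drop-in solution): it concatenates all files, uses the distance to the nearest centroid as an outlier score, and then counts flagged rows per file. The file paths, the number of clusters and the 0.95 quantile threshold are placeholders, and the features are assumed to be numeric and already scaled.
import glob
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

frames = []
for path in glob.glob("data/*.csv"):             # hypothetical location of the files
    df = pd.read_csv(path)
    df["source_file"] = path                     # remember where each row came from
    frames.append(df)
data = pd.concat(frames, ignore_index=True)

features = data.drop(columns=["source_file"]).values
km = KMeans(n_clusters=5, random_state=0).fit(features)
dist_to_centroid = np.min(km.transform(features), axis=1)    # distance to the closest centroid

data["outlier"] = dist_to_centroid > np.quantile(dist_to_centroid, 0.95)
# Files with many flagged rows are the "exceptional" ones
print(data.groupby("source_file")["outlier"].sum().sort_values(ascending=False))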
H: How would you apply a CNN to do age estimation on static images? After doing some reading on age estimation using the IMDB wiki dataset I wanted to try it out myself on a smaller scale, but I don't quite understand the application of the CNN. Any clarification would be great.
AI: With a CNN, one of the typical (and simplest) approaches is to perform classification or regression: you train a network with a set of labeled images (supervised learning) with the aim that, after the training, the network will be able to assign the correct label (among the available labels in the training set) to a new image (never seen before). Basically, you need to:
Decide which are your labels: they can be ages from 0 to 100, so you will have 101 labels. You can also decide to have 10 labels, 0-10, 10-20, and so on... For regression you can use real numbers within a range...
Training labeled data: a set of pairs $(img,l)$ where $img$ refers to images and $l$ to labels, i.e. for each image there exists only one label among those you have decided to use
Testing labeled data: the same as above, but these images won't be used to train the network. They will be used to test its performance
CNN: design a Convolutional Neural Network (or get one already done) to perform classification or regression. There are lots of examples out there
Train the CNN on your training data set, and get the appropriate performance metric
Test the CNN on your test set after training, with the same performance metric
Compare training results with test results to evaluate the bias, variance and overfitting issues of the network
EDIT (following Mark.F's comment): The network will be slightly different if you try to perform classification or regression: for classification, the network can only assign one of the values (typically integers in this scenario) that are available in the training data. For regression, the network will assign a certain value within a range for each image (for example 0-100). The cost functions for both types of network are different and usually also the last layers. What you do need for both approaches is labeled data. CNNs are used with images because they apply 2D filtering over them to extract the most important features. Basically, the CNN can learn the most important "aspects" of the images that best help to perform the desired task.
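For concreteness, a minimal Keras sketch of the classification route described above, assuming the face images have been resized to 64x64 RGB and the ages grouped into 10 bins; the layer sizes are illustrative, not tuned:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax')   # one output per age bin
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(X_train, y_train_onehot, validation_data=(X_test, y_test_onehot), epochs=20)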
H: Very slow convergence with CNN I am new to deep learning. I am working on training an SSD model on a set of small objects. I am using the Adam optimizer and a large input (800x800), but I seem to only get an improvement of 0.010 after every 20 or so epochs (350 steps). What can I do or look for to speed up convergence on this model?
AI: Implement the techniques below and check whether convergence improves (a short Keras sketch follows):
Add batch normalization
Increase the learning rate
Standardize/normalize the inputs, if you have not done it already
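A hedged Keras fragment illustrating these suggestions: input standardisation, a Conv -> BatchNorm -> ReLU block, and an explicit learning rate for Adam. It is a pattern sketch, not a complete SSD network; the filter count and the 1e-3 learning rate are placeholders.
from keras.layers import Conv2D, BatchNormalization, Activation
from keras.optimizers import Adam

# X_train = (X_train - X_train.mean()) / X_train.std()   # standardise the inputs first

def conv_bn_relu(x, filters):
    x = Conv2D(filters, (3, 3), padding='same')(x)   # convolution without activation
    x = BatchNormalization()(x)                      # normalise activations between layers
    return Activation('relu')(x)                     # non-linearity after the normalisation

optimizer = Adam(lr=1e-3)   # try a few learning rates and compare how fast the loss drops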
H: Terminology question In Machine Learning, is the definition of the Model just the algorithm that was selected for the problem domain, or is the Model the algorithm and the training data? Thanks.
AI: A model, in general, is a representation of a process. In machine learning, the model is what you obtain when a learning algorithm is applied to the given data: it is the trained artefact that produces numerical outputs to make predictions on new data. The algorithm is the learning procedure that finds the patterns in the data; the model is the result of running that procedure on the training data. The entire process of making the algorithm learn from the data as accurately as possible is called creating (or training) the model. Since machine learning algorithms are based upon mathematics, we can refer to a machine learning model as a mathematical representation of the process used to solve the problem at hand, which is to learn patterns in the data to make predictions. So, to answer the question directly: the model is not just the algorithm, it is the algorithm together with the parameters learned from the training data. To make the model more successful in the real world we try to increase its accuracy using techniques like hyperparameter tuning, adding regularization, etc.
H: Looking for second opinion on checking that attribute in data frame is unique using r Check whether attribute name in city is unique (i.e. each city has a different name). data_frame is city with a column called name.
length(unique(city$name))
rapply(city,function(name)length(unique(name)))
unique(city$name)
AI: Multiple options are available in R apart from the ones you mentioned, for example dplyr's n_distinct()/distinct() or base R's duplicated(): any(duplicated(city$name)) returns TRUE when at least one name repeats, and comparing length(unique(city$name)) with nrow(city) gives the same answer.
H: How to use CNN to build ROC curves I know we can use SVMs probabilities after predicting validation data in order to build ROC curves. However, for CNNs, I have a binary classification problem and so the sigmoid activation function will give me probabilities of both classes. So, which probability should I use to build the ROC curve? what happens if the probabilities are close to each other? AI: For binary classification problem, naturally, classes are called positives and negatives. Positive is usually a class of detection something that we are interested in, for example, spam or disease detection. According to documentation for sklearn.metrics.roc_curve, it takes as argument probabilities for positive class. For most binary classifiers, output probability usually refers to the probability of positive class (which is marked as 1 in the data). So, use probabilities of the class marked as 1 in the data.
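As a small illustration (assuming model is the trained Keras network with a single sigmoid output, and X_val/y_val hold the validation images and their 0/1 labels), the predicted values can be passed to roc_curve directly:
from sklearn.metrics import roc_curve, auc

y_score = model.predict(X_val).ravel()            # P(class = 1) for each validation sample
fpr, tpr, thresholds = roc_curve(y_val, y_score)
print("AUC:", auc(fpr, tpr))

# With two softmax outputs instead, take the column of the class labelled 1:
# y_score = model.predict(X_val)[:, 1]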
H: found \N in my data does not count as missing values in r So scrolling through my columns I find \N embedded. I need to count them, but I get an error. Would it be considered a missing value? It's new to me.
# Check whether attribute HeadOfState in country has any missing values, and if so, how many.
country$headOfState[country$headOfState==""] <- NA
country$headOfState[country$headOfState==\N] <- NA
sum(is.na(country$headOfState))
# Check whether attribute IndepYear in country has any missing values, and if so, how many.
country$IndepYear[country$indepYear==""] <- NA
country$indepYear[country$indepYear=='\N'] <- NA
sum(is.na(country$indepYear))
Error: unexpected input in "country$headOfState[country$headOfState==\"
country$headOfState[country$headOfState==\N] <- NA
Error: unexpected input in "country$headOfState[country$headOfState==\"
AI: The reason it throws an error is that '\' is the escape character in R strings (it is also one of the regex metacharacters). As stated here:
The metacharacters in extended regular expressions are . \ | ( ) [ { ^ $ * + ?
So a comparison like country$indepYear=='\N' ('\N' is not a recognized escape sequence) or country$headOfState==\N (which is not valid syntax at all) will throw an error. You need to "escape" the "\" symbol. Try something like this instead if you want to replace "\N" with NA:
country$indepYear[country$indepYear=='\\N'] <- NA
If you just need to count them, you can use this approach:
sum(country$indepYear=='\\N')
Hope this helps.
H: How to calculate time difference in between rows using loop in panda python I have a CSV file with columns date, time. I want to calculate row-by-row the time difference time_diff in the time column. I wrote the following code but it's incorrect. Here is my code and at bottom, my CSV file:
#def time_diff(x):
date_array = []
date_array.append(pd.to_datetime(data['date'][0]).date())
start = []
end = []
temp_date = pd.to_datetime(data['date'][0]).date()
start.append(pd.to_datetime(data['time'][0], format='%H:%M:%S').time())
for i in range(len(data['date'])):
    cur_date = pd.to_datetime(data['date'][i]).date()
    if( cur_date > temp_date):
        end.append(pd.to_datetime(data['time'][i-1], format='%H:%M:%S').time())
        start.append(pd.to_datetime(data['time'][i], format='%H:%M:%S').time())
        date_array.append(cur_date)
        temp_date = cur_date
end.append(pd.to_datetime(data['time'][len(data['date'])-1], format='%H:%M:%S').time())
if start <= end:
    return end - start
else:
    end += timedelta(1) # +day
    assert end > start
    return end - start
for i in range(len(date_array)):
    s_time = datetime.datetime.combine(date_array[i],start_time[i])
    e_time = datetime.datetime.combine(date_array[i], end_time[i])
    timediff = (e_time - s_time)
My CSV file: First column and second column are date and time. Third column shows expected time_diff in hours.
AI: A slightly messy approach (but at least it works); a running difference wasn't working for me, so here you go:
import datetime
import pandas as pd

df = pd.read_csv('test.csv', sep = ";")
df['concant_time'] = df['date'] + " " + df['time']   # concat date and time to take the day into account
df['concant_time'] = pd.to_datetime(df['concant_time'], format = "%m/%d/%Y %H:%M:00")   # transform string to datetime
df['time_diff'] = 0   # initialize column to hold the running time diff
And the loop to get the time difference (using .loc avoids pandas' chained-assignment warning):
for i in range(df.shape[0] - 1):
    # .time() gets the time part from the datetime object; if you need both days and hours,
    # use df.loc[i+1, 'time_diff'] = df['concant_time'][i+1] - df['concant_time'][i] instead
    df.loc[i+1, 'time_diff'] = (datetime.datetime.min + (df['concant_time'][i+1] - df['concant_time'][i])).time()
After the loop, the time_diff column holds the differences shown in your expected output. Hope it helps!
H: Activation in convolution layer On a CNN, what is the use of an activation function in a convolution layer? Is a single weight used for the full matrix, or can it vary for every pixel or box?
AI: The use is the same as always: without the non-linear activation, no matter how many layers you use, the result will still be a linear model. The non-linear activation is performed on every single output "pixel" individually. As for the weights: a convolution layer shares one small set of kernel weights across all spatial positions, so the same weights are reused for every "box" the kernel slides over, and the activation is then applied element-wise to the resulting feature map.
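A tiny numpy illustration of both points (the kernel here is random, purely for demonstration): the same 3x3 kernel is slid over the whole input, and the non-linearity is then applied to every element of the resulting feature map individually.
import numpy as np
from scipy.signal import convolve2d

image = np.random.randn(5, 5)
kernel = np.random.randn(3, 3)            # one set of weights, reused at every position

feature_map = convolve2d(image, kernel, mode='valid')   # the linear (convolution) part
activated = np.maximum(feature_map, 0)                  # ReLU applied element-wise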
H: error trying to build a histogram using r with the rStudio application Using ggplot2 I am attempting to create a histogram. I have a column that is full of the continents. I need to add all the continents, which I attempted to do with the aggregate function.
data <- aggregate(country$continent,country["continent"], sum)
hist(data)
Error in Summary.factor(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, : ‘sum’ not meaningful for factors
AI: Since continent is a factor (categorical), you want counts rather than a sum, and a bar chart rather than a histogram. The following code:
data.frame(table(country$continent))
will create a data frame where one column is the continent and the other is the number of countries in the continent, which you can then plot as a bar chart (for example with barplot() or ggplot2's geom_bar()).
H: How to get the formulas used by seasonal_decompose for Trend and Seasonality I'm trying to use decomposition to forecast into the future. From my reading I understand that I can do this by adding a trend formula to a seasonality formula. I know that I can decompose a time series with this:
import statsmodels.api as sm
result = sm.tsa.seasonal_decompose(series.values, model='additive',freq=12)
trend = result.trend
seasonal = result.seasonal
residual = result.resid
The trend, seasonal and residual variables are arrays of numbers. I've searched google and the documentation for seasonal_decompose and haven't found a way to see/get access to the formulas used to calculate the numbers for trend and seasonal. From my understanding, I need those formulas in order to be able to make projections. Do you know of a way to get those formulas from seasonal_decompose? Is there another function or method that works better for this?
AI: I do not know how seasonal_decompose works internally, but there is another, very similar method that is well described: triple exponential smoothing (also known as the Holt-Winters method), which internally makes a decomposition into trend, seasonal and residual components and can forecast directly. But you should always check the decomposition: it will always find a seasonal component, even if none exists. The decomposition is also sensitive to the underlying model; if the data is linear in logarithms and you do not apply a logarithm, you will get a much less precise decomposition.
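A minimal statsmodels sketch of Holt-Winters which, unlike seasonal_decompose, fits an explicit model and can project forward. It assumes a monthly series with yearly seasonality; whether 'add' or 'mul' is appropriate has to be judged from the data.
from statsmodels.tsa.holtwinters import ExponentialSmoothing

fit = ExponentialSmoothing(series, trend='add', seasonal='add', seasonal_periods=12).fit()
forecast = fit.forecast(12)   # projection for the next 12 periods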
H: find the values of theta in a cost function from andrew ng course I was following Andrew NG's ML course on Coursera. I was stuck at the cost function. How can I find values for Theta-0 and Theta-1? I came to know Theta-0 is the y intercept, so the value must be Theta-0 = 0.5, but how can I find the value for Theta-1? Any help is appreciated.
AI: Elaborating on the previous answer: now that you have $\theta_0 = 0.5$, take any value of $x$ and the corresponding $h_\theta$ value to find $\theta_1$. For instance, using $x=1$ will yield:
$h_\theta(1) = \theta_0 + \theta_1 \times 1$
From the graph, $h_\theta(1) = 1.5$ so that:
$\theta_0 + \theta_1 = 1.5$
$\theta_1 = 1.5 - \theta_0$
$\theta_1 = 1.5 - 0.5 = 1$
The final equation is then $h_\theta(x) = 0.5 + x$
As said in the comment, you can also get the slope visually, noticing that if you increase $x$ by 1, $y$ also increases by 1 when you follow the slope. Does this help?
H: Sentiment analysis for multiple entry in one text I would like to do sentiment analysis on a set of financial news from the S&P 500 for given entities (organization names). However, each news item (rows in my dataset) may have more than one entity, and I have to do the analysis for each entity separately within the same news item. First, my algorithm should find which entities are present in a specific news item and then do sentiment analysis for each of them. I would appreciate if someone could help me. Example of output:
AI: An ensemble of "target-based sentiment" models worked for a similar use-case that I worked on. Some of the models that were part of the solution:
http://sentic.net/sentic-lstm.pdf
https://hal.archives-ouvertes.fr/hal-01537896/document
https://arxiv.org/pdf/1802.05818.pdf
H: Continous bag of words claimed to be unsupervised, how is it working? I'm following these two lectures on CBOW and skip-gram word2vec models. The first is lec 12 and the next lec 13 of a deep learning series https://www.youtube.com/watch?v=syWB-YMYZvI https://www.youtube.com/watch?v=GMCwS7tS5ZM&t=548s Up to about 17 mins into the second video the lecturer says that the approach is unsupervised for CBOW as there are no labels? How can you learn a NN with no labels? This completely confused me as why are we not comparing our softmax probability vector to an actual set of outputs so that we can adjust the v_c and v_w weights accordingly. His liklihood function seems to only be concerned with the parameters v_c and v_w (and completely devoid of some kind of target label) which is bonkers to me, because could i not not just make them whatever i want? In addition how are you learning relationships between pairs of actual words, if actual target variables are not guiding you towards correct labels? Could someone please explain what is going on under the hood as i really want to understand this approach. Perhaps target labels are being considered and i've not noticed it? Most log likelihood estimates with a window of size $m$ look something like the following $-\mathrm{log} \prod_{j=0,j \neq m}^{2m} \frac{e^{u^T_{c-m+j}v_c}}{\sum_{k=1}^{|v|} e^{u_k^Tv_c}} $ To my knowledge a likelihood function should involve data from actual observations not purely parameters, which are to be estimate, can someone explain this also? Note i follow it more from 17 mins onwards as he talks about pairs (w,c) in D or D' respectively. I appreciate any help! AI: The CBOW approach is unsupervised because the network learns the distribution of word co-occurrences around each word, and this doesn't require labelling or additional input, just sequences of words. As Mikolov et al state in one of the original articles, "the training objective is to learn word vector representations that are good at predicting the nearby words; in another, Mikolov et al say their aim is "building a log-linear classifier with four future and four history words at the input, where the training criterion is to correctly classify the current (middle) word". So if you can see that for each sequence, you're taking 'x' words and training the network to predict one given the others, then no supervision is involved. It's a little unconventional because it's not the output of the network which is important, but the weights that are learned during training - these are what is taken and used as embeddings in other tasks. Adrian Colyer does a great general write-up of word2vec here, and the explanation from Chris McCormick here is good and quite accessible.
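A small illustration of where the "labels" come from: for CBOW the target is simply the middle word of each window, derived from the raw text itself, which is why no human annotation is needed (some people therefore call it self-supervised rather than unsupervised).
corpus = "the quick brown fox jumps over the lazy dog".split()
window = 2  # two words of context on each side

pairs = []
for i in range(window, len(corpus) - window):
    context = corpus[i - window:i] + corpus[i + 1:i + window + 1]
    target = corpus[i]                       # the "label" is just the centre word
    pairs.append((context, target))

print(pairs[0])   # (['the', 'quick', 'fox', 'jumps'], 'brown')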
H: Structure the dataset for financial machine learning I am trying to construct a dataset to apply MLP in forecasting financial returns. The main idea is that I want to predict future equity returns (1 month ahead, but the horizon can vary, just to give an idea) using fundamental data. My dataset is made of n features for N companies. The main issue is that the features are not recorded at the same point in time, so that some of them would be a monthly time series and some other a daily time series. I am trying to understand the general way to approach this problem because it's my first ML application. What should I do if my features are time series of data that differs in this sense? I have red a lot of publications, but no one gets in deeper about how the dataset is composed (before train/test splitting). Can someone recommend some useful source? EDIT Suppose I have a variable amount of companies over 10 years of monthly data, mainly because some of them doesn't exist anymore, but it is important for the problem of survivorship bias. For each of these N companies I have n features that represents fundamentals of these companies. I would be able to regress the monthly return, so I am trying to solve a supervised learning problem. What I cannot understand is the feasibility of the problem using a simple Multilayer perceptron instead a more complex structure like RNN or even LSTM. I wouldn't want to switch to one of these more complex architectures for some specific reason related to the project I am building. Probably I am missing the reasoning behind the training of the network. How should I provide input data to the architectures in order to perform backpropagation and GD? The only thing I am sure is that I should shift the monthly return series of a time period to recast the problem as supervised. AI: You are going to have to consider three different factors: 1 - What data are you going to have available when you run your predictions? Are you going to have to pre-process that data? Are you going to have the time to do it? You should be setting your focus on this and work backwards from what runtime predictions look like 2 - When it comes to time series, you have to think of it in terms of (1) lookback windows, or how many periods prior are you considering and (2) time shifts, or how many periods forward are you predicting? This can result in an amazing number of combinations for you to model. You should end up with data sets where you have n features for X time periods and your target variable (your labeled data) is the result Y time periods from X where you are creating a prediction. 3 - The cardinal sin of time series is that you should never model on data that was not available at the specific runtime date. It's a common beginner mistake where you mix up your time periods and end up taking in data that was not actually available when it was released; in other words, it will not be available to you (or at least not correct) when you go to make a prediction. This can be common in financial data where entities can come back and "re-state" numbers at a future date. You have to make sure you are modeling on data as it stood on the target date you are modeling. With these thoughts in mind, you should be on your way to prepping your data in a way that makes sense.
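A hedged pandas sketch of point 2 for a single company's monthly series: lookback features built with shift() and a target shifted one period into the future. The column names ('pe_ratio', 'return'), the 3-month window and the 1-month horizon are placeholders.
import pandas as pd

def make_supervised(df, lookback=3, horizon=1):
    out = pd.DataFrame(index=df.index)
    for lag in range(lookback):
        out[f'pe_ratio_lag{lag}'] = df['pe_ratio'].shift(lag)
        out[f'return_lag{lag}'] = df['return'].shift(lag)
    out['target'] = df['return'].shift(-horizon)   # the value `horizon` months ahead
    return out.dropna()                            # drop rows without a full history

# supervised = make_supervised(monthly_data_for_one_company)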
H: How can I parallelize GloVe reverse lookups in PyTorch? I feel like I'm missing something obvious here because I can't find any discussion of this. I want to do a lot of reverse lookups (nearest neighbor distance searches) on the GloVe embeddings for a word generation network. I'm currently just iterating through the vocabulary on the CPU. I've sped it up a bit using a process pool, as shown in the snippet below, but it's still very slow for large vocabs. Is there a way to move this to the GPU using cuda? I've also read that there is a way to turn this sort of thing into one big matrix operation... Any references would be appreciated. Thanks!
glove = torchtext.vocab.GloVe(name='6B', dim=wordDim)

def closest(vec):
    dists = [(w, torch.dist(vec, glove.vectors[glove.stoi[w]])) for w in glove.itos]
    return sorted(dists, key=lambda t: t[1])[0]

output = # word vectors…

# using a process pool to parallelize the lookup
pool = ProcessPoolExecutor(max_workers=8)
predictedWords = [w for w in pool.map(closest, output)]
AI: It's usually not very efficient to approach these types of problems in pythonic ways, with list comprehensions and such. This whole process can be done with some matrix math, which will be substantially faster (and able to be computed on the GPU using PyTorch). Using torch.dist with default p=2 will compute the Euclidean distance between two tensors, which is defined as
$$ \text{euclidean distance} = d(\mathbf{v}, \mathbf{w}) = \sqrt{\sum_d \left(\mathbf{v}_d - \mathbf{w}_d\right)^2} $$
You can do this efficiently in PyTorch for every word in your vocab by broadcasting your query word over the whole matrix of word vectors:
def closest(vec):
    dists = torch.sqrt(((glove.vectors - vec) ** 2).sum(dim=1))
    return dists.argmin()  # or glove.itos[dists.argmin()] if you want a string output
However, usually people use the cosine similarity to find the closest word vectors rather than the Euclidean distance. This is the cosine of the angle between the vectors, and can be calculated as
$$ \text{Cosine similarity} = \frac{\mathbf{v}\cdot \mathbf{w}}{\|\mathbf{v}\| \|\mathbf{w}\|} $$
which can similarly be implemented in PyTorch as the following:
glove_lengths = torch.sqrt((glove.vectors ** 2).sum(dim=1))

def closest_cosine(vec):
    numerator = (glove.vectors * vec).sum(dim=1)
    denominator = glove_lengths * torch.sqrt((vec ** 2).sum())
    similarities = numerator / denominator
    return glove.itos[similarities.argmax()]
Note that we moved the $\|\mathbf{v}\|$ part of the equation outside the function here, so we don't have to recompute the lengths of each glove vector every time we want to run it.
H: What clustering algorithm is appropriate for clustering paths? I have a dataset with vectors in 2-dimensional space that form separate sequences (paths). Full data is presented below: , while a random sample of 5 paths looks like below (please note that incontinuity in paths are natural for the data and doesn't mean missing values): I would like to find similar paths, where similar would mean (in order from the most to the less prominent): they end up in a similar region they are similar in direct length (i.e. length from start to end on x axis) they are similar in complexity (i.e. number of vectors) they starts in a similar region What clustering algorithms are natural choice for such a setup? What are things worth to be aware while clustering paths? How can I deal with the fact, that different paths has different number of vectors? How can I represent a data to take that into account? AI: Like Ricardo mentioned in his comment on your question, the main step here is finding a distance metric between paths. Then you can experiment with different clustering algorithms and see what works. What comes to mind is dynamic time warping (DTW). DTW gives you a way to find a measure of "distance" (it is actually not strictly a distance metric, but it is close) between two time series. One very useful thing is that it can be used to compare two time series that are of different lengths. There are many good blog posts on DTW, so I won't try to give yet another explanation of it. There are also many python implementations of it. And a lot of work has gone into making the algorithm very fast. DTW is a strange algorithm--in some ways very simplistic, but typically works well. Once you modify the algorithm to deal with paths, you can construct the distance matrix and use that for clustering. One common clustering algorithm that is used in conjunction with DTW is spectral clustering, since the distance matrix can be used directly (instead of the matrix of data points, which we don't have here).
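A hedged sketch of the DTW + spectral clustering combination, assuming paths is a list of (n_i, 2) numpy arrays of different lengths; fastdtw is just one of several available DTW implementations, and the number of clusters and the similarity kernel are placeholders.
import numpy as np
from fastdtw import fastdtw
from scipy.spatial.distance import euclidean
from sklearn.cluster import SpectralClustering

n = len(paths)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d, _ = fastdtw(paths[i], paths[j], dist=euclidean)   # DTW "distance" between two paths
        dist[i, j] = dist[j, i] = d

affinity = np.exp(-dist / dist.mean())                       # turn distances into similarities
labels = SpectralClustering(n_clusters=5, affinity='precomputed').fit_predict(affinity)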
H: Feature Scaling and normalization in cross-validation set I have a question: normally, when we are making a training set and a final test set, we would compute the mean and standard deviation for preprocessing using the training data and use it to standardize (transform) the test data. So, when we are making a training set, a cross-validation set and a final test set, shouldn't we compute the mean and std for preprocessing using the training data alone and then standardize the validation set and test set using it? And, if I'm right, how can we do it in Keras, since in Keras, like in the code below, taken from Kaggle (the simple-deep-mlp-with-keras MNIST kernel by Francois Chollet):
# pre-processing: divide by max and substract mean
scale = np.max(X_train)
X_train /= scale
X_test /= scale
mean = np.std(X_train)
X_train -= mean
X_test -= mean
model.fit(X_train, y_train, nb_epoch=10, batch_size=16, validation_split=0.1, show_accuracy=True, verbose=2)
the validation set is created inside the fit method and thus is also used in computing the preprocessing mean and std. So, how can I create the validation set beforehand and use only the training data for feature scaling and normalization? And which method is the right method? Also, in the above code scale and mean are being computed for each column respectively and are being used for normalization; should we compute the mean and scale (standard deviation or range) for each pixel or for the entire image? How does that affect things?
AI: Answering your last question first: your mean and std should be computed over the entire image (i.e. over the whole dataset, not per pixel). So for an RGB image you will get a mean/std vector of 3 values for the entire dataset (calculated per channel). Normalizing per pixel is wrong and would result in loss of information (you need to think of an image as a single discrete function; you can't treat each part of it separately without changing its overall meaning). Regarding normalizing the validation set, you can manually create the validation set and use the "validation_data" argument in the fit method (note that the variable called mean should indeed be the mean, np.mean, rather than the np.std used in the quoted kernel):
# Creating the validation set:
X_valid = X_train[int(0.9*len(X_train)):]
X_train = X_train[:int(0.9*len(X_train))]
y_valid = y_train[int(0.9*len(y_train)):]
y_train = y_train[:int(0.9*len(y_train))]

# pre-processing: divide by max and subtract mean
scale = np.max(X_train)
X_train /= scale
X_valid /= scale
X_test /= scale

mean = np.mean(X_train)
X_train -= mean
X_valid -= mean
X_test -= mean

model.fit(X_train, y_train, batch_size=16, epochs=10, validation_data=(X_valid, y_valid), verbose=2)
H: Installed module pysubgroup not found in Jupyter Notebook I'm trying to use the pysubgroup python package referenced here. I think I properly installed it as shown below, with no errors when I installed it: > mymacs-MacBook-Pro:dq-pattern-research mymac$ pip3 search pysubgroup > pysubgroup (0.5.4) - pysubgroup is a Python library for the data > analysis > task of subgroup discovery. INSTALLED: 0.5.4 (latest) mymacs-MacBook-Pro:dq-pattern-research mymac$ pip3 install > pysubgroup Requirement already satisfied: pysubgroup in > /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages > (0.5.4) Requirement already satisfied: pandas in > /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages > (from pysubgroup) (0.23.4) Requirement already satisfied: scipy in > /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages > (from pysubgroup) (1.1.0) Requirement already satisfied: numpy in > /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages > (from pysubgroup) (1.15.1) Requirement already satisfied: matplotlib > in > /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages > (from pysubgroup) (2.2.3) Requirement already satisfied: pytz>=2011k > in > /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages > (from pandas->pysubgroup) (2018.5) Requirement already satisfied: > python-dateutil>=2.5.0 in > /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages > (from pandas->pysubgroup) (2.7.3) Requirement already satisfied: > cycler>=0.10 in > /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages > (from matplotlib->pysubgroup) (0.10.0) Requirement already satisfied: > kiwisolver>=1.0.1 in > /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages > (from matplotlib->pysubgroup) (1.0.1) Requirement already satisfied: > six>=1.10 in > /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages > (from matplotlib->pysubgroup) (1.11.0) Requirement already satisfied: > pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in > /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages > (from matplotlib->pysubgroup) (2.2.0) Requirement already satisfied: > setuptools in > /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages > (from kiwisolver>=1.0.1->matplotlib->pysubgroup) (39.0.1) But when I try to access it via jupyter notebook in Safari on Mac I get an error that it can't be found: import pysubgroup as ps import pandas as pd --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) in ----> 1 import pysubgroup as ps 2 import pandas as pd ModuleNotFoundError: No module named 'pysubgroup' I also can see that it is installed in a notebook: [![ ]2]2 AI: you are right, I couldn't find out why this is happening, but according to the documentation: pysubgroup consists of pure Python code. Thus, you can simply download the code from the repository and copy it in your site-packages directory. I cloned the repository from github, then ran the command: python setup.py install inside the cloned directory, then copied the whole directory (after running install command) to the site-packages directory, and it worked for me. Here's how you can find the location of site-packages directory: https://stackoverflow.com/questions/122327/how-do-i-find-the-location-of-my-python-site-packages-directory
H: Predicting sequences newbie question I have a ranked list of rows of 100 lines of data 1- 8 4 0 5 9 3 2- 0 3 3 5 3 2 3- 0 0 2 4 0 2 .. 100- 0 2 3 2 2 0 Is it possible to predict a) when given a new sequence where it would fit? b) a way to generate a row that would fit into the top 1, top 5, top 10 AI: The short answer is yes. What you are looking at would be developing a model that would regress an input of 6 integers to a single integer between 0 and 100. $f(x) : \mathbb{Z}^6 \rightarrow \mathbb{Z}^1$ This could also be done as as 100 class classification problem. The regression approach would be better in my opinion. One big question that you need to know to better determine if machine learning is the correct approach: are the sequences (e.g., 0 3 3 5 3 2) the only sequences that = the given ranks (e.g.,1). If so you only need to match sequences, no need for M L. If no, and you chose to develop an ML approach, an issue you have is that you only have 1 observation for each integer. Depending on the variability of the input integers and their output rank {1 - 100} you would want multiple observations corresponding to each integer. One last thing, I would start with one linear and one non-linear algorithm to see which method fits this data best.
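If you go the machine-learning route, a minimal sklearn sketch of the regression formulation could look like the following, where X holds the 100 sequences of 6 integers and y their ranks; as noted above, with only one example per rank this will mostly memorise rather than generalise.
from sklearn.ensemble import RandomForestRegressor

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)                                       # X: (100, 6) sequences, y: ranks 1..100
predicted_rank = model.predict([[8, 4, 0, 5, 9, 3]])  # rough rank estimate for a new sequence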
H: Can you learn an algorithm from a trained model? Are there any papers where an algorithm was entirely based on the results of a trained model? Let me explain. Suppose you want to come up with an algorithm that sorts three numbers $a,b,c$. I can generate several training data points $x_i = (a_i, b_i, c_i), a_i, b_i, c_i \in \mathbb{R}$ with their label $\hat y_i = (\min(x_i), mid(x_i), \max(x_i))$. That way, I can generate a lot of data points and train a model to predict them in their order. My question is, are there any papers which were able to translate the trained model back to an algorithm that is understandable by humans (instead of just the values of the parameters)? I'd be very interested even if the algorithms were very simple.
AI: In this article https://arxiv.org/abs/1711.09784 the authors fit a decision tree to a trained neural network in order to understand what the neural network does, i.e. to extract a human-readable approximation of it. Is that what you are looking for?
H: Regression in Python with many NaN values spread across all columns I want to do a regression to predict "value" based on the other columns from below example table. The data was collected by single indicator and not across all data points, resulting in many NaN/blank values: value age education gender 32.3 Male 31.8 Female 32.8 High school 33.8 Technical school 26.4 College graduate 16.3 18 - 24 35.2 25 - 34 35.5 35 - 44 I converted categorical data by using dummy variables which resulted in below column examples. I guess that the quality of my model will be affected because I have only a single 1 by row and the rest is all 0. value 18 - 24 25 - 34 35 - 44 College High school 32.8 0 0 0 0 1 26.4 0 0 0 1 0 16.5 1 0 0 0 0 So my question is, what is the best way to clean and convert the data for given source data structure? AI: If you have 3 completely different data sources where there are no common columns consider creating 3 separate models instead of trying to force them into one. If you have overlapping columns as well handling of NaN values can be influenced by your model choice. (i.e. most tree based model can handle missing values) If you use linear regression you need to have numerical values and you can experiment with several options: adding new variables indicating if a certain variable is missing or not (0/1) instead of using 0 for the missing values experiment replacing them with either median/mean/random value from the original distribution of the variable (as a result your new data will have the same distribution of values as the original one) create model for predicting variables containing missing values and use it to replace NaNs use some dimensionality reduction if your matrix is too big and sparse
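A minimal pandas sketch of the first two bullet points for numeric columns (the data frame df and the column names are placeholders): a 0/1 missing indicator plus median imputation. In practice you would compute the median on the training split and reuse it for the validation/test split.
import pandas as pd

for col in ['x1', 'x2']:                                 # placeholder numeric feature columns
    df[col + '_missing'] = df[col].isna().astype(int)    # remember where values were missing
    df[col] = df[col].fillna(df[col].median())           # replace NaNs with the column median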
H: Grouping and summing with r I have a data.frame that contains two columns that I want to group and sum the direct labor for employees. Employees Actual work Joe Smo 8 Joe Smo 7 Joe Smo 5 Sam Adams 7 Sam Adams 5 Sam Adams 3 The desired outcome is Results: Joe Smo 20 Sam Adams 15 Here is the code I used that doesn't work. aggregate(directLaborNov18[c('Employee')], directLaborNov18['Actual_work'], sum) AI: You were very close. Just switch the first two input arguments. > aggregate(directLaborNov18['Actual_work'], directLaborNov18[c('Employee')], sum) Employee Actual_work 1 Joe Smo 20 2 Sam Adams 15
H: How Does Weighted KNN Work? I am reading notes on using weights for KNN and I came across an example that I don't really understand. Suppose we have K = 7 and we obtain the following: Decision set = {A, A, A, A, B, B, B} If this was the standard KNN algorithm we would pick A, however the notes give an example of using weights: By class distribution (weight inversely proportional to class frequency) class A: 95 %, class B 5 % This results in a class of B. I can't seem to figure out the math that was left out to obtain B as the answer. AI: We can view nearest neighbor as a voting process where we consult our $k$ nearest neighbor. We give the $i$-th data point a voting weight $w_i$. In your example, each data point in class $A$ has weight $\frac1{0.95}$ and each data point in class $B$ has weight $\frac1{0.05}$. There are $4$ votes from class $A$ and $3$ votes from class $B$. We give class $A$ a score of $\frac{4}{0.95}\approx 4.21$ and class $B$ a score of $\frac{3}{0.05}=60$. Class $B$ has a higher score, hence we assign it to class $B$.
H: Cost function in linear regression Can anyone help me with the cost function in linear regression? From the plot below we have actual values and predicted values; I assumed the answer was zero, but it is actually 14/6. Can anyone help out, please?
AI: $h_\theta$ implies that you're trying to model the relation between $y$ and $x$ with a straight line coming from the origin $(0,0)$. The parameter $\theta_1$ is the slope of this line. Evaluating $J(\theta_1=0) = J(0)$ implies that $h_\theta(x_i)=0$ whatever the value of $x_i$ is. Since $m=3$, and the labels are $y_1=1, y_2=2, y_3=3$:
$J(0) = \frac{1}{6}[(0 -1)^2 + (0 - 2)^2 + (0-3)^2] = \frac{14}{6}$
The value of $\theta_1$ for which $J(\theta_1) = 0$ is obviously 1.
H: KL divergence in VAE If I understand correctly, KL-divergence is the relative entropy of two distributions. To calculate the KL divergence of two distributions, you would need two vectors of random variables. What I do not understand is how you can calculate the KL divergence in a VAE (between the latent space vector and N(0,1), as is stated in many tutorials). The latent space vector is not a vector of random variables. It is a vector of products of inputs, weights, biases and activation functions. None of these make your vector a vector of random variables. My question is, how do you properly create the latent space vector as a random variable vector, so you could eventually calculate the KL divergence?
AI: You are right that the output of your encoder neural network is not a random variable; it is the mean $\mu$ and the standard deviation $\sigma$ of your latent random variable. For example, if the output of your encoder is $\sigma = 1$ and $\mu = 0.5$, your latent random variable will be normal with mean 0.5 and standard deviation 1. Then, you can calculate the KL divergence between your random variable and the prior for the latent random variable, which is $\mathcal{N}(0, 1)$ in many tutorials.
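For completeness, when the prior is $\mathcal{N}(0,1)$ and the encoder outputs a diagonal Gaussian, the KL divergence has a closed form, $\frac{1}{2}\sum_j(\sigma_j^2 + \mu_j^2 - 1 - \log\sigma_j^2)$, so no sampling is needed to compute it. A minimal numpy sketch, assuming the encoder outputs $\mu$ and $\log\sigma^2$ (a common parametrisation):
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # 0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2) over the latent dimensions
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)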
H: How to write formula inside the loop to run this code in every hour continously in every day in panda python I have a csv file with five column data value. five columns are ['A','B','C','D','time'] Here my 'D' column is the output column and 'A'is my first column. Here I upload my coda and csv file. data = pd.read_csv('data20.csv') data = pd.DataFrame(data,columns=['time', 'A','B','C','D']) for i in range(1, len(data)): data.loc[i+1,'A'] = data.loc[i, 'D'] + data.loc[i, 'B'] - data.loc[i, 'C'] my csv file ; In my csv first row in A column 63 is represented and my output is 63 in D column. D column is my desired outputs. then in second row in A column its represented as 0. but I want to apply previous output value(63) into second row as A column input. Then that value will calculate by column B and reduce by C column then my desired output value is 60 represent in D column. So like this process will continue. So in A column 0 values replace by previous output value in D column. Here I want to call that every hour this process continue. In my csv file till to 2 p.m output value is 72. So then this 72 is will be the input of A column at 3p.m.then B and C column values will calculate or reduce using this formula will give the output for at 3.00 p.m. So like this I want to find the value of D column at 4p.m, 5 p.m, 6 p.m till to 12 midnight and it will continous to next day. So I wrote the code and I ddn't have any idea to give time for this process. Can anyone help me to solve this problem? AI: you could do something like import time while True: <your code> time.sleep(3600) this would execute your code every hour. I hope this answer your question.
H: Do I need to convert booleans to ints to enter them in a machine learning algorithm? My dataset contains a lot of columns with booleans do I really need to change them so I can insert them into the algorithm? I'm gonna use KNN right now but will test other algorithms later so I'm trying to ready up my dataset AI: In Python, True and False are cast implicitly into integers: True == 1 # True! False == 0 # True! Although they are not the same objects - you can test this with True is 1, which returns False. This means that an algorithm running in pure Python should work without conversion. Many libraries/algorithms have some part implemented in C/C++ in the background, in which case you might run into problems. You could try the model on your Pandas DataFrame as boolean. If it crashes, you know you must convert to integers/floats. Even if it doesn't crash, you could convert the values to integers or floats and run it for comparison. Here is a short example: In [1]: import pandas as pd In [2]: df = pd.DataFrame({'a':[0, 2, 3, 4, 5], 'b':[True, False, True, False, False]}) In [3]: df Out[3]: a b 0 0 True 1 2 False 2 3 True 3 4 False 4 5 False Convert everything to boolean In [4]: df.astype(bool) Out[4]: a b 0 False True 1 True False 2 True True 3 True False 4 True False In [5]: df.astype(float) Out[5]: a b 0 0.0 1.0 1 2.0 0.0 2 3.0 1.0 3 4.0 0.0 4 5.0 0.0
H: density of a synset I am reading the paper Text Classification Using WordNet Hypernyms. In it, the author gives the definition of synset density as the number of occurrences of a synset in the WordNet output divided by the number of words in the document. What is the synset density representation? AI: Welcome to this site! Please, try to be more specific when asking questions. In my answer I address the specific part of your question. If you wish to update the second part of the question, making it more specific, I will expand my answer accordingly as much as I can. The formula you're looking into is a common case of normalisation, not unlike the way the classical notion of density is calculated, following the formula density is the mass over the volume: In our case the mass of a synset is the count of its occurences, and the volume of its "container" is the total number of words. One way to think of it is this: Naturally, we want to measure the occurences, so the dividend should be obvious. Probably it is the divisor (division by the total number of words) that perplexes you. Consider what would have happened if we didn't do this division. Let's take two documents: a tweet and Tolstoy's War and peace. The number of characters in the first document roughly corresponds to the number of pages in the second. So intuitively it should be clear to everyone that the number of occurences of any synset would be magnitudes higher in the second documents. When we are talking of density, this is an undesired effect. The density shouldn't depend on the quantity. This is why you divide by the total volume.
H: Why is there a difference of "ML" vs "MLLIB" in Apache Spark's documentation? I am trying to figure out which pyspark library to use with Word2Vec and I'm presented with two options according to the pyspark documentation.
https://spark.apache.org/docs/2.2.0/mllib-feature-extraction.html#word2vec
https://spark.apache.org/docs/2.2.0/ml-features.html#word2vec
mllib seems to be for using RDD's. And ml seems to be using "DataFrames". What is the difference? Shouldn't they both be using RDDs if this is spark under the hood? What is a "DataFrame" here? As the documentation doesn't explain it.
AI: You are right, mllib uses RDDs and ml uses DataFrames. A DataFrame is Spark's higher-level abstraction: a distributed collection of data organised into named columns (much like a database table or a pandas/R data frame). It is still built on top of RDDs under the hood, but it lets Spark apply query optimisation to your operations. At the beginning there was only mllib, because DataFrames did not exist in Spark. In fact, ml is kind of the new mllib; if you are new to Spark, you should work with ml and DataFrames.
H: What affects the magnitude of lasso penalty of a feature? Is there a way to intuitively tell if the lasso penalty for a particular feature will be small or large? Consider the following scenario: Imagine we use Lasso regression on a dataset of 100 features $ [X_1, X_2, X_3, ..., X_{100}] $. Now, if we take one feature and scale up its values by a factor of 10, so for example $X_1 = X_1 * 10$, and then we fit lasso regression on the new dataset, which of the following might be true?
X1 probably would be excluded from the fitted model
X1 probably would be included in the fitted model
Someone told me that 2 is true. Somehow the operation on $X_1$ would result in less lasso penalty for $X_1$, therefore the feature is likely to be kept. I cannot understand why. Can anyone tell if this whole statement is not correct to begin with or if there is some truth in it?
AI: When you increase the scale of $X_1$ by 10, the corresponding weight in your linear regression sees its scale divided by 10 (it is exactly 10 for standard linear regression, and the same order of magnitude holds for the lasso loss). As a consequence, the $L_1$ penalty for this coefficient is also divided by a factor of 10, and it is less likely to be set to 0. So the statement you were told is essentially right: option 2 is the more likely outcome.
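A quick sklearn illustration of this effect on synthetic data (the alpha value is arbitrary): after multiplying the informative feature by 10, its fitted coefficient is roughly 10 times smaller, so the $L_1$ penalty it pays is smaller and it is less likely to be dropped.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
X = rng.randn(200, 3)
y = 2 * X[:, 0] + 0.5 * rng.randn(200)           # only the first feature matters

print(Lasso(alpha=0.5).fit(X, y).coef_)          # coefficient of X1 on the original scale

X_scaled = X.copy()
X_scaled[:, 0] *= 10                             # scale X1 up by a factor of 10
print(Lasso(alpha=0.5).fit(X_scaled, y).coef_)   # X1's coefficient (and its L1 cost) shrinks by roughly 10x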
H: How understand scatter like this way? I am trying to learn some EDA skills and I get a scatter like this way: I don't see that there are some model relationships or group-related patterns between these two data. I mean what I thought was right? I am not pretty sure about it. If not mind, could anyone help me and give me some advice on multivariate plotting. Thanks in advance. AI: Simply put, your data shows that for a given value of $x$, you can have a wide range of $y$ values and that the ranges depend on the value of $x$. This usually happens when you have a partial view of the global picture, meaning that there are other variables in your data that could help explain the relationship. To leverage your plots, you can consider changing the color of the markers depending on an additional variable. You could also change the size of the marker depending on another one. There are a couple of options there, depending on what you look for. EDA is usually not the final step per se, you may rather ask yourself some questions related to the problem you are trying to solve, and see if the data accounts for it. Does this help?
H: Should estimated probabilities from multi class classification sum to 1 I am using a neural network with sigmoid activation function $h(z) = 1 / {(1+e^{-z})} $ in order to classify image data into 6 categories. When running the trained neural network over new image data, I noticed that the sums of the estimated probabilities from the hypothesis output for all 6 classes do not always sum to 1. For example, given an input image, the hypothesis output for each class might be:
Class 1 --- 0.10
Class 2 --- 0.11
Class 3 --- 0.12
Class 4 --- 0.13
Class 5 --- 0.14
Class 6 --- 0.15
I interpret this image as having a $13%$ probability of being classified into class 6. However, the sum across all classes is < 1. My intuition says the probabilities of all classes should sum to 1 but, again, I am very new to the machine learning world. Could there be a bug in my code or is this a 'normal' output?
AI: Whether the outputs sum to one or not depends on the output nonlinearity, and both situations have their own meaning, which I'll try to explain. If you have classes that are not mutually exclusive, say dog and cat classes that can both exist in the same image, you should use a sigmoid nonlinearity as the output for each class and interpret each one separately. Each output with a value greater than one half indicates the presence of the corresponding label. On the contrary, your inputs may be mutually exclusive, which means each input may contain just a cat or just a dog. In this case, you should employ the softmax nonlinearity and yes, it sums up to one. The winner would be the class with the highest value.
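A quick numpy check of the two cases: independent sigmoid outputs need not sum to one, while a softmax over the same scores always does.
import numpy as np

z = np.array([0.2, -1.0, 0.5, 1.3, -0.7, 0.1])   # raw scores for the 6 classes

sigmoid = 1 / (1 + np.exp(-z))
print(sigmoid, sigmoid.sum())                    # the sum is generally != 1

softmax = np.exp(z) / np.exp(z).sum()
print(softmax, softmax.sum())                    # sums to 1 by construction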
H: How would I be able to improve my CNN model (Keras)? Recently I read a research paper on age detection using facial images. So right now because of that I was trying to see how far I could get by applying a CNN to a dataset of facial images (with their respective ages) in order to predict their ages which would be in bins (ex. 0-10, 11-20, 21-30...). For training and testing training.shape (50000, 28, 28) testing.shape (2938, 28, 28) I tried to keep the images small as they would be able to run faster as well as using grayscale. And for the actual layers themselves I tried to keep it simple, for now, model = Sequential() model.add(Conv2D(64, kernel_size=3, activation='relu', input_shape=(28,28,1))) model.add(Conv2D(32, kernel_size=3, activation='relu')) model.add(Flatten()) model.add(Dense(10, activation='softmax')) Layer (type) Output Shape Param # ================================================================= conv2d_1 (Conv2D) (None, 26, 26, 64) 640 _________________________________________________________________ conv2d_2 (Conv2D) (None, 24, 24, 32) 18464 _________________________________________________________________ flatten_1 (Flatten) (None, 18432) 0 _________________________________________________________________ dense_1 (Dense) (None, 10) 184330 ================================================================= Total params: 203,434 Trainable params: 203,434 Non-trainable params: 0 _________________________________________________________________ Compiled with the following model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) So far the best accuracy after running for 100 epochs has been 37.16. Which isn't great but recently I've gotten access to one of my schools gpu's so I wanted to fix anything that I'm doing wrong and improve my model. Is there anything you could recommend when it comes to improving the model, theres probably a lot this is more my first time trying to do this. AI: The first thing you should notice is that you've almost ruined your input signal. Take a look at a $28\times28$ image of a face? what can you see? is there any difference between a teenager and a middle-aged person? The point is that the network should be trained using data that does not have high Bayes error which means you as an expert can distinguish between inputs and label them correctly. Increase the size of your inputs. By doing so, if you use the current regime, you may have lots of trainable parameters between dense layers and convolutional layers. Consequently, try to employ more convolutional layers and some pooling layers among them. Also, try to add more dense layers with more neurons in each.
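A hedged Keras sketch of these suggestions: larger inputs, several convolution + pooling blocks, and a deeper dense head. The 128x128 grayscale input size, filter counts and dropout rate are illustrative, not tuned.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Conv2D(128, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(256, activation='relu'),
    Dropout(0.5),                       # some regularisation for the dense head
    Dense(10, activation='softmax')     # one output per age bin
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])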
H: Class imbalance in one hot encoding for CNN I am building a 2D Convolutional Neural Network for MFCC features for audio classification. The issue I am facing is that there are 2 classes and huge imbalance between them. One class has 17687 samples while other has 67737 samples. I have done one hot encoding so I have two CNN outputs as [1,0] and [0,1]. From my research it seems that adding class_weights in model.fit only seems to work for binary classification problems. Is there any way I can assign weights to one hot encoded results? AI: You might want to consider using metrics such as the f1-score in order to measure how well your model does. Notice that if, in real life, the inbalance will be of a similar nature, then your model has to account for it anyway.
H: Why am I getting crossvalidation scores of 0 only I am trying Catboost package with iris dataset with following code: from sklearn.datasets import load_iris iris = load_iris() from catboost import CatBoostClassifier model = CatBoostClassifier(iterations=50, learning_rate=0.1, depth=4, loss_function='MultiClass') from sklearn.model_selection import cross_val_score scores = cross_val_score(model, iris.data, iris.target) print(scores) The output is: [0. 0. 0.] Why are the scores 0 only? I expected them to be close to 1. I tried adjusting parameters but results are still the same. Are these errors rather than classification accuracies? Thanks for your insight. Edit: It appears that with CatBoostClassifier, cross_val_score() uses KFold() rather than StratifiedKFold(), since adding cv=StratifiedKFold() in cross_val_score function solves this problem. With sklearn's classifiers such as LogisticRegression or SVC, cross_val_score uses StratifiedKFold as default (see here). AI: I think that perhaps the problem is that cross_val_score() in its default options for the parameter cv the documentation says: cv : int, cross-validation generator or an iterable, optional Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 3-fold cross validation, integer, to specify the number of folds in a (Stratified)KFold, An object to be used as a cross-validation generator. An iterable yielding train, test splits. For integer/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. So my guess is that, if cv not specified the split is being done without stratification. This, coupled with the fact that by default in the iris dataset the targets are perfectly sorted (50 label 0, then 50 label 1 and then 50 label 2) means that in each 3 k-fold you are training with two classes and predicting the third one and that's why the scores are 0. Two solutions: A) Shuffle the data: from sklearn.datasets import load_iris import pandas as pd iris = load_iris() from catboost import CatBoostClassifier model = CatBoostClassifier(iterations=50, learning_rate=0.1, depth=4, loss_function='MultiClass') from sklearn.model_selection import cross_val_score df = pd.DataFrame({'X0':iris.data[:,0],'X1':iris.data[:,1], 'X2':iris.data[:,2],'X3':iris.data[:,3],'Y':iris.target}) df = df.sample(frac=1).reset_index(drop=True) scores = cross_val_score(model, df[['X0', 'X1', 'X2', 'X3']], df['Y']) print(scores) Out: [0.96 0.94 0.94] B) Modify the cv parameter: from sklearn.datasets import load_iris iris = load_iris() from catboost import CatBoostClassifier model = CatBoostClassifier(iterations=50, learning_rate=0.1, depth=4, loss_function='MultiClass') from sklearn.model_selection import cross_val_score scores = cross_val_score(model, iris.data, iris.target, cv = 4) print(scores) Out: [1. 0.92105263 0.91891892 0.78378378]
H: Validation accuracy is always close to training accuracy I am trying to tune the hyperparameters of a LSTM I have to do time series forecasting. I have noticed that my validation accuracy is always very close to my training accuracy. I am not sure whether or not this is good or bad or what it implies in general? At the moment I have kept all the hyperparameters the same and have only varied the number of units in the LSTM layer from [1, 5, 26]. I expected 26 units to give me good results and have since added units 1 and 5 to help me investigate. I was also expecting to see that my validation accuracies are worse than my training accuracies but this does not seem to be the case. My validation accuracies track the training accuracies very well. Is this something to be concerned about? The plot below shows the average loss, average training and validation accuracies. You can see from the plot that each of the units training and validation accuracies stay very close together. Why is this? Does this mean that in general my model should generalise well on unseen data as there isn't much of a discrepancy? Obviously the accuracies of the model are not very good yet in general and it requires some tuning but to do the tuning process I was more expecting to see some differences between the training and validation data and not that they would stay largely similar to each other. EDIT: Further information as requested. The hyperparameters used for each the model can be seen in the legend of the plots. Here is an updated plot which shows a greater variation in accuracy from varying the number of neurons and batch size: From the plot it seems as if the model accuracy converges the quickest with a batch size of 1 and 39 neurons. However it is characteristic amongst all that the training and validation accuracies track each other closely. I hadn't expected that varying two hyperparameters independently would lead to a consistent result like that. In this problem I am working with, the model is to provide forecasts for one time series. I am using it as a 'toy' problem to try and learn what kind of models work well for my particular problem. I have 315 data points in total for the time series. I have left the last 52 out as a hold out test set which I haven't ever looked at to this point. 52 points as I am making weekly predictions. I then left the next last 52 points out to use as a validation set. Due to the time dependency I am unable to use a validation method such as KFold cross validation. Therefore I am using something called rolling origin analysis which is explained well here (see under predictive performance). this simulates well how the model would be used in practice and my accuracies are measurements using a modified sMAPE formula for a multi-step forecast. Essentially what this means is that I train (in this case) 27 separate models. I have a forecast horizon of 26, so what I do is take my train data and take the first 26 points of my validation set. I train my first model on the train data (which is 131 samples) and this is validated against 1 sample. I train the next model which uses the same training data except it is shifted one along so the very first point from train is dropped and the first point from the validation set is appended to the end of the train set. The validation set is then also shifted along by one. 
As a quick example (numbers represent indices of the time series): Model1 train: [1, 2, 3, 4, 5], val: [6, 7, 8] Model2 train: [2, 3, 4, 5, 6], val: [7, 8, 9] The average accuracy shown in the plots is therefore the average of the accuracies that each model computed against its own validation set. So each model is trained on the same amount of data and each model is completely fitted anew, that is Model1 doesn't share any information with Model2. The accuracies are computed in this way because I do not have much data and this is the best way I have found that allows me to get more than two validation sets and more than two test sets. AI: The fact that the training accuracy and the validation accuracy are close is nothing to be concerned about. As you mention, it means your model is generalizing well. The thing to be worried about is the low training accuracy. It seems that your model is under-fitting the data but with low variance. This is the typical scenario of a model with high bias and low variance. The bias is the error (however you measure it) on your training data. It tells you how well the model is forecasting on seen data. High bias means poor performance (under-fitting). The variance is the difference in performance between the training data and the validation data. If the difference is large (far better performance in training than in validation) the model has high variance (over-fitting). Typically, decreasing the bias implies increasing the variance (the bias-variance trade-off), so as you get better results in training, the validation accuracy will probably decrease. But, if the model is well designed, you may end up with both low bias and low variance.
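For readers who want to reproduce the rolling-origin evaluation described above, here is a minimal sketch of the index generation only (the function name and the toy sizes are mine, not from the question; the actual window of 131 and horizon of 26 would be plugged in the same way):

import numpy as np

def rolling_origin_splits(n_points, train_size, horizon):
    # Fixed-length training window that slides forward one step at a time,
    # each time validated on the next `horizon` points.
    start = 0
    while start + train_size + horizon <= n_points:
        train_idx = np.arange(start, start + train_size)
        val_idx = np.arange(start + train_size, start + train_size + horizon)
        yield train_idx, val_idx
        start += 1

# Toy run matching the example indices above: 9 points, window of 5, horizon 3
for tr, va in rolling_origin_splits(9, train_size=5, horizon=3):
    print(tr + 1, va + 1)   # printed 1-based to match the example
# [1 2 3 4 5] [6 7 8]
# [2 3 4 5 6] [7 8 9]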
H: Data preprocessing, relative scale problems in features of same type I am using a Keras NN with the Theano backend in Python. In my data I have multiple features of the same type but in different columns (on purpose). Here is an example. F1 F2 0. 8673 7490 1. 5602 5602 2. 4352 2365 When I go to process this, say MinMaxScaler 0-1, even though '1.' in both features should be the same, because both columns have different min-max values won't the normalized values be different and therefore cause problems in pattern finding? If so, what method can I use to make sure that they are of the same relative scale? Thanks for any help. AI: If what you want is to scale both columns using the maximum and the minimum taken over both of them, you can do it like this: df = pd.DataFrame(data=data, columns=['F1', 'F2']) columns = ['F1', 'F2'] ma = float(np.amax(df[columns].values)) mi = float(np.amin(df[columns].values)) for column in columns: df[column] = df[column].apply(lambda e: (e - mi) / (ma - mi)) This is the same formula that sklearn's MinMaxScaler applies (although MinMaxScaler computes the min and max per column); if you want to use predefined values for max and min, you can just change ma and mi. If you have more columns, you just have to change the list of columns. For rescaling to an arbitrary range (max, min), read the sklearn documentation on MinMaxScaler.
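If you prefer to stay inside sklearn, a sketch of the same idea is to fit a single MinMaxScaler on the values of both columns stacked into one long column, then transform each column with that shared scaler (the numbers below are the ones from the question):

import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({'F1': [8673, 5602, 4352], 'F2': [7490, 5602, 2365]})
cols = ['F1', 'F2']

# Fit on all values of both columns reshaped into a single column,
# so both features share one min and one max.
scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit(df[cols].values.reshape(-1, 1))

for c in cols:
    df[c] = scaler.transform(df[[c]].values).ravel()
print(df)   # identical raw values (e.g. 5602) now map to identical scaled values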
H: GBM vs XGBOOST? Key differences? I am trying to understand the key differences between GBM and XGBOOST. I tried to google it, but could not find any good answers explaining the differences between the two algorithms and why xgboost almost always performs better than GBM. What makes XGBOOST so fast? AI: Quote from the author of xgboost: Both xgboost and gbm follows the principle of gradient boosting. There are however, the difference in modeling details. Specifically, xgboost used a more regularized model formalization to control over-fitting, which gives it better performance. We have updated a comprehensive tutorial on introduction to the model, which you might want to take a look at. Introduction to Boosted Trees The name xgboost, though, actually refers to the engineering goal to push the limit of computations resources for boosted tree algorithms. Which is the reason why many people use xgboost. For model, it might be more suitable to be called as regularized gradient boosting. Edit: There's a detailed guide of xgboost which shows more differences. References https://www.quora.com/What-is-the-difference-between-the-R-gbm-gradient-boosting-machine-and-xgboost-extreme-gradient-boosting https://xgboost.readthedocs.io/en/latest/tutorials/model.html
H: How does sigmoid saturate with large weights? In the cs231n course, it is mentioned that if the initial weights are too large then most neurons would become saturated and the network will barely learn. How do the neurons get saturated? Large weights may lead to a z (the sigmoid output) which is not very close to 0 or 1, and so doesn't let z*(1-z) saturate. AI: The sigmoid function $$ \theta(z) = \frac{1}{1+e^{-z}}$$ looks like this: where $$z=\sum_i w_i a_i + b$$ for activations $a_i$ from the previous layer, weights $w_i$ and bias $b$ of the current neuron. When the weights $w_i$ are too large (positive or negative), $z$ tends to be large as well, driving the output of the sigmoid to the far left (value 0) or far right (value 1). These are saturation regions, where the gradient/derivative is too small, slowing down learning. Learning slows down when the gradient is small, because the weight update of the network at each iteration is directly proportional to the gradient magnitude.
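A quick NumPy illustration of those saturation regions: as |z| grows (which is exactly what large weights cause), the sigmoid's derivative $\theta(z)(1-\theta(z))$ collapses towards zero, and with it the size of the weight updates.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Derivative of the sigmoid with respect to its input: sigmoid(z) * (1 - sigmoid(z))
for z in [0.0, 2.0, 5.0, 10.0, 20.0]:
    s = sigmoid(z)
    print(f"z={z:5.1f}  sigmoid={s:.6f}  gradient={s * (1 - s):.2e}")
# z=  0.0  sigmoid=0.500000  gradient=2.50e-01
# z=  5.0  sigmoid=0.993307  gradient=6.65e-03
# z= 20.0  sigmoid=1.000000  gradient=2.06e-09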
H: Check similarity of table/csv of Product Names We've got a list of approximately 18,000 product names (they're from 80-90 sources, so quite a few that are similar but not duplicates - these were picked as DISTINCT from a table) unfortunately there are different ways of expressing these names. We have to try and normalize the dataset so we present our users with more meaningful names. For example, a list like this: Canon EOS 5D Mark III Canon EOS 5D mk III Canon EOS 5DMK3 Canon EF 70-200mm f/2.8L IS II USM Lens Canon EF 70-200mm f/2.8L IS II USM Telephoto Zoom Lens Canon EF 70-200mm f/2.8L IS USM Lens Canon EF 70-200mm f/4L USM Lens I'd like to be able to assess those strings and collapse them into something like this: Canon EOS 5D Mark III Canon EF 70-200mm f/2.8L IS II USM Canon EF 70-200mm f/2.8L IS USM Lens Canon EF 70-200mm f/4L USM Lens But I'd like to know how similar two strings are to be able to determine this. I do realise that the F2.8 IS II and IS USM may be a bit hard, but thought I'd throw it in. The real product names are far less exciting (they're parts for the farm equipment we stock). We also store these names in a Postgres (9.5) database table. Examples i've seen compare two lists, but we don't have a master product list to do that unfortunately. AI: Your problem is known as detection of near-duplicate documents, i.e. you have strings that are similar but not exact duplicate. The most common approaches are using cosine similarity and Jaccard similarity. You could check this page for more information First you have to convert your strings to a vector of features, this features could be tf-idf vectors of all the tokens (words) that appear in the database, could also be a vector of n-grams. For a discussion of semantic and syntactic features you can look here. Finally you would have to check the similarity of every pair of documents, in your case 18000. The brute force approach is O(n^2), so this could be unfeasible. A common technique to deal with this issue is to use fingerprinting (hashing) together with Locality Sensitive Hashing (LSH). You can find an introduction and a general discussion of the whole topic in Chapter 3 of Mining Massive Datasets
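As a small illustration of the vectorisation step (before worrying about LSH), here is a sketch that represents each name with character 3-grams via TF-IDF and computes pairwise cosine similarities. The example names and the 0.8 threshold are just placeholders, and the all-pairs comparison is only viable for small lists; for your 18,000 names you would combine this with the LSH approach above.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

names = [
    "Canon EOS 5D Mark III",
    "Canon EOS 5D mk III",
    "Canon EF 70-200mm f/2.8L IS II USM Lens",
    "Canon EF 70-200mm f/4L USM Lens",
]

# Character n-grams are forgiving about variants like "Mark III" vs "mk III".
vec = TfidfVectorizer(analyzer='char_wb', ngram_range=(3, 3), lowercase=True)
X = vec.fit_transform(names)

sim = cosine_similarity(X)      # pairwise similarity matrix
print(sim.round(2))
# Pairs scoring above a chosen threshold (say 0.8) are candidates to be
# collapsed into a single canonical product name.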
H: How to create a training set for sequence labelling I have a bunch of unlabeled text data that I would like to hand label. Are there any tools out there that you can use to produce your own labels? Ideally it would show a sentence at a time and give me the option of choosing a label for each word. AI: There are several tools for this; you could check the Stanford Simple Annotation Tool, and Brat also has online demos. For a comprehensive list, you could check this question on Quora.
H: Stanford parser Python : Combine NER and POS tags Hi I am experimenting with stanford parser and NER with python Input = "Rami Eid is studying at Stony Brook University in NY" Parser Output: NER Output : [(u'Rami', u'PERSON'), (u'Eid', u'PERSON'), (u'is', u'O'), (u'studying', u'O'), (u'at', u'O'), (u'Stony', u'ORGANIZATION'), (u'Brook', u'ORGANIZATION'), (u'University', u'ORGANIZATION'), (u'in', u'O'), (u'NY', u'O')] Now can I combine NER results with Parser result ? So that (u'Rami', u'NNP'), (u'Eid', u'NNP') ==> u('Rami EID', u'PERSON') (u'Stony', u'NNP'), (u'Brook', u'NNP'), (u'University', u'NNP') ==> (u'Stony Brook University',u'ORGANIZATION') will get replaced in the graph How this can be done ? AI: This question has already been answered. See Alexis' answer at https://stackoverflow.com/questions/30664677/extract-list-of-persons-and-organizations-using-stanford-ner-tagger-in-nltk If you want to use chunking NER without the Stanford library see alvas' answer at https://stackoverflow.com/questions/31836058/nltk-named-entity-recognition-to-a-python-list
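If all you need is the grouping step itself (merging consecutive tokens that carry the same entity tag), a minimal sketch over the NER output above could look like this:

from itertools import groupby

ner_output = [
    (u'Rami', u'PERSON'), (u'Eid', u'PERSON'), (u'is', u'O'),
    (u'studying', u'O'), (u'at', u'O'), (u'Stony', u'ORGANIZATION'),
    (u'Brook', u'ORGANIZATION'), (u'University', u'ORGANIZATION'),
    (u'in', u'O'), (u'NY', u'O'),
]

# Merge runs of consecutive tokens that share the same (non-O) tag.
# Note: two distinct adjacent entities of the same type would also be merged;
# IOB-style tags are needed to distinguish that case.
entities = [
    (' '.join(token for token, _ in group), tag)
    for tag, group in groupby(ner_output, key=lambda pair: pair[1])
    if tag != u'O'
]
print(entities)
# [('Rami Eid', 'PERSON'), ('Stony Brook University', 'ORGANIZATION')]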
H: Reinforcement learning, pendulum python I'm having trouble finding a good reward function for the pendulum problem. The function I'm using, $-x^2 - 0.25*(\text{xdot}^2)$, is the quadratic error from the top, with $x$ representing the current location of the pendulum and $\text{xdot}$ the angular velocity. It takes a lot of time with this function and sometimes doesn't work. Anyone have some other suggestions? I've been looking on Google but didn't find anything I could use. AI: You could use the same reward function that OpenAI's inverted pendulum environment uses: $\text{reward}=-(\Delta_{2\pi}\theta)^2 - 0.1\,\dot{\theta}^2 - 0.001\,u^2$ where $(\Delta_{2\pi}\theta)$ is the difference between the current and desired angular position, computed modulo $2\pi$. The variable $u$ denotes the torque (the action of your RL agent). The optimum is a reward as close to zero as possible. The idea here is that you have a control problem in which you can come up with a quadratic 'energy' or cost function that tells you the cost of performing an action at EVERY single time step. In this paper (p.33 section 5.2) you can find a detailed description. I have tested RL algorithms on this objective function in both MATLAB and Python and did not encounter any convergence problems. If you still have problems, let us know what kind of RL approach you implemented and how you encoded the location of the pendulum. Hope it helps!
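A sketch of that reward in plain Python (the function names are mine; the coefficients are the ones from the formula above, which is essentially what the OpenAI Gym pendulum environment uses):

import numpy as np

def angle_normalize(theta):
    # Wrap the angle into [-pi, pi): this is the Delta_{2*pi} term above.
    return ((theta + np.pi) % (2 * np.pi)) - np.pi

def pendulum_reward(theta, theta_dot, u):
    # Quadratic cost on angle error, angular velocity and torque,
    # returned as a non-positive reward; 0 is the best achievable value.
    cost = angle_normalize(theta) ** 2 + 0.1 * theta_dot ** 2 + 0.001 * u ** 2
    return -cost

print(pendulum_reward(0.0, 0.0, 0.0))    # upright and still: 0.0 (optimal)
print(pendulum_reward(np.pi, 1.0, 2.0))  # hanging down and moving: about -9.97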
H: Accuracy improvement for logistic regression model I have achieved 68% accuracy with my logistic regression model. I want to increase the accuracy of the model. How can I apply stepwise regression in this code and how beneficial it would be for my model? What changes shall I make in my code to get more accuracy with my data set. I have attached my dataset below. Following is my code: library(dplyr) data1 <- read.csv("~/hj.csv", header=T) train<- data1[1:116,] VALUE<-as.numeric(rownames(train)) testset<- data1[1:116,] mylogit <- glm(VALUE ~ POINT1 + POINT2 + POINT3 + POINT4 , data = data1, family ="binomial") testset$predicted.value = predict(mylogit, newdata = testset, type="response") for (i in 1: nrow(testset)){ if(testset$predicted.value[i] <= 0.50) testset$outcome[i] <- 0 else testset$outcome[i] <- 1 } print(testset) tab = table(testset$VALUE, testset$outcome) %>% as.matrix.data.frame() accuracy = sum(diag(tab))/sum(tab) print(accuracy) print(tab) table(testset$VALUE, testset$outcome) Following is my dataset: Link 1: http://www.filedropper.com/hj_2 AI: Try: mylogit <- glm(VALUE ~ POINT1 * POINT2 * POINT3 * POINT4, data = data1, family ="binomial") with about 72% accuracy.
H: Clarification wanted for make_step function of Google's deep dream script From https://github.com/google/deepdream/blob/master/dream.ipynb def objective_L2(dst): # Our training objective. Google has since release a way to load dst.diff[:] = dst.data # arbitrary objectives from other images. We'll go into this later. def make_step(net, step_size=1.5, end='inception_4c/output', jitter=32, clip=True, objective=objective_L2): '''Basic gradient ascent step.''' src = net.blobs['data'] # input image is stored in Net's 'data' blob dst = net.blobs[end] ox, oy = np.random.randint(-jitter, jitter+1, 2) src.data[0] = np.roll(np.roll(src.data[0], ox, -1), oy, -2) # apply jitter shift net.forward(end=end) objective(dst) # specify the optimization objective net.backward(start=end) g = src.diff[0] # apply normalized ascent step to the input image src.data[:] += step_size/np.abs(g).mean() * g src.data[0] = np.roll(np.roll(src.data[0], -ox, -1), -oy, -2) # unshift image if clip: bias = net.transformer.mean['data'] src.data[:] = np.clip(src.data, -bias, 255-bias) If I understand what is going on correctly, the input image in net.blobs['data'] is inserted into the NN until the layer end. Once, the forward pass is complete until end, it calculates how "off" the blob at the end is from "something". Questions What is this "something"? Is it dst.data? I stepped through a debugger and found that dst.data was just a matrix of zeros right after the assignment and then filled with values after the backward pass. Anyways, assuming it finds how "off" the result of the forward pass is, why does it try to do a backwards propagation? I thought the point of deep dream wasn't to further train the model but "morph" the input image into whatever the original model's layer represents. What exactly does src.data[:] += step_size/np.abs(g).mean() * g do? It seems like applying whatever calculation was done above to the original image. Is this line what actually "morphs" the image? Links that I have already read through https://stackoverflow.com/a/31028871/2750819 I would be interested in what the author of the accepted answer meant by we take the original layer blob and "enchance" signals in it. What does it mean, I don't know. Maybe they just multiply the values by coefficient, maybe something else. http://www.kpkaiser.com/machine-learning/diving-deeper-into-deep-dreams/ In this blog post the author comments next to src.data[:] += step_size/np.abs(g).mean() * g: "get closer to our target data." I'm not too clear what "target data" means here. Note I'm cross posting this from https://stackoverflow.com/q/40690099/2750819 as I was recommended to in a comment. AI: What is this "something"? Is it dst.data? I stepped through a debugger and found that dst.data was just a matrix of zeros right after the assignment and then filled with values after the backward pass. Yes. dst.data is the working contents of the layer inside the CNN that you are trying to maximise. The idea is that you want to generate an image that has a high neuron activation in this layer by making changes to the input. If I understand this correctly though, it should be populated immediately after the forward pass here: net.forward(end=end) Anyways, assuming it finds how "off" the result of the forward pass is, why does it try to do a backwards propagation? I thought the point of deep dream wasn't to further train the model but "morph" the input image into whatever the original model's layer represents. It is not training. 
However, like with training we cannot directly measure how "off" the source that we want to change is from an ideal value, so instead we calculate how to move toward a better value by taking gradients. Back propagation is the usual method for figuring out gradients to parameters in the CNN. There are some main differences with training: Instead of trying to minimise a cost, we want to increase a metric which summarises how excited the target layer is - we are not trying to find any stationary point (e.g. the maximum possible value), and instead Deep Dream usually just stops arbitrarily after a fixed number of iterations. We back propagate further than usual, all the way to the input layer. That's not something you do normally during training. We don't use any of the gradients to the weights. The weights in the neural network are never changed. We are only interested in the gradients at the input - but to get them we need to calculate all the others first. What exactly does src.data[:] += step_size/np.abs(g).mean() * g do? It seems like applying whatever calculation was done above to the original image. Is this line what actually "morphs" the image? It takes a single step in the image data along the gradients that we have calculated will trigger more activity in the target layer. Yes this alters the input image. It should be repeated a few times, according to how extreme you want the Deep Dream effect to be.
H: Overfitting in machine learning The graph above shows how accuracy stops increasing after reaching a certain number of features. There are also sudden drops in accuracy at some points. Can this be attributed to overfitting? I am training a decision tree, by the way. AI: I can tell from your screenshot that you are plotting the validation accuracy. When you overfit, your training accuracy should be very high, but your validation accuracy should get lower and lower. Or, if you think in terms of error rather than accuracy, you should see the following plot in case of overfitting. In the figure below the x-axis contains the training progress, i.e. the number of training iterations. The training error (blue) keeps decreasing, while the validation error (red) starts increasing at the point where you start overfitting. This picture is from the Wikipedia article on overfitting, by the way: https://en.wikipedia.org/wiki/Overfitting Have a look. So to answer your question: No, I don't think you are overfitting. If increasing the number of features were making the overfitting more and more significant, the validation accuracy would be falling, not staying constant. In your case it seems that more features are simply no longer adding additional benefit for the classification.
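If you want to check this numerically rather than by eye, sklearn's validation_curve gives you the training and validation scores side by side; below is a sketch with a decision tree, using max_depth as the complexity knob (the dataset and the parameter range are only for illustration):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import validation_curve
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
depths = np.arange(1, 11)

train_scores, val_scores = validation_curve(
    DecisionTreeClassifier(random_state=0), X, y,
    param_name='max_depth', param_range=depths, cv=5)

for d, tr, va in zip(depths, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    print(f"max_depth={d:2d}  train={tr:.3f}  validation={va:.3f}")
# Overfitting shows up where the train score keeps climbing towards 1.0
# while the validation score flattens out or starts to drop.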
H: Feature Selection Algorithm for Attributes with Logical Relationships (like "AND") I'm looking at datasets where the the attributes and the target class have a logical relationsships. All attributes and the target class are binary. Here's an example: Neither feature#1 nor feature#2 have a significant correlation with the target class. But the conjunction feature#1 AND feature#2 is correlated with the target class. Are there any Feature Selection Algorithms able to cope with situations like that? I'm thinking that frequent itemsets could be useful. It would be tremendously helpful, if anyone could point me to a related paper. Cheers! AI: Emergent pattern mining could be useful for you. In classification, emergent patterns are subsets of features which differentiate one class from the others. Look at Example 1.1 in the paper. Two feature subests, denoted $X$ and $Y$, are EPs because $X$ has large support in one class and small support in the other, and vice versa for $Y$. The nifty thing about EP mining is that it considers subsets of features instead of individual features. Mining patterns in this way could address your problem of feature1 and feature2 having no correlation with the target class alone, but the combination feature1 AND feature2 having high correlation with the target class. Unfortunately, I don't see any open-source software implementations on the web. However, there are a number of papers out there which implement some sort of EP mining.
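You can also probe for this kind of conjunctive signal directly before reaching for a full EP miner. Here is a toy sketch on synthetic data (the class is constructed to be exactly feature1 AND feature2, which is an idealised version of the situation in the question):

import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
n = 1000
f1 = rng.randint(0, 2, n)
f2 = rng.randint(0, 2, n)
target = f1 & f2                      # the class is exactly the conjunction

df = pd.DataFrame({'f1': f1, 'f2': f2, 'f1_and_f2': f1 & f2, 'target': target})
print(df.corr()['target'].round(2))
# f1 and f2 on their own correlate only moderately (about 0.58) with the
# target, while the explicit conjunction column matches it perfectly (1.0),
# so scoring candidate feature subsets instead of single features pays off.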
H: Feature selection where adding features deteriorates the model I am training a kNN classifier with 144 features and graphed the accuracy vs the number of features used and got this. What might be the reason for the drops in the accuracy at some points of the graph? I am using accelerometer-gyroscope-magnetometer fusion to recognize human activities. The one presented is validation accuracy. Should I use training accuracy instead? And why? I ranked the features using the ReliefF feature selection algorithm. I used time domain features such as mean, standard deviation, rms, median, variance, iqr, mad, zcr and mcr, and frequency domain features such as skewness, kurtosis and PCA. Here are the top 8 features chosen. Peak accuracy occurs at 8 features. AI: I guess the measurements from the accelerometer-gyroscope-magnetometer fusion are noisy and redundant in some sense. This means you can find some sort of correlation among the values of the measurements, for instance a correlation between the values from the accelerometer and the gyroscope. PCA captures the principal directions of variation in your data, removing the correlation between the measurements and also reducing the noise, therefore increasing the accuracy. From the graph it can be seen that the accuracy just diminishes a little when using all the features. Another factor I would consider is the magnitude of the features: a feature with a very large magnitude affects the behavior of k-NN.
H: RNN's with multiple features I have a bit of self taught knowledge working with Machine Learning algorithms (the basic Random Forest and Linear Regression type stuff). I decided to branch out and begin learning RNN's with Keras. When looking at most of the examples, which usually involve stock predictions, I haven't been able to find any basic examples of multiple features being implemented other than 1 column being the feature date and the other being the output. Is there a key fundamental thing I'm missing or something? If anyone has an example I would greatly appreciate it. Thanks! AI: Recurrent neural networks (RNNs) are designed to learn sequence data. As you guess, they can definitely take multiple features as input! Keras' RNNs take 2D inputs (T, F) of timesteps T and features F (I'm ignoring the batch dimension here). However, you don't always need or want the intermediate timesteps, t = 1, 2 ... (T - 1). Therefore, Keras flexibly supports both modes. To have it output all T timesteps, pass return_sequences=True to your RNN (e.g., LSTM or GRU) at construction. If you only want the last timestep t = T, then use return_sequences=False (this is the default if you don't pass return_sequences to the constructor). Below are examples of both of these modes. Example 1: Learning the sequence Here's a quick example of training a LSTM (type of RNN) which keeps the entire sequence around. In this example, each input data point has 2 timesteps, each with 3 features; the output data has 2 timesteps (because return_sequences=True), each with 4 data points (because that is the size I pass to LSTM). import keras.layers as L import keras.models as M import numpy # The inputs to the model. # We will create two data points, just for the example. data_x = numpy.array([ # Datapoint 1 [ # Input features at timestep 1 [1, 2, 3], # Input features at timestep 2 [4, 5, 6] ], # Datapoint 2 [ # Features at timestep 1 [7, 8, 9], # Features at timestep 2 [10, 11, 12] ] ]) # The desired model outputs. # We will create two data points, just for the example. data_y = numpy.array([ # Datapoint 1 [ # Target features at timestep 1 [101, 102, 103, 104], # Target features at timestep 2 [105, 106, 107, 108] ], # Datapoint 2 [ # Target features at timestep 1 [201, 202, 203, 204], # Target features at timestep 2 [205, 206, 207, 208] ] ]) # Each input data point has 2 timesteps, each with 3 features. # So the input shape (excluding batch_size) is (2, 3), which # matches the shape of each data point in data_x above. model_input = L.Input(shape=(2, 3)) # This RNN will return timesteps with 4 features each. # Because return_sequences=True, it will output 2 timesteps, each # with 4 features. So the output shape (excluding batch size) is # (2, 4), which matches the shape of each data point in data_y above. model_output = L.LSTM(4, return_sequences=True)(model_input) # Create the model. model = M.Model(input=model_input, output=model_output) # You need to pick appropriate loss/optimizers for your problem. # I'm just using these to make the example compile. model.compile('sgd', 'mean_squared_error') # Train model.fit(data_x, data_y) Example 2: Learning the last timestep If, on the other hand, you want to train an LSTM which only outputs the last timestep in the sequence, then you need to set return_sequences=False (or just remove it from the constructor entirely, since False is the default). And then your output data (data_y in the example above) needs to be rearranged, since you only need to supply the last timestep. 
So in this second example, each input data point still has 2 timesteps, each with 3 features. The output data, however, is just a single vector for each data point, because we have flattened everything down to a single timestep. Each of these output vectors still has 4 features, though (because that is the size I pass to LSTM). import keras.layers as L import keras.models as M import numpy # The inputs to the model. # We will create two data points, just for the example. data_x = numpy.array([ # Datapoint 1 [ # Input features at timestep 1 [1, 2, 3], # Input features at timestep 2 [4, 5, 6] ], # Datapoint 2 [ # Features at timestep 1 [7, 8, 9], # Features at timestep 2 [10, 11, 12] ] ]) # The desired model outputs. # We will create two data points, just for the example. data_y = numpy.array([ # Datapoint 1 # Target features at timestep 2 [105, 106, 107, 108], # Datapoint 2 # Target features at timestep 2 [205, 206, 207, 208] ]) # Each input data point has 2 timesteps, each with 3 features. # So the input shape (excluding batch_size) is (2, 3), which # matches the shape of each data point in data_x above. model_input = L.Input(shape=(2, 3)) # This RNN will return a single timestep with 4 features. # Because return_sequences=False, it only outputs the final timestep, # with 4 features. So the output shape (excluding batch size) is # (4,), which matches the shape of each data point in data_y above. model_output = L.LSTM(4, return_sequences=False)(model_input) # Create the model. model = M.Model(input=model_input, output=model_output) # You need to pick appropriate loss/optimizers for your problem. # I'm just using these to make the example compile. model.compile('sgd', 'mean_squared_error') # Train model.fit(data_x, data_y)
H: Visualize changes other time I am analyzing a population movement pattern and I would like to design a visualization like this one: Is there any tool or language (preferably R) I can use for that purpose? Here is a sample of my dataset. AI: Of course you can reproduce that graph with some ggplot! Here is the code to create it: library(ggplot2) library(scales) df <- read.csv2('Graphs_SK.csv') p <- ggplot(data = df, aes(x = Governorate)) + geom_segment(aes(y = Mid.May.2015, yend = Dec.16, xend = Governorate), color = 'grey', size = 1) + geom_point(aes(y = Mid.May.2015, color = 'start'), size = 2) + geom_point(aes(y = Dec.16, color = 'end'), size = 2) + scale_colour_manual(name = '(Population in millions)', labels = c('May 2015', 'Dec 2016'), values = c('start' = 'steelblue4', 'end' = 'steelblue')) + scale_y_continuous(labels = comma, name = '', position = 'top') + coord_flip() + ggtitle('1 Year of something of Governorate') + guides(colour = guide_legend(title.position = 'right')) + theme(panel.background = element_rect(fill = 'white'), legend.position = c(-.15, 1.03), legend.direction = 'horizontal', legend.justification = c(0, 0), legend.key = element_rect(fill = NA), legend.title = element_text(size = 10), axis.text.y = element_text(hjust = 0), axis.ticks.y = element_line(size = NA), plot.title = element_text(hjust = .5))
H: How to approach a Data Science case study question? I recently had a phone interview with a consumer tech company for a quant position. The question was basically, "imagine a Facebook-style social network site. Six months ago a new feature called 'mentions' was added which allows you to tag your friends with an @ sign. How would you determine whether this feature was a success?" I was a bit taken aback by how broad the question was. I first asked if the feature was given to everyone in the network or a sample, to which the interviewer responded "you decide" - meaning I could approach the analysis either way. I talked in general terms about calculating week over week usage of the feature as well as month over month. I also discussed computing a baseline metric for product interaction and then comparing the usage of the new mentions feature relative to the baseline statistic. Overall I left the interview feeling quite dumb, as I have a pretty solid command of stats, but came away looking like an idiot. Are there specific statistical procedures to test for something like this, à la A/B testing, or some kind of hypothesis test? And secondly, is there a good framework for approaching these types of open-ended, case-study-style questions in general? AI: This question (something I've asked variants of several times in interviews) has absolutely nothing to do with statistical or other quantitative procedures. What is being asked here is for an understanding of the overall data mining process itself. The first thing to determine is what the definition of success is. So you have to ask. The stakeholder usually will not volunteer this unless asked anyway. Then, depending on the answer, describe the overall process for data mining based on this end goal.
H: Question about the simple example for batch normalization given in "deep learning" book In the section about batch normalization of the Deep Learning book by Ian Goodfellow (chapter link) there is the following text: As an example, suppose we have a deep neural network that has only one unit per layer and does not use an activation function at each hidden layer: $y=x w_1 w_2 w_3 \ldots w_l$. Here, $w_i$ provides the weight used by layer $i$. The output of layer $i$ is $h_i=h_{i-1} w_i$. The output $y$ is a linear function of the input $x$, but a nonlinear function of the weights $w_i$. Why is $y$ nonlinear with respect to the weights $w_i$? AI: I think what the statement means is this: when the weights $w_1,\ldots,w_l$ are held fixed, the output is linearly proportional to $x$. As a function of the weights, however, $y$ is not linear: the weights do not combine additively, as in $y=w_1x_1+\ldots+w_nx_n$, but multiplicatively, as $y=w_1w_2\cdots w_l\,x$ (each $w_i$ is one dimension of the weight space), so varying several weights at once changes $y$ multiplicatively rather than linearly. The statement might be clearer phrased as "the output is linear in any single weight $w_i$, but nonlinear in the weights $w_i$ jointly".
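A quick numerical check of the statement, using the book's toy network (the values are arbitrary):

import numpy as np

x = 2.0
w = np.array([0.5, 1.5, 2.0])       # one weight per layer, no activations

y = x * np.prod(w)                   # y = x * w1 * w2 * w3

# Linear in x: doubling x doubles y.
print(2 * x * np.prod(w) / y)        # 2.0

# Nonlinear in the weights: doubling every weight multiplies y by 2**3, not 2.
print(x * np.prod(2 * w) / y)        # 8.0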
H: Binning data in one of the columns of a dataframe(Using R) I have a data frame that contains random values between 0-60 (inclusive 0 & 60). These values denote months. I want to bin the data into three categories (x<=6, 6< x <=12, x>12) and generate a new single columns which will be a factor containing 3 values (0,1,2) denoting the respective bins. I am able to generate 3 columns one-hot encoded style but I am unable to think of a way to generate single column having 3 factors. AI: Use cut: > df = data.frame(v=sample(1:60,1000,TRUE)) > df$cat = cut(df$v,c(-Inf,6,12,Inf)) > table(df$cat) (-Inf,6] (6,12] (12, Inf] 97 92 811 Also, simple R questions are better asked on StackOverflow.
H: Python Keras NN, handling n_samples of float outputs I am using Keras with Theanos backend in Python. I have 2117 samples and each sample has an individual target (on purpose) ie. 2117 outputs. As opposed to categories, the targets are ratings eg. (16.4714494876, 17.4129353234, 17.4476570289) the entirety of the number is important. I am having problems/don't know where to start. 1) When i run the NN it only outputs the targets as whole integers as opposed to the format of the actual values. eg. 16 instead of 16.xxxxxx 2) Presumably i will only be able to gauge the accuracy of predictions based on how close the output is to the target since there are so many targets, does this type of classification problem have a name that i can research? 3) In 3 research papers i have read that apply NN to my specific classification problem they list the output layer as only having 1 neuron but provide no further explanation, how could this be? Here is my model. # fix random seed for reproducibility seed = 7 np.random.seed(seed) X = np.array(df[FEATURES].values) Y = np.array(df["MTPS"].values) # define baseline model def baseline_model(): # create model model = Sequential() model.add(Dense(10, input_dim=(len(FEATURES)), init='normal', activation='relu')) model.add(Dense(2117, init='normal', activation='softmax')) # Compile model model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy']) return model #build model estimator = KerasClassifier(build_fn=baseline_model, nb_epoch=100, batch_size=5, verbose=2) #cross validation X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=seed) estimator.fit(X_train, Y_train) #print class predictions print estimator.predict(X_test) print Y_test Thanks for any help. AI: I'm confused. As you have said, the targets are ratings. It's definitely a regression problem to me, instead of a classification problem. There's several problems in your code: For regression problem, we usually use linear as activation function of the last layer (sometimes relu, even sigmoid). And also we use mse as metric(sometimes mae, msle, etc). categorical_crossentropy is used for classification problem, and sparse_categorical_crossentropy is used for sparse input classification problem. Ref KerasClassifier is used for classification problem, use KerasRegressor instead. metrics=['accuracy'] is used for classification problem, and it's meaningless in regression problem. Ref Assuming df is a Pandas DataFrame, then df.values is naturally a ndarrary, there's no need to cast np.array. Now answering your question: As I've never used wrappers for scikit-learn API, I'm not very sure why you have integer output, my best guess is because of KerasClassifier and .predict(in scikit-learn APIs, .predict returns integer, basically the predicted class and .predict_proba returns float, indicating the probability of each class). Try to use .predict_proba, it would help. BTW, you should really use KerasRegressor. As you have mentioned, there's so many classes to predict. In fact, it's a regression problem. Could you please add the links of these 3 papers? Neural networks having only 1 neuron in output layer seem to be a regression NN to me. Regression NN usually uses Dense(1, activation='linear') as the output layer. 
Here's my version of your code, it might work: # fix random seed for reproducibility seed = 7 np.random.seed(seed) X = df[FEATURES].values # no need to cast np.array Y = df["MTPS"].values # define baseline model def baseline_model(): # create model model = Sequential() model.add(Dense(10, input_dim=(len(FEATURES)), init='normal', activation='relu')) model.add(Dense(1, init='normal', activation='linear')) # one neuron, linear activation # Compile model model.compile(loss='mse', optimizer='adam') # mse loss return model #build model estimator = KerasRegressor(build_fn=baseline_model, nb_epoch=100, batch_size=5, verbose=2) # KerasRegressor for regression problem #cross validation X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=seed) estimator.fit(X_train, Y_train) #print class predictions print estimator.predict(X_test) print Y_test References
H: Uploading huge dataset I have a few questions: Is there a website to upload a huge research dataset (over 100GB) for free? Which type of compression (rar, zip, etc.) is good for JPEG images? In the case of a 120GB dataset, what is the best split for such big files (e.g. 20GB each)? AI: Don't compress files that are already compressed (like how JPEG/PNG images or video files are). Their inherent compression is usually good enough, and you only trade some 5% in lower compressed size for a much lengthier and often non-seekable decompression that results in using twice the disk space. If you need to batch the files together, just use tar.
H: number of parameters for convolution layers In this highly cited paper, authors give the following discussion on the number of weight parameters. I am not very clear why it has $49C^2$ parameters. I think it should be $49C$ since each of $C$ input channels shares the same filter, which has $49$ parameters. AI: Actually it's $49C*C$, the first $C$ is the number of input channels, and the second $C$ is the number of filters. Quote from CS231n: To summarize, the Conv Layer: Accepts a volume of size $W_1 \times H_1 \times D_1$ Requires four hyperparameters: Number of filters $K$, their spatial extent $F$, the stride $S$, the amount of zero padding $P$. Produces a volume of size $W_2 \times H_2 \times D_2$ where: $W_2 = (W_1 - F + 2P)/S + 1$ $H_2 = (H_1 - F + 2P)/S + 1$ (i.e. width and height are computed equally by symmetry) $D_2 = K$ With parameter sharing, it introduces $F \cdot F \cdot D_1$ weights per filter, for a total of $(F \cdot F \cdot D_1) \cdot K$ weights and $K$ biases. In the output volume, the $d$-th depth slice (of size $W_2 \times H_2$) is the result of performing a valid convolution of the $d$-th filter over the input volume with a stride of $S$, and then offset by $d$-th bias. A common setting of the hyperparameters is $F = 3, S = 1, P = 1$. However, there are common conventions and rules of thumb that motivate these hyperparameters. See the ConvNet architectures section below.
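To make the counting concrete, here is a small sketch (C = 64 is just an example channel count):

def conv_params(f, c_in, k, bias=False):
    # Weights of a conv layer: k filters, each of spatial size f x f,
    # spanning all c_in input channels.
    return f * f * c_in * k + (k if bias else 0)

C = 64
print(conv_params(7, C, 1))    # a single filter:       49 * C   = 3136
print(conv_params(7, C, C))    # C filters (C outputs): 49 * C^2 = 200704

So each individual filter does have only $49C$ weights, but the layer needs $C$ such filters to produce $C$ output channels, giving $49C \cdot C = 49C^2$ in total.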
H: Error opening Iris.tab file in Orange load data widget I am attempting to load in the Iris.tab data set into the Orange load data widget. However, when I do this, I get the following error message (shown above). This keeps popping up when I start up Orange Data Mining and try to load the iris.tab data file. It used to work before, but now it is showing this error. Does anyone have any idea why this might be happening and what might be causing this error? AI: This was an error in Orange 3.3.11, and fixed in release 3.3.12. Update to the newest release to get rid of the error.
H: Which Outlier Detection Method? Why? For detecting an outlier in a vector I have tested different well known outlier detection methods. Finally, I used a combination of different methods and an agreement between those methods. Now, a person asks why I chose this particular combination and these algorithms: you could reach different combinations and use other algorithms and they may yield better results. What should I answer? I cannot just say "based on tests", as there are many other algorithms that I haven't tested (I cannot test all algorithms). It is not a logical response, I think. I'm looking for tests to justify my selected methods and combination and say why I have selected these methods. Please let me know your suggestions. AI: You can justify your choices by using data. Treat the anomaly detection as a supervised learning problem where the target concept is "being an anomaly". Then you'll be able to present - for each method - its confusion matrix. Not only will this be a good justification, it will also let you understand the expected results. Many times, we have models and we wonder which confidence threshold we should use for alerting. In the supervised learning framework you'll be able to make trade-offs like "increasing the confidence to X will lead to a better precision Y at the cost of decreasing the recall to Z".
H: Format for X_train in keras using theano I want to try out Keras (Theano backend) for regressions after already using sklearn. For this I uses this nice tutorial http://machinelearningmastery.com/regression-tutorial-keras-deep-learning-library-python/ and tried to replace the training data there with my own. import numpy import pandas import pickle from keras.models import Sequential from keras.layers import Dense from keras.wrappers.scikit_learn import KerasRegressor from sklearn.model_selection import cross_val_score from sklearn.model_selection import KFold from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler [X, Y] = pickle.load(open("training_data_1_week_imp_lt_15.pkl", "rb")) X_train, X_test, y_train, y_test = train_test_split( X, Y, test_size=0.5, random_state=42) scaler = StandardScaler() scaler.fit(X_train) # Don't cheat - fit only on training data X_train = scaler.transform(X_train) X_test = scaler.transform(X_test) print (X_train.shape) # define base mode def baseline_model(): # create model model = Sequential() model.add(Dense(8, input_dim=8, init='normal', activation='relu')) model.add(Dense(1, init='normal')) # Compile model model.compile(loss='mean_squared_error', optimizer='adam') return model # fix random seed for reproducibility seed = 7 numpy.random.seed(seed) # evaluate model with standardized dataset estimator = KerasRegressor(build_fn=baseline_model, nb_epoch=100, batch_size=5, verbose=0) estimator.fit(numpy.array(X_train),y_train) However, i get the following error: Exception: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 arrays but instead got the following list of 6252 arrays: ... The format of X is the usual sklearn format: print (X_train.shape) = (6252, 8) How do I format my input X correctly. What I tried was transposing but this did not work. I also already searched the web but could not find a solution/explanation. Thanks! EDIT: here is a small sample file https://ufile.io/8a428 [X, Y] = pickle.load(open("test.pkl", "rb")) AI: I solved this (still banging my head against the wall): estimator.fit(numpy.array(X_train),numpy.array(y_train)) this works. I am not sure why. The error message is very misleading IMHO.
H: How do I integrate Github files to Orange for ML? I am currently trying to make a facial recognition workflow with Orange but since the widgets are still prototypes they are not all available in Orange through the add-on menu. I found the Github files for the widgets but don't know how to install this so that I can see and use the widgets in Orange. AI: There are three ways to install Orange add-ons: The Options -> Add-ons menu of Orange. Use pip: pip3 install orange3-prototypes Install from the source: Terminal commands: git clone https://github.com/biolab/orange3-prototypes.git cd orange3-prototypes python3 setup.py install
H: What algorithms should I choose for a recommender system and why? To my knowledge, recommender systems are broadly classified into collaborative and content. Collaborative in turn is divided into 1) Memory (uses similarity metrics) and 2) Model (well known Matrix/Tensor factorization). Content based involves constructing a user profile and then an algorithm like SVM to classify and recommend items. Now here are my questions: What other algorithms can I use for recommendation and why? Can I use neural networks? (understanding them has been a bit difficult for me) Is it true that neural networks (NN) are only suited for text and image processing and numerical data doesn't need complex algorithms like NN? AI: Short answer: It depends on your data. What do you want to do? Longer answer: Use content based approaches if you have data on your items. Use collaborative approaches if you have data on your users. Use both if you have both. I would say content based approaches are general machine learning problems (how do I extract meaningful information from data) whereas collaborative filtering is really recommender-system-specific work (how users' behavior can suggest user/item similarities and connections). Well, you can. Neural nets are just a kind of algorithm; you can certainly use them for content based analysis, and it might be possible to use them to enhance your collaborative algorithm. NNs treat texts and images as numerical data, so I don't understand your question. If you want good insight into today's recommender systems, take a look at this article.
H: Is it acceptable to select a random child node when using a Decision Tree (trained via ID3) to predict if an unknown attribute value is encountered I am writing a decision tree trained with the ID3 algorithm from scratch. I wanted to be able to train on and classify continuous data, so I implemented k-means clustering and reduced the range of values of any input training or predicting data. However, I ran into the problem where an attribute value that was not encountered during training, in a deep node somewhere in the tree, but that exists further up was encountered. All data points with this specific attribute value probably ended up on a different branch of the tree. So to 'solve' this, whenever an unknown attribute value is encountered for a node, I sent it down a random existing branch. I get 95.45% accuracy using randomly split 85%-15% training-test data with iris. Is this an acceptable approach to take or have I gotten something wrong here? Here's the code: https://github.com/jamalmoir/ml_components/blob/master/ml_components/models/decision_tree.py Thanks AI: Do your leaf nodes return a probability distribution rather than a single class value e.g., the majority class of training instances that arrived at the node? If so, a better approach would be to send the test instance down every child path and average the probability distributions that are returned by the leaves. Your good score of 95.45% on iris data suggests to me that it's not a very important detail in this example. Try some more challenging datasets and see if it makes a difference.
H: Is there a term for "this month last year" in a report? I'm building a report that has month over month data, but also "this month last year". Is there a better/standard way of describing this? AI: Same period last year is what you want. BTW, your question does not appear to be about data science. References https://msdn.microsoft.com/en-us/library/ee634972.aspx http://www.wiseowl.co.uk/blog/s2477/same-period-previous-year.htm
H: Explanation for MLP classification probability I presented some results from an MLP neural network model that I implemented. In the results, for a classification of two categories, if I sum up the probabilities of both categories, some of the sums are greater than 1. When I was asked why the sum is greater than 1, I guessed that the probability stands for the concept of confidence in mathematics. Could anyone tell me whether I am right? AI: It depends a little on the specific model. If your model is using a softmax output layer, then the values are usually interpreted as mutually-exclusive class probabilities and should sum to 1. If your model is using a sigmoid output layer, then the values for a classification problem can be interpreted as the individual, non-exclusive probabilities. In that case, it is possible for an example to not be in either of your classes (an output close to [0,0]), or to definitely be in both classes (an output close to [1,1]) - if that is not the case in the problem that you have trained the NN to model, then you might want to consider altering the model. So, if your classification should be mutually-exclusive, then the person asking you the question has a valid point. The probabilities should sum to 1, and you may need to fix the model. In that case, you should use a softmax output layer, and would not have this problem because the probabilities would always sum to 1. Whether it is worth fixing the model depends on your understanding of mutually-exclusive vs non-exclusive classes in the problem, whether the current model meets its goals (it is not a big issue in practice, if the model is not a strict theoretical match to the problem, but has a perfectly good accuracy) and how much effort it would take to alter your code and re-train the model. Your statement that "the probability stands for confidence" is technically correct, but not useful as an answer to your colleague's query. For classification, the terms confidence and probability are almost interchangeable, so you cannot really use one word to explain the other. If someone asked why one of the numbers was larger than another in your model, you would not answer "that is because x is greater than y" - essentially this is what you have done here. So your statement is not wrong as such, but neither does it explain the result.
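A tiny NumPy illustration of the difference, with arbitrary raw scores (logits) for the two categories:

import numpy as np

logits = np.array([2.0, 0.5])            # raw scores for the two categories

softmax = np.exp(logits) / np.exp(logits).sum()
sigmoid = 1.0 / (1.0 + np.exp(-logits))

print(softmax, softmax.sum())   # e.g. [0.82 0.18], always sums to exactly 1.0
print(sigmoid, sigmoid.sum())   # e.g. [0.88 0.62], sums to about 1.5 - legal
                                # for non-exclusive classes, confusing otherwise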
H: Analysis of railway data - Detecting outliers I would like some pointers about the following problem: I would like to detect anomalies in a pretty huge collection of railway data, or create a baseline model for detecting future anomalies. The data I have at my disposal consists of coordinates and the speed at each coordinate (plus the time of measurement). Could this perhaps be approached as a regression problem, where there's a clear(?) connection between the location and speed of a train? For instance, a train suddenly moving at a snail's pace on a track that historical data shows to be high velocity could be a potential anomaly. If this could indeed be approached in such a way, would something like an SVM be an option or should I look into other algorithms? AI: There are a number of directions you can take, and there are a number of questions related to this topic (anamoly-detection-for-transaction-data). One question to answer first is whether the data is seasonal or not. You already suggested a valid approach. Determine which areas are correlated with low and high speeds and then determine when a speed differs. I think it's best to start with a heuristic approach and then try more complicated methods to improve. If you want to use an SVM, a one-class SVM may work well (one-class SVM)
H: Estimating Titan X graphics card impact on performance I'm currently training CNNs using Tensorflow (Python) on my GTX 970 (specs here). I recently took a look at the new pascal based Titan Xs and I'm wondering what an estimated performance/speed gain would be if I upgraded? The memory increase is an obvious benefit for larger models, but I'm mostly wondering about speed. Does the doubling of core count and bump in memory speed offer that much of a performance gain? Anyone have any experience with the new (or old) Titan X cards? AI: There are a lot of parameters which matter when using GPU's for machine learning, some of them are: CUDA core count Memory bandwidth (GB/s) Memory per core (MB) Raw Speed (MHz) Total Memory available (GB) Performance on 16-bit, 32-bit floating ops/sec Tim Dettmers has an excellent (frequently updated) blog where he's compared different cards, near the end he's also given a simple speed-up comparison. Using that as a guide, it can be estimated that Titan X pascal would be upto 5 times faster than a GTX 970 for Deep Learning.
H: Difference between paragraph2vec and doc2vec Is paragraph2vec the same as Doc2vec or is every approach different? AI: There may be differing implementations, but these two terms refer to the same thing. Both convert a generic block of text into a vector similarly to how word2vec converts a word to vector. Paragraph vectors don't need to refer to paragraphs as they are traditionally laid out in text. They can theoretically be applied to phrases, sentences, paragraphs, or even larger blocks of text. Here's one definition of a paragraph vector: An unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. And the full paper if you are interested: https://cs.stanford.edu/~quocle/paragraph_vector.pdf
H: Expanding Standard deviation The problem I face is one of a language barrier. I have a set of samples consisting of the following data: Sample number Organism Count 1 5 2 8 3 5 4 7 5 13 ... ... I need to find the number of samples required to make an accurate estimate of the organism count within a square meter of ground. This is for a school assignment. I don't need an answer to that. What I need to do is find the standard deviations of the set like follows: (sample 1) = standard deviation (sample 1, sample 2) = standard deviation (sample 1, sample 2, sample 3) = standard deviation In a rolling way like that. The only information I can find about rolling standard deviation is with a specific window of samples. What I need is a window that grows from the first one. What is this called? I'm writing a program in Python to calculate and display it in a graph. I want to know the name so I can avoid reinventing the wheel in a likely less performant way (if there is a name for this specific calculation). It is likely that the libraries I'm working with already provide a method to calculate what I'm asking. So the problem is: I need to find the standard deviation of a 'growing set' and I want to know if that has a specific name, so I can look for it in the documentation of the Python library I'm using. Feel free to change the name of the question to make it fit better; since I didn't know the name, I couldn't state it in the title. AI: This is called an expanding (as opposed to rolling) window. If you are using Python, you can use pandas, which provides an expanding standard deviation through .expanding(): import pandas as pd import numpy as np # Generate some random data df = pd.DataFrame(np.random.randn(100)) # Calculate the expanding standard deviation exp_std = df.expanding(min_periods=2).std() # Print results print(exp_std)
H: Would a convolution network make sense for policy based tic-tac-toe approach? This is inspired from my previous question, comments to which made me realize that a CNN was unsuitable for the problem The CNN required over 700k training datasets while a MLP did it in in less than 50k. Now, I'm trying to solve the next problem and need to figure out if a CNN makes sense. CNN detail - Input - The board as an array of 9 elements that represents the board (0=empty, 1='X', 2='O') Output - Recommended move as a one-hot encoded array of 9 elements. The index of 1 is the recommended move (for example, in [0,0,1,0,0,0,0,0,0] the recommended move is 2) So, basically the CNN will be trained with a dataset that consists of boards and the move that the winner of the game has made for each of the boards. Then during evaluation, it'll try to predict the best move for a given board. Does a convolution neural network make sense for this problem? Note: The convnet that I was going to use for this problem is the same as in my previous question AI: This problem is not complex enough to justify a large convolutional network. However, if you are determined to use a CNN, then you could, just try to keep the architecture very simple. Just one convolutional layer (probably SAME/padded), no pooling or dropout, only a few feature maps (e.g. no more than 4, maybe just 2 will do) - and a softmax fully-connected layer for the output. Bear in mind that it can take more epochs, and perhaps more tuning of hyper-params, in order to get a more complex model to fit a simple problem. If you follow the plan in your earlier problem, and train against the whole population of valid states and moves, then you don't need to worry about over-fitting. You should bear in mind that tic-tac-toe is simple enough that you can use tabular methods (i.e. approaches that simply enumerate and score all possible game states) to find an optimal policy. The network is being used in your case as a policy function approximation, and is a bit like using a sledgehammer to crack a nut. This makes sense if you are learning a technique used on more sophisticated grid-based games, using a toy problem. However, if your goal was more directly to learn a policy for a tic-tac-toe playing bot, then you would be better off not using any supervised learning model for the policy.
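For reference, here is a sketch of the kind of minimal network described above, written with Keras 2-style layer names; the one-hot three-plane board encoding and every size in it are assumptions for illustration, not requirements:

from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

# Board encoded as a 3x3 grid with 3 one-hot planes: empty / X / O.
model = Sequential([
    Conv2D(4, (3, 3), padding='same', activation='relu',
           input_shape=(3, 3, 3)),        # one small, padded conv layer, no pooling
    Flatten(),
    Dense(9, activation='softmax'),       # one probability per board cell
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()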
H: Why is sum a succinct constraint? I'm new to data mining and have been going through constraint-based query mining lately. I came across the concept of succinctness which basically details a constraint as succinct, if we can generate all the candidate item-sets precisely, based on an itemset satisfying the constraint. A more formal definition is : Given A1, the set of items satisfying a succinctness constraint C, then any set S satisfying C is based on A1 , i.e., S contains a subset belonging to A1 Example, min(S.Price) <= v is succinct But, sum(S.Price) >= v is not succinct I understand why the former is a succinct constraint => as all the candidates can be generated by ensuring that one of the subsets satisfies that constraint. But I fail to understand why the latter is not a succinct constraint. Any pointers on this would be helpful ! AI: We can show that the "sum above threshold" is not succinct by providing a counter example. As you wrote the definition is Given A1, the set of items satisfying a succinctness constraint C, then any set S satisfying C is based on A1 , i.e., S contains a subset belonging to A1 Hence, we can provide a counter example by providing a set A1 satisfying the constraint while none of its subsets satisfying it. Consider three items, a,b and c such that each one on them costs 1. Let the constraint C be sum(S.Price) >= 3 For the set {a, b, c} the sum of prices is 3 and therefore the constraint C is satisfied. For each of the subsets of {a, b, c} the sum of prices is lower than 3 and therefore C is not satisfied. We found a counter example in which a set satisfies a "sum above threshold" while none of its subsets satisfies it. Hence, "sum above threshold" is not succinct.
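The counter-example can even be checked mechanically; a short sketch:

from itertools import combinations

prices = {'a': 1, 'b': 1, 'c': 1}

def satisfies(items, v=3):
    # The constraint C: sum(S.Price) >= v
    return sum(prices[i] for i in items) >= v

full_set = ('a', 'b', 'c')
print(satisfies(full_set))                          # True: the full set satisfies C

proper_subsets = [s for r in range(len(full_set))
                  for s in combinations(full_set, r)]
print(any(satisfies(s) for s in proper_subsets))    # False: no proper subset does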
H: More features hurts when underfitting? I was training a binary classifier using XGBClassifier (basically boosted decision trees, if I understand it correctly). I have 10K training examples and two distinct sets of features (although they could be dependent): one contains 26 features (call it set A) and the other contains 96 features (call it set B). I trained 3 classifiers, each with a different combination of the feature sets, namely A, B and A+B. The result is that using only A is clearly better than using both A and B. At this point I thought it might be overfitting, so that using fewer features actually avoids the overfitting. The number of trees used in the above trainings was 100. So I used 10-fold cross validation to find the optimal number of trees for each feature set combination, and they are all beyond 100 (more like 300-500). So it seems to me the models were learning with fewer degrees of freedom than desired, and they were underfitting if you will. This confuses me: why does providing additional features make things worse (i.e. using only A is better than using both A and B) when the models are on the underfitting side (as opposed to overfitting)? Or, a more general question: how do I find out the real problem in this situation? Note: I have read this question about overfitting/underfitting and I still think my models should be underfitting, because apparently increasing model complexity helps (e.g. from 100 trees to 300-500 trees).
AI: Theoretically speaking, more data leads to a better model; in practice, however, more features often make the model harder to train. Suppose your data has 30 "main" (truly informative) features. If feature set A contains 20 of them, then 20 of its 26 features are informative, so under a limited budget (in your case, 100 trees) it is relatively easy for each informative feature to be chosen and exploited by the splits. When it comes to feature set B, where the 30 "main" features are buried among 96, it is much harder for each of them to be chosen, and harder still with only 100 trees, because the 66 "minor" features compete for splits without adding value. That is the under-fitting you are seeing.
Back to your question: when the model is under-fitting, if we are lucky and the model trained on feature set B (call it model B) still manages to use all 30 "main" features, it will perform well and we might never notice the under-fitting. In most cases we are not that lucky, and the 66 "minor" features effectively ruin the model. In my own practice, I use more training iterations when more samples arrive, and a more complex model (more and deeper trees) when more features are added, while checking for over- and under-fitting along the way.
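One practical way to make the comparison fair is to tune the number of trees separately for each feature set and compare cross-validated scores at the tuned settings, rather than at a fixed 100 trees. A rough sketch follows (the data here is synthetic; substitute your own matrices for feature set A and A+B):

import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in: 122 columns in total, of which only some are informative.
X_AB, y = make_classification(n_samples=10000, n_features=122, n_informative=20, random_state=0)
X_A = X_AB[:, :26]  # pretend the first 26 columns are feature set A

def tuned_cv_score(X, y):
    grid = GridSearchCV(
        estimator=xgb.XGBClassifier(),
        param_grid={"n_estimators": [100, 300, 500]},
        scoring="roc_auc",
        cv=10,
    )
    grid.fit(X, y)
    return grid.best_score_, grid.best_params_

print("A only:", tuned_cv_score(X_A, y))
print("A + B :", tuned_cv_score(X_AB, y))

If A+B still loses to A after both are allowed their optimal number of trees (and perhaps depth), the problem is more likely noise or redundancy in B than a lack of model capacity.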
H: How to generate bulk graphics using R I have a dataset of Key Performance Indicators (KPIs) and for each KPI I have a current level of achievement and 2 targets: Target1 and Target2. How can I automatically generate one graphic for each achievement, as in the file attached here? As I have many KPIs, I would like to generate my graphics in a batch process.
AI: There are multiple ways you can achieve this. I guess the easiest would be to create a function that saves a bar chart for your KPI to a file:
save_KPI_plot <- function(fn, kpi_data) {
  png(paste0(fn, ".png"))
  # Assuming kpi_data is a named numeric vector such as
  # c(Achievement = 70, Target1 = 80, Target2 = 100)
  barplot(kpi_data, main = fn, ylab = "Value")
  dev.off()
}
You can either call the function directly, as in
save_KPI_plot("kpi_1", kpi_1_data)
or store your file names in one list and your data in another and loop over both:
kpi_fns <- list("filename_1", "filename_2")
kpi_data <- list(kpi_1_data, kpi_2_data)
for (i in seq_along(kpi_fns)) {
  save_KPI_plot(kpi_fns[[i]], kpi_data[[i]])
}
If you prefer a different image format, you can change png() to bmp() or jpeg().
H: Association rule mining and Orange3 package Python How do I use the Orange3 package for association rule mining? In Orange 2.7 the Orange.associate.AssociationRulesSparseInducer() method is available, but it is not available in Orange 3.
AI: In Orange 3, AssociationRulesSparseInducer has been removed; association rule mining now lives in the separate orange3-associate add-on. The documentation of Orange3-Associate may help you with your task.
References
https://github.com/biolab/orange3-associate/issues/4
https://stackoverflow.com/questions/39667272/orange-data-mining-version-3-3-python-association-rules
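If the goal is to mine rules from transaction data in a script, a sketch along the following lines should work with the orange3-associate add-on (installed with pip install orange3-associate). The function names are taken from that add-on's documentation, but double-check them against the version you install:

import numpy as np
from orangecontrib.associate.fpgrowth import frequent_itemsets, association_rules

# Toy one-hot transaction matrix: rows are baskets, columns are items 0..3.
X = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 1, 0, 1],
], dtype=bool)

# Frequent itemsets with support >= 50% of transactions.
itemsets = dict(frequent_itemsets(X, 0.5))

# Rules with confidence >= 0.8; each rule is (antecedent, consequent, support, confidence).
for antecedent, consequent, support, confidence in association_rules(itemsets, 0.8):
    print(set(antecedent), "->", set(consequent), support, confidence)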