H: How to cluster text-based software requirements I'm a beginner in deep learning and I'd like to cluster text-based software requirements by themes (word similarities/frequency of words) using neural networks. Is there any example/tutorial/github code of an unsupervised neural network that groups texts based on themes and word similarities? Thank you very much for your answers! AI: I recommend using word2vec as the feature vector of words and an LSTM autoencoder to encode a sentence (or text). After you get a vector for each sentence (or text), you can cluster your sentences (or texts) using a variety of clustering techniques like k-means or DBSCAN and represent them using t-SNE or UMAP. Start from here: https://blog.myyellowroad.com/unsupervised-sentence-representation-with-deep-learning-104b90079a93
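For the last step, a minimal sketch of the clustering part with scikit-learn could look like the following; sentence_vectors is a placeholder for whatever embeddings you end up with (the LSTM autoencoder codes or averaged word2vec vectors), and the number of clusters is just an example:
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

# sentence_vectors: shape (n_requirements, embedding_dim), built beforehand
sentence_vectors = np.random.rand(100, 64)  # placeholder for real embeddings

# group the requirements into 5 candidate themes
kmeans = KMeans(n_clusters=5, random_state=0)
cluster_ids = kmeans.fit_predict(sentence_vectors)

# 2-D projection for visual inspection; color the points by cluster_ids when plotting
coords = TSNE(n_components=2, random_state=0).fit_transform(sentence_vectors)
print(cluster_ids[:10])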
H: Story Tag Prediction - Optional Labels I'm currently working on a prediction task for fiction. I have a database of stories, each described with different story tags. My idea is to use a neural network that, by processing a new story, can tell you which tags are relevant. The problem is that the original data wasn't generated but added by users. A story in the woods could be tagged with trees, nature, etc. Another story that also takes place in the woods might not be tagged with nature, even though the tag applies. This might confuse the neural network. Is there a way to prevent this from happening? Thank you! AI: Welcome to our community! If I understood correctly, you don't trust the labels on your training dataset. The problem is that the original data wasn't generated but added by users. That is not really a problem by itself; almost every dataset was created by human labeling. Your problem is that you don't trust the judgement of those users and you think that the data might be incomplete. It is true that this might affect your model's performance, but you should try either way. Sometimes the model learns to label even better than the training dataset: recently I looked at the master's dissertation of a friend (Wesley L. Passos, M.Sc. by UFRJ) which used deep learning to detect tires in drone images (for Aedes aegypti prevention procedures). The dataset was created by our group by manually annotating images with bounding boxes, and while we missed some of the tires that were pretty well hidden, the D-CNN model was capable of detecting those hard subjects. Note: This dissertation was recently accepted and isn't available online yet. Once it is, I will update this answer with the proper reference. Since wrangling with data is our everyday job, this is a good opportunity for you to put in practice what you have learned: clean your data by either removing incomplete samples or filling missing values. This is a common part of our jobs as data scientists. If you feel that the amount of work is a bit overwhelming, you can try clustering or semi-supervised methods to speed up the cleaning process. You can also post your data online for further contributions. Curiosity note: crowds are usually more intelligent than individuals, so with proper statistical treatment data can actually be better annotated by crowds, using mean answers or by voting. Check out The Wisdom of Crowds (not the TV show).
H: Why Heaps' Law Equation looks so different in this NLP course? I'm actually not sure if this question is allowed on this community since it's more of a linguistics question than it is a data science question. I've searched extensively on the Web and have failed to find an answer and also the Linguistics Beta Stack Exchange community also doesn't seem to be able to help. If it's not allowed here please close it. Heaps' Law basically is an empirical function that says the number of distinct words you'll find in a document grows as a function to the length of the document. The equation given in the Wikipedia link is $$V_R(n) = Kn^\beta$$ where $V_R$ is the number of distinct words in a document of size $n$, and $K$ and $\beta$ are free parameters that are chosen empirically (usually $0 \le K \le 100$ and $0.4 \le \beta \le 0.6$). I'm currently following a course on Youtube called Deep Learning for NLP by Oxford University and DeepMind. There is a slide in a lecture that demonstrates Heaps' Law in a rather different way: The equation given with the logarithms apparently is also Heaps' Law. The fastest growing curve is a corpus for Twitter data and the slowest is for the Wall Street Journal. Tweets usually have less structure and more spelling errors, etc. compared to the WSJ which would explain the faster-growing curve. The main question that I had is how Heaps' Law seems to have taken on the form that the author has given? It's a bit of a reach but the author didn't specify what any of these parameters are (i.e. $C$, $\alpha$, $r(w)$, $b$) and I was wondering if anybody might be familiar with Heaps' Law to give me some advise on how to solve my question. AI: The plot shows Heaps' Law but the formula is something different, it is Zipf's Law. $f(w)$ is the relative frequency (or probability) of word $w$. That is, given a random word, it will be $w$ with probability $f(w)$. Therefore, if a document has $n$ words, it has on average $n\times f(w)$ occurrences of word $w$. The formula can be re-written as follows: $$f(w)=C(r(w)-b)^{-\alpha}$$ which is a power-law distribution that shows Zipf's Law, however with a slightly different parameterization by introducing cut-off $b$. $r(w)$ denotes the rank of word $w$. For example, if we sort all the words in a news corpus based on their frequency, $r(\text{'the'})$ would be 1, $r(\text{'be'})$ would be 2, and so on, Cut-off $b$ ignores highly frequent words $r(w) \le b$, effectively shifting up the rank of remaining words, $C$ is the normalizing constant, i.e. $C=\sum_{r=\left \lfloor b \right \rfloor + 1}^{\infty}(r-b)^{-\alpha}$, which gives $\sum_{w,r(w)>b} f(w) = 1$, and Exponent $\alpha$ denotes the rate of drop in probability when rank increases. Higher $\alpha$, faster drop. Exponent $\alpha$ is determined by fitting the formula to some corpus, as shown in the table. Generally, lower $\alpha$ (in the case of twitter), thus slower drop, means corpus has more word diversity.
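As an illustrative aside (not from the lecture), the exponent alpha can be estimated from a corpus by ranking words by frequency and fitting a line in log-log space; the sketch below uses a toy corpus and ignores the cut-off b for simplicity:
from collections import Counter
import numpy as np

corpus = "the cat sat on the mat the dog sat on the log".split()  # toy corpus
counts = Counter(corpus)
freqs = np.array(sorted(counts.values(), reverse=True), dtype=float)
f = freqs / freqs.sum()            # relative frequency f(w)
r = np.arange(1, len(f) + 1)       # rank r(w)

# log f(w) = log C - alpha * log r(w), so the slope of the fit is -alpha
slope, intercept = np.polyfit(np.log(r), np.log(f), 1)
print("estimated alpha:", -slope)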
H: What's the best way to plot a bar graph with large numeric difference in values? I'm graphing the value of the Ruble against the US Dollar in the 1990s. There was hyper-inflation from 1992-1997 (where the "y" values expanded from 125 to 6000). On Jan 1st 1998, the government revalued the money to 5. Since then the number has trended upward but has only reached 70 or so. This creates a problem in my bar or line graph. You can't understand the low values after the stabilization. Any suggestions to help with this discrepancy when viewing the whole dataset in one graph? Plotting using matplotlib AI: A log scale will make the difference before and after revaluation look less extreme, and a log transformation is usually pretty defensible for things measured in units of currency (especially when there are no 0-value entries). Other transformations exist, but since this is mostly for aesthetics I don't think that you'd need much that a log transformation couldn't offer. Fiddling with multiple vertical axes and continuity breaks may help, but at an almost-guaranteed cost of making the graph very difficult to view and interpret correctly. If you need to show both periods, I would probably log transform. You will know your needs far better than I do, but I do question a little bit how valuable seeing the before and after this way is. The revaluation was an arbitrary decision introduced at an arbitrary time, and behavior of the currency before the revaluation is probably not very informative of anything that happened afterwards (and vice-versa). It's far from clear to me that there is much value in showing both at once like this, at least from my naïve look at a single graph. The main reason I can think of to show both at once this way is to display how extreme the inflation was, in which case the scale differences would be a feature and not a bug.
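A minimal matplotlib sketch of the log-scale suggestion; the values below are rough placeholders following the pattern described in the question, not real exchange rates:
import matplotlib.pyplot as plt

years = [1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999]   # placeholder x values
rub_per_usd = [125, 1250, 3550, 4640, 5560, 5960, 6, 27]   # placeholder y values

fig, ax = plt.subplots()
ax.bar(years, rub_per_usd)
ax.set_yscale('log')   # log scale keeps both the pre- and post-revaluation regimes readable
ax.set_ylabel('RUB per USD (log scale)')
ax.set_xlabel('Year')
plt.show()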
H: Why is a lower bound necessary in proofs of VC-dimensions for various examples of hypotheses? In the book "Foundations of Machine Learning" there are examples of proving the VC dimensions for various hypotheses, e.g., for axis-aligned rectangles, convex polygons, sine functions, hyperplanes, etc. All proofs first derive a lower bound, and then show an upper bound. However, why not just derive the upper bound since the definition of VC dimension only cares about the "largest" set that can be shattered by hypothesis set $\mathcal{H}$? Since all examples ends up with a lower bound matching the upper bound, is the lower bound just helpful/useful to set a target when trying to show an upper bound? Reference: From page 41 of this book pdf version https://pdfs.semanticscholar.org/e923/9469aba4bccf3e36d1c27894721e8dbefc44.pdf AI: Lets use the quotes from the book. To give an upper bound, we need to prove that no set $S$ of cardinality $d + 1$ can be shattered by $H$ By doing this, we are proving that $\text{VCdim}(H) < d+1$, but it does not necessarily mean $\text{VCdim}(H) = d$. For example, we may go for an easier-to-prove failure like sets with $2d+1$ members, and consequently prove $\text{VCdim}(H) < 2d+1$, although we know $\text{VCdim}(H) = d$. Therefore: To give a lower bound $d$ for $\text{VCdim}(H)$, it suffices to show that a set $S$ of cardinality $d$ can be shattered by $H$ We need to prove the lower bound as well to show that $d \le \text{VCdim}(H) < d+1$, which implies $\text{VCdim}(H) = d$. The last equality means that for each size $1$ to $d$, $H$ can shatter at least one set $S$ with that size. Note that for the $2d+1$ example, we will fail to prove $2d \le \text{VCdim}(H)$, therefore $\text{VCdim}(H) \neq 2d$, and consequently we must try to prove a smaller (probably harder to prove) upper bound $d+1$.
H: Classifier performance evaluation I have an unbalanced dataset which has 920 samples in total, 689 belong to the first class, and 222 to second class. and both classes are significant for me. so when building a classifier model such as SVM or KNN. what measurement should I consider to evaluate the performance of the classifier? usually people use accuracy. but in my case some times I get high accuracy but zero specificity which clearly indicates that the class is biased towards the majority class (class one in my case). I've been advised to use the F-score which combines both specificity and sensitivity. Also, there is the AUC. so what do you suggest? AI: Useful metrics in such scenario are: F1 Score (and precision / recall) ROC Curves (Metric is : Area Under the ROC Curve (AUC)) Few articles on how to choose metrics for a specific project are: Evaluation Metrics, ROC-Curves and imbalanced datasets by David S. Batista, What metrics should be used for evaluating a model on an imbalanced data set? by Shir Meir Lador, Choosing the Right Metric for Evaluating Machine Learning Models — Part 2 by Alvira Swalin.
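As a hedged example of computing these metrics with scikit-learn (y_test, y_pred and y_score stand in for your own labels, hard predictions and class probabilities):
from sklearn.metrics import f1_score, precision_score, recall_score, roc_auc_score

y_test = [0, 0, 1, 1, 0, 1, 0, 0]                     # true labels (placeholder)
y_pred = [0, 1, 1, 1, 0, 0, 0, 0]                     # hard predictions (placeholder)
y_score = [0.2, 0.6, 0.8, 0.9, 0.1, 0.4, 0.3, 0.2]    # predicted probabilities (placeholder)

print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("F1       :", f1_score(y_test, y_pred))
print("ROC AUC  :", roc_auc_score(y_test, y_score))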
H: Will a Count vectorizer ever perform (slightly) better than tf-idf? For the task of binary classification, I have a small dataset of 1000 texts in total (~590 positive and ~401 negative instances). With a training set of 800 and test set of 200, I get a (slightly) better accuracy for the count vectorizer compared to tf-idf. Additionally, the count vectorizer picks out the relevant "words" when training the model, while tf-idf does not pick those relevant words out. Even the confusion matrix for the count vectorizer shows marginally better numbers compared to tf-idf. TFIDF confusion matrix [[ 80 11] [ 6 103]] BoW confusion matrix [[ 81 10] [ 6 103]] I haven't tried cross-validation yet, though it came to me as a shock that the count vectorizer performed a bit better than tf-idf. Is it because my dataset is too small, or because I haven't used any dimensionality reduction to reduce the number of words taken into account by both classifiers? What is it that I am doing wrong? I am sorry if it is an immature question, but I am really new to ML. AI: I would say 1000 documents is a bit too few to draw any conclusion about the vectorization technique, and an increase of 1 in true positives does not matter either. As the size of the vocabulary increases, TfidfVectorizer would be better able to differentiate rare words from commonly occurring words, while CountVectorizer would still give equal weight to all words, which is undesirable. So, TfidfVectorizer will give you better performance than CountVectorizer as the size of the vocabulary increases.
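Since the question mentions not having tried cross-validation yet, here is a small sketch for comparing the two vectorizers with cross-validation, which is more reliable than a single 800/200 split; the texts, labels and classifier are placeholders for your own setup:
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

texts = ["good product", "bad service", "great value", "terrible experience"]  # placeholder
labels = [1, 0, 1, 0]                                                          # placeholder

for vec in (CountVectorizer(), TfidfVectorizer()):
    pipe = make_pipeline(vec, LogisticRegression())
    scores = cross_val_score(pipe, texts, labels, cv=2, scoring='accuracy')
    print(type(vec).__name__, scores.mean())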
H: What does localist one-hot vector mean in cs224n NLP course? Chris said one-hot is a "localist" representation. what does "localist" mean here? I've searched on recommended text, didn't find explanation. any clue? AI: One-hot vector is called "localist" because it contains information only about a single data point, and does not give clues about other points, in contrast to a distributed representation (e.g. result of an embedding algorithm) that contains information about other data points too. For example, one-hot encoding of two animals and two flowers is as follows dog = [1, 0, 0, 0] cat = [0, 0, 1, 0] rose = [0, 1, 0, 0] tulip = [0, 0, 0, 1] Distance between any pair is the same, and effectively by knowing the vector of "rose" we gain no information about the related "tulip" or unrelated (or less related) "dog" and "cat", hence the name "localist". But if we happen to find a distributed representation like dog = [1.0, 0.1] cat = [1.1, 0.2] rose = [0.0, 1.2] tulip = [0.1, 1.3] The vector of "rose" alone gives us clues about other flowers like "tulip", which are closer, and also about other entities like animals, which are farther, hence a name like "globalist". Two notes: Being "localist" is not a zero / one thing. The more clues a representation gives about other data points, the less it is localist. The notion of "relatedness" is task dependent. For example, in the task of "grouping things with similar colors", a red car is more related to a red rose than a blue car. Here is a random image of those entities for fun! If all pairs look equally different, your brain works with "localist" vectors, thus has no clue ;)
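A quick numeric check of this point, reusing the example vectors above: cosine similarity between every pair of one-hot vectors is identical (zero), while the distributed vectors reveal that dog/cat and rose/tulip are close:
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

one_hot = np.array([[1, 0, 0, 0],   # dog
                    [0, 0, 1, 0],   # cat
                    [0, 1, 0, 0],   # rose
                    [0, 0, 0, 1]])  # tulip
distributed = np.array([[1.0, 0.1],   # dog
                        [1.1, 0.2],   # cat
                        [0.0, 1.2],   # rose
                        [0.1, 1.3]])  # tulip

print(cosine_similarity(one_hot))      # off-diagonal entries are all 0
print(cosine_similarity(distributed))  # dog~cat and rose~tulip are close to 1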
H: Why can't I use data augmentation with a pretrained convnet? Reading Deep Learning with Python by François Chollet. In section 5.3.1, we've instantiated a pretrained convnet, VGG16, and are given two options to proceed: A) Running the convolutional base over your dataset, recording its output to a Numpy array on disk, and then using this data as input to a standalone, densely connected classifier similar to those you saw in part 1 of this book. This solution is fast and cheap to run, because it only requires running the convolutional base once for every input image, and the convolutional base is by far the most expensive part of the pipeline. But for the same reason, this technique won’t allow you to use data augmentation. B) Extending the model you have (conv_base) by adding Dense layers on top, and running the whole thing end to end on the input data. This will allow you to use data augmentation, because every input image goes through the convolutional base every time it’s seen by the model. But for the same reason, this technique is far more expensive than the first. Why can't I use data augmentation to generate more training data from existing training samples then go with option A? Seems like I can run the VGG16 base over my augmented dataset and use the output as the input to a standalone densely connected classifier. AI: Found the answer in stats.stackexchange.com. Hopefully this helps anyone else with the same question. feature extraction: freezing convolutional base vs. training on extracted features
H: Is it possible for a neural net to score as high as a different form of supervised learning? I've been working with the Adult Census Income dataset from UCI http://archive.ics.uci.edu/ml/datasets/adult I've created two different models, one using a gradient boosted classifier with sklearn, and one with a neural net using Keras/Tensorflow. So I'm not interested in code hints or anything, but I have a general question about machine learning: I have a significantly higher accuracy using the gradient boosted classifier than I do with the neural net. In general, is it possible for my (or any, really) neural net to reach the same accuracy as any other kind of supervised learning? Does it just take a lot of hard work and elbow grease to tune the neural net well enough? I'm working with the same dataset, using the same feature engineering for both the NN and the gradient boosted classifier. AI: Generally speaking, no. Deep learning models struggle to compete when it comes to tabular data. If we head over to Kaggle, where people compete to build the best model, we find that usually the best performing non-ensemble models for this kind of data are gradient boosting trees. More specifically, it tends to be either XGBoost or, more often nowadays, LightGBM that performs best. Both are highly optimized implementations of gradient boosting trees. Feature engineering and parameter tuning are both important to get extra performance. But the gap does not shrink, since you would get the same improvements if you did it on a gradient boosting model instead. Actually, other models tend to gain more from feature engineering than neural networks, since one of the strengths of neural networks is that they perform a sort of automatic feature engineering when they are trained. Neural networks usually shine in the domain of unstructured data such as text and images.
H: How to measure the performance of a domain adaptation / transfer learning technique? Given that the performance you achieve depends on how far the target domain is from the source domain, how can you judge the performance of an algorithm? AI: You can measure the divergence between the source and target domains using KL divergence (there are several ways to estimate KL divergence, e.g. based on the k-NN algorithm). Then you can check if there is a correlation between the divergence and the accuracy of the models, considering a few cases of source-target pairs of datasets. You can compare several transfer learning / domain adaptation algorithms using the same source-target datasets.
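A rough, hedged sketch of estimating the divergence for a single feature with histograms and scipy's entropy (KL) function; for multivariate data a k-NN based estimator would be more appropriate, and the distributions below are placeholders:
import numpy as np
from scipy.stats import entropy

source = np.random.normal(0.0, 1.0, 5000)   # placeholder source-domain feature
target = np.random.normal(0.5, 1.2, 5000)   # placeholder target-domain feature

bins = np.histogram_bin_edges(np.concatenate([source, target]), bins=50)
p, _ = np.histogram(source, bins=bins, density=True)
q, _ = np.histogram(target, bins=bins, density=True)
eps = 1e-10  # avoid zero-probability bins

kl = entropy(p + eps, q + eps)   # scipy normalizes the histograms and computes KL(p || q)
print("estimated KL(source || target):", kl)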
H: What fits in a Data Description Report / Data Exploration Report? So I am trying to get familiar with CRISP-DM and found the terms "Data Description Report" and "Data Exploration Report", which seem oddly vague in their definition. So far I only found this right here: https://www.ibm.com/support/knowledgecenter/en/SS3RA7_15.0.0/com.ibm.spss.crispdm.help/crisp_data_description_report.htm But this seems to be on the shorter end in my opinion. Is there any example of a Data Description Report anywhere? If not, is there any systematic methodology you personally use to record your findings while trying to understand data? AI: Generally speaking, each dataset may have a different structure and may relate to different business aspects. Because of that, I think it's hard to generalize the description/exploration steps. Yet, these are my two cents: Data description: * Data sources - how was the data created, generated, collected. * Data shape - number of rows, columns. * Data types (per column) - numeric, strings, other. * Missing values - how much, where, why? * Time frames - if relevant. * Entities (geographical, markets, population segments, devices) - what identifies a row in the data (if applicable). Data exploration: * Counts per entity (grouping keys). * Categorical variables breakdown. * Value range distributions. * Simple correlations. * Story telling (visualizations that depict key aspects in the data<>business realm). * Missing data deep dive.
H: Replace method in pandas not giving expected result I have data as below: 123.12.23.2 110.22.21.23 I want to mask this data as below one 1xx.xx.xx.x So I tried below code : readFile = pd.read_csv("C:/Users/siddhesh.kalgaonkar/Desktop/data01.txt",header=None) readFile.columns = ['IP'] readFile['IP']=readFile['IP'].str.replace("(?<! )","X").astype('str') readFile but I gives me data as below one which is not correct: IP 0 XXXXXXXXXXXXXXXXXXX 1XXXXXXXXXXXXXXXXXXXXXXXXX... 1 XXXXXXXXXXXXXXXXXXX 1XXXXXXXXXXXXXXXXXXXXXXXXX... 2 XXXXXXXXXXXXXXXXXXX 1XXXXXXXXXXXXXXXXXXXXXXXXX... I am new to pandas. So where am I going wrong ? Also, I want to do it without pandas because the platform on which I would be deploying this code may be won't have pandas. So need to be ready for the other scenario as well. Below is my code: readFiles=open("C:/Users/siddhesh.kalgaonkar/Desktop/data01.txt","r") finalValues = re.sub("(?<! ).","X",readFiles) It gives below error: >>> finalValues = re.sub("(?<! ).","X",readFiles) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\siddhesh.kalgaonkar\AppData\Local\Programs\Python\Python36\lib\re.py", line 191, in sub return _compile(pattern, flags).sub(repl, string, count) TypeError: expected string or bytes-like object I want to split this data on the basis of delimiter (in case I have multiple columns) and then I have to apply regex logic. Please help me out here. AI: Try this if you decide to use pandas: readFile = pd.read_csv("C:/Users/siddhesh.kalgaonkar/Desktop/data01.txt",header=None) readFile.columns = ['IP'] readFile['IP'] = readFile['IP'].replace(regex='((?<=[0-9])[0-9]|(?<=\.)[0-9])',value='X') print(readFile) and this without pandas: readFile = open("C:/Users/siddhesh.kalgaonkar/Desktop/data01.txt","r") for line in readFile: lines = line.strip() finalline = re.sub(pattern='((?<=[0-9])[0-9]|(?<=\.)[0-9])',repl='X',string=lines) print(finalline) (?<=[0-9])[0-9] this part matches if the current position in the string is a digit and is preceded by a digit. | or (?<=\.)[0-9]) this part matches if the current position in the string is is a digit and is preceded by a period.
H: How high can the RMSE value be? While solving machine learning exercises, I got two values for R-squared from 2 different regressors, i.e., 0.9999 and 0.9769. So, which should I go for, as both could lead to overfitting? Thanks in advance AI: Are you asking about RMSE or R-squared? The size of the RMSE depends on the general range of your response feature (prediction target); the lower the better. As to R-squared, both 0.9999 and 0.9769 are very likely to indicate overfitting. If you have to choose between these two models, my suggestion is to evaluate them using the generalization error. You can also do cross-validation if no out-of-sample data are available.
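A hedged sketch of the cross-validation suggestion: score both regressors on held-out folds instead of on their training R-squared; the dataset and the two models below are placeholders for your own:
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=5, noise=10, random_state=0)  # placeholder data

for model in (LinearRegression(), RandomForestRegressor(n_estimators=100, random_state=0)):
    r2 = cross_val_score(model, X, y, cv=5, scoring='r2')
    mse = -cross_val_score(model, X, y, cv=5, scoring='neg_mean_squared_error')
    print(type(model).__name__, "CV R^2:", r2.mean(), "CV RMSE:", np.sqrt(mse).mean())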
H: Inverse Relationship Between Precision and Recall I made some search to learn precision and recall and I saw some graphs represents inverse relationship between precision and recall and I started to think about it to clarify subject. I wonder the inverse relationship always hold? Suppose I have a binary classification problem and there are positive and negative labeled classes. After training some of the actual positive examples are predicted as true positives and some of them false negatives and some of the actual negative examples are predicted as true negatives and some of them false positives. To calculate precision and recall I use these formulas: $$Precision = \frac{TP}{TP + FP}$$ and $$Recall = \frac{TP}{TP + FN}$$ If I decrease false negatives then true positives increases and in that case don't precision and recall both increase? AI: If we decrease the false negative (select more positives), recall always increases, but precision may increase or decrease. Generally, for models better than random, precision and recall have an inverse relationship (@pythinker's answer), but for models worse than random, they have a direct relationship (@kbrose's example). It is worth noting that we can artificially build a sample that causes a model which is better-than-random on true distribution to perform worse-than-random, so we are assuming that the sample resembles the true distribution. Recall We have $$TP = P - FN$$ therefore, recall would be $$r = \frac{P-FN}{P} = 1- \frac{FN}{P}$$ which always increases by decrease in $FN$. Precision For precision, the relation is not as straightforward. Lets start with two examples. First case: decrease in precision, by decrease in false negative: label model prediction 1 0.8 0 0.2 0 0.2 1 0.2 For threshold $0.5$ (false negative = $\{(1, 0.2)\}$), $$p = \frac{1}{1+0}=1$$ For threshold $0.0$ (false negative = $\{\}$), $$p = \frac{2}{2+2}=0.5$$ Second case: increase in precision, by decrease in false negative (the same as @kbrose example): label model prediction 0 1.0 1 0.4 0 0.1 For threshold $0.5$ (false negative = $\{(1, 0.4)\}$), $$p = \frac{0}{0+1}=0$$ For threshold $0.0$ (false negative = $\{\}$), $$p = \frac{1}{1+2}=0.33$$ It is worth noting that ROC curve for this case is Analysis of precision based on ROC curve When we lower the threshold, false negative decreases, and true positive [rate] increases, which is equivalent to moving to the right in ROC plot. I did a simulation for better-than-random, random, and worse-than-random models, and plotted ROC, recall, and precision: As you can see, by moving to the right, for better-than-random model, precision decreases, for random model, precision has substantial fluctuations, and for worse-than-random model precision increases. And there are slight fluctuations in all three cases. Therefore, By increase in recall, if model is better than random, precision generally decreases. If mode is worse than random, precision generally increases. 
Here is the code for simulation: import numpy as np from sklearn.metrics import roc_curve from matplotlib import pyplot np.random.seed(123) count = 2000 P = int(count * 0.5) N = count - P # first half zero, second half one y_true = np.concatenate((np.zeros((N, 1)), np.ones((P, 1)))) title = 'Better-than-random model' # title = 'Random model' # title = 'Worse-than-random model' if title == 'Better-than-random model': # GOOD: model output increases from 0 to 1 with noise y_score = np.array([p + np.random.randint(-1000, 1000)/3000 for p in np.arange(0, 1, 1.0 / count)]).reshape((-1, 1)) elif title == 'Random model': # RANDOM: model output is purely random y_score = np.array([np.random.randint(-1000, 1000)/3000 for p in np.arange(0, 1, 1.0 / count)]).reshape((-1, 1)) elif title == 'Worse-than-random model': # SUB RANDOM: model output decreases from 0 to -1 (worse than random) y_score = np.array([-p + np.random.randint(-1000, 1000)/1000 for p in np.arange(0, 1, 1.0 / count)]).reshape((-1, 1)) # calculate ROC (fpr, tpr) points fpr, tpr, thresholds = roc_curve(y_true, y_score) # calculate recall, precision, and accuracy for corresponding thresholds # recall = TP / P recall = np.array([np.sum(y_true[y_score > t])/P for t in thresholds]).reshape((-1, 1)) # precision = TP / (TP + FP) precision = np.array([np.sum(y_true[y_score > t])/np.count_nonzero(y_score > t) for t in thresholds]).reshape((-1, 1)) # accuracy = (TP + TN) / (P + N) accuracy = np.array([(np.sum(y_true[y_score > t]) + np.sum(1 - y_true[y_score < t])) /len(y_score) for t in thresholds]).reshape((-1, 1)) # Sort performance measures from min tpr to max tpr index = np.argsort(tpr) tpr_sorted = tpr[index] recall_sorted = recall[index] precision_sorted = precision[index] accuracy_sorted = accuracy[index] # visualize fig, ax = pyplot.subplots(3, 1) fig.suptitle(title, fontsize=12) line = np.arange(0, len(thresholds))/len(thresholds) ax[0].plot(fpr, tpr, label='ROC', color='purple') ax[0].plot(line, line, '--', label='random', color='black') ax[0].set_xlabel('fpr') ax[0].legend(loc='center left', bbox_to_anchor=(1, 0.5)) ax[1].plot(line, recall, label='recall', color='blue') ax[1].plot(line, precision, label='precision', color='red') ax[1].plot(line, accuracy, label='accuracy', color='black') ax[1].set_xlabel('1 - threshold') ax[1].legend(loc='center left', bbox_to_anchor=(1, 0.5)) ax[2].plot(tpr_sorted, recall_sorted, label='recall', color='blue') ax[2].plot(tpr_sorted, precision_sorted, label='precision', color='red') ax[2].plot(tpr_sorted, accuracy_sorted, label='accuracy', color='black') ax[2].set_xlabel('tpr (1 - fnr)') ax[2].legend(loc='center left', bbox_to_anchor=(1, 0.5)) fig.tight_layout() fig.subplots_adjust(top=0.88) pyplot.show()
H: Labeling classes conditionally I am working with a time series predicting whether web traffic will increase or decrease each day compared to the previous day for a given user. Initially I used binary classes: labeled 1 for next day traffic increases and 0 for traffic decreases (which distributed into 60/40 split). Next I tried something conditionally: if the user has increased traffic for 3 previous days in a row and they increase tomorrow, that is labeled 1, else 0. Otherwise, if the user has decreased traffic the previous 3 days and decreases traffic tomorrow, that is labeled 1, else 0. So 1 doesn't always/necessarily correspond to traffic increases, it depends on the condition which can easily be observed when using this algorithm for real life predictions (by simply looking at the data). With this 'conditional' dependent label encoding I have gotten much better results. The new binary classes are split 55/45 and accuracy and f1 have greatly improved for testing and training sets. Is this kind of class labeling acceptable and/or good practice? I think it is positive as I am introducing more data without increasing dimensions but I am worried about mixing up the classes with this approach. Thank you for your help! AI: There is no problem doing that, you can define your variable in every way. Just make sure you don't use the variables used to define your Y as predictors. The only problem is that, after you defined your new variable, your model could be facing nonlinearities which are difficult to model (meaning: I can get a true from Decrease,Decrease,Decrease and from Increase,Increase,Increase, so some variables could be working for the first case, but not for the second). If you have obtained better results with this new Y variable, I think you are fine.
H: Text Dataset Help: Need to figure out how to get the publication date of a text I am a highschool student working on a science fair project in which me and a friend plan to use a neural network for a classification problem. In this case the thing classified will be text and the labels will be the publication/creation date. Since our project requires a significant amount of data we were planning to use the Gutenberg project. However we came upon the issue that the creation date was not in each file but rather on a website associated with the name (so we have to go to a website which ends with the text's file name). We are wondering what the best approach would be to get our data - should we somehow get the date using the website (even though they say they will ban ip's of scrapers), find a new dataset, or build some sort of tool which will search the internet for that text's creation date. Thanks and if you have any suggestions please feel free to comment. If our question should be further expanded upon please clarify. Thanks again. AI: I would hazard a guess that most books uploaded on Gutenberg also have a Wikipedia page. Especially the well-known books. There are a few Wikipedia related Python packages (i.e. Wikipedia) that can make it very easy to search for the title of the book, navigate to the book's page, and extract the publication date. An optimal example might be Alice's Adventures in Wonderland. It'd be trivial to extract the date from that page, which appears neatly in the info box, and convert it to a machine readable format in Python. Or, you could attempt to scrape the info straight from Gutenberg. Perhaps consider Selenium, although make sure you limit your bot's rate so you don't adversely effect the site. Lastly, trying to predict the publication date might be similar to predicting the author; both may be derived from writing style. There's a lot of work on this topic, a Google search or an arxiv search will help. Here's one of the first articles I opened.
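A hedged sketch with the wikipedia package mentioned above; the article title is taken from the example, and pulling the exact publication year out of the page is reduced to a naive regex over the summary here, which will not work for every book:
import re
import wikipedia

title = "Alice's Adventures in Wonderland"
page = wikipedia.page(title)   # fetch the article for the book

# naive search for a 4-digit year near the word "published" in the summary
match = re.search(r"published.{0,60}?(\d{4})", page.summary, flags=re.IGNORECASE)
if match:
    print(title, "->", match.group(1))
else:
    print("No publication year found in the summary of", title)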
H: Categorical vs continuous feature selection/engineering I'm working with a dataset with a number of potential predictors like: Age: continuous Number of children: discrete and numerical Marital Situation: categorical (Married/Single/Divorced...) Id_User: categorical (an id of the user who conducted the first interview with this person) I'm stopping at four potential predictors; there are more, but for the sake of shortness these are enough to ask my question. Question: Continuous features are easy to deal with, normalize, and feed to the model, but what about categorical and independent ones? Note: I get that categorical features that follow a certain pattern can be encoded as integers and fed to the model, but what if those categorical features have no meaning as integers (1 for single, 2 for married, 3 for divorced; for a model that treats them as a quantitative predictor it doesn't make sense to feed them in like that)? Any ways to deal with these different types of features? AI: What you are looking for are called dummy variables: they convert your categorical data into a matrix where a column is 1 if the person belongs to a category and 0 otherwise. The variable ID should not be converted, because you don't want your model to overfit on your ID data (meaning: you don't want your model to remember the result for every ID, you want your model to be general).
import pandas as pd
dataset2 = pd.get_dummies(dataset)
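A small illustration of what get_dummies does with the marital-status column; the column names and rows below are made up for the example:
import pandas as pd

dataset = pd.DataFrame({
    'Age': [25, 40, 31],
    'Marital_Situation': ['Single', 'Married', 'Divorced'],
})

# one 0/1 column per category replaces the original categorical column
dataset2 = pd.get_dummies(dataset, columns=['Marital_Situation'])
print(dataset2.columns.tolist())
# ['Age', 'Marital_Situation_Divorced', 'Marital_Situation_Married', 'Marital_Situation_Single']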
H: What does sklearn's pairwise_distances with metric='correlation' do? I've put different values into this function and observed the output. But I can't find a predictable pattern in what is being outputed. Then I tried digging through the function itself, but its confusing because it can do a number of different calculations. Docs According to the docs: Compute the distance matrix from a vector array X and optional Y. I see it returns a matrix of height and width equal to the number of nested lists inputted, implying that it is comparing each one. But otherwise I'm having a tough time understanding what its doing and where the values are coming from. Examples I've tried: pairwise_distances([[1]], metric='correlation') >>> array([[0.]]) pairwise_distances([[1], [1]], metric='correlation') >>> array([[ 0., nan], >>> [nan, 0.]]) # returns same as last input although input values differ pairwise_distances([[1], [2]], metric='correlation') >>> array([[ 0., nan], >>> [nan, 0.]]) pairwise_distances([[1,2], [1,2]], metric='correlation') >>> array([[0.00000000e+00, 2.22044605e-16], >>> [2.22044605e-16, 0.00000000e+00]]) # returns same as last input although input values differ # I incorrectly expected more distance because input values differ more pairwise_distances([[1,2], [1,3]], metric='correlation') >>> array([[0.00000000e+00, 2.22044605e-16], >>> [2.22044605e-16, 0.00000000e+00]]) Computing correlation distance with Scipy I don't understand where the sklearn 2.22044605e-16 value is coming from if scipy returns 0.0 for the same inputs. # Scipy import scipy scipy.spatial.distance.correlation([1,2], [1,2]) >>> 0.0 # Sklearn pairwise_distances([[1,2], [1,2]], metric='correlation') >>> array([[0.00000000e+00, 2.22044605e-16], >>> [2.22044605e-16, 0.00000000e+00]]) I'm not looking for a high level explanation but an example of how the numbers are calculated. AI: Y = pdist(X, 'correlation') Computes the correlation distance between vectors u and v. This is $$ 1-\frac{(u-\overline{u})\dot{}(v-\overline{v})}{||u-\overline{u}||_2*||v-\overline{v}||_2} $$ where $\overline{u}$ is the mean of the elements of vector $u$, and $x\dot{}y$ is the dot product of $x$ and $y$. The correlation between any vector which have ONLY TWO entries is always 0 (or nearly zero: $2*10^-16$), why? Because correlation distance measures the distance as the linearity between the data. When I have [1,2] and [1,2] the equation $y=x$ fits perfectly, when I have [1,2] and [1,3] the equation $y=x+1$ also fits perfectly. The correlation distance says wheter a equation can be drawn for the data, in both cases the equation is perfect. If you want to try getting a different result, try putting 2 vectors of three elements, and you will see changes. Try with: [[1,2],[2,3],[3,4]] and [[1,4],[3,8],[-5,6]]. But first, plot them and you will comprehen what 'correlation' measures.
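Following the suggestion above, here is a quick check with vectors of three entries (values chosen just for illustration), where the correlation distance is no longer degenerate; scipy and sklearn use the same formula:
import numpy as np
from sklearn.metrics.pairwise import pairwise_distances
from scipy.spatial.distance import correlation

a = np.array([1.0, 2.0, 4.0])
b = np.array([1.0, 3.0, 5.0])   # roughly linear in a -> small correlation distance
c = np.array([5.0, 1.0, 3.0])   # not linear in a -> larger correlation distance

print(pairwise_distances(np.vstack([a, b, c]), metric='correlation'))
print(correlation(a, b), correlation(a, c))   # scipy gives the same pairwise values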
H: What happens to a machine learning technique (specifically Decision Trees and Logistic Regression) if the validation dataset has a new category? Let's suppose I have a dataset which has a categorical variable and the problem I am solving is a classification one. This categorical variable var has ['A','B','C'] as the possible set of data. What happens to a decision tree if a new category 'D' is seen only in the validation data set (meaning: absolutely new data)? Supposing the variable var is a feature used in the tree. With the decision tree: Does it give an error? Does the decision tree stop the path and return the non-final node's probability? With the logistic regression: The dummy variables are zero for all the categories (I suppose), and then the model runs normally? AI: There is no categorical value support in the decision trees used in scikit-learn. Either the values are just numbers, or they have been one-hot encoded. One-hot encoding with an unseen input value will change the width of the feature array, which will give an error when running decision_tree.predict(X). Without one-hot encoding, the output will most likely be the same as for the value closest to it, because the tree learns rules of the form feature[2] > 3 to distinguish, say, a value of 3 from a value of 4 (during training). So a value of 4, 5, 6, etc. will be treated the same way. There are no feature[2] > 4 or feature[2] < 5 rules, since there was no data for that to cause splits that could be evaluated/optimized during training. So if you want an error also when not using one-hot encoding, you will have to check the range of the features before you pass them to the decision tree.
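A quick check of this behaviour; the tree is trained on the integer codes 0, 1 and 2 (standing in for 'A', 'B', 'C'), and an unseen code such as 3 ('D') simply falls into the same branch as the closest trained value:
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# feature: category code; target: some label learned per category
X_train = np.array([[0], [0], [1], [1], [2], [2]])
y_train = np.array([0, 0, 1, 1, 0, 0])

tree = DecisionTreeClassifier().fit(X_train, y_train)
print(tree.predict([[2], [3], [10]]))   # 3 and 10 get the same prediction as 2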
H: How Dummy Variables Should Be Modeled In A Linear Regression Model? I have a cross-sectional model where I want to predict the number of users that take a specific service. To build it I have many variables, including two nominal ones: isWorkday (0 or 1) and weekday (1, 2, 3, ..., 7). When I build the model taking both variables into account, it generates high multicollinearity, so I have to delete one of them. What's better: having more dummies (weekday) or fewer dummies (isWorkday)? AI: Since your task is to predict something, the better variable is the one that gives you a higher prediction accuracy. So you can simply test both and choose the one with which your model performs better. However, I would suggest considering engineering your own feature that incorporates information from both variables. For example, you could create three dummy variables: workday, weekend and holiday, and include two of them in your model (to prevent falling into the dummy variable trap). Another option would be to only include the interaction terms between isWorkday and weekday.
H: Maximize Precision Deep Learning For some binary image classification problems, having close to 100% precision is super important and recall is much less important. What are best practices for maximizing precision? Setting the probability threshold for model.predict() greater than 0.5 seems like a reasonable approach. Are there any other recommended methods? AI: If you are exploring the best hyper-parameter combination that maximizes precision, you could try GridSearchCV along with KerasClassifier. You create a wrapper of your model and use it as the estimator of the GridSearchCV method. Also, since you want to maximize precision, you would have to set the scoring parameter to precision. After that you are going to have to set the hyper-parameter space that you would like to explore and finally run the experiment. Please bear in mind that this approach is computationally expensive: if your dataset is very large, a search space with more than a couple of dimensions would need a cluster to produce results in a timely manner.
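A hedged sketch of that approach; depending on your versions the wrapper lives in keras.wrappers.scikit_learn or tensorflow.keras.wrappers.scikit_learn, and build_model, the searched parameters and the input shape are placeholders for your own network:
from tensorflow import keras
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def build_model(dropout=0.0):
    model = keras.Sequential([
        keras.layers.Dense(32, activation='relu', input_shape=(20,)),
        keras.layers.Dropout(dropout),
        keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model

clf = KerasClassifier(build_fn=build_model, epochs=10, batch_size=32, verbose=0)
param_grid = {'dropout': [0.0, 0.3, 0.5], 'batch_size': [16, 32]}

# scoring='precision' makes the grid search pick the combination with the best precision
search = GridSearchCV(clf, param_grid, scoring='precision', cv=3)
# search.fit(X_train, y_train)   # X_train / y_train are your own data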
H: How to feed a table per timestamp to LSTM neural network? I have a time-series dataframe like this feat1 feat2 target date id 0 1 12 16 192 2 15 6 90 3 2 9 18 1 1 0 3 0 2 0 9 0 3 56 9 504 2 1 5 9 45 2 6 9 54 3 5 8 40 and my problem is for regression. What I know about the LSTM sequences, is that normally, the row_id is the date, so you build sequences of n rows. In my data, as you can see, it's different. In each row that refers to the date, I have 3 more rows representing a product. What I have thinked for make the sequence to my problem, my sequence will be of 2 days: sequence = [day0,day1],[day1,day2] For every date, I have date0 = [id1,id2,id3]. For every id, I have: id = [feat1,feat2]. the sequence[0], and it would be something like [ [ [12,16], [15,6], [2,9] ],[ [0,3], [0.9], [56,9] ] ] Is this valid? Will the LSTM layer understand this? Or do I have to do some kind of extra transformation? AI: Your data format is feature1 feature2 target product1 1 12 2 timestamp product2 2 6 3 product3 4 3 4 There are two designs for two assumptions: Products are not related to each other. Therefore, each product could be modeled separately. That is, each timestamp is X(t) = [feature1, feature2] or, including the target, X(t)|y(t) = [feature1, feature2, target]. And we build a model for each product separately. In summary, LSTM receives two 1 x 3 sequences for t-1 and t, and outputs a 1 x 1 target for t + 1. In notation: $$(\overbrace{X_{t-1}|y_{t-1}}^{1 \times 3}, \overbrace{X_{t}|y_{t}}^{1 \times 3}) \rightarrow \overbrace{y_{t+1}}^{1 \times 1}$$ Products are related to each other, meaning product1 can help product2 to predict its target. For this, we just need to flatten the 3 x 2 matrix to a 1 x 6 vector, where the order of values does not matter. That is, X(t) = [product1_feature1, product1_feature2, ..., product3_feature2] or X(t) = [product1_feature1, product2_feature1, ..., product3_feature2] We can also add the targets, for example X(t)|y(t) = [product1_feature1, product2_feature1, ..., product3_feature2, target1, ..., target3] This way, dimension of each timestamp would be 9 (6 + 3), and a sequence of two timestamps would be [ [product1_feature1, product2_feature1, ..., product3_feature2, target1, ..., target3], # t-1 [product1_feature1, product2_feature1, ..., product3_feature2, target1, ..., target3] # t ] corresponding to a three dimensional target [target1, ..., target3] at t + 1. In summary, LSTM receives two 1 x 9 sequences for t-1 and t, and outputs a 1 x 3 target for t + 1. In notation: $$(\overbrace{X_{t-1}|y_{t-1}}^{1 \times 9}, \overbrace{X_{t}|y_{t}}^{1 \times 9}) \rightarrow \overbrace{y_{t+1}}^{1 \times 3}$$
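For the second design, here is a minimal Keras sketch of how the data could be shaped into (samples, timesteps, features) arrays; the flattened vectors and targets below are random placeholders for the real dataframe values:
import numpy as np
from tensorflow import keras

n_dates, n_features, lookback = 100, 9, 2
flat = np.random.rand(n_dates, n_features)    # placeholder 1 x 9 vector per date (features + targets)
targets = np.random.rand(n_dates, 3)          # placeholder 3 targets per date

X, y = [], []
for t in range(lookback, n_dates):
    X.append(flat[t - lookback:t])            # the two preceding timestamps
    y.append(targets[t])                      # the 1 x 3 target to predict
X, y = np.array(X), np.array(y)               # X: (samples, 2, 9), y: (samples, 3)

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(lookback, n_features)),
    keras.layers.Dense(3),                    # one output per product target
])
model.compile(optimizer='adam', loss='mse')
# model.fit(X, y, epochs=10)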
H: Finding P value - Explain
def get_pvalue(con_conv, test_conv, con_size, test_size):
    lift = - abs(test_conv - con_conv)
    scale_one = con_conv * (1 - con_conv) * (1 / con_size)
    scale_two = test_conv * (1 - test_conv) * (1 / test_size)
    scale_val = (scale_one + scale_two)**0.5
    p_value = 2 * stats.norm.cdf(lift, loc=0, scale=scale_val)
    return p_value
I have this function and I would like to know what it is actually doing and how it is actually calculating the p-value. This is to find the difference between the conversion rates of the control and test groups from an A/B test. con_conv --> conversion rate for the control group test_conv --> conversion rate for the test group con_size --> population size for the control group test_size --> population size for the test group I understand that scale_one and scale_two are calculating the variance for each group, but I don't understand why they are adding both of them to calculate the standard deviation and why they are multiplying the cdf by 2 to get the p_value. AI: p_value = 2 * stats.norm.cdf(lift, loc = 0, scale = scale_val) This is the key for your question. The p-value is the probability of observing a difference at least as extreme as the measured one, assuming the null hypothesis is true. If the null hypothesis is true, your model does not find any differences between groups; if it is false, your model finds differences between groups. Given that you are using a model whose underlying assumption is normality (among others), the hypothesis test compares the probability in the context of a normal distribution. The function stats.norm.cdf returns the probability of getting a value at least as extreme as lift, if lift is assumed to be normally distributed around zero. If lift were zero, there would be no difference between the groups, so a p-value < 0.01 tells us that such a large difference would be very unlikely if the groups were equal, meaning that your groups are different. The 2 is due to the test being two-tailed: the difference between groups can be A greater than B or B greater than A, and that's why you measure the difference in either of the two ways. The addition of the two variances obeys $Var(X+Y) = Var(X) + Var(Y)$ if $X$ and $Y$ are independent.
H: What math topics are most needed to become a data scientist? I am trying to learn data science from many different sources like Coursera, edX and others, but I cannot find a proper syllabus that says which topics are actually needed. So, if anyone can list the topics I should follow to complete my data science math skills, that would be a great help. Names of books that cover these topics would also help. Thanks in advance. AI: Linear Algebra, Calculus, Probability and Statistics are fundamental topics. Lots of great books out there, Khan Academy is a good resource as well: https://www.khanacademy.org/math/linear-algebra https://www.khanacademy.org/math/calculus-1 https://www.khanacademy.org/math/statistics-probability
H: Anyone have a fruit disease dataset? I am doing a project on fruit disease recognition and classification. Does anyone have an existing dataset of fruit diseases? Can you help me to find one? AI: I work on plant disease image recognition for my Master's thesis. So far I have not found any good database with fruit images, but there are two quite large databases which contain leaf images of infected plants. One is the PlantVillage dataset, which has pictures taken under laboratory conditions, and the other one is the Digipathos dataset, which contains pictures that are closer to real-life scenarios, with multiple objects in focus and difficult backgrounds. You can download the PlantVillage dataset from this GitHub repo. For the Digipathos dataset, I wrote a downloader which you can also find on GitHub.
H: How to make this plot with python I want to draw a plot with Python based on this data:
Id  Duration (s)  BinaryLabel
1   5             correct
2   4             incorrect
3   2             incorrect
4   3             incorrect
5   9             correct
6   6             correct
7   12            incorrect
My data has 800 rows and the duration goes from 0 to 55 seconds. My X axis should contain the duration column and I want it to be divided like: 0-----5-----10------15------20-----...-----55 and the Y axis should contain the percentage of 'correct' values, i.e. from 0 to 5 I will have 25% correct, from 5 to 10 I will have 100% correct, etc. AI: Maybe you can modify my code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# simulate data with the same structure as yours
Duration_ = np.random.choice(range(0, 56), 10000)
Binary_Level = np.random.choice(["correct", "incorrect"], 10000)
dd = pd.DataFrame({'Duration_': Duration_, 'Binary_Level': Binary_Level})

# bin edges 0, 5, 10, ..., 55
xr_ = list(range(0, 56, 5))
y_ = []
for i in range(len(xr_) - 1):
    # rows that fall into the current duration bin
    in_bin = np.logical_and(dd['Duration_'].values >= xr_[i], dd['Duration_'].values < xr_[i + 1])
    # rows in the bin that are labeled 'correct'
    correct_in_bin = np.logical_and(in_bin, dd['Binary_Level'].values == 'correct')
    y_.append(sum(correct_in_bin) / sum(in_bin))

plt.plot(xr_[1:], y_)
plt.show()
H: How to measure the similarity between two text documents? Assume I have 100 text documents and I want to cluster those documents. The first step is to construct a pairwise similarity matrix (100x100) for the documents. My question is: what are common ways to measure similarity between two documents? Thanks. AI: In general, there are two ways of finding document-document similarity. TF-IDF approach: Make a text corpus containing all words of the documents. You have to use tokenisation and stop word removal; the NLTK library provides all of this. Convert the documents into tf-idf vectors and find the cosine similarity between them (or any new document) as the similarity measure. You can use libraries like NLTK, scikit-learn or Gensim for the tf-idf implementation; Gensim provides much additional functionality. See: https://www2.cs.duke.edu/courses/spring14/compsci290/assignments/lab02.html Word embedding: Google's Doc2Vec, which is available in the Gensim library, can be used for document similarity. Additionally, the Doc2Vec model itself can compute the similarity score (no cosine or anything else needed here). You just need to vectorise the docs by tokenizing (use NLTK), build a Doc2Vec model using Gensim and find the similarity using Gensim's built-in methods like model.n_similarity for the similarity between two documents. Other: Additionally, since your aim is to cluster documents, you can try topic modelling using LDA (Latent Dirichlet Allocation) or LSI (Latent Semantic Indexing).
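A small sketch of the TF-IDF route with scikit-learn; the three documents here are placeholders for your 100 texts, and the resulting matrix is the pairwise similarity matrix you are after:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the cat sat on the mat",
        "a cat was sitting on a mat",
        "stock prices fell sharply today"]   # placeholder documents

tfidf = TfidfVectorizer(stop_words='english')
vectors = tfidf.fit_transform(docs)          # one tf-idf vector per document

sim = cosine_similarity(vectors)             # n x n similarity matrix (100 x 100 in your case)
print(sim.round(2))                          # docs 0 and 1 are far more similar than 0 and 2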
H: Training a classifier when some of the features are unknown I am training a classifier in Matlab with a dataset that I created. Unfortunately some of the features in the dataset were not recorded. I currently have the unknown features set as -99999. So, for example my dataset looks something like this: class1: 10 1 12 -99999 6 8 class1: 11 2 13 7 6 10 ... class2: 5 -99999 4 3 2 -99999 class2: -99999 16 4 3 1 8 ... class3: 18 2 11 22 7 5 class3: 19 1 9 25 7 5 ... and so on, where the -99999 are the places where the features werent able to be measured. In this case, each class has 6 features. I don't want to bias my classifier with the unknown features so I thought it would be a good idea to set the unknowns to -99999 so it would be way out of the range of normal features. I tested the classifier with the -99999's and it was 78% accurate. Then I changed the -99999 to 0's and tested the classifier again, this time it was 91% accurate. So my question is, what is a general rule for training a classifier when some of the features were not recorded? Was I right to assume setting the unknowns to a very high negative value? But why was it more accurate when I set the unknowns to 0s? Thanks for reading! AI: Welcome to Data Science SE! Well, we say that most of our jobs is to wrangle with data, and that is because data is usually trying to deceive us... jokes aside: You have a missing data problem that means your have to clean your data and fill those missing values. To perform this cleaning process your must take the most classic statistician inside of you and ask: Why is this data missing? How much data is missing? There are many reasons for a specific information to be unavailable. This will demand you to make assumptions and decide how to deal with this. Jeff Sauro posted at MeasuringU: 7 Ways to Handle Missing Data, some which I list here: Delete corrupted samples: If you have a large dataset and there is not much data missing, you can simply remove those corrupted data points and go on with life Recover the values: Some problems will allow you to go back and get missing information. We usually ain't that lucky, then you can Educated Guessing: Sometimes, you can infer what would be the feature value by simply looking their pears. That is a bit arbitrary but it might work. Average: This is the most common approach, simply use the average of that value whenever it is missing. This might artificially reduce your variance but so does using 0 or -9999... for every missing value. Regression Substitution: You can use a multiple regression to infer the missing value from the available values for each candidate. Some references on missing data are: Allison, Paul D. 2001. Missing Data. Sage University Papers Series on Quantitative Applications in the Social Sciences. Thousand Oaks: Sage. Enders, Craig. 2010. Applied Missing Data Analysis. Guilford Press: New York. Little, Roderick J., Donald Rubin. 2002. Statistical Analysis with Missing Data. John Wiley & Sons, Inc: Hoboken. Schafer, Joseph L., John W. Graham. 2002. “Missing Data: Our View of the State of the Art.” Psychological Methods. About your experiment: Adding -99... is creating outliers and that bit of information is heavy (numerically speaking, it is huge) and will affect parameter tuning. For example, suppose you have this data: | Feature1 | Feature2 | |----------|----------| | 0 | 8 | | -1 | 7 | | 1 | - | | - | 8 | And you try filling the missing values with -99, now try to fit a linear regression trough the data. 
Can you see that you won't be able to fit it properly? The line won't fit, and this will yield bad performance. Adding 0 values, on the other hand, will give a slightly better line: it is still not good, but slightly better, since the scale of the parameters will be more realistic. Now, using the average in this case will give you an even better curve, and using regression will give you a perfectly fitting line. Note: I need to remake those images, but these should do until I have the time for it.
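A hedged sketch of the "average" and "regression substitution" strategies with scikit-learn, after replacing the -99999 sentinel with NaN; the small array below reuses some of the sample values from the question:
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X = np.array([[10, 1, 12, -99999, 6, 8],
              [11, 2, 13, 7, 6, 10],
              [5, -99999, 4, 3, 2, -99999]], dtype=float)
X[X == -99999] = np.nan                                     # mark unknown features as missing

X_mean = SimpleImputer(strategy='mean').fit_transform(X)    # column-mean fill
X_reg = IterativeImputer(random_state=0).fit_transform(X)   # regression-based fill
print(X_mean)
print(X_reg)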
H: Calculating Confidence Interval at a certain confidence level
def get_ci(mean, cl, sd):
    loc = stats.norm.ppf(1 - cl/2)
    rng_val = stats.norm.cdf(loc - mean/sd)
    lwr_bnd = value - rng_val
    upr_bnd = value + rng_val
    return_val = (lwr_bnd, upr_bnd)
    return(return_val)
This function takes three parameters, which are the following: mean --> it is the mean cl --> confidence level sd --> standard deviation Can someone explain how this function is working? For example, if we are calculating a 95% confidence interval, why can't we just return (-2*sd+mean, 2*sd+mean)? AI: The CI is defined as the interval which contains your mean with $\alpha$% probability. Given that you are using a model whose underlying assumption is normality (among others), the interval is obtained by comparing probabilities in the context of a normal distribution. The function stats.norm.cdf returns the probability that a normal variable falls below $loc - \frac{mean}{sd}$. The value returned is $(\bar{X} - Z_{\alpha}\sigma,\bar{X} + Z_{\alpha}\sigma)$. We can't just return $(-2 sd+mean,2 sd+mean)$ because it would not be general in terms of probability: we use tables which say $0.95 \rightarrow 2$ because we (humans) cannot calculate the number for every probability, but the stats.norm functions can. It is hard for us to keep $Z_{\alpha}$ values for every possible probability (what if we need 90%, 95%, 99%, 99.9% or 70% probability?)
H: How to interpreting the exponential coefficent in poisson regression with offset? I am trying to find the village level risk factors for malaria. Therefore, I ran a poisson model in r with the prevalence of malaria(y) as dependent variable, altitude(x1) and Forestation(x2) as independent variable and log of Population(x3) as offset. glmer(y~x1+x2+(1|cluster), family = poisson, offset = log(x3)) From what I have read, I understand that the interpretation of model with offset is different than a non-offset model. It would be very helpful, If any one can clear the air on how to interpret the coefficients and exponential coefficient in the above-mentioned case. Thanks AI: The offset means "exposure", so when you are calculating a regression with offset, instead of calculating absolute values, you are calculating ratios. You are changing "number of thefts" for "number of thefts for every 10.000 vehicles" $y = x1+x2+(1|cluster) + log(x_3)$: This is the equation you are calculating. I think you are missing a log in $y$ because the offset and the endogenous variable have to form a ratio: $log(y) = x1 + x2+(1|cluster)+log(x_3)$ $log(y)-log(x_3) = x1 + x2+(1|cluster)$ $log(\frac{y}{x_3}) = x1 + x2+(1|cluster)$ $log(ratio) = x1 + x2+(1|cluster)$
H: What is fractionally-strided convolution layer? In paper Generating High-Quality Crowd Density Maps using Contextual Pyramid CNNs, in Section 3.4, it said Since, the aim of this work is to estimate high-resolution and high-quality density maps, F-CNN is constructed using a set of convolutional and fractionally-strided convolutional layers. The set of fractionally-strided convolutional layers help us to restore details in the output density maps. The following structure is used for F-CNN: CR(64,9)-CR(32,7)- TR(32)-CR(16,5)-TR(16)-C(1,1), where, C is convolutional layer, R is ReLU layer, T is fractionally-strided convolution layer and the first number inside every brace indicates the number of filters while the second number indicates filter size. Every fractionally-strided convolution layer increases the input resolution by a factor of 2, thereby ensuring that the output resolution is the same as that of input. I would like to know the detail of fractionally-strided convolution layer. AI: Here is an animation of fractionally-strided convolution (from this github project): where the dashed white cells are zero rows/columns padded between the input cells (blue). These animations are visualizations of the mathematical formulas from the article below: A guide to convolution arithmetic for deep learning Here is a quote from the article: Figure [..] helps understand what fractional strides involve: zeros are inserted between input units, which makes the kernel move around at a slower pace than with unit strides [footnote: doing so is inefficient and real-world implementations avoid useless multiplications by zero, but conceptually it is how the transpose of a strided convolution can be thought of.] Also, here is a post on this site asking "What are deconvolutional layers?" which is the same thing. And here are two quotes from a post by Paul-Louis Pröve on different types of convolutions: Transposed Convolutions (a.k.a. deconvolutions or fractionally strided convolutions) and Some sources use the name deconvolution, which is inappropriate because it’s not a deconvolution [..] An actual deconvolution reverts the process of a convolution.
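A hedged Keras sketch of one of the "T" layers described above: a fractionally-strided (transposed) convolution with stride 2 that doubles the spatial resolution of its input; the shapes and filter counts are just illustrative:
from tensorflow import keras

x = keras.Input(shape=(32, 32, 32))            # placeholder feature map (H, W, C)
up = keras.layers.Conv2DTranspose(filters=32, kernel_size=3, strides=2,
                                  padding='same', activation='relu')(x)
model = keras.Model(x, up)
model.summary()                                # output spatial size is 64 x 64: resolution doubled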
H: Generate predictions that are orthogonal (uncorrelated) to a given variable I have an X matrix, a y variable, and another variable ORTHO_VAR. I need to predict the y variable using X, however, the predictions from that model need to be orthogonal to ORTHO_VAR while being as correlated with y as possible. I would prefer that the predictions are generated with a non-parametric method such as xgboost.XGBRegressor but I could use a linear method if absolutely necessary. This code: import numpy as np import pandas as pd from sklearn.datasets import make_regression from xgboost import XGBRegressor ORTHO_VAR = 'ortho_var' TARGET = 'target' PRED = 'yhat' # Create regression dataset with two correlated targets X, y = make_regression(n_features=20, random_state=245, n_targets=2) indep_vars = ['var{}'.format(i) for i in range(X.shape[1])] # Pull into dataframe df = pd.DataFrame(X, columns=indep_vars) df[TARGET] = y[:, 0] df[ORTHO_VAR] = y[:, 1] # Fit a model to predict TARGET xgb = XGBRegressor(n_estimators=10) xgb.fit(df[indep_vars], df[TARGET]) df[PRED] = xgb.predict(df[indep_vars]) # Correlation should be low or preferably zero pred_corr_w_ortho = df.corr().abs()[PRED][ORTHO_VAR] assert pred_corr_w_ortho < 0.01, "Correlation score: {0} is superior to the given threshold.".format(pred_corr_w_ortho) Returns this: --------------------------------------------------------------------------- AssertionError 1 pred_corr_w_ortho = df.corr().abs()[PRED][ORTHO_VAR] ----> 2 assert pred_corr_w_ortho < 0.01, "Correlation score: {0} is superior to the given threshold.".format(pred_corr_w_ortho) AssertionError: Correlation score: 0.5895885756753665 is superior to the given threshold. ...and I would like something that maintains as much predictive accuracy as possible while remaining orthogonal to ORTHO_VAR AI: This requirement can be satisfied by adding sufficient noise to predictions $\hat{y}$ to decorrelate them from orthogonal values $v$. Ideally, if $\hat{y}$ is already decorrelated from $v$, no noise would be added to $\hat{y}$, thus $\hat{y}$ would be maximally correlated with $y$. Mathematically, we want to create $\hat{y}'=\hat{y}+\epsilon$ from $\epsilon \sim \mathcal{N}(0, \sigma_{\epsilon})$, to satisfy $$r_{\hat{y}'v} = \frac{\sigma_{\hat{y}'v}}{\sigma_{\hat{y}'}\sigma_{v}} < \delta$$ for arbitrary threshold $\delta$. Now, lets expand this inequality to find a lower-bound for std of noise $\epsilon$, i.e. $\sigma_{\epsilon}$. $$\begin{align*} \sigma_{\hat{y}'}^2&=\sigma_{\hat{y}}^2 + \sigma_{\epsilon}^2,\\ \sigma_{\hat{y}'v}&={\Bbb E}\left[(\hat{y}+\epsilon - \mu_{\hat{y}} - \overbrace{\mu_{\epsilon}}^{=0})(v-\mu_{v})\right]\\ &={\Bbb E}\left[(\hat{y} - \mu_{\hat{y}})(v-\mu_{v})\right]+\overbrace{{\Bbb E}\left[\epsilon(v-\mu_{v})\right]}^{=0}&\\ &=\sigma_{\hat{y}v},\\ r_{\hat{y}'v} &= \frac{\sigma_{\hat{y}'v}}{\sigma_{\hat{y}'}\sigma_{v}} =\frac{\sigma_{\hat{y}v}}{\sigma_{v} \sqrt{\sigma_{\hat{y}}^2+\sigma_{\epsilon}^2}} < \delta\\ &\Rightarrow \sigma_{\hat{y}}\sqrt{\left(\frac{r_{\hat{y}v}}{\delta}\right)^2 - 1} < \sigma_{\epsilon} \end{align*}$$ Since all the variables in the left side of inequality can be calculated, we can sample noises from $\mathcal{N}(0, \sigma_{\epsilon})$ and add them to $\hat{y}$ to satisfy the original inequality. 
Here is a code that does the exact same thing: import numpy as np import pandas as pd from sklearn.datasets import make_regression from xgboost import XGBRegressor ORTHO_VAR = 'ortho_var' IND_VARNM = 'indep_var' TARGET = 'target' CORRECTED_VARNM = 'indep_var_fixed' seed = 245 # Create regression dataset with two correlated targets X, y = make_regression(n_samples=10000, n_features=20, random_state=seed, n_targets=2) indep_vars = ['var{}'.format(i) for i in range(X.shape[1])] # Pull into dataframe df = pd.DataFrame(X, columns=indep_vars) df[TARGET] = y[:, 0] df[ORTHO_VAR] = y[:, 1] # Fit a model to predict TARGET xgb = XGBRegressor(n_estimators=10) xgb.fit(df[indep_vars], df[TARGET]) df['yhat'] = xgb.predict(df[indep_vars]) delta = 0.01 # std of noise required to be added to y_hat to bring the correlation # of y_hat with ORTHO_VAR below delta std_y_hat = np.std(df['yhat'], ddof=1) corr_y_hat_ortho_var = np.corrcoef(df['yhat'], df[ORTHO_VAR])[1, 0] corr_y_hat_target = np.corrcoef(df['yhat'], df[TARGET])[1, 0] std_noise_lower_bound = std_y_hat * np.sqrt((corr_y_hat_ortho_var / delta)**2 - 1.0) std_noise = max(0, std_noise_lower_bound) + 1 print('delta: ', delta) print('std_y_hat: ', std_y_hat) print('corr_y_hat_target: ', corr_y_hat_target) print('corr_y_hat_ortho_var: ', corr_y_hat_ortho_var) print('std_noise_lower_bound: ', std_noise_lower_bound) print('std_noise: ', std_noise) # add noise np.random.seed(seed) noises = np.random.normal(0, std_noise, len(df['yhat'])) noises -= np.mean(noises) # remove slight deviations from zero mean print('noise_samples: mean:', np.mean(noises), ', std: ', np.std(noises)) df['yhat'] = df['yhat'] + noises # measure new correlation corr_y_hat_ortho_var = np.corrcoef(df['yhat'], df[ORTHO_VAR])[1, 0] corr_y_hat_target = np.corrcoef(df['yhat'], df[TARGET])[1, 0] print('new corr_y_hat_target: ', corr_y_hat_target) print('new corr_y_hat_ortho_var: ', corr_y_hat_ortho_var) # Correlation should be low or preferably zero assert corr_y_hat_ortho_var < delta, corr_y_hat_ortho_var assert -delta < corr_y_hat_ortho_var, corr_y_hat_ortho_var which outputs: delta: 0.01 std_y_hat: 69.48568725585938 corr_y_hat_target: 0.8207672834857673 corr_y_hat_ortho_var: 0.7663936356880843 std_noise_lower_bound: 5324.885500165032 std_noise: 5325.885500165032 noise_samples: mean: 1.1059455573558807e-13 , std: 5373.914830034988 new corr_y_hat_target: -0.004125016071865934 new corr_y_hat_ortho_var: -0.000541131379457552 You can experiment with other deltas. By comparing std_y_hat with std_noise_lower_bound, you can see that a huge noise must be added to $\hat{y}$ to decorrelate it from $v$ bellow $0.01$, which dramatically decolerates $\hat{y}$ from $y$ too. Note: Assertion might fail for too small thresholds $\delta$ due to insufficient sample count.
H: Does a Keras checkpointer save the best weights when using chekpoints with restore_best_weights? I m training a sequence model in Keras using the tensorflow backend. I've also included some callbacks to save checkpoints and revert to best weights if the model starts to overfit (which it will). My question - when fitting using this set of callbacks, does the final checkpoint contain the version of the model with the best weight? I know that the weights in classif_model will revert but I'm not sure if that also applies to the final saved state. from keras import callbacks as kc classif_model = my_model(input_shape) # Set up callbacks checkpointer = kc.ModelCheckpoint(filepath='results/'+name+'.h5', verbose=0) earlystopping = kc.EarlyStopping(monitor='val_loss', patience=patience, restore_best_weights = True) callbacks = [checkpointer, earlystopping] # train the model hist = classif_model.fit(x = X_tr, y = Y_tr, epochs = epochs, batch_size = batch_size, callbacks = callbacks, validation_data = (X_val, Y_val), verbose = 0) AI: It will if you set the save_best_only flag in your checkpoint callback definition: ModelCheckpoint(filepath, monitor='val_loss', save_best_only=True) From the docs: save_best_only: if save_best_only=True, the latest best model according to the quantity monitored will not be overwritten.
H: XGBRegressor hyperparameter optimization using xgb cv function I am trying to optimize hyper parameters of XGBRegressor using xgb's cv function and bayesian optimization (using hyperopt package). Here is the piece of code I am using for the cv part. dtrain = xgb.DMatrix(X_train, label=y_train) cv_results = xgb.cv(params,dtrain,num_boost_round = 1000, folds= cv_folds, stratified = False, early_stopping_rounds = 100, metrics="rmse", seed = 44) However, I am getting the following error within the xgb.cv function (part of the Trace): 414 cvfolds = mknfold(dtrain, nfold, params, seed, metrics, fpreproc, --> 415 stratified, folds, shuffle) 416 417 # setup callbacks /anaconda3/envs/py36/lib/python3.6/site-packages/xgboost/training.py in mknfold(dall, nfold, param, seed, evals, fpreproc, stratified, folds, shuffle) 261 except TypeError: 262 # Custom stratification using Sklearn KFoldSplit object --> 263 splits = list(folds.split(X=dall.get_label(), y=dall.get_label())) 264 in_idset = [x[0] for x in splits] 265 out_idset = [x[1] for x in splits] AttributeError: 'int' object has no attribute 'split' I can't figure out why am i getting this error. The documentation for xgboost is also not very clear and sparse. So any help would be greatly appreciated. Thanks AI: It seems to be complaining about the folds.split part, indicating that folds is an integer. If your cv_folds object is just the number of folds, pass that as nfolds instead; folds expects the actual sklearn KFold object rather than a number.
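For example, assuming cv_folds in your snippet is currently just an integer, either of the following sketches should work (otherwise unchanged from your call):

import xgboost as xgb
from sklearn.model_selection import KFold

dtrain = xgb.DMatrix(X_train, label=y_train)

# Option 1: let xgboost build the folds itself via nfold
cv_results = xgb.cv(params, dtrain, num_boost_round=1000, nfold=5,
                    early_stopping_rounds=100, metrics="rmse", seed=44)

# Option 2: pass an actual splitter object to folds
kf = KFold(n_splits=5, shuffle=True, random_state=44)
cv_results = xgb.cv(params, dtrain, num_boost_round=1000, folds=kf,
                    early_stopping_rounds=100, metrics="rmse", seed=44)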
H: What are the possible ways to detect skin while classifying diseases? I am working on a skin disease classification problem where I have successfully created a classifier ( TensorFlow + Keras ) which can classify images of two skin diseases. The sample image needs to be classified in this manner : Whether the sample is an image of the skin. Does the skin have any two of the diseases ( Melanoma or Psoriasis ) If a disease is found, to which class does it belong (CLASS1: Melanoma or CLASS2: Psoriasis How can create Classifiers which could carry out the following tasks? Do I need the image localization or CNNs or something like YoLo? What steps should I implement? I have created a classifier ( with an accuracy of 96% ) which can classify an image of the two diseases efficiently. But it can't detect the presence of the disease ( Step 2 in the above task list ). AI: The other answers are correct, I just want to expand since you seem to wonder where step 1. fits in. I think you should add yet another class called Unknown. This class will be able to tell that it is not human skin, but preferably it should be even more precise. It also should be able to tell if a picture is a good input for disease detection. You only want to make classifications on data that is "similar enough" to data you trained on. Examples of pictures to train as Unknown could be: Random pictures of whatever Picture containing human skin but is too far away Picture containing human skin but with a bad resolution / blurry Picture containing human skin but with bad lighting etc When you got the negative data for the Unknown class your can use it in one of two ways. Either you train a general model that does everything in one or you train two specialist models. General model Build a general model for all your problems by making every distinction into classes. You can do this by adding another class of Unknown to the list @thanatoz wrote: Healthy: 0 Melanoma: 1 Psoriasis: 2 Unknown: 3 Specialist models Another approach is to built two models. One for skin detection and then one for disease detection, separating step 1 and step 2 in your list. Skin detector - The first model will not know anything about diseases, it will only tell if a picture contains human skin or not. So the classes would be: Unknown: 0 Skin: 1 Disease detector - This model is only activated if the skin detector has verified that you have a sample of representative human skin. This would then work the same as @thanaztoz answer with the classes being: Healthy: 0 Melanoma: 1 Psoriasis: 2
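To make the two-stage (specialist) approach concrete, here is a rough sketch of the inference-time gating logic; skin_model and disease_model are hypothetical trained Keras classifiers (the first with a two-class softmax over Unknown/Skin, the second with a three-class softmax), so adapt the indexing to your own outputs:

import numpy as np

def classify(image, skin_model, disease_model, skin_threshold=0.5):
    # Stage 1: is this a usable picture of human skin?
    p_skin = skin_model.predict(image[np.newaxis])[0][1]  # probability of the "Skin" class
    if p_skin < skin_threshold:
        return "Unknown (not a usable skin image)"
    # Stage 2: only then run the disease detector
    probs = disease_model.predict(image[np.newaxis])[0]
    return ["Healthy", "Melanoma", "Psoriasis"][int(np.argmax(probs))]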
H: How to give a higher importance to certain features in a (k-means) clustering model? I am clustering data with numeric and categorical variables. To process the categorical variables for the cluster model, I create dummy variables. However, I feel like this results in a higher importance for these dummy variables because multiple dummy variables represent one categorical variable. For example, I have a categorical variable Airport that will result in multiple dummy variables: LAX, JFK, MIA and BOS. Now suppose I also have a numeric Temperature variable. I also scale all variables to be between 0 and 1. Now my Airport variable seems to be 4 times more important than the Temperature variable, and the clusters will be mostly based on the Airport variable. My problem is that I want all variables to have the same importance. Is there a way to do this? I was thinking of scaling the variables in a different way but I don't know how to scale them in order to give them the same importance. AI: You cannot really use k-means clustering if your data contains categorical variables since k-means uses Euclidian distance which will not make a lot of sense with categorical variables. Check out the answers to this similar question. You can use the following rules for performing clustering with k-means or one of its derivates: If your data contains only metric variables: Scale the data and use k-means (R) (Python). If your data contains only categorical variables: Use k-modes (R) (Python). If your data contains categorical and metric variables: Scale the metric variables and use k-prototypes (R) (Python).
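As an illustration, here is a sketch using the kmodes package; the data is made up (one already-scaled numeric column plus the airport label kept as a single categorical column instead of dummies), and categorical=[1] points at that categorical column:

import numpy as np
from kmodes.kprototypes import KPrototypes

# one scaled numeric feature (e.g. temperature) and one categorical feature (airport)
X = np.array([[0.2, 'LAX'], [0.8, 'JFK'], [0.5, 'LAX'], [0.9, 'BOS']], dtype=object)

kp = KPrototypes(n_clusters=2)
clusters = kp.fit_predict(X, categorical=[1])
print(clusters)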
H: Different t-test results from R and LibreCalc while I was trying to apply one-tailed t-test to two vector of data on R, I just wanted to apply same thing on librecalc also and then I noticed that p-values are different. Here is a summary what I have done so far: in R I applied: t.test(data[,1],data[,2],paired = TRUE,alternative = 'greater') and then result is: Paired t-test data: ttest[, 2] and ttest[, 3] t = -134.33, df = 49, p-value = 1 alternative hypothesis: true difference in means is greater than 0 95 percent confidence interval: -8.318141 Inf sample estimates: mean of the differences -8.2156 It returns 1 as p-value. However, same thing on Librecalc, it gives following: Paired t-test Alpha 0,05 Hypothesized Mean Difference 0 Variable 1 Variable 2 Mean 33,003568 41,219168 Variance 0,155220041404082 0,038756230791837 Observations 50 50 Pearson Correlation 0,04472285907661 Observed Mean Difference -8,2156 Variance of the Differences 0,187038752653061 df 49 t Stat -134,325507363656 P (T<=t) one-tail 7,16710643183049E-65 //(Here is the p-value) t Critical one-tail 1,67655089261685 P (T<=t) two-tail 1,4334212863661E-64 t Critical two-tail 2,00957523712924 Except the p-values, all the other results are same as you see. Can someone tell me what is going on here, or what should I do for a one-tailed t-test to compare two algorithms. (I have 50 different runs' results of both algorithm). Here is the data: 31,3888 41,028 33,3064 41,2048 32,6824 41,5216 33,4552 40,9848 32,6832 41,3152 33,2704 41,212 32,8344 41,2104 32,6376 41,2904 33,3552 41,22 32,6 41,2152 32,6752 41,4784 32,9464 41,1352 33,2064 41,4216 33,1752 41,0568 33,0952 41,2808 32,5272 41,7328 33,1624 40,8696 33,0192 41,0544 33,1744 41,1048 33,4464 41,0488 33,7448 41,3832 33,0568 41,4656 32,836 41,1192 33,1 41,0832 33,2144 41,4944 33,0544 41,2736 33,3232 41,4232 32,8616 41,2304 33,228 41,2184 33,2472 41,02 32,8952 40,9808 33,5056 41,3808 33,5248 41,348 33,4416 41,1072 33,0416 41,016 33,0504 41,168 32,4064 41,184 32,6256 41,572 32,488 40,7848 33,4304 41,2536 33,2128 41,2248 32,9576 41,5392 33,0136 41,4608 32,9296 41,0872 32,632 41,0128 33,2616 40,968 32,7664 41,0192 32,8952 41,3344 33,3512 41,2984 32,4408 41,1216 AI: If alternative = 'greater' means that alternative hypothesis is "x has greater mean than y". In your code: t.test(data[,1],data[,2],paired = TRUE,alternative = 'greater') it translates to data[,1] has grater mean that data[,2]. Which is obviously not true. Just change the order of inputs and you'll get a correct p-value t.test(data[,2],data[,1],paired = TRUE,alternative = 'greater') I suppose that in LibreCalc arguments are flipped w.r.t. R
H: which metric is better for boosting methods I work on a dataset of 300 000 samples and I try to make a comparison between logistic regression (with gradient descent) and a LightBoost for binary classification in order to choose the better one. I want to know which metric I should use in this case and WHY? Accuracy ?? AUC Test value ?? RMSE ?? LogLoss ?? AI: It depends. The first thing that has to be clear is that you are running an experiment, which means you need to measure both models with the same metric. Which one? It depends on which underlying problem you are solving; if what you are doing is determining which algorithm is better, your conclusion will only be applicable to your specific dataset. Accuracy: It is possible to use accuracy as the comparison metric, but it becomes misleading if your dataset is unbalanced, i.e. you have many more positives than negatives or vice versa. Accuracy is used when the dataset is balanced and when a mistake on positives is equally as bad as a mistake on negatives. Also, it has the problem of being too dependent on the threshold chosen for defining positives/negatives. Area Under the Curve: The AUC is one of the most robust metrics for measuring the ability of a model to separate positives and negatives; it is insensitive to the threshold and robust to class imbalance. I would use this. RMSE: I only know RMSE for continuous regression, not for classification. LogLoss: Its use is in multinomial classification.
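For instance, with scikit-learn you can compare both fitted models on the same held-out set (a sketch: y_test are the true labels, logreg and lgbm are your two fitted classifiers, and you must score probabilities, not hard class predictions):

from sklearn.metrics import roc_auc_score

auc_logreg = roc_auc_score(y_test, logreg.predict_proba(X_test)[:, 1])
auc_lgbm = roc_auc_score(y_test, lgbm.predict_proba(X_test)[:, 1])
print(auc_logreg, auc_lgbm)  # higher AUC means better separation of positives and negatives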
H: Should I prevent individuals mating with themselves in EAs? In evolutionary algorithms, should I always avoid individuals mating (crossover) with themselves? That is, should I prevent the selection algorithm from selecting a single parent twice to produce a child with itself? I'm interested in the general case as well as for tournament selection. AI: When talking about Evolutionary Algorithms, you have to trade off diversity against specialization of the candidate solutions. You should not let an individual cross over with itself because you would be losing diversity in your algorithm. But there is one exception to this rule: you need to make sure you are not punishing the success of this individual if it is the best; you can guarantee it survives by preserving the best parents across generations. The answer to your question is: prevent auto-crossover, but make sure you don't lose this solution when selecting the ones passing to the next generation (elite group, passing the n best parents unchanged). In tournament selection, it is not common to select the same individual as a participant of the tournament twice, so this possibility should not be one of your worries. In roulette selection, be sure you don't eliminate successful individuals. A minimal sketch of both ideas (elitism plus tournament selection that avoids self-mating) is shown below.
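Here is that sketch in Python; the fitness list and the crossover function are assumed to come from your own problem definition:

import random

def tournament_select(population, fitness, k=3):
    # draw k distinct participants and return the index of the fittest one
    participants = random.sample(range(len(population)), k)
    return max(participants, key=lambda i: fitness[i])

def next_generation(population, fitness, crossover, n_elite=2):
    # elitism: carry the n_elite best individuals over unchanged
    ranked = sorted(range(len(population)), key=lambda i: fitness[i], reverse=True)
    new_pop = [population[i] for i in ranked[:n_elite]]
    while len(new_pop) < len(population):
        p1 = tournament_select(population, fitness)
        p2 = tournament_select(population, fitness)
        if p1 == p2:  # prevent an individual from crossing over with itself
            continue
        new_pop.append(crossover(population[p1], population[p2]))
    return new_pop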
H: Linear regression incorrect prediction using Matlab In the plot below the red crossed line is the actual curve and the crossed blue line is the predicted curve. I am using least squares for linear prediction. I have used 1:79 examples in training and the remaining for testing. The test data points are never seen during training. What is my mistake? Why am I getting such a weird prediction? I want to see the sine curve as the predicted output which should be very close to the original data. %generate some data x=linspace(0,2*pi,100)'; y=sin(x); %response X=x; y=y; % Convert matrix values to double X = double(X(1:79)); y = double(y(1:79)); % Plot data plot(X, y, 'rx', 'MarkerSize', 10); m = length(y); % Add ones column X = [ones(m, 1) X]; % Gradient Descent with Normal Equation theta = (pinv(X'*X))*X'*y % Predict from 80 till last sample test_samples = x(80:end); test_samples_val = [ones(length(test_samples),1) test_samples]; % Calculate predicted value pred_value = test_samples_val * theta; X = vertcat(X, test_samples_val); regressionline = X*theta; % Plot predicted value with blue cross plot(test_samples, pred_value, 'bx', 'MarkerSize', 10); AI: You are calculating a linear regression in a nonlinear environment, which means you will need nonlinear exogenous variables. You are calculating $Y=\beta_0+\beta_1x$ (a linear equation) when you need at least a polynominal equation. The following code calculated this equation: $Y=\beta_0+\beta_1x+\beta_2x^2+\beta_3x^3$ (maybe not the best equation but a working one) %generate some data x=linspace(0,2*pi,100)'; y=sin(x); %response hold on X=x; y=y; % Convert matrix values to double X = double(X(1:79)); y = double(y(1:79)); % Plot data plot(X, y, 'rx', 'MarkerSize', 10); m = length(y); % Add ones column X = [ones(m, 1) X X.^2 X.^3]; % Gradient Descent with Normal Equation theta = (pinv(X'*X))*X'*y; % Predict from 80 till last sample test_samples = x(80:end); test_samples_val = [ones(length(test_samples),1) test_samples test_samples.^2 ... test_samples.^3]; % Calculate predicted value pred_value = test_samples_val * theta; X = vertcat(X, test_samples_val); regressionline = X*theta; % Plot predicted value with blue cross plot(test_samples, pred_value, 'bx', 'MarkerSize', 10);
H: Pandas dataframe with multiple hierarchical indices I have a data frame which looks like this FRUIT ID COLOR WEIGHT Apple 142 Red Heavy Mango 231 Red Light Apple 764 Green Light Apple 543 Green Heavy And I want the following result: FRUIT COUNT Apple COLOR Red 1 Green 2 WEIGHT Heavy 2 Light 1 Mango COLOR Red 1 Green 0 WEIGHT Heavy 0 Light 1 I tried different variations of set_index, groupby() and unstack() on the dataframe in combination with ['ID'].count() and .size(), but my grouping works only on the first level (Fruit). On the second level I am not able to get color and weight separately, I always end up with weight as another subindex of color so it looks like FRUIT Apple COLOR WEIGHT COUNT Red Heavy 1 Light 0 Green Heavy 1 Light 1 How can I get my desired result? AI: I got it working with the simple line of code below (dropping the ID column first so its values are not counted, and using the actual column name FRUIT): df.drop(columns='ID').set_index('FRUIT').stack().groupby(level=[0,1]).value_counts().unstack(level=[1,2]).fillna(0)
H: What will be the policy if the state space is continuous in Reinforcement learning I have started recently with reinforcement learning. I have few doubts regarding the policy of an agent when it comes to continuous space. From my understanding, policy tells the agent which action to perform given a particular state. This makes sense when it comes to the maze example, where the state space is descrete and limited. What if the state space is continuous, will the agent have information of every possible state in the state space? Also will an RL agent be able to take decision if its in a new state that it has not encountered during training ? AI: You can still define state value functions $v(s)$, action value functions $q(s,a)$ and policy functions $\pi(s)$ or $\pi(a|s)$ when the state $s$ is from a very large or continuous space. Reinforcement Learning (RL) is still a well-defined problem in that space. What becomes harder is iterating through the state space. That rules out two simple approaches: Tabular methods - that store lists of all states with the correct action or value. Any method that needs to iterate through all states, e.g. the dynamic programming methods Policy Iteration or Value Iteration. These are important methods for RL. With tabulation and assuming you can iterate through all possibilities, then you can prove that you will find the optimal policy. However, RL methods can still work with large state spaces. The main method to do so is to use some form of function approximation, which then generalises the space so that knowledge learned about a single state is used to assess similar states. Function approximation can simply be discretising the space to make the numbers more manageable. Or you can use a parametrisable machine learning approach, such as neural networks. The combination of neural networks with reinforcement learning methods is behind the "deep" reinforcement learning approaches that have been subject of much recent research. If you use any function approximation with RL, then you are not guaranteed to find the most optimal policy. Instead you will find an approximation of that policy. However, that is often good enough for purpose. To answer the questions more directly: What will be the policy if the state space is continuous in Reinforcement learning There is no change at the theoretical level. You can express the policy as $\pi(s)$ for a deterministic policy, or $\pi(a|s)$ for a stochastic policy, regardless of the space of $s$. At the implementation level, you will need to implement a parametric function that takes $s$ as one of its inputs. The function parameters $\theta$ are what is learned. For instance if you use an action value based method such as Q-learning, then you will create an approximation to $Q(s,a)$ - in the literature you may see this directly represented as $\hat{q}(s,a,\theta) \approx Q(s,a)$ Using a neural network for $\hat{q}(s,a,\theta)$ is one common way to achieve this, where the neural network's weight and bias values are in $\theta$. What if the state space is continuous, will the agent have information of every possible state in the state space? Depends what you mean by "have information". The agent cannot possibly store separate data about each state. However, it may have information about similar states, or store its knowledge about states in a more abstract fashion (such as in the parameters $\theta$) Also will an RL agent be able to take decision if its in a new state that it has not encountered during training ? Yes. 
For this to work well with function approximation, it relies on successful generalisation between similar states. So it is important that the state space representation works towards this. For instance, if two states are close together in the state space representation you use, it should be expected that value function and policy functions are often similar - not always, the function can have arbitrary shape, but trying to learn effectively random mapping would be impossible.
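As a concrete (hedged) illustration of such function approximation, here is a minimal Keras sketch of an action-value approximator $\hat{q}(s,a,\theta)$ for a continuous state space: the network takes the continuous state vector as input and outputs one estimated value per discrete action; the 4-dimensional state, the two actions and the layer sizes are arbitrary choices for the example:

from keras.models import Sequential
from keras.layers import Dense

state_dim = 4   # e.g. positions and velocities of a continuous system
n_actions = 2   # discrete actions

q_net = Sequential()
q_net.add(Dense(64, activation='relu', input_shape=(state_dim,)))
q_net.add(Dense(64, activation='relu'))
q_net.add(Dense(n_actions, activation='linear'))  # one q-value estimate per action
q_net.compile(optimizer='adam', loss='mse')

# the same weights generalise to states never seen during training:
# q_values = q_net.predict(new_state.reshape(1, -1))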
H: How to count categorization instances in a NN? Say one-hot encoding is the perfect way to represent a series of objects such as clothing items. Ie: A hat is [1 0 0], a tie is [0 1 0]. I want to predict what a customer buys if they spend a certain amount of money, which will serve as an input to the NN. Eg: For one example, the customer spends $22. The thing is, the customer may buy multiples of the same clothing item. What would be the best way to make the neural network output something like [2 1 0] to show its prediction as 2 hats and 1 tie, given the input of $22? As far as I understand binary crossentropy is good for multilabel problems, but nowhere could I find an example of how I could apply this when the same label occurs more than once. The only solution I could think of was turning this into a regression problem, but doing so would seem bad since this is a categorical problem, and the network should output fixed natural numbers depending upon the number of times an object is predicted. AI: Your intuition about regression is right. Basically, you have a multi-target regression problem. Each category is a separate target. If you get a float output as a prediction, you can round it. Float output also makes sense, because it shows you how confident your model is about the prediction and gives the expected number of bought items. The number of bought items is ordinal, so it makes sense to predict it with multi-target regression. Moreover, you may not know the upper limit of the number of bought items, so it makes no sense to predict the number of items as categories. A minimal sketch of such a model follows below. Look at these for reference: Neural Network for Multiple Output Regression Resources for learning about multiple-target techniques?
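Here is that sketch in Keras (my own illustration: the single input is the amount spent, the output is one non-negative count per clothing item):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

n_items = 3  # hat, tie, ...

model = Sequential()
model.add(Dense(16, activation='relu', input_shape=(1,)))  # input: money spent
model.add(Dense(n_items, activation='relu'))               # one non-negative count per item
model.compile(optimizer='adam', loss='mse')

# after training, round the float outputs to get integer item counts,
# e.g. [2, 1, 0] would mean 2 hats and 1 tie
counts = np.round(model.predict(np.array([[22.0]]))).astype(int)
print(counts)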
H: What is GELU activation? I was going through BERT paper which uses GELU (Gaussian Error Linear Unit) which states equation as $$ GELU(x) = xP(X ≤ x) = xΦ(x).$$ which in turn is approximated to $$0.5x(1 + tanh[\sqrt{ 2/π}(x + 0.044715x^3)])$$ Could you simplify the equation and explain how it has been approximated. AI: GELU function We can expand the cumulative distribution of $\mathcal{N}(0, 1)$, i.e. $\Phi(x)$, as follows: $$\text{GELU}(x):=x{\Bbb P}(X \le x)=x\Phi(x)=0.5x\left(1+\text{erf}\left(\frac{x}{\sqrt{2}}\right)\right)$$ Note that this is a definition, not an equation (or a relation). Authors have provided some justifications for this proposal, e.g. a stochastic analogy, however mathematically, this is just a definition. Here is the plot of GELU: Tanh approximation For these type of numerical approximations, the key idea is to find a similar function (primarily based on experience), parameterize it, and then fit it to a set of points from the original function. Knowing that $\text{erf}(x)$ is very close to $\text{tanh}(x)$ and first derivative of $\text{erf}(\frac{x}{\sqrt{2}})$ coincides with that of $\text{tanh}(\sqrt{\frac{2}{\pi}}x)$ at $x=0$, which is $\sqrt{\frac{2}{\pi}}$, we proceed to fit $$\text{tanh}\left(\sqrt{\frac{2}{\pi}}(x+ax^2+bx^3+cx^4+dx^5)\right)$$ (or with more terms) to a set of points $\left(x_i, \text{erf}\left(\frac{x_i}{\sqrt{2}}\right)\right)$. I have fitted this function to 20 samples between $(-1.5, 1.5)$ (using this site), and here are the coefficients: By setting $a=c=d=0$, $b$ was estimated to be $0.04495641$. With more samples from a wider range (that site only allowed 20), coefficient $b$ will be closer to paper's $0.044715$. Finally we get $\text{GELU}(x)=x\Phi(x)=0.5x\left(1+\text{erf}\left(\frac{x}{\sqrt{2}}\right)\right)\simeq 0.5x\left(1+\text{tanh}\left(\sqrt{\frac{2}{\pi}}(x+0.044715x^3)\right)\right)$ with mean squared error $\sim 10^{-8}$ for $x \in [-10, 10]$. Note that if we did not utilize the relationship between the first derivatives, term $\sqrt{\frac{2}{\pi}}$ would have been included in the parameters as follows $$0.5x\left(1+\text{tanh}\left(0.797885x+0.035677x^3\right)\right)$$ which is less beautiful (less analytical, more numerical)! Utilizing the parity As suggested by @BookYourLuck, we can utilize the parity of functions to restrict the space of polynomials in which we search. That is, since $\text{erf}$ is an odd function, i.e. $f(-x)=-f(x)$, and $\text{tanh}$ is also an odd function, polynomial function $\text{pol}(x)$ inside $\text{tanh}$ should also be odd (should only have odd powers of $x$) to have $$\text{erf}(-x)\simeq\text{tanh}(\text{pol}(-x))=\text{tanh}(-\text{pol}(x))=-\text{tanh}(\text{pol}(x))\simeq-\text{erf}(x)$$ Previously, we were fortunate to end up with (almost) zero coefficients for even powers $x^2$ and $x^4$, however in general, this might lead to low quality approximations that, for example, have a term like $0.23x^2$ that is being cancelled out by extra terms (even or odd) instead of simply opting for $0x^2$. Sigmoid approximation A similar relationship holds between $\text{erf}(x)$ and $2\left(\sigma(x)-\frac{1}{2}\right)$ (sigmoid), which is proposed in the paper as another approximation, with mean squared error $\sim 10^{-4}$ for $x \in [-10, 10]$. 
Here is a Python code for generating data points, fitting the functions, and calculating the mean squared errors: import math import numpy as np import scipy.optimize as optimize def tahn(xs, a): return [math.tanh(math.sqrt(2 / math.pi) * (x + a * x**3)) for x in xs] def sigmoid(xs, a): return [2 * (1 / (1 + math.exp(-a * x)) - 0.5) for x in xs] print_points = 0 np.random.seed(123) # xs = [-2, -1, -.9, -.7, 0.6, -.5, -.4, -.3, -0.2, -.1, 0, # .1, 0.2, .3, .4, .5, 0.6, .7, .9, 2] # xs = np.concatenate((np.arange(-1, 1, 0.2), np.arange(-4, 4, 0.8))) # xs = np.concatenate((np.arange(-2, 2, 0.5), np.arange(-8, 8, 1.6))) xs = np.arange(-10, 10, 0.001) erfs = np.array([math.erf(x/math.sqrt(2)) for x in xs]) ys = np.array([0.5 * x * (1 + math.erf(x/math.sqrt(2))) for x in xs]) # Fit tanh and sigmoid curves to erf points tanh_popt, _ = optimize.curve_fit(tahn, xs, erfs) print('Tanh fit: a=%5.5f' % tuple(tanh_popt)) sig_popt, _ = optimize.curve_fit(sigmoid, xs, erfs) print('Sigmoid fit: a=%5.5f' % tuple(sig_popt)) # curves used in https://mycurvefit.com: # 1. sinh(sqrt(2/3.141593)*(x+a*x^2+b*x^3+c*x^4+d*x^5))/cosh(sqrt(2/3.141593)*(x+a*x^2+b*x^3+c*x^4+d*x^5)) # 2. sinh(sqrt(2/3.141593)*(x+b*x^3))/cosh(sqrt(2/3.141593)*(x+b*x^3)) y_paper_tanh = np.array([0.5 * x * (1 + math.tanh(math.sqrt(2/math.pi)*(x + 0.044715 * x**3))) for x in xs]) tanh_error_paper = (np.square(ys - y_paper_tanh)).mean() y_alt_tanh = np.array([0.5 * x * (1 + math.tanh(math.sqrt(2/math.pi)*(x + tanh_popt[0] * x**3))) for x in xs]) tanh_error_alt = (np.square(ys - y_alt_tanh)).mean() # curve used in https://mycurvefit.com: # 1. 2*(1/(1+2.718281828459^(-(a*x))) - 0.5) y_paper_sigmoid = np.array([x * (1 / (1 + math.exp(-1.702 * x))) for x in xs]) sigmoid_error_paper = (np.square(ys - y_paper_sigmoid)).mean() y_alt_sigmoid = np.array([x * (1 / (1 + math.exp(-sig_popt[0] * x))) for x in xs]) sigmoid_error_alt = (np.square(ys - y_alt_sigmoid)).mean() print('Paper tanh error:', tanh_error_paper) print('Alternative tanh error:', tanh_error_alt) print('Paper sigmoid error:', sigmoid_error_paper) print('Alternative sigmoid error:', sigmoid_error_alt) if print_points == 1: print(len(xs)) for x, erf in zip(xs, erfs): print(x, erf) Output: Tanh fit: a=0.04485 Sigmoid fit: a=1.70099 Paper tanh error: 2.4329173471294176e-08 Alternative tanh error: 2.698034519269613e-08 Paper sigmoid error: 5.6479106346814546e-05 Alternative sigmoid error: 5.704246564663601e-05
H: Delete all rows between two values in a dataframe that repeat multiple times in a column I have a dataframe and let's say inside of it is a column_A. This column_A has 3 strings as values, call them 'new_records', 'deletions', 'changes' that repeat across the dataframe multiple times in that order always with multiple rows in between. I want to delete all rows from the beginning of deletions to the end of changes, i.e. I want to leave only new_records in the dataframe. The dataframe looks like this: column_A column_B column_C .... NEW_RECORDS val1 val2 string1_new val3 val4 string2_new val5 val6 NaN val9 val10 NaN val11 val12 string3_new ... DELETIONS val7 val8 string1_del ... ... NaN ... ... string2_del ... ... ... CHANGES ... ... str1_ch ... str2_ch ... NEW_RECORDS str200_new ... str300_new ... NaN NaN ... DELETIONS NaN str100_del NaN str290_del ... ... CHANGES str1000 str20000 NaN ... ... I want to have at the end only chunks of rows between new_records and deletions values, without rows that belong to the deletions group and changes group. How can I do that? UPDATE: There are many rows after the 'new_records' and before the start of 'deletions' group and there are many rows after the start of deletions group and beginning of the 'changes' group. I need to extract only rows that belong to the new_records group. So all rows after the value 'new_records' and before the value of 'deletions' across all dataframe. AI: You can achieve this by forward filling the blank values in column_A, and then selecting only the rows labelled NEW_RECORDS (note that fillna must be assigned back, and the value must match the spelling used in your data): df['column_A'] = df['column_A'].fillna(method='ffill') df = df[df['column_A'] == 'NEW_RECORDS'] Depending on the actual values in your data frame, you may need to first replace what appear to be empty/space strings with NaN's: df['column_A'] = df['column_A'].replace(r'^\s*$', np.nan, regex=True)
H: how to check the distribution of the training set and testing set are similar I have been playing the Kaggle Competition and I find there is a situation that the distribution of the training set and testing set are different, so I am wondering how to check the distribution of the training set and testing set are similar. And I search it and find a blog which check the similarity of distributions by converting it into a binary classification problem. If it gets a high AUC, the distribution of the training set and the testing set must be different. And the idea he gives as follows: If there exists a covariate shift, then upon mixing train and test we’ll still be able to classify the origin of each data point (whether it is from test or train) with good accuracy. But I still can't understand why he can check the similarity of these two distributions in this way. And are there other ways to check the similarity of it? It will be appreciated, if anyone could help me. AI: It looks like the person that wrote the blog is combining the samples from the test set and train set into one dataframe and then predicting if each sample comes from the test set or training set (his y variable is called “is_train” which indicates whether the sample came from the training set or not). I think his point is that if you are able to accurately classify whether the sample comes from the test or training set then the predictor variables have different underlying distributions. That would signify that your original model will probably not work well on this test data. Also — he is using AUROC as the performance metric. A high AUROC means that the model is performing well, and in this case it means that there is a big difference in distributions of predictor variables between the training and test set. Ideally, the distribution of the predictors for the training and test set should be the same, so you would want to get an AUROC that is close to 0.5. I think this situation would only be relevant in cases where you have your model deployed and you need to check if your model is still relevant over time. If you are building a new model you shouldn’t need to do something like this because the test data is randomly sampled from the dataset. Additionally, if you’re doing cross validation then there is even less reason to worry about something like that.
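A minimal sketch of that idea (assuming train and test are pandas DataFrames with identical, purely numeric predictor columns) could look like this:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# label each row by its origin and mix the two sets
train['is_train'] = 1
test['is_train'] = 0
mixed = pd.concat([train, test], axis=0, ignore_index=True)

X = mixed.drop(columns='is_train')
y = mixed['is_train']

# try to predict the origin of each row with cross-validation
probs = cross_val_predict(RandomForestClassifier(n_estimators=100),
                          X, y, cv=5, method='predict_proba')[:, 1]

print(roc_auc_score(y, probs))  # ~0.5: similar distributions; close to 1: covariate shift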
H: How can I get variable values from a plot? I would like to know if there is any way in python to automatically determine the values of point a and b marked in the graph below. Said a different way, finding the values when my plot shows a marked decrease or after a marked increase. This is the source code of my plot if it is helpful: #load data min_duration = data['Time_Secondes'].min() max_duration = data['Time_Secondes'].max() xr_ = list(range(min_duration, max_duration, 60)) y_ = [] for i in range(0,(len(xr_)-1)): Time_proba_ = np.logical_and(data['Time_Secondes'].values >= xr_[i], data['Time_Secondes'].values < xr_[i+1]) Time_proba_1_ = np.logical_and(np.logical_and(data['Time_Secondes'].values >= xr_[i], data['Time_Secondes'].values < xr_[i+1]), data['outcome'].values==1) y_.append(sum(Time_proba_1_ )/sum(Time_proba_)) import matplotlib matplotlib.pyplot.plot(xr_[1:len(xr_)], y_) ``` AI: It seems what you are looking for is a function of your data, not of matplotlib. I would think of this as a second derivative problem -- you care about differences in slopes of successive lines. Sort a dataframe [x,y] by x values, in increasing order. Calculate the first discrete derivative delta1 = (y[i+1] - y[i]) / (x[i+1] - x[i]). This tells you the slope of each line. Now you can do one of two things, depending on what you want to identify. Calculate the second discrete derivative delta2 = (delta1[i+1] - delta1[i]) / (x[i+1] - x[i]). Calculate the difference of successive slopes diff = delta1[i+1] - delta1[i] Select all is for which absolute value of {diff or delta2} is greater than a certain threshold. Selecting all x values for which np.abs(x[i]) > threshold will give you the x-locations of these large changes in slope. Here's an example I just coded up,for identifying the jump in a jump function $x / |x|$. import numpy as np import pandas as pd n_data = 100 x = np.random.uniform(-10,10,n_data) y = x / np.abs(x) delta = [0] * n_data diff = [0] * n_data data = pd.DataFrame({'x':x, 'y':y}) data = data.sort_values('x') for i in range(n_data - 1): a, fa = data.iloc[i] b, fb = data.iloc[i+1] diff_quot = (fb - fa) / (b-a) delta[i] = diff_quot data['delta'] = delta for i in range(n_data - 1): a, fa, da = data.iloc[i] b, fb, db = data.iloc[i+1] dd = db - da diff[i] = np.abs(dd) data['diff'] = diff plt.plot(data['x'],data['y']) plt.plot(data['x'],data['diff']) threshold = 1 jumps = [dd > threshold for dd in diff] Here is the jump function, whose jump we might want to locate Here is a graph of the differences in slopes, where we see the jump is located near x = 0
H: With a MLP (regression), is it appropriate to initialize bias in the final layer to be a value near the expected mean? For instance, when predicting IQ in a population you would expect the mean to be 100. If you initialize the bias in the final layer you are basically giving the network a head start, telling it in what range of values it should be guessing. Another way of getting at this would be to predict a scaled outcome value with a mean of 0, which would make the standard bias initializer value of 0 be spot-on. Are there any guiding rules/norms when it comes to this situation? Are both methods appropriate? AI: Cool idea! Looks like it really helps. Training starts with smaller error, so you might be able to train the model in shorter time with this trick. You should be okay as long as you initialise the weights properly. I trained model with different biases and plotted error at the start and the end of of the training. Expected mean of the data is 100. See code below. from keras.models import Sequential from keras.layers import Dense from keras.initializers import Constant, Zeros, Ones from keras.metrics import mean_squared_error import matplotlib.pyplot as plt import numpy as np def getData(): n = 200 X = np.random.randn(n, 2) Y = 20 * X[:,0] + 10 * X[:,1] + 100 return X, Y def getModel(bias): m = Sequential() m.add(Dense(1, input_shape=(2,), bias_initializer=bias)) m.compile('adam', loss='mse') return m X, Y = getData() constants = [0, 1, 10, 50, 100, 150, 200] loss_at_start = [] loss_at_end = [] for c in constants: m = getModel(Constant(c)) m.fit(X,Y, epochs=20, validation_split=0.2, validation_steps=20, steps_per_epoch=1000) loss_at_start.append(m.history.history['loss'][0]) loss_at_end.append(m.history.history['loss'][-1]) plt.plot(constants, loss_at_start) plt.plot(constants, loss_at_end) plt.xlabel('Bias') plt.ylabel('Loss') plt.legend(['Loss at training start','Loss at training end']) plt.title('Expected mean: 100') plt.show()
H: Combining Latitude/Longitude position into single feature I have been playing with two dimensional machine learning using pandas (trying to do something like this), and I would like to combine Lat/Long into a single numerical feature -- ideally in a linear fashion. Is there a "best practice" to do this? AI: A note: for those who've ended here looking for a hashing technique, geohash is likely your best choice. Representing latitude and longitude in a single linear scale is not possible due to the fact that their domain is inherently a 3D space. Reducing that as per your needs would require a spatial flattening technique that's unheard of to me. Reasoning As far as lat/long merging goes, the best of best practices would be to resort to the Haversine formula, which calculates the distance between two points over a spherical surface, and receives those points' coordinates as input. One way to incorporate that in your use case - where each point should probably have an independent lat/long combination - would be to assume the distance's origin point coordinates to be $(\varphi_1, \lambda_1) = (0, 0)$, which would render $$d =2r \arcsin\left(\sqrt{\sin^2\left(\frac{\varphi_2 - 0}{2}\right) + \cos(0) \cos(\varphi_2)\sin^2\left(\frac{\lambda_2 - 0}{2}\right)}\right)$$ $$= 2r \arcsin\left(\sqrt{\sin^2\left(\frac{\varphi_2}{2}\right) + \cos(\varphi_2)\sin^2\left(\frac{\lambda_2}{2}\right)}\right)$$ With $r$ being Earth's radius (~6371km) and $(\varphi_2, \lambda_2)$ your point's latitude and longitude, respectively. However, as stated before, that couldn't possibly give you a linear relation, as you can see by 3d plotting the function: Implementation The circumstances imply you're likely to be using pandas, or at least should be. Here's an example implementation of this relativized Haversine formula: from math import radians, cos, sin, asin, sqrt def single_pt_haversine(lat, lng, degrees=True): """ 'Single-point' Haversine: Calculates the great circle distance between a point on Earth and the (0, 0) lat-long coordinate """ r = 6371 # Earth's radius (km). Have r = 3956 if you want miles # Convert decimal degrees to radians if degrees: lat, lng = map(radians, [lat, lng]) # 'Single-point' Haversine formula a = sin(lat/2)**2 + cos(lat) * sin(lng/2)**2 d = 2 * r * asin(sqrt(a)) return d Which could be used as in the below minimal example: >>> import pandas as pd >>> df = pd.DataFrame([[45.0, 120.0], [60.0, 30.0]], columns=['x', 'y']) >>> df x y 0 45.0 120.0 1 60.0 30.0 >>> df['harvesine_distance'] = [single_pt_haversine(x, y) for x, y in zip(df.x, df.y)] >>> df x y harvesine_distance 0 45.0 120.0 12309.813344 1 60.0 30.0 7154.403197
H: How to encode a time series as an image to feed it into CNN? I want to try CNN in the task of stock chart pattern recognition. I suspect that feeding a line chart won't work because the image will have a lot of empty pixels. What time series encoding options are available to get a lossless, dense image? AI: 1D CNN : You do not have to convert it into an image for CNN. CNN can work directly on time-series (1D Convolution Network). More Details : What is a 1D Convolutional Layer in Deep Learning? Examples for time-series : https://towardsdatascience.com/stock-price-prediction-system-using-1d-cnn-with-tensorflow-js-machine-learning-easy-and-fun-fe5323e68ffb https://jeddy92.github.io/JEddy92.github.io/ts_seq2seq_conv/ https://machinelearningmastery.com/how-to-develop-convolutional-neural-network-models-for-time-series-forecasting/ Image as input to CNN Image can be fed into CNN like any other image. CNN will learn to ignore all whitespace and tune first few filters as edge-detection filters. In such models, output variable will be the direction or a band of percentages (say next stock tick will be in a band of 0 - 1% or 0 -1%).
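For instance, a minimal Keras sketch of such a 1D CNN that predicts the next value from a window of past ticks (the window of 60 steps with one feature per step, and all layer sizes, are arbitrary choices) could be:

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

window = 60  # number of past time steps fed to the network

model = Sequential()
model.add(Conv1D(32, kernel_size=5, activation='relu', input_shape=(window, 1)))
model.add(MaxPooling1D(2))
model.add(Conv1D(16, kernel_size=3, activation='relu'))
model.add(Flatten())
model.add(Dense(1))  # next value; swap for a softmax layer to predict a direction or band
model.compile(optimizer='adam', loss='mse')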
H: How to plot logistic regression decision boundary? I am running logistic regression on a small dataset which looks like this: After implementing gradient descent and the cost function, I am getting a 100% accuracy in the prediction stage, However I want to be sure that everything is in order so I am trying to plot the decision boundary line which separates the two datasets. Below I present plots showing the cost function and theta parameters. As can be seen, currently I am printing the decision boundary line incorrectly. Extracting data clear all; close all; clc; alpha = 0.01; num_iters = 1000; %% Plotting data x1 = linspace(0,3,50); mqtrue = 5; cqtrue = 30; dat1 = mqtrue*x1+5*randn(1,50); x2 = linspace(7,10,50); dat2 = mqtrue*x2 + (cqtrue + 5*randn(1,50)); x = [x1 x2]'; % X subplot(2,2,1); dat = [dat1 dat2]'; % Y scatter(x1, dat1); hold on; scatter(x2, dat2, '*'); hold on; classdata = (dat>40); Computing Cost, Gradient and plotting % Setup the data matrix appropriately, and add ones for the intercept term [m, n] = size(x); % Add intercept term to x and X_test x = [ones(m, 1) x]; % Initialize fitting parameters theta = zeros(n + 1, 1); %initial_theta = [0.2; 0.2]; J_history = zeros(num_iters, 1); plot_x = [min(x(:,2))-2, max(x(:,2))+2] for iter = 1:num_iters % Compute and display initial cost and gradient [cost, grad] = logistic_costFunction(theta, x, classdata); theta = theta - alpha * grad; J_history(iter) = cost; fprintf('Iteration #%d - Cost = %d... \r\n',iter, cost); subplot(2,2,2); hold on; grid on; plot(iter, J_history(iter), '.r'); title(sprintf('Plot of cost against number of iterations. Cost is %g',J_history(iter))); xlabel('Iterations') ylabel('MSE') drawnow subplot(2,2,3); grid on; plot3(theta(1), theta(2), J_history(iter),'o') title(sprintf('Tita0 = %g, Tita1=%g', theta(1), theta(2))) xlabel('Tita0') ylabel('Tita1') zlabel('Cost') hold on; drawnow subplot(2,2,1); grid on; % Calculate the decision boundary line plot_y = theta(2).*plot_x + theta(1); % <--- Boundary line % Plot, and adjust axes for better viewing plot(plot_x, plot_y) hold on; drawnow end fprintf('Cost at initial theta (zeros): %f\n', cost); fprintf('Gradient at initial theta (zeros): \n'); fprintf(' %f \n', grad); The above code is implementing gradient descent correctly (I think) but I am still unable to show the boundary line plot. Any suggestions would be appreciated. logistic_costFunction.m function [J, grad] = logistic_costFunction(theta, X, y) % Initialize some useful values m = length(y); % number of training examples grad = zeros(size(theta)); h = sigmoid(X * theta); J = -(1 / m) * sum( (y .* log(h)) + ((1 - y) .* log(1 - h)) ); for i = 1 : size(theta, 1) grad(i) = (1 / m) * sum( (h - y) .* X(:, i) ); end end EDIT: As per the below answer by @Esmailian, now I have something like this: [m, n] = size(x); x1_class = [ones(m, 1) x1' dat1']; x2_class = [ones(m, 1) x2' dat2']; x = [x1_class ; x2_class] AI: Regarding the code You should plot the decision boundary after training is finished, not inside the training loop, parameters are constantly changing there; unless you are tracking the change of decision boundary. x1 (x2) is the first feature and dat1 (dat2) is the second feature for the first (second) class, so the extended feature space x for both classes should be the union of (1, x1, dat1) and (1, x2, dat2). 
Decision boundary Assuming that data is $\boldsymbol{x}=(x_1, x_2)$ ((x, dat) or (plot_x, plot_y) in the code), and parameter is $\boldsymbol{\theta}=(\theta_0, \theta_1,\theta_2)$ ((theta(1), theta(2), theta(3)) in the code), here is the line that should be drawn as decision boundary: $$x_2 = -\frac{\theta_1}{\theta_2} x_1 - \frac{\theta_0}{\theta_2}$$ which can be drawn as a segment by connecting two points $(0, - \frac{\theta_0}{\theta_2})$ and $(- \frac{\theta_0}{\theta_1}, 0)$. However, if $\theta_2=0$, the line would be $x_1=-\frac{\theta_0}{\theta_1}$. Where this comes from? Decision boundary of Logistic regression is the set of all points $\boldsymbol{x}$ that satisfy $${\Bbb P}(y=1|\boldsymbol{x})={\Bbb P}(y=0|\boldsymbol{x}) = \frac{1}{2}.$$ Given $${\Bbb P}(y=1|\boldsymbol{x})=\frac{1}{1+e^{-\boldsymbol{\theta}^t\boldsymbol{x_+}}}$$ where $\boldsymbol{\theta}=(\theta_0, \theta_1,\cdots,\theta_d)$, and $\boldsymbol{x}$ is extended to $\boldsymbol{x_+}=(1, x_1, \cdots, x_d)$ for the sake of readability to have$$\boldsymbol{\theta}^t\boldsymbol{x_+}=\theta_0 + \theta_1 x_1+\cdots+\theta_d x_d,$$ decision boundary can be derived as follows $$\begin{align*} &\frac{1}{1+e^{-\boldsymbol{\theta}^t\boldsymbol{x_+}}} = \frac{1}{2} \\ &\Rightarrow \boldsymbol{\theta}^t\boldsymbol{x_+} = 0\\ &\Rightarrow \theta_0 + \theta_1 x_1+\cdots+\theta_d x_d = 0 \end{align*}$$ For two dimensional data $\boldsymbol{x}=(x_1, x_2)$ we have $$\begin{align*} & \theta_0 + \theta_1 x_1+\theta_2 x_2 = 0 \\ & \Rightarrow x_2 = -\frac{\theta_1}{\theta_2} x_1 - \frac{\theta_0}{\theta_2} \end{align*}$$ which is the separation line that should be drawn in $(x_1, x_2)$ plane. Weighted decision boundary If we want to weight the positive class ($y = 1$) more or less using $w$, here is the general decision boundary: $$w{\Bbb P}(y=1|\boldsymbol{x}) = {\Bbb P}(y=0|\boldsymbol{x}) = \frac{w}{w+1}$$ For example, $w=2$ means point $\boldsymbol{x}$ will be assigned to positive class if ${\Bbb P}(y=1|\boldsymbol{x}) > 0.33$ (or equivalently if ${\Bbb P}(y=0|\boldsymbol{x}) < 0.66$), which implies favoring the positive class (increasing the true positive rate). Here is the line for this general case: $$\begin{align*} &\frac{1}{1+e^{-\boldsymbol{\theta}^t\boldsymbol{x_+}}} = \frac{1}{w+1} \\ &\Rightarrow e^{-\boldsymbol{\theta}^t\boldsymbol{x_+}} = w\\ &\Rightarrow \boldsymbol{\theta}^t\boldsymbol{x_+} = -\text{ln}w\\ &\Rightarrow \theta_0 + \theta_1 x_1+\cdots+\theta_d x_d = -\text{ln}w \end{align*}$$
H: How do I decide if I need to go for Normalization and not Standardization or vice-versa? While designing a ML model, how do I decide if I need to go for Normalization and not Standardization or vice-versa? On what factor is this decision made? AI: Before we start, keep in mind that in most cases it doesn't make much of a difference which of the two you'll choose. Now to answer your question, generally speaking the choice should be made based on what model you want to employ: If you use a distance-based estimator (e.g. k-NN, k-means) it's better to normalize your features so that they occupy the same exact range of values (i.e. $[0,1]$). This forces your estimator to treat each feature with equal importance. If you're using Neural Networks, it's better to standardize your features, because gradient descent has some useful properties when your data is centered around $0$ with unit variance. Tree-based algorithms don't require any form of scaling, so it's irrelevant if you scale or normalize your features. As a rule of thumb, I usually standardize the data (unless I'm going to strictly work with distance-based algorithms).
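In scikit-learn the two options correspond to MinMaxScaler (normalization to $[0,1]$) and StandardScaler (standardization to zero mean and unit variance); a quick sketch:

import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])

X_norm = MinMaxScaler().fit_transform(X)  # each column squeezed into [0, 1]
X_std = StandardScaler().fit_transform(X)  # each column rescaled to mean 0, std 1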
H: Multi class Imbalanced datasets under-sampling imblearn I have an imbalanced dataset and I am looking to under-sample. Even though the oversampling process takes less time, the model training takes a lot of time. I have taken a look at the imbalanced-learn website. There are several under-sampling methods. I am looking for a method that undersamples the classes while keeping as much information intact as possible. I tried the .ClusterCentroids() method and found it takes way too long to balance the classes. I have tried other methods that are mentioned on the website. However, even with sampling_strategy set to equal values, e.g. sampling_strategy={0: 2000, 1: 2000, 2: 2000}, the resulting dataset is not balanced, such as with the .CondensedNearestNeighbour() and .AllKNN() methods. Would anyone be able to help me create a class-balanced dataset using these methods? Thanks Michael AI: If you're looking for a fast workaround to solve this you have to increase the n_neighbors parameter in AllKNN, but I wouldn't recommend using this type of undersampling algorithm for what you want to do! Explanation: AllKNN is an under-sampling technique based on Edited Nearest Neighbors. These techniques try to under-sample your majority classes by removing samples that are close to the minority class, in order to make your classes more separable. The way they work is that they remove samples from the majority class that have at least 1 nearest neighbor in the minority class. The thing is that if the classes are separable enough and the majority samples have no minority nearest neighbors, they can't be removed! If you want a technique that undersamples your data in order to get exactly the same number of samples from the minority and the majority class, I'd recommend using a different technique (e.g. ClusterCentroids, which you've used, is one such technique). ENN-based undersampling techniques aren't built for that. You can also read this tutorial which compares the different resampling algorithms in imblearn. As a final remark, if possible, I'd recommend oversampling...
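As a hedged sketch of that last point (the API names are from imbalanced-learn, X and y are your feature matrix and labels, and the exact counts assume every class has at least 2000 samples), either of the following returns a dataset with exactly the requested class sizes; RandomUnderSampler is a fast alternative if ClusterCentroids is too slow:

from imblearn.under_sampling import RandomUnderSampler, ClusterCentroids

strategy = {0: 2000, 1: 2000, 2: 2000}

# fast: randomly keeps a subset of the original samples per class
rus = RandomUnderSampler(sampling_strategy=strategy, random_state=0)
X_res, y_res = rus.fit_resample(X, y)

# slower: replaces majority-class samples with k-means cluster centroids
cc = ClusterCentroids(sampling_strategy=strategy, random_state=0)
X_res, y_res = cc.fit_resample(X, y)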
H: How to create dictionary with multiple keys from dataframe in python? I have a pandas dataframe as follows, I want to convert it to a dictionary format with 2 keys as shown: id name energy fibre 0 11005 4-Grain Flakes 1404 11.5 1 35146 4-Grain Flakes, Gluten Free 1569 6.1 2 32570 4-Grain Flakes, Riihikosken Vehnämylly 1443 11.2 I am expecting the result to be of nutritionValues = { ('4-Grain Flakes', 'id'): 11005, ('4-Grain Flakes', 'energy'): 1404, ('4-Grain Flakes', 'fibre'): 11.5, ('4-Grain Flakes, Gluten Free', 'id'): 11005, ('4-Grain Flakes, Gluten Free', 'energy'): 1569, ('4-Grain Flakes, Gluten Free', 'fibre'): 6.1, ('4-Grain Flakes, Riihikosken Vehnämylly', 'id'): 32570, ('4-Grain Flakes, Riihikosken Vehnämylly', 'energy'): 1443, ('4-Grain Flakes, Riihikosken Vehnämylly', 'fibre'): 11.2} foods, fiber = multidict({ '4-Grain Flakes': 11.5, '4-Grain Flakes, Gluten Free': 6.1, '4-Grain Flakes, Riihikosken Vehnämylly': 11.2}) How can I achieve this? AI: In order to be able to create a dictionary from your dataframe, such that the keys are tuples of combinations (according to your example output), my idea would be to use a Pandas MultiIndex. This will then generate a dictionary of the form you want. First I just recreate your example dataframe (would be nice if you provide this code in the future!): import pandas as pd # Create the example dataframe df = pd.DataFrame(["4-Grain Flakes", "4-Grain Flakes, Gluten Free", "4-Grain Flakes, Riihikosken Vehnämylly"]) df["id"] = [11005, 35146, 32570] df["energy"] = [1404, 1569, 1443] df["fibre"] = [11.5, 6.1, 11.2] df.columns = ["name"] + list(df.columns[1:]) print(df) name id energy fibre 0 4-Grain Flakes 11005 1404 11.5 1 4-Grain Flakes, Gluten Free 35146 1569 6.1 2 4-Grain Flakes, Riihikosken Vehnämylly 32570 1443 11.2 Now we can create the combinations of each value in "name" with each of the other column names. I will use lists, within a list comprehension, where I bundle up the values together into tuples. We end with a list of tuples: names = df.name.tolist() others = list(df.columns) others.remove("name") # We don't want "name" to be included index_tuples = [(name, other) for name in names for other in others] We can create the MultiIndex from this list of tuples as follows: multi_ix = pd.MultiIndex.from_tuples(index_tuples) Now we can create a new dataframe using out multi_ix. To populate this dataframe, notice that we simple need to row-wise values from columns ["id", "energy", "fibre"]. 
We can do this easily by extracting them as an n * 3 NumPy array (using the values attribute of the dataframe) and then flattening the matrix, using NumPy's ravel method: df1 = pd.DataFrame(df[others].values.ravel(), index=multi_ix, columns=["data"]) print(df1) data 4-Grain Flakes id 11005.0 energy 1404.0 fibre 11.5 4-Grain Flakes, Gluten Free id 35146.0 energy 1569.0 fibre 6.1 4-Grain Flakes, Riihikosken Vehnämylly id 32570.0 energy 1443.0 fibre 11.2 Now we can simply use the to_dict() method of the dataframe to create the dictionary you are looking for: nutritionValues = df1.to_dict()["data"] print(nutritionValues) {('4-Grain Flakes', 'energy'): 1404.0, ('4-Grain Flakes', 'fibre'): 11.5, ('4-Grain Flakes', 'id'): 11005.0, ('4-Grain Flakes, Gluten Free', 'energy'): 1569.0, ('4-Grain Flakes, Gluten Free', 'fibre'): 6.1, ('4-Grain Flakes, Gluten Free', 'id'): 35146.0, ('4-Grain Flakes, Riihikosken Vehnämylly', 'energy'): 1443.0, ('4-Grain Flakes, Riihikosken Vehnämylly', 'fibre'): 11.2, ('4-Grain Flakes, Riihikosken Vehnämylly', 'id'): 32570.0} It is also possible to get your final example of a multidict directly from the multi-indexed dataframe. You just need to use multi-index slicing: fibre_df = df1.loc[(slice(None), ["fibre"]), :] print(fibre_df) data 4-Grain Flakes fibre 11.5 4-Grain Flakes, Gluten Free fibre 6.1 4-Grain Flakes, Riihikosken Vehnämylly fibre 11.2 You can then generate a dictionary as before: d = df1.loc[(slice(None), ["fibre"]), :].to_dict()["data"] print(d) {('4-Grain Flakes', 'fibre'): 11.5, ('4-Grain Flakes, Gluten Free', 'fibre'): 6.1, ('4-Grain Flakes, Riihikosken Vehnämylly', 'fibre'): 11.2} And you can drop the "fibre" value from the tuple-keys with a simple dictionary comprehension: final_dict = {k[0]: v for k, v in d.items()} print(final_dict) {'4-Grain Flakes': 11.5, '4-Grain Flakes, Gluten Free': 6.1, '4-Grain Flakes, Riihikosken Vehnämylly': 11.2}
H: Can we use k fold Cross Validation without any extra (excluded) Test Set? I have seen this in two papers: the authors use 10-fold cross-validation and then present the results from this validation, or, even odder, the results from the best fold, as their modelling result. There has been no testing data put aside to validate the final model. These are peer-reviewed papers I am talking about. Is this right to do? Can one even cite such sources? AI: The key question is Is K-fold cross-validation used to select the final model (or algorithm)? If yes, as you said, then the final model should be tested on an extra set that has no overlap with the data used in K-fold CV (i.e. a test set). If no, the average score reported from K-fold CV is the final test score, no extra set is required. Therefore, if the authors only picked the best model from all K models, it should have been tested on an extra set. The score of the model on the validation fold (from K-fold CV) is not acceptable as a test score. Also, here is a related post on this site (my answer) that goes into more detail about test and validation sets (scores). EDIT: I have found a similar question on stats.stackexchange.com. Also, this comment by amoeba suggests "nested CV" instead of "CV + test set", which I think is worth fleshing out here. K-fold CV evaluation 1. For k = [1,..,K] 1. tr = (K-1)/K of data, ts = 1/K of data 2. m[k] = model trained using tr (can be further split into tr2 + v) 3. score[k] = score of m[k] on ts 2. Test score = average of score[1],...,score[K] K-fold CV selection and evaluation 1. tr = 80% of data, ts = 20% of data (or some other ratio) 2. For k = [1,..,K] 1. tr2 = (K-1)/K of tr, v = 1/K of tr 2. m[k] = model trained using tr2 (can be further split into tr3 + v2) 3. score[k] = score of m[k] on v 3. M = best of m[1],...,m[K] 4. Test score = score of M on ts Nested k-fold CV selection and evaluation 1. For k = [1,..,K] 1. tr = (K-1)/K of data, ts = 1/K of data 2. For k2 = [1,..,K2] 1. tr2 = (K2-1)/K2 of tr, v = 1/K2 of tr 2. m[k2] = model trained using tr2 (can be further split into tr3 + v2) 3. score[k2] = score of m[k2] on v 3. M = best of m[1],...,m[K2] 4. score[k] = score of M on ts 2. Test score = average of score[1],...,score[K] Note that in these algorithms, model trained includes parameter learning, hyper-parameter tuning, and model selection (of course, except for the outermost model selection). For example, for selecting between hyper-parameters 8 and 12, we need a deeper loop inside model trained.
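As a rough illustration, the nested CV procedure above can be expressed in a few lines of scikit-learn by wrapping the inner model selection in GridSearchCV and the outer evaluation in cross_val_score. This is only a minimal sketch; the dataset, estimator and parameter grid are placeholders.

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
param_grid = {"C": [0.1, 1, 10]}                      # hyper-parameters tuned in the inner loop
inner_cv = GridSearchCV(SVC(), param_grid, cv=3)      # model selection on the tr2 / v folds
outer_scores = cross_val_score(inner_cv, X, y, cv=5)  # evaluation on the outer ts folds
print(outer_scores.mean(), outer_scores.std())        # the reported test score and its spread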
H: Running multiple times of a model is for model randomness or data randomness? When a paper reports the average and std of a model on a dataset, does it mean that they have changed the split of training and test sets and run the model multiple times, or that they just ran the model on a constant split multiple times to measure the randomness of the model? Which one is more common? AI: Usually, we report the mean and variance for k-fold cross-validation and similar techniques, where the model is run multiple times on different data splits. But the variance can also come from the second case: for example, a neural network can be initialized with random weights multiple times and evaluated on the same split. If the paper does not explicitly say that they re-split the dataset, then it is more likely that this variance comes from random initialization. You should read the experimental procedures used and try to determine all sources of variance behind the paper's results. So it might be either one, or a combination of both dataset random splits and model random initialization.
H: how to implement a hierarchical clustering technique using parallel execution in R In R, currently to implement wards method for hierarchical clustering, I use the following code - results <- hclust(data, "ward.D2"). However as my data size has increased exponentially, I wanted to implement parallel execution. Since I found no package, I tried to re-write the hclust method, however most of its code is written in fortan. Does there exist any way to implement parallel execution here? AI: The complexity of that algorithm is O(n³), and it needs O(n²) memory. So if your data grows "exponentially", you better settle for a sampling-based approach! Seriously: benchmark the run time and memory requirements for 5k, 10k, 20k, 40k, 80k instances. You should be able to observe something between O(n²) (for computing the distance matrix) and O(n³) for the clustering. Then estimate how long this will take (and how much memory you'll need). Even if you are very optimistic, divide this by the number of threads M you can run. A more realistic value will be M/2 if you can run M threads. When you are really convinced that you'll be okay with the resulting runtime, just implement the algorithm yourself. It is a very simple algorithm. No need to study the Fortran code. But most likely your code will be 2x slower (20x if using pure Python or R) than a well written old Fortran code...
H: algorithm to predict numeric values with sklearn I'm new to ML and trying to learn it. I scraped information (using python) from a website where people try to sell their cars. I put the data in a pandas data frame. Now I'm wondering how I should predict a car's price according to its manufacturing year and its mileage! I drew some plots and there is a correlation between price-year and price-mileage! AI: You should try linear regression with scikit-learn. You can use both year and mileage as predictors for price.
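A minimal sketch of what that could look like, assuming your dataframe has columns named "year", "mileage" and "price" (the file name and column names are placeholders; adjust them to your scraped data):

import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("cars.csv")                    # hypothetical file with the scraped ads
X = df[["year", "mileage"]]
y = df["price"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print(model.score(X_test, y_test))              # R^2 on held-out data
print(model.predict([[2015, 60000]]))           # price estimate for one example car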
H: What's the point of the test set? I get the point of a validation and training set, but the importance of a test set doesn't click for me. Let's say you train a model, and you try your best to avoid overfitting by testing your model on the validation set. After you've decided you have a model you're proud of, you do a final sanity check on the test set, and let's say the performance is trash. Are you really going to start all over? What decision-making does it inform? In my workplace, the way we structure timelines, there's no time to start over. AI: The point of a test set is to give you a final, unbiased performance measure of your entire model building process. This includes all modelling decisions in your pipeline, so any preprocessing, algorithm selection, feature engineering, feature selection, hyperparameter tuning and how you trained your model in general (5-fold? Bootstrapping? etc.). All of these decisions can lead to overfitting; for instance, selecting a set of hyperparameters that are coincidentally optimal for a particular validation set but not for the general population. If we have no test set you would not be able to identify this and would potentially be reporting highly optimistic scores. Also, because the above modelling pipeline can get very complex, the possibility of leaking data and overfitting becomes very high. If you tune to your validation set, how will you know if your entire modelling process is not leaking data (and therefore overfitting)? You bring up a good point; of course if we see that the test set score is poor then we will probably go back and tweak again. Thus, this just demotes the test set into a validation one if you use it too many times, as you now run into the possibility of overfitting the test set (see almost every Kaggle competition). However, through repeated test set evaluation (train the model, then test it, then repeat with a different partitioning) you will at least get a gauge on how variable your model is to help mitigate this problem. The number of times you repeat will depend on how much the test set scores vary and how much uncertainty you are willing to accept (also time constraints). In my opinion, in a business setting you should always make time to properly test your model. The dangers of overfitting are way too high and, even worse, you would not even know it. If the test set scores end up being "trash" then at least you know the model is trash and you don't use it and/or you change your approach. This is way better than thinking the model is fantastic based on non-rigorous validation and then having the model fail in production. The scientific method is there for a reason, right?
H: How to mathematically define the architecture of neural network model? And the function space associated with it? My goal is to properly define a search space for neural architecture search (NAS). I think a proper definition must handle the following issues. how to mathematically quantify the topology? how to define the number of neurons in each layer? activation functions? AI: You can use directed graphs. Each neuron is a node in the graph. Each neuron has a list of its neighbours and an activation function. You can add or remove nodes in this graph. If you just want to search for simple architectures you can use two lists: one for the number of neurons in each layer and one for the activation function of the neurons in that layer. For example, the following describes a neural network with 3 layers; the first layer has 5 neurons and each of them has a ReLU activation function. [5,3,2] ['Relu', 'sigmoid', 'Relu'] Hope that helps.
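To make the list representation concrete, here is a small sketch (assuming Keras) that builds a network from the two lists; the layer sizes, activations and input dimension below are arbitrary placeholders.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

units = [5, 3, 2]                              # neurons per layer
activations = ['relu', 'sigmoid', 'relu']      # activation per layer

def build_model(units, activations, input_dim=10):
    model = Sequential()
    for i, (u, act) in enumerate(zip(units, activations)):
        if i == 0:
            model.add(Dense(u, activation=act, input_dim=input_dim))
        else:
            model.add(Dense(u, activation=act))
    return model

model = build_model(units, activations)
model.summary()

A NAS search space over such simple architectures is then just the set of all (units, activations) pairs you allow.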
H: Having trouble figuring out how loss was calculated for SQuAD task in BERT paper The BERT Paper https://arxiv.org/pdf/1810.04805.pdf Section 4.2 covers the SQuAD training. So from my understanding, there are two extra parameters trained, they are two vectors with the same dimension as the hidden size, so the same dimensions as the contextualized embeddings in BERT. They are S (for start) and E (for End). For each, a softmax is taken with S and each of the final contextualized embeddings to get a score for the correct Start position. And the same thing is done for E and the correct end position. I get up to this part. But I am having trouble figuring out how the did the labeling and final loss calculations, which is described in this paragraph "and the maximum scoring span is used as the prediction. The training objective is the loglikelihood of the correct start and end positions." What do they mean by "maximum scoring span is used as the prediction"? Furthermore, how does that play into "The training objective is the loglikelihood of the correct start and end positions"? From this Source: https://ljvmiranda921.github.io/notebook/2017/08/13/softmax-and-the-negative-log-likelihood/ It says the log-likelihood is only applied to the correct classes. So the we are only calculating the softmax for the correct positions only, Not any of the in correct positions. If this interpretation is correct, then the loss will be Loss = -Log( Softmax(S*T(predictedStart) / Sum(S*Ti) ) -Log( Softmax(E*T(predictedEnd) / Sum(S*Ti) ) AI: From your description it sounds like for every position $i$ in the input text the model predicts $$p_S(i) = \mathbb P(\text{correct start position is } i)$$ and $$p_E(i) = \mathbb P(\text{correct end position is } i).$$ Now let $\hat s = \arg\max_i p_S(i)$ and $\hat e = \arg\max_i p_E(i)$ be the most probable start and end positions (according to the model). Then by "maximum scoring span is used as the prediction" they just mean that they output $(\hat e, \hat s)$ when predicting. Then “The training objective is the loglikelihood of the correct start and end positions” means that if the correct start and end positions are $s^*$ and $e^*$, they try to maximize the predicted probability of $s^*$ and $e^*$. If the start and end positions are independent then this is equal to $p_S(s^*) p_E(e^*)$ and taking the negative log the loss becomes $$L(e^*, s^*) = -\log p_S(s^*) -\log p_E(e^*).$$
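If it helps to see the answer in code form, here is a tiny NumPy sketch of that objective; T, S, E and the gold positions are random placeholders rather than actual BERT tensors, and independence of the start and end positions is assumed.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

hidden, n_tokens = 768, 20
T = np.random.randn(n_tokens, hidden)   # contextualized token embeddings (placeholder)
S = np.random.randn(hidden)             # learned start vector
E = np.random.randn(hidden)             # learned end vector

p_start = softmax(T @ S)                # probability of each token being the start
p_end = softmax(T @ E)                  # probability of each token being the end

s_star, e_star = 3, 7                   # correct (gold) start / end positions
loss = -np.log(p_start[s_star]) - np.log(p_end[e_star])   # negative log-likelihood

# prediction: the maximum scoring start and end positions
pred_span = (int(np.argmax(p_start)), int(np.argmax(p_end)))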
H: retrieve food groups from food item list I have a dataframe of food items as follows: I have to create a food_group list that gives the food group it belongs to, for-example all type of yogurts should be in one group called yogurt. I used a snippet to take the first segment of the comma separated name, but I do not get the result like putting all yogurts in one group food_group_0 = [i.split(',') for i in data['name']] food_group = [item[0] for item in food_group_0] #To count how many of each entry there are in the list you can use the Counter class in the collections module: from collections import Counter c = Counter(food_group) print(c) the dataframe 0 4-Grain Flakes 1 4-Grain Flakes, Gluten Free 2 4-Grain Flakes, Riihikosken Vehnämylly 3 Almond 4 Almond Drink, Sweetened, Alrpo 5 Almond Drink, Unsweetened, Alrpo 6 Amaranth Flakes 7 Anchovy 8 Apple, Average, With Skin 9 Apple, Domestic, Without Skin 10 Apple, Domestic, With Skin 11 Apple, Dried 12 Apple, Imported, Without Skin 13 Apple, Imported, With Skin 14 Apple Chips 15 Apple Crisp Delight, Apple, Oat Flakes 16 Apple Jam 17 Apple Juice, Unsweetened, Vitamin C 18 Apple Kissel, Apple Soup, Dried Apples 19 Apple Kissel, Apple Soup, Fresh Apples 20 Apple Pie, Basic Sweet Dough, Gluten-Free, Con... 21 Apple Pie, Basic Sweet Dough, Low-Fat Milk 22 Apple Pie, Basic Sweet Dough, Naturally Gluten... 23 Apple Pie, Basic Sweet Dough, Whole Milk 24 Apple Pie, Shortbread Crust 25 Apple Pie, Shortbread Crust, Gluten-Free, Cont... 26 Apple Pie, Shortbread Crust, Naturally Gluten-... 27 Apple Pie, Shortbread Crust With Sour Milk 28 Apple Pie, Soft, Low-Fat Milk 29 Apple Pie With Quark Filling, Shortbread Crust ... 4068 Yoghurt, Plain, A+, Fat 2.5%, 1 Ug Vitamin D, ... 4069 Yoghurt, Plain, A+, Fat 2.5%, Lactose-Free, 1 ... 4070 Yoghurt, Plain, A+, Fat 4%, 1 Ug Vitamin D, La... 4071 Yoghurt, Plain, A+, Fatfree, 1 Ug Vitamin D, L... 4072 Yoghurt, Plain, A+ Greek, 2 % Fat, Lactose-Fre... 4073 Yoghurt, Plain, Ab, 0.2% Fat, Probiotics 4074 Yoghurt, Plain, Ab, 2.5% Fat, Probiotics 4075 Yoghurt, Plain, Activia, 3.4% Fat 4076 Yoghurt, Plain, Arla Protein, 1% Fat, Lactose-... 4077 Yoghurt, Plain, Bulgarian, 9% Fat 4078 Yoghurt, Plain, Fat-Free 4079 Yoghurt, Plain, Fat-Free, Lactose-Free, 1 Ug V... 4080 Yoghurt, Plain, Fat-Free, Low-Lactose, 0.5 Ug ... 4081 Yoghurt, Plain, Greek, 7% Fat, Lactose-Free 4082 Yoghurt, Plain, Organic, 3% Fat 4083 Yoghurt, Plain, Pirkka Reducol, 2.5% Fat, Low-... 4084 Yoghurt, Turkish/Greek, 10% Fat 4085 Yoghurt, Turkish/Greek, 10% Fat, Lactose-Free 4086 Yoghurt Sauce 4087 Yoghurt With Jam, Fat-Free 4088 Yoghurt With Muesli, A+, Fat 3.5%, Low-Lactose 4089 Yoghurt With Quark, Flavoured, Arla, 1.4% Fat,... 4090 Yoghurt With Quark, Flavoured, Luonto+, 1.2% F... 4091 Yoghurt With Quark, Flavoured, Valio, 1.7% Fat... 4092 Zander, Pike-Perch 4093 Zucchini, Boiled Without Salt 4094 Zucchini, Summer Squash 4095 Zucchini Filled With Minced Meat 4096 Zucchini Filled With Soya And Rice 4097 Zucchini Filled With Vegetables AI: You can actually do the string-spitting and indexing on the columns themselves - no need to extract the column and do list comprehensions. 
Below I take whatever is before the first comma and put it in a column called food_group, and then take the field after that comma and put it in a new column called sub_cat (sub-category): df["food_group"] = df.name.str.split(",").str[0] df["sub_cat"] = df.name.str.split(",").str[1] Here is example output for some Yogurt data: id name food_group sub_cat 44 4082 Yoghurt, Plain, Organic, 3% Fat Yoghurt Plain 45 4083 Yoghurt, Plain, Pirkka Reducol, 2.5% Fat, Low-... Yoghurt Plain 46 4084 Yoghurt, Turkish/Greek, 10% Fat Yoghurt Turkish/Greek 47 4085 Yoghurt, Turkish/Greek, 10% Fat, Lactose-Free Yoghurt Turkish/Greek 48 4086 Yoghurt Sauce Yoghurt Sauce NaN Notice that any fields that are empty are filled with NaN. This will happen when your name column only contains a single field (i.e. no commas). EDIT Here is the top of my dataframe, after the operation above: In [13]: df.head(10) Out[13]: id name food_group sub_cat 0 0 4-Grain Flakes 4-Grain Flakes NaN 1 1 4-Grain Flakes, Gluten Free 4-Grain Flakes Gluten Free 2 2 4-Grain Flakes, Riihikosken Vehnämylly 4-Grain Flakes Riihikosken Vehnämylly 3 3 Almond Almond NaN 4 4 Almond Drink, Sweetened, Alrpo Almond Drink Sweetened 5 5 Almond Drink, Unsweetened, Alrpo Almond Drink Unsweetened 6 6 Amaranth Flakes Amaranth Flakes NaN 7 7 Anchovy Anchovy NaN 8 8 Apple, Average, With Skin Apple Average 9 9 Apple, Domestic, Without Skin Apple Domestic EDIT In order to replace a row with another string, given a desired string is in that row, you can perform the following (start from the original names and update the same column on each pass, so that earlier replacements are not overwritten): df["new_col"] = df["name"] for keyword in keywords: df["new_col"] = df["new_col"].apply(lambda x: keyword if keyword in x else x) where keywords could be a list like this: keywords = ["Yogurt", "chicken", "Drink"] It still requires manually defining a list of keywords and looping over them. You could also make this insensitive to the case of the word, by doing everything in e.g. lower-case: lower_keywords = ["yogurt", "chicken", "drink"] df["new_col"] = df["name"] for keyword in lower_keywords: df["new_col"] = df["new_col"].apply(lambda x: keyword if keyword in x.lower() else x) You could continue to make a multi-index from these two new columns, but it might not be necessary - it depends on what you want to do afterwards with the data.
H: How to explain get_weight with autoencoder in keras? I built an autoencoder model of three layers with 9 5 9. Input dim =9, encoder dim =5, output dim=9 When I get the model weights, weight1=autoencoder.layers[1].get_weights() weight2=autoencoder.layers[2].get_weights() print(weight1) [array([[ 0.0023533 , -0.02289476, -0.01658 , 0.03487475, -0.38416424], [ 0.00594878, 0.01835718, 0.01768207, 0.04458401, 0.10922299], [ 0.03288281, 0.22234452, 0.04393397, -0.14807932, 0.04412287], [ 0.16347113, 0.02014653, -0.05967368, -0.09127634, 0.9797626 ], [-0.0901033 , 0.1602385 , -0.16297013, 0.43326673, -0.2514738 ], [ 0.00272129, -0.00525797, 0.01420719, -0.04066049, -0.01261563], [ 0.40665478, -0.07740633, -0.02576585, 0.0406443 , -0.218632 ], [-0.00641229, 0.08050939, -0.02497054, -0.12399215, 0.10901988], [-0.14366671, 0.02168852, 0.19099002, -0.10509221, -0.4306924 ]], dtype=float32), array([-0.0133634 , 0.14412224, 0.13419336, -0.32834613, -0.31566525], dtype=float32)] There have two array in weight1. I know the first array means but how to explain the second array? AI: Does the layer contain two matrices, one for the actual weights and one for the biases? There could be one bias value for each of the columns in your weight matrix, depending on how you built your model.
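For a Keras Dense layer this is exactly what happens: get_weights() returns a list [kernel, bias], where the kernel has shape (input_dim, units) and the bias has one value per unit (i.e. per column of the weight matrix). A minimal sketch mirroring the 9-5-9 autoencoder (activations and layer details are placeholders):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

autoencoder = Sequential([
    Dense(5, activation='relu', input_dim=9),   # encoder layer
    Dense(9, activation='linear')               # decoder layer
])
kernel, bias = autoencoder.layers[0].get_weights()
print(kernel.shape)   # (9, 5): the weight matrix you already interpreted
print(bias.shape)     # (5,): one bias value per encoder unit

So the second array in your output is the bias vector of that layer, with one entry per hidden unit.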
H: Replace a value in a column if that value appears only once I have a dataframe, I want to replace the values in one column by "other" if the value count of that value in that column is exactly 1 i Food_group 0 Flake 1 Flake 2 Flake 3 Almond 4 Drink 5 Drink 6 Flake I have tried, data["food_group"] = data.food_group.apply(lambda x: "other" if x.value_counts()==1 else x) I got error AttributeError: 'str' object has no attribute 'value_counts' AI: It is easier if you store the value count separately to avoid redoing it inside the apply loop. You can do it like this: food_count = data["food_group"].value_counts() data["food_group"] = data["food_group"].apply(lambda x: "other" if food_count[x]==1 else x)
H: Are CNNs applicable on structured data? I can use CNN to classify MNIST images, but I don't know whether CNNs are applicable to iris data as well. If not, why? AI: CNNs are applicable wherever the input signal contains spatial information. For instance, you can think of the following list: 1-D signal: Recorded voices are examples of one-dimensional inputs where there are relations between adjacent entries of the input. This means adjacent entries have patterns which are valuable for different tasks, such as classification. You can employ 1-D convolutional layers for these input signals. 2-D signal: Images are an example of this kind, albeit they may have different channels, like RGB. It's clear that adjacent pixels are roughly like each other, and other than that, the adjacent pixels share patterns which may be repeated over the entire image multiple times. Consequently, 2-D convolutional layers can be employed for these signals. 3-D signal: Video frames are examples of this kind. Other than the similarity you can find inside each frame, the different frames that exist one after another on the time axis can have similar meaningful patterns which are replicated. You can employ 3-D convolutional layers for these signals. As for structured data, such as the rows of relational databases where each column corresponds to a specified feature, it does not make sense to use convolutional layers. The reason is clear: they do not have any spatial information. Adjacent rows should not share a common concept, otherwise they would be redundant. Moreover, they are not spatially related to each other. For structured data, people utilise dense layers.
H: Preparing dataframe to carry k-means clustering Im trying to apply 3 different algorithms of clustering on my dataset. to check which one fits the best. I'm confused how should I convert my dataframe -k-means -DBSCAN -hierarchical clustering Dataset (bank.csv) https://archive.ics.uci.edu/ml/datasets/Bank+Marketing (bank.csv with 10% of the examples and 17 inputs, randomly selected from 3 (older version of this dataset with less inputs)) https://archive.ics.uci.edu/ml/machine-learning-databases/00222/ (direct download) 'data.frame': 4521 obs. of 17 variables: $ age : int 30 33 35 30 59 35 36 39 41 43 ... $ job : chr "unemployed" "services" "management" "management" ... $ marital : chr "married" "married" "single" "married" ... $ education: chr "primary" "secondary" "tertiary" "tertiary" ... $ default : chr "no" "no" "no" "no" ... $ balance : int 1787 4789 1350 1476 0 747 307 147 221 -88 ... $ housing : chr "no" "yes" "yes" "yes" ... $ loan : chr "no" "yes" "no" "yes" ... $ contact : chr "cellular" "cellular" "cellular" "unknown" ... $ day : int 19 11 16 3 5 23 14 6 14 17 ... $ month : chr "oct" "may" "apr" "jun" ... $ duration : int 79 220 185 199 226 141 341 151 57 313 ... $ campaign : int 1 1 1 4 1 2 1 2 2 1 ... $ pdays : int -1 339 330 -1 -1 176 330 -1 -1 147 ... $ previous : int 0 4 1 0 0 3 2 0 0 2 ... $ poutcome : chr "unknown" "failure" "failure" "unknown" ... $ y : chr "no" "no" "no" "no" ... I know that my data for clustering should be converted to integers. So i converted it like this (badly): #Import data to R bank <- read.table(file="bank.csv", sep=";", header=TRUE,stringsAsFactors = FALSE) # character data into numeric format bank$job <- as.numeric(as.factor(bank$job)) bank$marital <- as.numeric(as.factor(bank$marital)) bank$education <- as.numeric(as.factor(bank$education)) bank$default<- ifelse(bank$default == "yes", 1, 0) bank$housing <- ifelse(bank$housing== "yes", 1, 0) bank$loan<- ifelse(bank$loan== "yes", 1, 0) bank$month <- as.numeric(as.factor(bank$month)) bank$contact <- as.numeric(as.factor(bank$contact)) bank$poutcome <- as.numeric(as.factor(bank$poutcome)) bank$y <- ifelse(bank$y== "yes", 1, 0) # create normalization function normalize <- function(x) { return ((x - min(x)) / (max(x) - min(x))) } #normalize the data to get rid of outliers if present in the data set bank <- as.data.frame(lapply(bank, normalize)) AI: No, it does not make sense to encore the data this way. You use Euclidean distance. You need to encode the variables in a way that Euclidean distance computes a similarity that is useful for your problem. It strictly is not enough to encode stuff somehow as numbers! Garbage in, garbage out is what you get this way.
H: Convert a repetitive list into Pandas Dataframe How do you convert something like this: A: 1 B: 2 C: 3 ### A: 5 B: 5 C: 6 ### A: 2 B: 5 C: 7 into a dataset where the first row would be the first section with A as column-1 B as column-2 and C as column-3 so we get this: A B C 1 2 3 5 5 6 2 5 7 AI: If I understand this correctly your sequence is always 3 elements. Then you can do this: a = ['A:1','B:2','C:3','A:5','B:5','C:6','A:2','B:5','C:7'] b = [] rep_len = 3 # Looping with step size equal to repetition length for i in range(0,len(a),rep_len): # Selecting a repetition length c = a[i:i+rep_len] # Extracting everything in after letter and colon and casting to integer c = [int(x[2:]) for x in c] # Append to a list of lists b.append(c) df = pd.DataFrame(b, columns=['A', 'B', 'C']) Resulting in: A B C 0 1 2 3 1 5 5 6 2 2 5 7
H: Is it OK to use the testing sample to compare algorithms? I'm working on a little project where my dataset has 6k lines and around 300 features, with a simple binary outcome. Since I'm still learning ML, I want to try all the algorithms I can manage to find and compare the results. As I've read in tutorials, I split my dataset into a training sample (80%) and a testing sample (20%), and then trained my algorithms on the training sample with cross-validation (5 folds). My plan is to train all my models this way, and then measure their performance on the testing sample to choose the best algorithm. Could this cause overfitting? If so, since I cannot compare several models inside model_selection.GridSearchCV, how can I prevent it from overfitting? AI: Basically, every time you use the results of a train/test split to make decisions about a model - whether that's tuning the hyperparameters of a single model, or choosing the most effective of a number of different models - you cannot infer anything about the performance of the model after making those decisions until you have "frozen" your model and evaluated it on a portion of data that has not been touched. The general concept addressing this issue is called nested cross validation. If you use a train/test split to choose the best parameters for a model, that's fine. But if you want to estimate the performance of that, you need to then evaluate on a second held out set. If you then repeat the process for multiple models and choose the best performing one, again, that's fine, but by choosing the best result the value of your performance metric is inherently biased, and you need to validate the entire procedure on yet another held out set to get an unbiased estimate of how your model will perform on unseen data.
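In practice, a minimal sketch of that workflow with scikit-learn could look like this (the estimator and grid are placeholders, and X, y stand for your 6k x 300 dataset): do all tuning and comparison inside the training part, and touch the held-out test set exactly once at the end.

from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

grid = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [3, 5, 11]}, cv=5)
grid.fit(X_train, y_train)            # all tuning / selection happens inside the training data

print(grid.best_params_)
print(grid.score(X_test, y_test))     # single, final look at the test set

If you want an unbiased estimate of the whole procedure (including the model comparison itself), wrap this selection step inside an outer cross-validation loop, i.e. nested CV.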
H: Printing Feature Contributions in a Random Forest algorithm from the Treeinterpreter library leading to errors I am working on a dataset where I predict the risks of developing pancreatic cancer with respect to a number of variables. I have created a random forest, and want to find the feature contributions. I have already used the "Treeinterpreter" library, resulting in a contributions array that is three-dimensional. I want to display the contributions in the array beside the name of the factor/variable. I have used the code below to do so, however, the code responsible for displaying the contributions does not work. I have tried multiple methods, including converting the dataframe to a numpy array, and other methods such as .all() and .any(). However, none are producing the desired result. What can be the right way to display the feature contributions with respect to each of the feature it represents? # -*- coding: utf-8 -*- """ Created on Mon Apr 15 13:39:19 2019 @author: GoodManMcGee """ import pandas as pd from sklearn.metrics import accuracy_score from sklearn import tree from sklearn.model_selection import train_test_split from sklearn import preprocessing from sklearn.metrics import confusion_matrix from sklearn.ensemble import RandomForestClassifier from IPython.display import Image from sklearn.tree import export_graphviz from treeinterpreter import treeinterpreter as ti import matplotlib.pyplot as plt import numpy as np import itertools data = pd.read_csv("pancreatic_cancer_smokers.csv") target = data['case (1: case, 0: control)'] data.drop('case (1: case, 0: control)', axis=1, inplace=True) x_train, x_test, y_train, y_test = train_test_split(data, target, test_size = 0.2) clf = RandomForestClassifier(n_estimators=100) clf.fit(x_train, y_train) y_pred = clf.predict(x_test) clf_accuracy = accuracy_score(y_test, y_pred) clf_pred, clf_bias, contributions = ti.predict(clf, x_test) #The code below was taken from DataDive's treeinterpreter tutorial. #The aforementioned messages applies to all code between the underscores #/////////////////////////////////////////// for i in range(len(x_test)): print ("Instance", i) print ("Bias (trainset mean)", clf_bias[i]) print ("Feature contributions:") for c, feature in sorted(zip(contributions[i], data.feature_names), key=lambda x: -abs(x[0])): #An error occurs in the "data.feature_names" method in the code above:AttributeError: 'DataFrame' object has no attribute 'feature_names'. I have tried referenceing columns from datasets also, but that also leads to errors: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() print (feature, round(c, 2)) print ("-"*20) #/////////////////////////////////////////// AI: Try this in the last part of your code: for i in range(len(x_test)): print ("Instance", i) print ("Bias (trainset mean)", clf_bias[i]) print ("Feature contributions:") for c, feature in sorted(zip(contributions[i,:,0], data.columns),key=lambda x: -abs(x[0])): print (feature, round(c, 2)) print ("-"*20) The problem is that you are sorting contributions without taking into account that contributions is a 3D array and the column names is accesible with data.columns, not data.feature_names.
H: Lemmatization Vs Stemming I have been reading about both these techniques to find the root of the word, but how do we prefer one to the other? Is "Lemmatization" always better than "Stemming"? AI: I would say that lemmatization is generally the preferred way of reducing related words to a common base. This Quora question is a good resource on the subject: Is it advisable to choose lemmatization over stemming in NLP? The top answer quotes another good resource that motivates why lemmatization is usually better, Stemming and lemmatization, from Stanford NLP: Why lemmatization is better Stemming usually refers to a crude heuristic process that chops off the ends of words in the hope of achieving this goal correctly most of the time, and often includes the removal of derivational affixes. Lemmatization usually refers to doing things properly with the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only and to return the base or dictionary form of a word, which is known as the lemma. But that is just generally, it is not always better. Stemming still has some advantages and it will depend on the use case. Some reasons you would use stemming over lemmatization could be: Some possible exceptions when stemming can be better Simplicity Speed Memory constraints
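To see the difference in practice, here is a quick NLTK sketch (the word list is arbitrary, and the WordNet resource needs a one-off download):

import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download('wordnet')                  # one-off download for the lemmatizer
stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

print(stemmer.stem("studies"))            # 'studi' -> a crude chop, not a real word
print(lemmatizer.lemmatize("studies"))    # 'study' -> a proper dictionary form
print(stemmer.stem("running"), lemmatizer.lemmatize("running", pos="v"))  # both give 'run'

Stemming is just string surgery and is fast; lemmatization consults a vocabulary (and benefits from a part-of-speech tag, as in the last line), which is slower but gives real words.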
H: Normalization(minmax) gives me worse results than before in KNN, follow up actions? Hello, I'm studying a classification problem with KNN right now. I have many numeric features that I normalized with MinMaxScaler, I also have some OHE categorical features that do not seem to cause the drop, and with this method my accuracy dropped almost 10%. I used the holdout method and cross validation for the results. What are some follow up actions I could use to try to improve my accuracy? Should I try another normalization technique? Should I try another model? Anything else? AI: It is bad luck. Look at what (I think) is happening: when you use MinMaxScaler, you change the relative distances between individuals in a way they were not related before the change. Let's suppose your best model is driven by the variable $X_j$ from the variable set $X=\{X_1,X_2,...,X_n\}$. When you change the set to a scaled one, $S=\{S_1,S_2,...,S_n\}$, what you do is give the same importance to every variable, instead of preserving what the variables said before scaling. $X_j$ gives better information on the output than $S_j$ (in $S$, all variables are scaled); $S_j$ gives less information because it is mixed with all the other variables. Imagine $X_j$ is something like "age" (and you are determining the probability of having cancer): the greater the age, the greater the probability. This is still true for $S_j$, but when $S_j$ is combined with the rest of $S$, $S_j$ loses importance amongst them. With $X_j$, age keeps its relative importance (bigger values amongst lesser values). This does not always happen; you could find yourself in the opposite situation, where the combined $S$ turns out to be a very powerful set. That is why I think you were, simply, unlucky.
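As for follow-up actions: one concrete thing to try is to compare both variants under the exact same cross-validation folds, using a Pipeline so the scaler is fit only on each training fold. A minimal sketch (X, y are placeholders for your features and labels):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

knn = KNeighborsClassifier(n_neighbors=5)
candidates = [
    knn,                                      # raw features
    make_pipeline(MinMaxScaler(), knn),       # min-max scaled
    make_pipeline(StandardScaler(), knn),     # standardized, another option to compare
]
for model in candidates:
    print(cross_val_score(model, X, y, cv=5).mean())

If scaling consistently hurts, that supports the "informative large-scale feature" explanation above; you could then try scaling only a subset of features, or weighting features explicitly.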
H: What techniques are used for machine translation besides RNNs? I am doing a university report and it seems that encoder-decoder RNNs are optimal for machine translation. I would like something else to compare them to, but I can't seem to make a proper Google search for it. Can you recommend what else can be used for machine translation that I can compare RNNs to? AI: Recurrent Neural Networks assume there is a relation between the current input and the previous inputs; if you organize the data correctly, CNNs can be used in their place, and some CNN models have scored higher than RNNs on the BLEU metric. There is an article on Medium about it, "Why Convolutional Neural Networks are a Great Architecture for Machine Translation". Some implementation examples are Facebook's architecture proposed in the paper "A novel approach to neural machine translation" and one from a Stanford professor. CNNs have also been used by Google for language understanding, even in some fast, compact models and in their new AI that talks on the phone.
H: read csv file directly from URL / How to Fix a 403 Forbidden Error The csv file is downloadable. I can download the file and use read_csv, But I want to read the file via direct URL in jupyter, I used the following code, but I get the HTTP 403 Forbidden error from io import StringIO import pandas as pd import requests url="https://fineli.fi/fineli/en/elintarvikkeet/resultset.csv" s=requests.get(url).text c=pd.read_csv(StringIO(s)) c how do I read the csv file via URL directly in python with a delimeter ";" AI: The problem is that the url you have doesn't accept "non-browser" requests. The default header of Python requests is 'User-Agent': 'python-requests/2.13.0' You can pass your own headers as an argument like that from io import StringIO import pandas as pd import requests headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.76 Safari/537.36'} url="https://fineli.fi/fineli/en/elintarvikkeet/resultset.csv" s=requests.get(url, headers= headers).text c=pd.read_csv(StringIO(s), sep=";") c
H: How to obtain original coefficients after performing linear regression on normalized data? I am reading data from a file using pandas which looks like this: data.head() ldr1 ldr2 servo 0 971 956 -2 1 691 825 -105 2 841 963 -26 3 970 731 44 4 755 939 -69 I proceed to normalize this data to perform gradient descent: my_data = (my_data - my_data.mean())/my_data.std() my_data.head() ldr1 ldr2 servo 0 1.419949 1.289668 0.366482 1 -0.242834 0.591311 -1.580420 2 0.647943 1.326984 -0.087165 3 1.414011 0.090200 1.235972 4 0.137231 1.199041 -0.899949 I perform multivariate regression and end up with fitted parameters on the normalized data: Thetas: [[-3.86865143e-17, 8.47885685e-01, -5.39083511e-01]] I would like to plot the plane of best fit on the original data and not the normalized data using the normalized thetas. I used scipy.optimize.curve_fit to perform multivariate linear regression and come up with the optimal fitted parameters. I know that the original thetas should be close to the following: [ 0.26654135 -0.15218007 -107.79915373] How can I get the 'original' thetas for the original data-set in order to plot, without using Scikit-Learn? Any suggestions will be appreciated. AI: Coefficients of the linear regression for unnormalized features If parameters in the normalized space are denoted as $(\theta_0', \theta_1', \theta_2')$, parameters in the original space $(\theta_0, \theta_1, \theta_2)$ can be derived as follows $$\begin{align*} y' &= \theta_2'x_2'+\theta_1'x_1'+\theta_0'\\ \frac{y-\mu_Y}{\sigma_Y} &= \theta_2'\frac{x_2 - \mu_{X_2}}{\sigma_{X_2}} + \theta_1'\frac{x_1 - \mu_{X_1}}{\sigma_{X_1}} +\theta_0' \\ y &= \overbrace{\left(\frac{\sigma_{Y}}{\sigma_{X_2}}\theta_2'\right)}^{\theta_2}x_2+ \overbrace{\left(\frac{\sigma_{Y}}{\sigma_{X_1}}\theta_1'\right)}^{\theta_1}x_1 + \overbrace{\sigma_Y\left(-\theta_2'\frac{\mu_{X_2}}{\sigma_{X_2}}-\theta_1'\frac{\mu_{X_1}}{\sigma_{X_1}} + \theta_0'\right) + \mu_Y}^{\theta_0} \end{align*}$$ Generalization to D features $$\begin{align*} \theta_d = \left\{\begin{matrix} \sigma_Y \left(\theta_0' - \sum_{i=1}^{D}\theta_i'\frac{\mu_{X_i}}{\sigma_{X_i}} \right) + \mu_Y& d=0\\ \frac{\sigma_{Y}}{\sigma_{X_d}}\theta_d' & d > 0 \end{matrix}\right. \end{align*}$$ A trick For visualization, we can plot the plane in original, un-normalized space without changing the parameters (Thetas). We only need to re-label (re-scale) the plot axes. For example, a point $(x_1', x_2', y')$ in the normalized space corresponds to point $$(x_1, x_2, y)=(\sigma_{X_1}x_1'+\mu_{X_1}, \sigma_{X_2}x_2'+\mu_{X_2}, \sigma_{Y}y'+\mu_{Y})$$ in the original space. So you just need to rename the plot axes from $(x_1', x_2', y')$ to $(x_1, x_2, y)$. Note that $y'=\theta_2'x_2'+\theta_1'x_1'+\theta_0'$ is still calculated using normalized features.
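A small NumPy sketch of this back-transformation, assuming the thetas are ordered [theta0', theta1', theta2'] (intercept first) and that my_data_orig refers to the dataframe before normalization (both names are placeholders):

import numpy as np

def unnormalize_thetas(thetas_norm, X_mean, X_std, y_mean, y_std):
    t0 = thetas_norm[0]
    t_rest = np.asarray(thetas_norm[1:])
    theta_rest = y_std * t_rest / X_std                               # theta_d for d > 0
    theta0 = y_std * (t0 - np.sum(t_rest * X_mean / X_std)) + y_mean  # intercept
    return np.concatenate(([theta0], theta_rest))

thetas_norm = [-3.86865143e-17, 0.84788569, -0.53908351]
X_mean = my_data_orig[["ldr1", "ldr2"]].mean().values
X_std = my_data_orig[["ldr1", "ldr2"]].std().values
y_mean, y_std = my_data_orig["servo"].mean(), my_data_orig["servo"].std()
print(unnormalize_thetas(thetas_norm, X_mean, X_std, y_mean, y_std))
# the intercept and slopes should roughly match the curve_fit values from the question
# (note the ordering there may differ)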
H: Continuously training one model with different dimensions each time I'm trying to solve a problem where I need to train one model with N dimensions and again train on top of that model with M dimensions. How can I achieve it? To give you guys some context, I have 1 bluetooth beacon talking to 5 bluetooth readers that give out some value. Based on the value given out by the reader, I'm predicting the location of the beacon. My training data has 5 dimensions because there are values from 5 readers. Now I have another scenario where I have 10 readers instead of 5. How do I train on top of my first model with 10 readers this time? Is it even possible? The number of readers varies depending on the size of the room. So there's no way I can tell that there will always be a fixed number of readers (that is, a fixed number of dimensions). How do I go about solving this problem? Any help is appreciated! AI: You could try to summarise the data before handing it to the model: if you have 5 readers, you could compute the "mean", "max", "min" and "standard deviation" of their values. This will work with 10, 15, or any number N of readers, unless there is something that stops you from calculating the mean, max, etc.
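A minimal NumPy sketch of that kind of summarisation; each row of readings can come from a different number of readers, but the output always has the same four features (the example values are made up, not real reader measurements):

import numpy as np

def summarise(readings):
    """Map a variable-length list of reader values to a fixed-size feature vector."""
    r = np.asarray(readings, dtype=float)
    return np.array([r.mean(), r.max(), r.min(), r.std()])

print(summarise([-60, -55, -70, -52, -66]))                       # room with 5 readers
print(summarise([-60, -55, -70, -52, -66, -58, -61, -49, -73, -68]))  # room with 10 readers

Both rooms now produce 4-dimensional inputs, so a single model can be trained and reused regardless of how many readers a room has.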
H: What could cause training CNN accuracy to drop after 7th epoch? I am training a CNN on some new dataset. Usually, the accuracy steadily improves over 10-20 epochs. I have created a new but similar dataset (using same methods) but now I see a sharp drop after 7th epoch, from which is never recovers. What might this mean I am doing wrong? 3520/3662 [===========================>..] - ETA: 7s - loss: 0.1260 - acc: 0.9753 3552/3662 [============================>.] - ETA: 5s - loss: 0.1294 - acc: 0.9752 3584/3662 [============================>.] - ETA: 4s - loss: 0.1283 - acc: 0.9754 3616/3662 [============================>.] - ETA: 2s - loss: 0.1360 - acc: 0.9751 3648/3662 [============================>.] - ETA: 0s - loss: 0.1348 - acc: 0.9753 3662/3662 [==============================] - 199s 54ms/step - loss: 0.1387 - acc: 0.9752 Epoch 7/50 32/3662 [..............................] - ETA: 3:29 - loss: 1.1921e-07 - acc: 1.0000 64/3662 [..............................] - ETA: 3:44 - loss: 1.1921e-07 - acc: 1.0000 96/3662 [..............................] - ETA: 3:43 - loss: 0.5037 - acc: 0.9688 128/3662 [>.............................] - ETA: 3:36 - loss: 0.6296 - acc: 0.9609 160/3662 [>.............................] - ETA: 3:29 - loss: 0.8059 - acc: 0.9500 192/3662 [>.............................] - ETA: 3:23 - loss: 0.7555 - acc: 0.9531 224/3662 [>.............................] - ETA: 3:20 - loss: 0.7915 - acc: 0.9509 256/3662 [=>............................] - ETA: 3:17 - loss: 0.6926 - acc: 0.9570 288/3662 [=>............................] - ETA: 3:14 - loss: 0.7276 - acc: 0.9549 320/3662 [=>............................] - ETA: 3:11 - loss: 0.9570 - acc: 0.9406 352/3662 [=>............................] - ETA: 3:08 - loss: 1.0990 - acc: 0.9318 384/3662 [==>...........................] - ETA: 3:05 - loss: 1.1531 - acc: 0.9271 416/3662 [==>...........................] - ETA: 3:02 - loss: 1.1032 - acc: 0.9303 448/3662 [==>...........................] - ETA: 3:00 - loss: 1.2832 - acc: 0.9174 480/3662 [==>...........................] - ETA: 2:58 - loss: 1.3991 - acc: 0.9104 512/3662 [===>..........................] - ETA: 2:55 - loss: 1.4691 - acc: 0.9062 544/3662 [===>..........................] - ETA: 2:53 - loss: 1.4420 - acc: 0.9081 576/3662 [===>..........................] - ETA: 2:50 - loss: 1.5018 - acc: 0.9045 608/3662 [===>..........................] - ETA: 2:48 - loss: 1.4492 - acc: 0.9079 640/3662 [====>.........................] - ETA: 2:46 - loss: 1.3768 - acc: 0.9125 672/3662 [====>.........................] - ETA: 2:44 - loss: 1.3112 - acc: 0.9167 704/3662 [====>.........................] - ETA: 2:42 - loss: 1.3431 - acc: 0.9148 736/3662 [=====>........................] - ETA: 2:40 - loss: 1.3066 - acc: 0.9171 768/3662 [=====>........................] - ETA: 2:38 - loss: 1.2941 - acc: 0.9180 800/3662 [=====>........................] - ETA: 2:36 - loss: 1.3431 - acc: 0.9150 832/3662 [=====>........................] - ETA: 2:34 - loss: 1.4464 - acc: 0.9087 864/3662 [======>.......................] - ETA: 2:32 - loss: 1.5980 - acc: 0.8993 896/3662 [======>.......................] - ETA: 2:30 - loss: 1.7568 - acc: 0.8895 928/3662 [======>.......................] - ETA: 2:28 - loss: 2.0263 - acc: 0.8728 960/3662 [======>.......................] - ETA: 2:27 - loss: 2.1434 - acc: 0.8656 992/3662 [=======>......................] - ETA: 2:25 - loss: 2.3017 - acc: 0.8558 1024/3662 [=======>......................] 
- ETA: 2:23 - loss: 2.3557 - acc: 0.8525 1056/3662 [=======>......................] - ETA: 2:21 - loss: 2.5133 - acc: 0.8428 1088/3662 [=======>......................] - ETA: 2:20 - loss: 2.6468 - acc: 0.8346 1120/3662 [========>.....................] - ETA: 2:18 - loss: 2.8590 - acc: 0.8214 1152/3662 [========>.....................] - ETA: 2:16 - loss: 2.9475 - acc: 0.8160 1184/3662 [========>.....................] - ETA: 2:14 - loss: 3.1265 - acc: 0.8049 1216/3662 [========>.....................] - ETA: 2:13 - loss: 3.2430 - acc: 0.7977 1248/3662 [=========>....................] - ETA: 2:11 - loss: 3.3794 - acc: 0.7893 1280/3662 [=========>....................] - ETA: 2:09 - loss: 3.4964 - acc: 0.7820 1312/3662 [=========>....................] - ETA: 2:07 - loss: 3.5831 - acc: 0.7767 1344/3662 [==========>...................] - ETA: 2:05 - loss: 3.7257 - acc: 0.7679 1376/3662 [==========>...................] - ETA: 2:04 - loss: 3.8030 - acc: 0.7631 1408/3662 [==========>...................] - ETA: 2:02 - loss: 3.8883 - acc: 0.7578 1440/3662 [==========>...................] - ETA: 2:00 - loss: 4.0369 - acc: 0.7486 1472/3662 [===========>..................] - ETA: 1:58 - loss: 4.0587 - acc: 0.7473 1504/3662 [===========>..................] - ETA: 1:56 - loss: 4.2081 - acc: 0.7380 1536/3662 [===========>..................] - ETA: 1:55 - loss: 4.2673 - acc: 0.7344 1568/3662 [===========>..................] - ETA: 1:53 - loss: 4.4167 - acc: 0.7251 1600/3662 [============>.................] - ETA: 1:51 - loss: 4.4895 - acc: 0.7206 1632/3662 [============>.................] - ETA: 1:49 - loss: 4.5694 - acc: 0.7157 1664/3662 [============>.................] - ETA: 1:48 - loss: 4.6074 - acc: 0.7133 1696/3662 [============>.................] - ETA: 1:46 - loss: 4.6916 - acc: 0.7081 1728/3662 [=============>................] - ETA: 1:44 - loss: 4.7726 - acc: 0.7031 1760/3662 [=============>................] - ETA: 1:42 - loss: 4.8415 - acc: 0.6989 1792/3662 [=============>................] - ETA: 1:40 - loss: 4.8810 - acc: 0.6964 1824/3662 [=============>................] - ETA: 1:39 - loss: 4.9279 - acc: 0.6935 1856/3662 [==============>...............] - ETA: 1:37 - loss: 4.9558 - acc: 0.6918 1888/3662 [==============>...............] - ETA: 1:35 - loss: 4.9828 - acc: 0.6901 1920/3662 [==============>...............] - ETA: 1:33 - loss: 4.9921 - acc: 0.6896 1952/3662 [==============>...............] - ETA: 1:32 - loss: 4.9846 - acc: 0.6901 1984/3662 [===============>..............] - ETA: 1:30 - loss: 5.0423 - acc: 0.6865 2016/3662 [===============>..............] - ETA: 1:28 - loss: 5.1062 - acc: 0.6825 2048/3662 [===============>..............] - ETA: 1:26 - loss: 5.1602 - acc: 0.6792 2080/3662 [================>.............] - ETA: 1:25 - loss: 5.1970 - acc: 0.6769 2112/3662 [================>.............] - ETA: 1:23 - loss: 5.2022 - acc: 0.6766 2144/3662 [================>.............] - ETA: 1:21 - loss: 5.2449 - acc: 0.6740 2176/3662 [================>.............] - ETA: 1:20 - loss: 5.2937 - acc: 0.6710 2208/3662 [=================>............] - ETA: 1:18 - loss: 5.3337 - acc: 0.6685 2240/3662 [=================>............] - ETA: 1:16 - loss: 5.3295 - acc: 0.6687 2272/3662 [=================>............] - ETA: 1:14 - loss: 5.3538 - acc: 0.6673 2304/3662 [=================>............] - ETA: 1:13 - loss: 5.3843 - acc: 0.6654 2336/3662 [==================>...........] - ETA: 1:11 - loss: 5.4072 - acc: 0.6640 2368/3662 [==================>...........] 
- ETA: 1:09 - loss: 5.4022 - acc: 0.6643 2400/3662 [==================>...........] - ETA: 1:08 - loss: 5.4510 - acc: 0.6613 2432/3662 [==================>...........] - ETA: 1:06 - loss: 5.4853 - acc: 0.6591 2464/3662 [===================>..........] - ETA: 1:04 - loss: 5.5188 - acc: 0.6571 2496/3662 [===================>..........] - ETA: 1:02 - loss: 5.5707 - acc: 0.6538 2528/3662 [===================>..........] - ETA: 1:01 - loss: 5.5958 - acc: 0.6523 2560/3662 [===================>..........] - ETA: 59s - loss: 5.6455 - acc: 0.6492 2592/3662 [====================>.........] - ETA: 57s - loss: 5.6691 - acc: 0.6478 2624/3662 [====================>.........] - ETA: 55s - loss: 5.7044 - acc: 0.6456 2656/3662 [====================>.........] - ETA: 54s - loss: 5.7631 - acc: 0.6419 2688/3662 [=====================>........] - ETA: 52s - loss: 5.7964 - acc: 0.6399 2720/3662 [=====================>........] - ETA: 50s - loss: 5.8467 - acc: 0.6368 2752/3662 [=====================>........] - ETA: 48s - loss: 5.8783 - acc: 0.6348 2784/3662 [=====================>........] - ETA: 47s - loss: 5.8918 - acc: 0.6340 2816/3662 [======================>.......] - ETA: 45s - loss: 5.9279 - acc: 0.6317 2848/3662 [======================>.......] - ETA: 43s - loss: 5.9405 - acc: 0.6310 2880/3662 [======================>.......] - ETA: 42s - loss: 5.9640 - acc: 0.6295 2912/3662 [======================>.......] - ETA: 40s - loss: 6.0037 - acc: 0.6271 2944/3662 [=======================>......] - ETA: 38s - loss: 6.0260 - acc: 0.6257 2976/3662 [=======================>......] - ETA: 36s - loss: 6.0370 - acc: 0.6250 3008/3662 [=======================>......] - ETA: 35s - loss: 6.0639 - acc: 0.6233 3040/3662 [=======================>......] - ETA: 33s - loss: 6.0743 - acc: 0.6227 3072/3662 [========================>.....] - ETA: 31s - loss: 6.0897 - acc: 0.6217 3104/3662 [========================>.....] - ETA: 29s - loss: 6.1048 - acc: 0.6208 3136/3662 [========================>.....] - ETA: 28s - loss: 6.1351 - acc: 0.6189 3168/3662 [========================>.....] - ETA: 26s - loss: 6.1443 - acc: 0.6184 3200/3662 [=========================>....] - ETA: 24s - loss: 6.1685 - acc: 0.6169 3232/3662 [=========================>....] - ETA: 23s - loss: 6.1822 - acc: 0.6160 3264/3662 [=========================>....] - ETA: 21s - loss: 6.1858 - acc: 0.6158 3296/3662 [==========================>...] - ETA: 19s - loss: 6.1991 - acc: 0.6150 3328/3662 [==========================>...] - ETA: 17s - loss: 6.2073 - acc: 0.6145 3360/3662 [==========================>...] - ETA: 16s - loss: 6.2729 - acc: 0.6104 3392/3662 [==========================>...] - ETA: 14s - loss: 6.2945 - acc: 0.6091 3424/3662 [===========================>..] - ETA: 12s - loss: 6.2875 - acc: 0.6095 3456/3662 [===========================>..] - ETA: 11s - loss: 6.3225 - acc: 0.6073 3488/3662 [===========================>..] - ETA: 9s - loss: 6.3431 - acc: 0.6061 3520/3662 [===========================>..] - ETA: 7s - loss: 6.3633 - acc: 0.6048 3552/3662 [============================>.] - ETA: 5s - loss: 6.4058 - acc: 0.6022 3584/3662 [============================>.] - ETA: 4s - loss: 6.3980 - acc: 0.6027 3616/3662 [============================>.] - ETA: 2s - loss: 6.4217 - acc: 0.6012 3648/3662 [============================>.] - ETA: 0s - loss: 6.4537 - acc: 0.5992 3662/3662 [==============================] - 200s 55ms/step - loss: 6.4730 - acc: 0.5980 Epoch 8/50 AI: I don't know anything about the data you're using so I'll offer some suggestions. 
Your learning rate might be too high. Maybe you're overshooting during gradient descent, such that after you approach a good set of weights you then jump too far in one update and lose the progress made towards the optimum. Your activation function might also be saturating your units.
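If the learning rate is the culprit, two cheap things to try in Keras are a smaller fixed learning rate and/or a callback that shrinks it when the validation loss plateaus. A sketch assuming the tf.keras API; the model, data and hyper-parameter values are placeholders:

from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ReduceLROnPlateau

model.compile(optimizer=Adam(learning_rate=1e-4),    # smaller than the usual default of 1e-3
              loss='categorical_crossentropy',
              metrics=['accuracy'])

reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=2, verbose=1)
model.fit(x_train, y_train, epochs=50,
          validation_data=(x_val, y_val),
          callbacks=[reduce_lr])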
H: Does PCA decrease the feature on my Data set or just decrease the dimension? I'm new to AI and sorry if my question is simple. I have a data set and want to use PCA to decrease the number of features, but after some research on the internet I'm confused about decreasing dimensions versus features. As an example, I have a data set with 50 rows and 10 columns; if I use PCA will it produce a data set of 50x5 (as an example), or of 50x10 with some dimensions just removed? I want to do it in MATLAB, using the pca function, and don't want to write a PCA function myself. What are the pca parameters in MATLAB to decrease the number of features? There are a lot of parameters and they confuse me. AI: From the documentation: coeff = pca(X) returns the principal component coefficients, also known as loadings, for the $n$-by-$p$ data matrix X. Rows of X correspond to observations and columns correspond to variables. The coefficient matrix is $p$-by-$p$. Each column of coeff contains coefficients for one principal component, and the columns are in descending order of component variance. By default, pca centers the data and uses the singular value decomposition (SVD) algorithm. The values in coeff represent the transformation from the original features (rows of coeff) to the principal components (columns of coeff). You'll want to keep only the first $k$ columns, then multiply your data matrix by this matrix. Your features are the dimensions that your data lives in, so the number of features and the dimension are the same. Very roughly speaking, PCA rotates the feature axes to align to the most significant directions rather than the original feature directions, then selects only the most significant directions to keep, thus reducing the dimensionality of your problem. But it also means your new columns won't represent pure features anymore, but linear combinations of them. If you keep all of the new columns, you won't reduce dimensionality; but your new features will be ordered in such a way that the first ones capture the most variance of your data.
H: What's the difference between multi label classification and fuzzy classification? Is it just the between academics and practitioners in term usage? Or is theoretical difference of how we consider each sample: as belonging to multiple classes at once or to one fuzzy class? Or this distinction has some practical meaning of how we build model for classification? AI: Multi-label classification (Wiki): Given $K$ classes, find a map $f:X \rightarrow \{0, 1\}^K$. Fuzzy classification (a good citation is needed!): Given $K$ classes, find a map $p: X \rightarrow [0, 1]^K$ where $\sum_{k=1}^{K} p(k)=1$. In multi-label classification, as defined, there is no "resource limit" on classes compared to fuzzy classification. For example, a neural network with a softmax layer does fuzzy classification (soft classification). If we only select a class with the highest score, then it will become a single-label classification (hard classification), and if we select top $k$ classes, it will be a multi-label classification (again hard classification). Fuzzy classification: [0.5, 0.2, 0.3, 0, 0] Single-label classification: [1, 0, 0, 0, 0] Multi-label classification: [1, 0, 1, 0, 0] As another example for multi-label classification, we could have $K$ neural networks for $K$ classes with sigmoid outputs, and assign a point to class $k$ if output of network $k$ is higher than 0.5. Outputs: [0.6, 0.1, 0.6, 0.9, 0.2] Multi-label classification: [1, 0, 1, 1, 0] Practical considerations As demonstrated in the examples, the key difference is the "resource limit" that exists in fuzzy classification but not in multi-label classification. Including the limit (in the first example), or ignoring it (in the second example) depends on the task. For example, in a classification task that has mutually exclusive labels, we want to include the "resource limit" to impose the "mutually exclusive" assumption on the model. Note that the $\sum_{k=1}^{K} p(k)=1$ restriction in fuzzy classification is merely a "definition", there is no point in arguing about a definition. We can either propose another classification, or argue when to use - and when not to use - such classification.
H: How to find which features have been selected by PCA algorithm? I used the pca function in MATLAB to decrease the features of my data set. With this code I can reduce the features from 12 to 8 (as an example). It works well, but my question is: how can I find which features have been removed or which features are kept in the result? [~, pca_scores, ~, ~, var_explained] = pca(myDataSet, 'NumComponents', 8); AI: PCA doesn't remove any specific feature. What PCA does is calculate linear combinations of your variables in such a way that they get "summarized" in fewer variables. You don't eliminate variables, you re-express them.
H: What is Data Lake? I am familiar with the concept of "Big Data" but how does "Data Lake" differ from Big Data? Is it derived from Big Data? Please explain. AI: Big Data is a term related to the extraction of information from very large datasets; it is also sometimes used in the context of making predictions from such datasets. In short, Big Data points to the 'big' aspect of the data itself. A data lake, on the other hand, is a concept for storing and providing data in a data system, no matter what size the data is. A data lake aims to be a single repository for a company's data, for better management of, and access to, that data.
H: Training accuracy decreases I have a program in which I use a sequence-to-sequence approach with attention as a prediction model. The problem is that, while training, the accuracy keeps decreasing at each epoch, as shown in the figure attached. If anyone can tell me the reason, I will be so thankful. AI: The screenshot looks like output from Keras, so I assume you're using a deep neural network. Training within an epoch is organized in batches of data, so the optimization step is computed on a subset of the whole dataset. The console output shows the accuracy over the full dataset, so optimizing on a single batch can decrease the accuracy on the rest of the dataset and lower the global result. The other possible reason is that you provided a validation/test set to the Keras API and the console output is the result on that set, which is not used for training the model. If that is the case, what you are seeing is your model overfitting.
H: From most frequent words how to extract technical skill words I've scraped 30 job description web pages and stored them in a list called job_desc where each item is a job description.

# each item is a list of tokenized job_description
tok = [nltk.word_tokenize(job.lower()) for job in job_desc]

# ignore stop words, bullets, etc. and put it into one list
from nltk.corpus import stopwords
stop = stopwords.words('english')

def clean_token(what_to_clean):
    cleaned_tok = []
    for lists in what_to_clean:
        for item in lists:
            if len(item) > 2 and (item not in stop):
                cleaned_tok.append(item)
    return cleaned_tok

After cleaning the job descriptions I've found the most frequent words using:

freq = nltk.FreqDist(clean_token(tok))
most_freq_words = freq.most_common(100)

Which outputs: [('data', 211), ('experience', 78), ('learning', 70), ('business', 65), ('team', 53), ('science', 51), ('machine', 48),..... From here I only want to extract words like machine, python, C+, technical skills. How can I go about it? Also, you can see the word "machine" shows up 48 times and I am not sure whether it is referring to machine learning - how can I go about this? I know if I want to make predictions I could have used CountVectorizer and n-grams. AI: From what I understand, you need more than 1-word terms. Thus, it's better to go for n-grams. Start with creating n-grams:

import re
from nltk.util import ngrams

s = job_desc[0]  # one job description at a time
s = s.lower()
s = re.sub(r'[^a-zA-Z0-9\s]', ' ', s)
tokens = [token for token in s.split(" ") if token != ""]
output = list(ngrams(tokens, 2))

You should get 1-grams, 2-grams and 3-grams at least to cover all possible terms. Then you can use the Computer Science Ontology (CSO), which includes about 14k technical terms. An alternative would be to use their own Classifier that takes a text as input and annotates terms on it from the same ontology. You can find a demo here https://cso.kmi.open.ac.uk/classify/ When I try to classify the abstract of Machine Learning from the Wikipedia article, I get the following results Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to effectively perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.1:2 Machine learning algorithms are used in a wide variety of applications, such as email filtering, and computer vision, where it is infeasible to develop an algorithm of specific instructions for performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.[3][4] In its application across business problems, machine learning is also referred to as predictive analytics. artificial intelligence computer systems computer vision correlation analysis data mining email inference machine learning optimization unsupervised learning
H: How can I categorise / classify a cluster of words? I am just wondering if it is possible to classify word clusters? For example if I provide you an array of words [bird,chicken,dock,park,apple,grapes,furits,juice] what I need is to convert this array to something like this (or the nearest possible) [ "Birds"=>[bird,chicken,dock,park], "Fruits"=>[apple,grapes,furits,juice,park] ] Any direction on how I can achieve this, please? AI: Basic solution: You could convert the words to vectors using word2vec. If you can get pre-trained word embeddings, that would be a good starting point. Then you need to cluster the vectors, which should be straightforward, e.g. using KMeans. What you are asking for is some hierarchy as well, which could be tricky to find with a clustering method, at least if it needs to make sense. If you want the words closest to "Birds" you could try to just get the words with the closest distance to "Bird" and test whether that works out. If the hierarchy is important: then you likely need to use other NLP techniques, which I am not experienced with. Have a look at "Entity linking" as an example. Hope this helps somewhat.
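A rough sketch of the basic solution, assuming gensim's downloader module and one of its bundled GloVe models are available (spellings corrected so the words exist in the embedding vocabulary):

import numpy as np
import gensim.downloader as api
from sklearn.cluster import KMeans

words = ["bird", "chicken", "duck", "park", "apple", "grapes", "fruits", "juice"]

# Pre-trained embeddings; "glove-wiki-gigaword-50" is one of the models shipped with gensim's downloader
wv = api.load("glove-wiki-gigaword-50")
kept = [w for w in words if w in wv]
vectors = np.array([wv[w] for w in kept])

# Cluster the word vectors; the number of clusters is a choice you have to make
labels = KMeans(n_clusters=2, random_state=0).fit_predict(vectors)
for cluster in set(labels):
    print(cluster, [w for w, l in zip(kept, labels) if l == cluster])

Naming each cluster (e.g. "Birds") is a separate step; one heuristic is to pick the word in the cluster whose vector is closest to the cluster centroid.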
H: Clustering of users in a dataset I have a dataset of book reviews:

| user_id | ISBN  | vote | votes_for_user | average_user_vote | ISBN_categ |
| 213     | 3242X | 4.5  | 12             | 3.4               | 1          |
| 563     | 1245X | 3.2  | 74             | 2.3               | 2          |

where vote = rating given by user to a certain book, votes_for_user = number of votes the user has in the dataset (nr of rows), average_user_vote = average of a user's votes, ISBN_categ = integer categorical of the ISBN (since that is a string). I want to apply a clustering algorithm such as DBSCAN to see how many clusters I can form with this dataset. My question is: Should I apply the clustering on the dataframe as it is (minus the ISBN column) or should I construct more features for every user and construct a dataframe where every user appears only once, together with their features, and cluster that? Remember, the intent here is to cluster users (by user_id), not data points (votes). AI: If your objective is to find clusters of users, then you are interested in finding groups of "similar" reviewers. Therefore you should: Retain information which relates to the users in a meaningful way - e.g. votes_for_user. Discard information which has no meaningful relationship to a user - e.g. user_id (unless perhaps it contains some information such as time / order). Be mindful of fields which may contain implicit relationships involving a user - e.g. vote may be a result of the interaction between user and ISBN.
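A sketch of the second option (one row per user), assuming df is the review-level dataframe described above; the aggregated features and the DBSCAN parameters are only placeholders to adapt and tune:

import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Aggregate the review-level rows into one feature row per user
user_df = df.groupby("user_id").agg(
    n_votes=("vote", "size"),
    mean_vote=("vote", "mean"),
    vote_std=("vote", "std"),
    n_categories=("ISBN_categ", "nunique"),
).fillna(0)

# Scale the features so no single column dominates the distance metric
X = StandardScaler().fit_transform(user_df)

# eps and min_samples need tuning; label -1 means "noise" (no cluster)
user_df["cluster"] = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
print(user_df["cluster"].value_counts())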
H: Need help with entity tagging I need to design a system which can identify movie and production company names in a sentence. The approach that comes to my mind is to train a NER (named-entity recognition) system on labeled data so that it identifies the corresponding entities. But what about new entities (movie or production company names) that the trained system hasn't seen - how can we tag them? Re-training the model every time new movies are released won't be feasible. Labeled data: sentences with the positions of the words that correspond to a movie or production company name. I am a beginner in NLP, any help would be appreciated. AI: But what about new entities (movie or production company names) that the trained system hasn't seen - how can we tag them? Re-training the model every time new movies are released won't be feasible. A NER model should not have to be retrained to tag a new text it has not seen before. If trained successfully, it will use information it learned from the labeled data and be able to apply it to new data. For your use case this could be information such as: Capitalization - common for movie titles and production company names Words used - frequently used words for either movie titles or company names Where in text - movie titles perhaps appear early in texts spaCy is a good library to get started with NER. Here is an example of how to train one using your own data.
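A minimal training sketch, targeting the spaCy 2.x API (spaCy 3 changed the training workflow); the example sentences, labels and character offsets are made up and you would replace them with your own labeled data:

import random
import spacy

TRAIN_DATA = [
    ("I watched Inception last night", {"entities": [(10, 19, "MOVIE")]}),
    ("Warner Bros produced the film", {"entities": [(0, 11, "ORG")]}),
]

nlp = spacy.blank("en")               # blank English pipeline
ner = nlp.create_pipe("ner")          # add a fresh NER component
nlp.add_pipe(ner)
ner.add_label("MOVIE")
ner.add_label("ORG")

optimizer = nlp.begin_training()
for epoch in range(20):
    random.shuffle(TRAIN_DATA)
    losses = {}
    for text, annotations in TRAIN_DATA:
        nlp.update([text], [annotations], sgd=optimizer, losses=losses)

# After training, the model generalizes to titles/companies it never saw verbatim
doc = nlp("I really enjoyed Interstellar")
print([(ent.text, ent.label_) for ent in doc.ents])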
H: Create an algorithm that translates human language into code I am an experienced programmer and want to get into ML. My final goal in ML is to create an algorithm that converts written human language into a query language like SQL. Example: Input: "Give me all clients, with an email" Expected output: SELECT * FROM Clients where email != null What should I look into to achieve something like this sometime in the future? AI: A simple approach would be just syntactic conversion, as done in precompilers/preprocessors. A second step might be interpretation of some sort of controlled language where you allow more complex phrases, as long as they are not ambiguous. A not very different, but data-driven, approach is represented by form-filling chat bots (couldn't quickly find a generic link, but plenty of examples in search results) where you aim to identify/locate the necessary bits to build what you expect to be your queries. A more advanced approach that might give you some interesting ideas is this screenshot-to-HTML builder, based on recurrent neural networks, namely LSTM and GRU. This could go further with named entity extraction, context interpretation, error-correction, and so on. The list continues infinitely. The last point is there because, as many do, you have left much of your problem definition implicit, which reminds me of our little friend Alice in Wonderland: Alice: Would you tell me, please, which way I ought to go from here? The Cheshire Cat: That depends a good deal on where you want to get to. Alice: I don't much care where. The Cheshire Cat: Then it doesn't much matter which way you go. Alice: ...So long as I get somewhere. The Cheshire Cat: Oh, you're sure to do that, if only you walk long enough.
H: Is it possible to measure the object using deep learning Is there a way we can measure the length, width and the depth of an object in the picture using deep learning? AI: Not without some kind of reference object. I also don’t see a reason to use deep learning for such a task. Sure - you could create a dataset with all kinds of objects and their typical dimensions. But the results for inference would probably be pretty bad and the effort to create the dataset ridiculously huge.
H: What does the color coding and normalized values in a confusion matrix actually specify? I am unable to infer anything about the model from the following confusion matrix. What is the color coding actually specifying? For example, when the predicted label is 1 and the true label is 1, the value in the matrix at that point is 0.20. Does that mean its accuracy? Does it mean that the model is only able to predict 1 in 20% of the cases when the label is actually 1? PS: SO URL for a more clear image AI: Each element $e_{ab}$ shows the probability of predicting label $a$ (horizontal axis) when the true label is $b$ (vertical axis). For example, when the true label is 0, it will be predicted as label 2 with a probability of 0.14. The color intensity indicates the probability of each element in a row.
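For reference, a row-normalized matrix like the one described can be produced as follows; the labels below are toy values, not the asker's data:

import numpy as np
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 1, 2, 2, 2, 2]
y_pred = [0, 2, 1, 1, 0, 2, 2, 1, 2]

cm = confusion_matrix(y_true, y_pred)          # rows = true labels, columns = predicted labels
# Normalize each row so entries read as P(predicted = a | true = b), i.e. each row sums to 1
cm_normalized = cm / cm.sum(axis=1, keepdims=True)
print(cm_normalized.round(2))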
H: Time Series - Values recorded every 10 minutes - Fill missing values I have time series data from a sensor that records values periodically - sometimes every 10 minutes, other times every 5 minutes, etc. I have to detect anomalies in real time (as and when data comes in) based on a breach of a static threshold value. My question relates to the minutes that fall within the period and do not have any recordings. Let's assume today the sensor is recording values every 10 minutes (approximately), as shown below. What is the best possible way to handle the data in between the periodic intervals? In the example below, should I back-fill the value 12 for all the points from 13:02 to 13:09? 2019-04-25T13:01:00 10, 2019-04-25T13:02:00 NAN, 2019-04-25T13:03:00 NAN, 2019-04-25T13:04:00 NAN, 2019-04-25T13:05:00 NAN, 2019-04-25T13:06:00 NAN, 2019-04-25T13:07:00 NAN, 2019-04-25T13:08:00 NAN, 2019-04-25T13:09:00 NAN, 2019-04-25T13:10:00 12, 2019-04-25T13:11:00 NAN, 2019-04-25T13:12:00 NAN, 2019-04-25T13:13:00 NAN, 2019-04-25T13:14:00 NAN, 2019-04-25T13:15:00 NAN, 2019-04-25T13:16:00 NAN, 2019-04-25T13:17:00 NAN, 2019-04-25T13:18:00 NAN, 2019-04-25T13:19:00 11, 2019-04-25T13:20:00 NAN AI: One option would be to interpolate the missing values. There are various methods depending on your use case (i.e. linear, cubic, etc.) https://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.interpolate.html You could also drop the missing values df.dropna() and just use the known values you have for thresholding. https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html
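A small pandas sketch using the example data above, showing both interpolation and forward-fill (the "back-fill with 12" idea from the question corresponds to carrying the last known value forward):

import numpy as np
import pandas as pd

idx = pd.date_range("2019-04-25 13:01", "2019-04-25 13:20", freq="1min")
values = [10] + [np.nan] * 8 + [12] + [np.nan] * 8 + [11, np.nan]
s = pd.Series(values, index=idx)

filled_linear = s.interpolate(method="time")   # linear in time between known points
filled_ffill = s.ffill()                       # carry the last known value forward
print(pd.DataFrame({"raw": s, "interpolated": filled_linear, "ffill": filled_ffill}))

Which of the two is appropriate depends on whether the sensor value is better modeled as changing gradually (interpolate) or as constant until the next reading (forward-fill).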
H: How many times can a recommender system recommend the same item to a user? I'm working on a hybrid music recommender system project; my goal is to create recommendation playlists in accordance with users' tastes. I have already implemented the first part, which uses a collaborative filtering algorithm, and I am now working on the content-based filtering part. In order to make my recommender system accurate, I read dozens of research papers about evaluating recommender systems. There are many variables taken into account in those evaluations (such as coverage, etc.). Some of those papers talk about user confidence in recommender systems, defining it as one of the most important parameters to take into account. I read that if the user likes something like 3 or 4 out of 10 recommended items, it's enough to gain their confidence. There is one point that I couldn't find in any of those papers: how many times can we recommend the same item to a user? To explain my question: during the playlist generation process there is a risk that the same song appears in two different playlists, and I'm wondering what the impact on the user's confidence in my recommender system would be in this case. AI: I don't think it matters if you recommend the same song in multiple playlists, just that your recommendations are accurate. For example, imagine a user has two playlists, a rock playlist and a classical playlist. If you recommended Crazy Train - Ozzy Osbourne in both playlists, you could theoretically have a "successful recommendation" in the rock playlist, and a bad recommendation in the classical playlist. The number of times a song appears is a design decision/variable - there is no right answer. You could potentially optimize it in the future using A/B testing (if I limit a song to being recommended x times, do those users then value my recommendations more than the general population?).
H: Help with my training data I'm working on my first NN following a tensorflow tut and trying to use my own data. After about 80 attempts of formatting my data and trying to load it into a dataset to train I'm throwing the towel. Here is how my data currently looks syslog_data = [ [302014,0,0,63878,30,3,1], [302014,0,0,3891,0,0,0], [302014,0,0,15928,0,0,2], [305013,5,0,123,99999,0,3], [302014,0,0,5185,0,0,0], [305013,5,0,123,99999,0,3], [302014,0,0,56085,0,0,0], [110002,4,2,50074,99999,0,4], In this the last item in each list is the label. If you can tell me if I need to reformat my data and how or just how to get it loaded into a dataset properly. Thanks for any help or advice you can give Here is the full code: import tensorflow as tf import numpy as np from tensorflow.keras import layers from . import syslog print(tf.VERSION) print(tf.keras.__version__) model = tf.keras.Sequential() # Adds a densely-connected layer with 64 units to the model: model.add(layers.Dense(64, activation='relu')) # Add another: model.add(layers.Dense(64, activation='relu')) # Add a softmax layer with 10 output units: model.add(layers.Dense(10, activation='softmax')) model.compile(optimizer=tf.train.AdamOptimizer(0.001), loss='categorical_crossentropy', metrics=['accuracy']) dataset = tf.data.dataset.from_tensor_slices(syslog) model.fit(dataset, epochs=10, steps_per_epoch=30) AI: There are a couple of problems and things you might want to add to your existing script. Below I separate your example data into two NumPy arrays: input values x labels y It is also important to make sure they are of type float32, because Tensorflow will complain if you pass it integers (as they otherwise would be interpreted). The following works for me, the model trains to completion: import numpy as np import tensorflow as tf from tensorflow.keras import layers syslog_data = [ [302014, 0, 0, 63878, 30, 3, 1], [302014, 0, 0, 3891, 0, 0, 0], [302014, 0, 0, 15928, 0, 0, 2], [305013, 5, 0, 123, 99999, 0, 3], [302014, 0, 0, 5185, 0, 0, 0], [305013, 5, 0, 123, 99999, 0, 3], [302014, 0, 0, 56085, 0, 0, 0], [110002, 4, 2, 50074, 99999, 0, 4], ] print(tf.VERSION) print(tf.keras.__version__) x = np.array([arr[:-1] for arr in syslog_data], dtype=np.float32) y = np.array([arr[-1:] for arr in syslog_data], dtype=np.float32) model = tf.keras.Sequential() # Adds a densely-connected layer with 64 units to the model: model.add(layers.Dense(64, activation="relu")) # Add another: model.add(layers.Dense(64, activation="relu")) # Add a softmax layer with 10 output units: model.add(layers.Dense(10, activation="softmax")) model.compile(optimizer=tf.train.AdamOptimizer(0.001), loss="categorical_crossentropy", metrics=["accuracy"]) model.fit(x, y, epochs=10, steps_per_epoch=30)
H: Gradient Descent Can we use gradient descent to find global/local maxima? What types of problems need a function to be maximized? Can we do it with GD? Can anybody answer my question with an example relevant to machine learning? Thanks in advance. AI: Theoretically, it is possible to find a global minimum using gradient descent. In reality, however, it rarely happens - it is also pretty much impossible to prove you have the global minimum! Imagine we have a 2d loss surface; a loss curve as in the figures below. In order to reach the global minimum (the lowest point on the curve), you would need to make steady progress towards the minimum and also reduce the size of the steps that you take on the way, so as to not overshoot the minimum (left figure). If you slow down too fast, you never quite reach the minimum (middle figure). If you get it just right, you make it to the global minimum (right figure). The size of those jumps is controlled by the learning rate, which is multiplied with the gradient in each iteration of back-propagation. For this theoretical guarantee, the theory requires the loss curve to be convex - meaning it has a shape similar to those in the figures. Here is a recent paper that explores this in more detail. The title says it all: Gradient Descent Finds Global Minima of Deep Neural Networks There are methods to increase your chances of success, usually involving using a dynamic learning rate, which is adjusted to the rate at which the model learns, i.e. how well it improves on a validation data set whilst learning from the training set. As the figure on the right above shows, it is desirable to reduce the learning rate as the loss curve flattens out. In reality, the "loss surfaces" on which we are searching are not 2-dimensional, but have much higher dimensionality, e.g. 200. Things such as the random initialisation of your model could make the difference as to which minimum you finally land in. This is perhaps why using ensemble methods (groups of models) generally performs better than a single model. It explores the loss landscape more thoroughly and increases the odds of finding better minima. Local Minima If a model seems to converge (verified by e.g. the loss on the validation set no longer improving), but the results on the test set appear to be far from optimal or from what should be possible, then it could be the case that the algorithm has ended up in a local minimum, unable to escape. You can imagine that the updates to the parameters became so small that it is no longer "active" or energetic enough to jump out of the small minimum. Common approaches to get around this situation (or to prevent it) include: using an update rule for weights that carries some momentum (for example the Adam optimizer) using simulated annealing, which is a fancy way of saying that we perform larger jumps to different areas on the loss curve, exploring more of it and not limiting ourselves to local minima. Here is a small graphic (from Wikipedia) that shows it in action: Here is a superb blog post about these optimisers and simulated annealing as another way to "jump" out of local minima.
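To illustrate the "steady progress plus shrinking steps" idea, here is a toy sketch of gradient descent with a decaying learning rate on a convex one-dimensional loss; the function and constants are made up purely for illustration:

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3); the global minimum is at x = 3
def grad(x):
    return 2 * (x - 3)

x = 10.0          # starting point
lr = 0.3          # initial learning rate
for step in range(50):
    x -= lr * grad(x)   # move against the gradient
    lr *= 0.95          # shrink the step size so we don't overshoot near the minimum
print(x)          # ends up very close to 3

For maximization, you would simply flip the sign (gradient ascent, x += lr * grad(x)); in machine learning this shows up, for example, when maximizing a log-likelihood instead of minimizing a loss.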
H: Which model to choose based on learning curve I trained my model using different regression techniques, and I'm not sure which model to choose based on the learning curves. 1) Should I choose Lasso, since train and CV converge at the end? 2) Should I choose Gradient Boosting, since train and CV converge in the middle and it achieves a lower RMSE score than Lasso? AI: You should go for LightGBM, which has the lowest training and cross-validation RMSE. By the way, "score" is a bad name for RMSE because you would expect a higher score to mean that your model is better, but here the opposite is true.
H: How do I use a Keras NN to classify data after training? I have defined, trained and saved my Keras NN. Now that that is complete, how do I use it to output classifications for non-training data?

import tensorflow as tf
import numpy as np
from tensorflow.keras import layers
from syslog import syslog_pred

model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(128, activation='relu'))
# Add another:
model.add(layers.Dense(128, activation='relu'))
# Add a softmax layer with 8 output units:
model.add(layers.Dense(8, activation='softmax'))
model.compile(optimizer=tf.train.AdamOptimizer(0.001), loss='categorical_crossentropy', metrics=['accuracy'])
model.load_weights('.my_model')
x = np.array([arr[:-1] for arr in syslog_pred], dtype=np.float32)
dataset = tf.data.Dataset.from_tensor_slices(x)
answer = model.predict(dataset, steps=30)
print(answer)

The code at the end isn't what it should be, but I'm a little lost. Any help would be appreciated! AI: Once you have a trained model, you can pass new samples to it by using the predict method of the model, which will get you the probabilities of all classes. You then pick the class with the highest probability for each sample as the predicted class:

y_prob = model.predict(new_data)     # Get class probability vector for each sample
y_class = y_prob.argmax(axis=-1)     # Set prediction to class with highest probability

If you are using a Sequential() model, you can also use the predict_classes method, which will get you the same answer:

y_class = model.predict_classes(new_data)
H: LSTM with target outside the time series Say I want to predict the final size of a flower depending on the rain and temperature over 200 days. The flowers are in different towns, so each flower has its own conditions of rain and temperature. I have: 500 flowers. For each flower: 1 time series with 200 points corresponding to the rain volume on each day, and 1 time series with 200 points corresponding to the temperature on each day. Known target: the final length of each flower (500 values). What should the input dimensions of my LSTM network be? AI: What you're looking for is covered in the Keras LSTM examples:

from keras.models import Sequential
from keras.layers import LSTM, Dense
import numpy as np

data_dim = 16
timesteps = 8
num_classes = 10

# expected input data shape: (batch_size, timesteps, data_dim)
model = Sequential()
model.add(LSTM(32, return_sequences=True, input_shape=(timesteps, data_dim)))  # returns a sequence of vectors of dimension 32
model.add(LSTM(32, return_sequences=True))  # returns a sequence of vectors of dimension 32
model.add(LSTM(32))  # return a single vector of dimension 32
model.add(Dense(10, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

# Generate dummy training data
x_train = np.random.random((1000, timesteps, data_dim))
y_train = np.random.random((1000, num_classes))

# Generate dummy validation data
x_val = np.random.random((100, timesteps, data_dim))
y_val = np.random.random((100, num_classes))

model.fit(x_train, y_train, batch_size=64, epochs=5, validation_data=(x_val, y_val))

Source: https://keras.io/getting-started/sequential-model-guide/#examples In your case, timesteps = 200, data_dim = 2. It sounds like you have a continuous outcome and don't have classes, so you probably want to switch out the last layer for a Dense(1) with a linear activation or something, and switch out your loss and metrics appropriately (probably mean_squared_error and mae or something). Also please note the architecture is purely an example; you may not need stacked LSTMs (or even LSTMs) for your problem, you may want dropout, you may want more dense layers, etc. This is just an example to get you started.
H: Determining which entries in input vector have the greatest effect on output vector I'm looking for a solution in Python. I'm doing a project where I've built a densely packed neural network which takes a 48-dimensional input vector and the last layer is a 2-dimensional linear output. I'm looking for an algorithm that can determine how much each scalar in the input vector contributes to the output vector. Specifically, my input vector is 24 consecutive pairs of position and momentum. For example the input would be $(x_1,v_1,x_2,v_2,...,x_{24},v_{24})$ and my output is the next position and momentum pair $(x_{25},v_{25})$. I've hypothesized that the more recent (higher index) pair should impact the output more. Is there any technique that I could use to quantitatively verify this? AI: If I understand correctly, you are trying to understand how much each of the previous pairs affects the next pair ($x_{25}, v_{25}$). Given that you are talking about physical properties (position and momentum), I think it will always be the case that the most recent timesteps are most relevant, but you can verify this statistically. Terminology: You have a time-series dataset with 24 "lags" and want to predict ahead with a horizon of 1. This equates to an endogenous auto-regressive model - a model that uses past values of itself to predict the future. (Exogenous models could, in your case, also include other variables, such as air humidity, viscosity of the surrounding medium, wind, etc.) Partial Autocorrelation Given this context, you can try running a Partial Autocorrelation Function on your data. It will tell you how well correlated each point is, pair-wise, with each of the preceding 24 lags. Doing so in Python, using the statsmodels package, will give you a nice plot like this: Each vertical line tells you how correlated that particular lag is with the next value in the series ($x_{25}, v_{25}$). Have a look at this thread on Cross Validated for more detail and some code snippets. Models There are other iterative model fitting techniques that can give you a final model which can be interpreted and allows you to see which input variables (i.e. which of your 24 past data points) were most important in fitting the model. You can have a look at component-wise gradient boosting. Here is a tutorial with some theory and examples. It is only available in R as far as I know, but it isn't very difficult to get working, using the example code in that tutorial. It essentially only updates the coefficient (seen as "importance") of a single lag at a time. So after fully training the model, each coefficient's magnitude encodes its relevance in predicting the next value - that is exactly what you are searching for.
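A minimal sketch of the partial autocorrelation plot with statsmodels; the random-walk series below is only a stand-in for one of your measured channels (e.g. all positions $x_t$ in temporal order):

import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_pacf

# Stand-in series: replace with your own position (or momentum) values in order
series = np.cumsum(np.random.randn(1000))

plot_pacf(series, lags=24)   # one bar per lag, up to the 24 pairs you feed the network
plt.show()

If the bars shrink quickly with increasing lag, that supports the hypothesis that the most recent pairs carry most of the predictive information.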
H: Anomaly Detection System I need a sanity check. I want to create an anomaly detection system. The logic I am planning to use is the following: Find anomalies in the past using the Seasonal Hybrid Extreme Studentized Deviate test. Binarise the anomalies (1 for anomalies and 0 for the trends). Run several algorithms (autoencoders, SVM, logistic regression, naive Bayes, lasso regression, etc.) with variables that are correlated, then validate the models and use them. Does the binarisation process make sense? AI: Yes, your logic and what you are thinking is excellent. There is only one flaw in your thinking: the variables you run the model with do not necessarily have to be "correlated" in a linear sense of the word. Just don't discard any variable, because any of them could explain your binary output without having a linear relationship with it. It is a common solution to binarise an output to detect anomalies, but you will lose the ability to predict "how much" of an outlier an outlier is - make sure you don't need this information afterwards.
H: Interpretation of Keras training and improving accuracy/consistency I have finished my first NN (pretty exciting) and have started tweaking in hopes of improving results. Epoch 1/10 30/30 [==============================] - 0s 9ms/step - loss: 69.2138 - acc: 0.2937 - val_loss: 62.3838 - val_acc: 0.6250 Epoch 2/10 30/30 [==============================] - 0s 1ms/step - loss: 66.8716 - acc: 0.3167 - val_loss: 64.4628 - val_acc: 0.1875 Epoch 3/10 30/30 [==============================] - 0s 1ms/step - loss: 65.6411 - acc: 0.2750 - val_loss: 96.1742 - val_acc: 0.1250 Epoch 4/10 30/30 [==============================] - 0s 1ms/step - loss: 70.4931 - acc: 0.2875 - val_loss: 91.4954 - val_acc: 0.2031 Epoch 5/10 30/30 [==============================] - 0s 1ms/step - loss: 68.8988 - acc: 0.1813 - val_loss: 75.3798 - val_acc: 0.0000e+00 Epoch 6/10 30/30 [==============================] - 0s 1ms/step - loss: 69.1241 - acc: 0.0000e+00 - val_loss: 77.7191 - val_acc: 0.0000e+00 Epoch 7/10 30/30 [==============================] - 0s 1ms/step - loss: 64.7226 - acc: 0.3479 - val_loss: 74.3400 - val_acc: 0.5156 Epoch 8/10 30/30 [==============================] - 0s 1ms/step - loss: 68.5523 - acc: 0.1719 - val_loss: 69.1414 - val_acc: 1.0000 Epoch 9/10 30/30 [==============================] - 0s 1ms/step - loss: 66.1609 - acc: 0.7917 - val_loss: 70.9609 - val_acc: 0.8438 Epoch 10/10 30/30 [==============================] - 0s 1ms/step - loss: 68.6736 - acc: 0.7552 - val_loss: 68.3616 - val_acc: 0.7344 Test-Accuracy: 0.4234375 What confuses me is how scattered and inconsistent my val_acc values are. Is this due to bad data or bad modeling? Here is a snipit of my training data (It is syslog that i have parsed into key values) syslog_data = [ [0.110002,0.4,0.2,0,0,0,5], [0.110002,0.4,0.2,0,0,0,5], [0.110002,0.4,0.1,0,0,0,5], [0.110002,0.4,0.2,0,0,0,5], [0.302014,0,0,0,0.0,0,1], [0.302014,0,0,0,0.0,0,1], [0.302014,0,0,0,0.0,0,1], [0.302014,0,0,0,0.0,0,1], [0.302014,0,0,0,0.0,0,1], [0.302014,0,0,0,0.0,0,1], [0.302014,0,0,0,0.0,0,1], [0.302014,0,0,0,0.0,0,1], [0.419002,0.2,0.1,0,0,0,6], [0.419002,0.2,0.1,0,0,0,6], [0.419002,0.2,0.1,0,0,0,6], [0.419002,0.2,0.1,0,0,0,6], [0.110002,0.4,0,0,0,0,5], [0.110002,0.4,0,0,0,0,5], [0.110002,0.4,0.1,0,0,0,5], [0.110002,0.4,0.2,0,0,0,5], [0.305013,0.5,0,0,0,0,4], [0.305013,0.5,0,0,0,0,4], [0.305013,0.5,0,0,0,0,4], [0.305013,0.5,0,0,0,0,4], [0.710003,0.1,0.1,0,0,0,8], [0.710003,0.1,0.1,0,0,0,8], [0.710003,0.1,0.1,0,0,0,8], [0.710003,0.1,0.1,0,0,0,8], [0.302014,0,0,0,0.03,0.3,2], [0.302014,0,0,0,0.03,0.3,2], [0.302014,0,0,0,0.03,0.3,2], [0.302014,0,0,0,0.03,0.3,2], And here is the code behind my NN import tensorflow as tf import numpy as np from tensorflow.keras import layers from syslog import syslog_data, syslog_eval, syslog_pred print(tf.VERSION) print(tf.keras.__version__) x = np.array([arr[:-1] for arr in syslog_data], dtype=np.float32) y = np.array([arr[-1:] for arr in syslog_data], dtype=np.float32) x_ev = np.array([arr[:-1] for arr in syslog_eval], dtype=np.float32) y_ev = np.array([arr[-1:] for arr in syslog_eval], dtype=np.float32) model = tf.keras.Sequential() # Adds a densely-connected layer with x units to the model: model.add(layers.Dense(10, activation='relu', input_shape=(6,))) # Add another: model.add(layers.Dense(32, activation='relu')) # Add another: model.add(layers.Dense(32, activation='relu')) # Add another: model.add(layers.Dense(10, activation='relu')) # Add a softmax layer with 8 output units: model.add(layers.Dense(8, activation='softmax')) dataset = 
tf.data.Dataset.from_tensor_slices((x, y)) dataset = dataset.batch(32).repeat() val_dataset = tf.data.Dataset.from_tensor_slices((x_ev, y_ev)) val_dataset = val_dataset.batch(32).repeat() model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) results = model.fit(dataset, epochs=10, steps_per_epoch=30, validation_data=val_dataset, validation_steps=2) x = np.array([arr[:-1] for arr in syslog_pred], dtype=np.float32) dataset = tf.data.Dataset.from_tensor_slices(x) y = model.predict(x) print(y) print("Test-Accuracy:", np.mean(results.history["val_acc"])) EDIT: I have expanded my batch size to 2000 and get the following results: Epoch 1/10 30/30 [==============================] - 1s 18ms/step - loss: 67.8644 - acc: 0.1418 - val_loss: 62.3833 - val_acc: 0.0000e+00 Epoch 2/10 30/30 [==============================] - 0s 7ms/step - loss: 67.8003 - acc: 0.0894 - val_loss: 64.4627 - val_acc: 0.0000e+00 Epoch 3/10 30/30 [==============================] - 0s 7ms/step - loss: 67.8639 - acc: 0.2535 - val_loss: 96.1742 - val_acc: 1.0000 Epoch 4/10 30/30 [==============================] - 0s 7ms/step - loss: 67.8003 - acc: 0.4979 - val_loss: 91.4954 - val_acc: 1.0000 Epoch 5/10 30/30 [==============================] - 0s 6ms/step - loss: 67.8639 - acc: 0.1995 - val_loss: 75.3798 - val_acc: 0.0000e+00 Epoch 6/10 30/30 [==============================] - 0s 7ms/step - loss: 67.8003 - acc: 0.4316 - val_loss: 77.7191 - val_acc: 0.0000e+00 Epoch 7/10 30/30 [==============================] - 0s 7ms/step - loss: 67.8639 - acc: 0.4328 - val_loss: 74.3400 - val_acc: 0.0000e+00 Epoch 8/10 30/30 [==============================] - 0s 7ms/step - loss: 67.8002 - acc: 0.7325 - val_loss: 69.1414 - val_acc: 0.0000e+00 Epoch 9/10 30/30 [==============================] - 0s 7ms/step - loss: 67.8639 - acc: 0.8989 - val_loss: 70.9609 - val_acc: 1.0000 Epoch 10/10 30/30 [==============================] - 0s 7ms/step - loss: 67.8002 - acc: 0.9663 - val_loss: 68.3616 - val_acc: 1.0000 [[0.12500003 0.12500003 0.12500001 0.12500003 0.125 0.12499999 0.12499997 0.12499999]] Test-Accuracy: 0.4 AI: Fixed it. It was a problem with my labels. Epoch 1/10 100/100 [==============================] - 7s 66ms/step - loss: 2.0792 - acc: 0.2209 - val_loss: 2.1252 - val_acc: 0.1250 Epoch 2/10 100/100 [==============================] - 6s 63ms/step - loss: 1.9951 - acc: 0.2380 - val_loss: 1.7550 - val_acc: 0.3750 Epoch 3/10 100/100 [==============================] - 6s 64ms/step - loss: 1.1870 - acc: 0.6727 - val_loss: 0.6140 - val_acc: 0.8750 Epoch 4/10 100/100 [==============================] - 6s 64ms/step - loss: 0.4538 - acc: 0.9728 - val_loss: 0.3049 - val_acc: 1.0000 Epoch 5/10 100/100 [==============================] - 5s 55ms/step - loss: 0.2086 - acc: 1.0000 - val_loss: 0.1202 - val_acc: 1.0000 Epoch 6/10 100/100 [==============================] - 6s 56ms/step - loss: 0.1060 - acc: 1.0000 - val_loss: 0.0746 - val_acc: 1.0000 Epoch 7/10 100/100 [==============================] - 7s 69ms/step - loss: 0.0633 - acc: 1.0000 - val_loss: 0.0493 - val_acc: 1.0000 Epoch 8/10 100/100 [==============================] - 7s 65ms/step - loss: 0.0423 - acc: 1.0000 - val_loss: 0.0381 - val_acc: 1.0000 Epoch 9/10 100/100 [==============================] - 6s 59ms/step - loss: 0.0305 - acc: 1.0000 - val_loss: 0.0218 - val_acc: 1.0000 Epoch 10/10 100/100 [==============================] - 6s 56ms/step - loss: 0.0232 - acc: 1.0000 - val_loss: 0.0209 - val_acc: 1.0000
H: How to deal with new feature values in my classification model? Let's say I have a categorical feature with a set of values equal to ['Single','Married','Divorced','Unknown']. Okay, so with the help of the other features, I create my model, I test it, all is fine and I deploy it. Now, some user using my model as a web service types in 'Widow' for that feature value. What's the proper way of dealing with it? Is there a way to ignore it and predict based on the rest of the features only? Or do I have to handle unknown feature values by assigning Unknown to them too? Or, worst case scenario, I handle the exception and return a message saying: hey man (or woman), use one of the available values for that feature!? Note: I'm trying to avoid assigning Unknown to it, because 'Unknown' has its own weight. (My real question is: is there a way to set a feature's weight to 0 when doing inference?) AI: Actually, the solution I'm going with, which is the one that made the most sense to me, was to temporarily assign 'Unknown' to not-yet-known feature values. I cannot assign zeros to all my one-hot columns (basically a NaN, and it throws an error). My model is supposed to be updated after a certain amount of time anyway; this is feasible because my training data is small compared to datasets with millions of records and hundreds of features. So yeah, until my features are updated by retraining the model on a more comprehensive dataset, I'll just assign 'Unknown' to unknown feature values.
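A rough sketch of that mapping step at inference time; the column name and helper are made up for illustration, and the key detail is encoding against the training categories so the one-hot column order stays fixed (scikit-learn's OneHotEncoder(handle_unknown='ignore') is an alternative if an all-zeros row were acceptable for your model):

import pandas as pd

KNOWN = {'Single', 'Married', 'Divorced', 'Unknown'}

def clean_marital_status(value):
    # Map anything the model has never seen (e.g. 'Widow') to the catch-all category
    return value if value in KNOWN else 'Unknown'

incoming = pd.DataFrame({'marital_status': ['Married', 'Widow', 'Single']})
incoming['marital_status'] = incoming['marital_status'].apply(clean_marital_status)

# One-hot encode against the *training* categories so the columns match the fitted model
incoming['marital_status'] = pd.Categorical(incoming['marital_status'], categories=sorted(KNOWN))
features = pd.get_dummies(incoming['marital_status'])
print(features)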