H: Summing three lexicon-based methods for sentiment analysis? I'm doing sentiment analysis using a lexicon-based approach and I have a bunch of news headlines that need to be categorized as negative, positive or neutral, or placed on a scale ranging from -1 (very negative sentiment) to +1 (very positive sentiment). I'm considering using three different methods: TextBlob, VADER (Valence Aware Dictionary and Sentiment Reasoner) and LIWC2015, as I have access to it. This would give three predicted sentiments for each headline. I have two questions: What are the pros and cons of this technique? And, once the 3 analyses have been run, how can I merge them into a single metric? Is standardizing/normalizing the 3 metrics and averaging them a good solution? AI: What you're proposing is a very simple form of ensemble learning. You need at least a sample of labelled data in order to evaluate any method. Using this labelled data you can: evaluate each of the three methods on its own; evaluate your idea of averaging the 3 methods' predictions; and, if you have enough labelled data, even train a model which combines their predictions optimally (this would be full-fledged stacked generalization).
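A minimal sketch of the averaging and stacking ideas, assuming you already have the three scores per headline rescaled to [-1, 1] (the scores and labels below are made up for illustration):
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-headline scores from TextBlob, VADER and LIWC, already rescaled to [-1, 1]
textblob_scores = np.array([0.8, -0.4, 0.1, -0.9])
vader_scores = np.array([0.7, -0.2, 0.0, -0.8])
liwc_scores = np.array([0.6, -0.5, 0.2, -0.7])
labels = np.array([1, 0, 1, 0])  # labelled sample: 1 = positive, 0 = negative

# Option 1: simple average of the normalized scores
avg_score = (textblob_scores + vader_scores + liwc_scores) / 3

# Option 2: stacked generalization - learn how to weight the three scores from labelled data
X = np.column_stack([textblob_scores, vader_scores, liwc_scores])
stacker = LogisticRegression().fit(X, labels)  # with enough labels, evaluate this with cross-validation
print(avg_score, stacker.predict_proba(X)[:, 1])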
H: Finding the mean for each of the channels (RGB) across an array of images How can I find the mean for each of the channels (RGB) across an array of images? For example, train_dataset[0]['image'].shape is (600, 800, 3) and len(train_dataset) is 720, meaning it includes 720 images of dimension 600x800 with 3 channels. train_dataset[0]['image'] is an ndarray. I am looking to end up with 3 numbers, each representing the mean of one channel across all these 720 images. I have a very naive solution, but I wonder if there's a better one. I also did it another way and got almost the same answer. AI: Could probably be more elegant, but here's an idea: stack the images into a single array imgs of shape (720, 600, 800, 3) and take the mean over every axis except the channel axis: channel_means = np.mean(imgs, axis=(0, 1, 2))
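A sketch of that idea end to end, assuming train_dataset is a sequence of dicts whose 'image' entries are same-sized ndarrays as in the question:
import numpy as np

# Stack all images into one array of shape (720, 600, 800, 3)
imgs = np.stack([sample['image'] for sample in train_dataset])

# Mean over the image, height and width axes -> one value per channel
channel_means = imgs.mean(axis=(0, 1, 2))  # shape (3,)

# Memory-friendlier alternative: average the per-image channel means (valid here because all images share the same size)
channel_means_alt = np.mean([sample['image'].mean(axis=(0, 1)) for sample in train_dataset], axis=0)
print(channel_means, channel_means_alt)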
H: Why does R-squared always keep increasing? I have read in multiple articles that R-squared always increases with the number of features, even though a feature may not be of any significance. The formula for R-squared is $$1 - \frac{\sum(y-\hat{y})^2}{\sum(y-\bar{y})^2}$$ If the denominator is constant, R-squared depends only on the numerator, so basically on $\hat{y}$. $$\hat{y} = a + b_1x_1 + b_2x_2 + \ldots$$ Now if I add a new feature which is really not important, shouldn't the beta coefficient of that feature be zero? And if it is really zero, how will it impact R-squared? AI: What you are trying to avoid is including features that, while they technically improve results on your sample data, don't do a good job of generalizing to other hold-out sets. When you say "if I have a new feature which is really not important, shouldn't the beta coefficient of that feature be zero" - you are correct that in this case it won't have an effect on R-squared. In the event that you include an unimportant feature and the coefficient is non-zero (meaning it appears important on the sample data due to some random noise but not a true pattern in the underlying process), then R-squared will increase and it will appear that you have a better model - but in fact you are leaning towards overfitting and you have a less robust model. The point your article is making is a limitation of the R-squared evaluation criterion: if you add more degrees of freedom (input variables in this case), your score is likely only to go up (when maybe it shouldn't). The F-test, on the other hand, recognizes this limitation of R-squared and penalizes the score through a degrees-of-freedom term. So if you only see a marginal gain in R-squared by adding a new term, it will be penalized more heavily by the simple addition of the term (the degrees of freedom going up). See the following StatQuest for a good explanation (it gets to the F-test towards the end of the video): https://www.youtube.com/watch?v=nk2CQITm_eo&ab_channel=StatQuestwithJoshStarmer
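A small illustration of the effect with made-up data: adding a pure-noise column can only keep the training R-squared the same or push it up, because least squares can always set its coefficient to zero or exploit chance correlations.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=(n, 1))
y = 3 * x1[:, 0] + rng.normal(size=n)  # y depends only on x1
noise = rng.normal(size=(n, 1))        # irrelevant feature

r2_one = LinearRegression().fit(x1, y).score(x1, y)
X_two = np.hstack([x1, noise])
r2_two = LinearRegression().fit(X_two, y).score(X_two, y)
print(r2_one, r2_two)  # r2_two >= r2_one on the training data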
H: How to choose variables to perform Exploratory Data Analysis I have a dataset with about 110 variables. I have a target variable and I want to do an exploratory data analysis to find out what factors affect this target variable. In such scenarios, when we have a lot of variables, how do we choose which variables to analyze? Should we consider all the variables with respect to the target variable, or do we choose variables based on domain knowledge? I am a newbie to data science and need some guidance on how to proceed with the analysis when we have a large number of variables in our data. AI: There are a couple of options: (1) Iterate over all input features and calculate each one's correlation to the target variable. Gather all these numbers and sort them by absolute value. Take the top 10 or 20 as a chunk to start investigating with more attention (absolute value because you care about strong negative correlations just as much as strong positive correlations). (2) Train a simple decision tree on the inputs mapping to the output. Once the decision tree is trained, take a look at the feature importances it uncovered and begin your investigation there. You can repeat this process with a linear regression too. (3) Plot all 1-to-1 plots of the input variables against the target and manually look through them (this takes more time, as you need to look through as many plots as you have input variables, but it will give you a good understanding of your data once you go through it all).
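A sketch of the first two options, assuming a pandas DataFrame df whose columns are all numeric (encode categorical ones first) and whose target column is named 'target' (adjust the names to your data):
import pandas as pd
from sklearn.tree import DecisionTreeRegressor  # use DecisionTreeClassifier for a categorical target

# 1) Rank features by absolute correlation with the target
corr = df.corr()['target'].drop('target')
print(corr.abs().sort_values(ascending=False).head(20))

# 2) Feature importances from a simple decision tree
X = df.drop(columns='target')
y = df['target']
tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X, y)
importances = pd.Series(tree.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(20))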
H: Using pretrained LSTM and BERT models in a CPU-only environment - how to speed up predictions? I have trained two text classification models using a GPU on Azure. The models are the following: BERT (ktrain) and LSTM Word2Vec (tensorflow). Examples of the code can be found here: nlp. I saved the models into files (.h5) for later use. The files are big, e.g. 27,613 KB for the LSTM and 1.2 GB for BERT. I loaded the models on a computer where only a CPU is available. They both work fine, but the model.predict(text) function is super slow at predicting the class of the text, e.g. on average 1 tweet-sized message per second. Adding a GPU to the computer is not an option. I wonder if there is another way to make it run faster, e.g. train the models in a different way (without compromising accuracy) or save the model in a different file format? AI: There are a couple of options: (1) Optimize tensorflow for your specific CPU. Sometimes the official versions of tensorflow are not compiled with support for some instruction sets (e.g. SSE4.1, SSE4.2, AVX, AVX2, FMA). Usually, there is a tensorflow runtime warning message stating so. This prevents some computations from taking place in parallel. You can either download a version that is optimized for your CPU (e.g. from Intel) or compile tensorflow yourself. (2) Prune the model. BERT is a multi-head attention model, and it is possible to remove some of its attention heads while retaining most of the quality. Here's a blog post that shows how to do it for tensorflow.
H: For a linear model without intercept, why does the redundant term in one-hot encoding function as intercept? In this question Elias Strehle pointed out that if we keep all the levels during one-hot encoding on a linear model without an intercept, the redundant feature will function as an intercept. Why is this the case? Isn't it the case that in a linear model, the intercept term ($x_0$, not $\beta_0$) always has a value of 1? Suppose the feature to encode is gender; then some of the rows of the redundant term will be 0 while others will be 1s. AI: Let's have a look at how dummies work: R Example: # Some data df = data.frame(y=c(30,32,28,10,11,9),gender=c(1,1,1,0,0,0), gender2=c(0,0,0,1,1,1)) # 1) Regression with constant and dummy summary(lm(y~gender,data=df)) # 2) Regression without constant and dummy summary(lm(y~gender-1,data=df)) # 3) Regression without constant and two dummies summary(lm(y~gender+gender2-1,data=df)) Results: Case 1: Since dummies generally work as "contrasts" to some base category (1 vs. 0 / "on" vs. "off") and since the base category has a mean of 10, the intercept term equals 10 and for gender = 1, the difference to the base category is identified (here 20) because the mean of category gender = 1 is 30. (Remember that a regression with only an intercept or with dummies simply gives the arithmetic mean). Estimate Std. Error t value Pr(>|t|) (Intercept) 10.0000 0.9129 10.95 0.000394 *** gender 20.0000 1.2910 15.49 0.000101 *** Case 2: Without a constant only gender = 1 is considered, since for gender = 0 we have $0 + 0 * \beta$ so that gender = 0 is dropped. The coefficient now is the mean of gender = 1. Coefficients: Estimate Std. Error t value Pr(>|t|) gender 30.000 4.546 6.599 0.0012 ** Case 3: Including a dummy for both groups (denote gender = 0 from above as gender2) without adding an intercept now gives the mean for each group directly. Note that the interpretation of the coefficients is different here compared to case 1. Coefficients: Estimate Std. Error t value Pr(>|t|) gender 30.0000 0.9129 32.86 5.11e-06 *** gender2 10.0000 0.9129 10.95 0.000394 *** The interesting bit is when you add some additional $x$: Some new data, now including $x$: df = data.frame(y=c(30,32,28,10,11,9),gender=c(1,1,1,0,0,0), gender2=c(0,0,0,1,1,1), x=c(20,22,25,28,30,29)) Regression with both dummies, no intercept: summary(lm(y~gender2+gender-1+x,data=df)) Coefficients: Estimate Std. Error t value Pr(>|t|) gender2 19.8864 12.6285 1.575 0.2134 gender 37.6136 9.7446 3.860 0.0307 * x -0.3409 0.4342 -0.785 0.4897 is the same as... Regression with one dummy and intercept (apart from the dummy interpretation discussed above): summary(lm(y~gender+x,data=df)) Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 19.8864 12.6285 1.575 0.2134 gender 17.7273 3.1973 5.544 0.0116 * x -0.3409 0.4342 -0.785 0.4897 ... so the marginal effect of $x$ is the same. This is in contrast to... Regression with one dummy, no intercept: summary(lm(y~gender+x-1,data=df)) Coefficients: Estimate Std. Error t value Pr(>|t|) gender 22.38736 1.41677 15.802 9.37e-05 *** x 0.34086 0.03864 8.822 0.000911 *** Here the marginal effect of $x$ is entirely different. Why is this? When you fit some new data, you will see that the fitted line for $x$ goes through (0,0), since there is no intercept in the model.
newdata = data.frame(gender=c(0,0,0,0,0,0), x=c(-1,0,1,2,3,4)) predict(lm(y~gender+x-1,data=df), newdata=newdata) 1 2 3 4 5 6 -0.3408643 0.0000000 0.3408643 0.6817286 1.0225929 1.3634572 This happens because there are cases in which you have $0 + \beta x$ (which is 0 for $x=0$). Or as $x$-matrix (the first column is the intercept column, set to 0 for illustration): \begin{pmatrix} 0 & 1 & x_1\\ 0 & 1 & x_2\\ 0 & 0 & x_3\\ 0 & 0 & x_4 \end{pmatrix} However, when you have the two dummies included, you have: \begin{pmatrix} 0 & 1 & x_1\\ 0 & 1 & x_2\\ 1 & 0 & x_3\\ 1 & 0 & x_4 \end{pmatrix} So there is no case in which you force $\beta x$ to be zero. See this post for further discussion on regression without a constant term.
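For readers working in Python, here is a hedged sketch of the same comparison with sklearn, using made-up data in the spirit of the R example: two dummy columns without an intercept recover the group means directly, and adding an $x$ column reproduces the same marginal effect as the dummy-plus-intercept model.
import numpy as np
from sklearn.linear_model import LinearRegression

y = np.array([30, 32, 28, 10, 11, 9])
gender = np.array([1, 1, 1, 0, 0, 0])
gender2 = 1 - gender
x = np.array([20, 22, 25, 28, 30, 29])

# Both dummies, no intercept: coefficients are the two group means
m1 = LinearRegression(fit_intercept=False).fit(np.column_stack([gender, gender2]), y)
print(m1.coef_)  # approximately [30, 10]

# One dummy plus intercept vs. two dummies without intercept, now with x included
m2 = LinearRegression().fit(np.column_stack([gender, x]), y)
m3 = LinearRegression(fit_intercept=False).fit(np.column_stack([gender, gender2, x]), y)
print(m2.coef_[1], m3.coef_[2])  # same marginal effect of x (about -0.34)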
H: Backpropagation and gradient descent I just want to clear one doubt - we use gradient descent to optimize the weights and biases of the neural network, and we use backpropagation for the step that requires calculating partial derivatives of the loss function, or am I misinterpreting something? AI: Yes, you are correct. Gradient descent (or one of its various flavors) is the mechanism by which you find a local minimum of your loss space using some learning rate. Backpropagation calculates the gradient of the error function with respect to the NN's weights and biases.
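A tiny illustration with a single linear neuron and squared error (a sketch, not a full network): the backpropagation step is the gradient computation, and the gradient descent step is the weight update.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

w = np.zeros(3)
lr = 0.1
for _ in range(200):
    y_pred = X @ w                          # forward pass
    grad = 2 * X.T @ (y_pred - y) / len(y)  # "backpropagation": dLoss/dw for the MSE loss
    w -= lr * grad                          # gradient descent update
print(w)  # close to true_w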
H: Tree-based algorithms and ordinal features For tree-based methods (e.g., decision trees, random forest, gradient boosting, etc.), does the interval used when converting an ordinal feature to numeric values matter? (I can see why it matters for a linear model, but I am not clear about tree-based methods.) For example: is there a difference between converting an ordinal feature from ['Low', 'Medium', 'High'] to [1, 2, 3] compared to converting it to [1, 99, 876]? AI: Edit: I misread your post: the answer is no, it shouldn't matter which interval is used. So long as the order is not changed, the splits your tree finds on this data will effectively be the same. See my original answer if you want more context: scikit-learn specifically (I cannot speak for other implementations of tree-based models) does not accept categorical data as input, meaning the user must convert it to real numbers using one-hot encoding, numerical encoding, etc. When fitting any given tree, as it iterates through possible splits it splits your column using an inequality (i.e. is this column's value bigger or smaller than this threshold value). Due to this methodology, it does matter in which order your data is encoded. If you one-hot encode your data, each column will be looked at independently (a likely split is "is this bigger or smaller than 0.5", as that's effectively the only split to be made when the options are either 0 or 1). If you numerically encode your data it will just pick the best threshold that maximizes your split criterion function (usually Gini impurity). The bias you add to your system by manually determining the order of the numerical encoding can affect how well the tree is able to split on this column. If you use a logical encoding whereby the target increases in value as the value of your feature increases, your tree will be able to split on this column more effectively and you will likely see better results. For this reason I recommend numerically encoding with a TargetEncoder (https://contrib.scikit-learn.org/category_encoders/targetencoder.html). This way the order of your numerical encoding will make sense for the decision tree.
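A quick check of this claim with made-up data: two monotone encodings of the same ordinal feature give trees with identical predictions, because only the ordering of the thresholds matters.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
levels = rng.integers(0, 3, size=200)   # 0 = Low, 1 = Medium, 2 = High
y = (levels == 2).astype(int)           # target driven by the ordinal feature

enc_a = np.array([1, 2, 3])[levels].reshape(-1, 1)
enc_b = np.array([1, 99, 876])[levels].reshape(-1, 1)

tree_a = DecisionTreeClassifier(random_state=0).fit(enc_a, y)
tree_b = DecisionTreeClassifier(random_state=0).fit(enc_b, y)
print(np.array_equal(tree_a.predict(enc_a), tree_b.predict(enc_b)))  # True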
H: Subsequence pattern matching for time series I have a set of time series data (just like voice sequence data) with the pattern shown in the first figure (theoretical data). The measured data is given as presented in the second figure. What I want to do is localize/find the subsequence pattern shown in the red squares. Is there any algorithm to solve this problem? It looks like a classification/regression problem in machine learning, but I have no idea how to start. AI: The real state of the art here is the Matrix Profile suite, developed by Eamonn Keogh and his team in University of California at Riverside (UCR). Here are some links to get you started: Matrix Profile Foundation homepage The UCR Matrix Profile Page MPA: a novel cross-language API for time series analysis paper (2020) with links to Python, R, and Go implementations Introducing matrixprofile-ts, a Python Library for Detecting Patterns and Anomalies in Massive Datasets Towards a Near Universal Time Series Data Mining Tool: Introducing the Matrix Profile (PhD thesis) 100 Time Series Data Mining Questions (with answers!) (highly recommended high-level view) You'll find links to implementations in the above stuff. Apart from the team at UCR, there is another (and possibly more efficient) implementation of the relevant algorithms in the STUMPY Python package: STUMPY documentation & Github repo STUMPY: A Powerful and Scalable Python Library for Time Series Data Mining paper (2019)
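If you go the STUMPY route, a minimal sketch looks like this (assuming stumpy is installed; theoretical_pattern and measured_series are placeholders for your own 1-D arrays): the distance profile tells you where the query pattern matches the measured series best.
import numpy as np
import stumpy

query = np.asarray(theoretical_pattern, dtype=float)  # the pattern you are looking for
ts = np.asarray(measured_series, dtype=float)         # the measured data

distance_profile = stumpy.mass(query, ts)      # z-normalized distance of the query to every window of ts
best_start = int(np.argmin(distance_profile))  # start index of the closest matching subsequence
print(best_start, distance_profile[best_start])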
H: What is the correct formula for Jaccard coefficient with integer vectors? I understand the Jaccard index is the number of elements in common divided by the total number of distinct elements. But there seems to be some discrepancy or terminology confusion about Jaccard being applied to "binary vectors", meaning a vector with binary attributes (0, 1), or "integer vectors", meaning any vector with integer values (2, 5, 6, 8). Are there two formulas depending on the type of elements in the vector? This answer comments about "binary vectors" that "they can be interpreted as sets of indices with value 1". However, there are examples where the Jaccard coefficient is calculated with integer vectors, so it seems to be valid. Besides, scikit-learn seems to define 3 cases: Binary vectors y_true = np.array([[0, 1, 1], [1, 1, 0]]) y_pred = np.array([[1, 1, 1], [1, 0, 0]]) Multilabel cases (what a multilabel case is, is not defined in the scikit-learn documentation) "Multiclass problems are binarized and treated like the corresponding multilabel problem" y_pred = [0, 2, 1, 2] y_true = [0, 1, 2, 2] An additional test with an R library which uses the equation form TP / (TP + FP + FN) results in undefined behavior: library("ClusterR") pc <- c(0, 1, 2, 5, 6, 8, 9) tc <- c(0, 2, 3, 4, 5, 7, 9) external_validation(pc, tc, method = "jaccard_index") [1] NaN Is the set-based formula only suitable for binary vectors? $$ J(A,B) = {{|A \cap B|}\over{|A \cup B|}} = {{|A \cap B|}\over{|A| + |B| - |A \cap B|}} $$ AI: The Jaccard coefficient (or Jaccard similarity) is defined on two sets $A$ and $B$: $$ J(A,B) = {{|A \cap B|}\over{|A \cup B|}} = {{|A \cap B|}\over{|A| + |B| - |A \cap B|}} $$ There is a single definition for this coefficient, but note that Jaccard is a general similarity measure; it is not specifically designed as an evaluation measure. So assuming one wants to use it as an evaluation measure, they need to decide (design) how. More precisely, confusion may arise from the following questions: Which pair of sets does one want to compare? Which data structure does one use to represent a set? Let's start with the second question: a binary vector is a common and convenient way to represent (encode) a set. Given a set of possible elements (universe) $U=\{x_1, ...,x_n\}$, any subset $s\subseteq U$ can be represented as a binary vector of length $n$ where the $i^{th}$ boolean is 1 if $x_i\in s$, 0 otherwise. It follows from this representation that the Jaccard coefficient is the number of indexes where both vectors contain 1 divided by the number of indexes where at least one of the two vectors contains 1. Example from scikit documentation: >>> y_true = np.array([[0, 1, 1], ... [1, 1, 0]]) >>> y_pred = np.array([[1, 1, 1], ... [1, 0, 0]]) >>> jaccard_score(y_true[0], y_pred[0]) 0.6666... The vectors compared are [0, 1, 1] and [1, 1, 1] (meaning for instance sets $\{b,c\}$ and $\{a,b,c\}$), so the result is 2/3. Multi-label classification is when an instance can be assigned any number of classes. In the linked scikit example they apply the Jaccard coefficient to the two sets of classes for every instance (optionally aggregating the results across instances). This is an example of a design choice as mentioned in my first question above. Note: in the case of the multiclass problem, I don't even understand how they obtain this result for the third value so I can't comment about it. The formula $TP / (TP + FP + FN)$ is another example of a particular choice of sets.
This formula only makes sense for a binary classification task: TP is the intersection of true and predicted positive instances, and $TP+FP+FN$ is the union of all positive cases (whether true or predicted). This is why it doesn't work with integer vectors, since these cannot represent the result of binary classification. Conclusion: one must define which sets they want to compare. The simplest way with integer vectors is to consider them as sets of integers, in which case they could be represented as binary vectors as follows: [0, 1, 2, 5, 6, 8, 9] -> [1,1,1,0,0,1,1,0,1,1] [0, 2, 3, 4, 5, 7, 9] -> [1,0,1,1,1,1,0,1,0,1] But going through the binary representation is not a must; it's only needed if one wants to use a function which takes this format as input. One could just as well define their own function using a set as the data structure, for instance.
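To make the conclusion concrete, here is a small sketch treating the integer vectors from the question as sets, and checking that it agrees with scikit-learn's jaccard_score on the equivalent binary vectors:
from sklearn.metrics import jaccard_score

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

pc = [0, 1, 2, 5, 6, 8, 9]
tc = [0, 2, 3, 4, 5, 7, 9]
print(jaccard(pc, tc))  # 4 / 10 = 0.4

print(jaccard_score([1, 1, 1, 0, 0, 1, 1, 0, 1, 1], [1, 0, 1, 1, 1, 1, 0, 1, 0, 1]))  # also 0.4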
H: ML to find relationship between a number and 50 other characteristic values I have 10K records of football players, their market value and about 50+ characteristic values for each player (i.e. Shot Power, Attacking, Defending, Position, Stamina, etc.) --> 80% are numerical values out of 100, 20% are categorical. My goal is to know which features (or groups of features?) contribute most to player value. I'm assuming it might be certain characteristics, or certain groups of characteristics -- when they are high, they increase player value. Or both. How do I tackle this task? Which ML models (or other approaches) should I explore? Any tips? AI: Perform a simple linear regression. Take your characteristic values as inputs (one-hot encode the categorical ones, or numerically encode them if they're ordinal) and predict the target that is the player value (assuming it's a continuous variable target). Make sure that before training you scale all your input values between 0 and 1. Once you've trained your linear regression, take a look at the coefficients of each variable on the trained model. The absolute value of these coefficients represents how "important" that characteristic is to the target player value. If it's a large negative coefficient, having a lot of that characteristic negatively impacts player value, while if it's large and positive, having a lot positively affects player value. Low absolute values represent characteristics that have very little impact. The reason you want to scale your input values between 0 and 1 is to make sure you can effectively compare the raw values of your coefficients, knowing the inputs are all on the same scale.
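A hedged sketch of that recipe, assuming a DataFrame df with the player characteristics and a numeric column named 'value' (the column names are placeholders):
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LinearRegression

X = pd.get_dummies(df.drop(columns='value'))  # one-hot encode the categorical characteristics
y = df['value']

X_scaled = MinMaxScaler().fit_transform(X)    # put every feature on a 0-1 scale
model = LinearRegression().fit(X_scaled, y)

coefs = pd.Series(model.coef_, index=X.columns).sort_values(key=abs, ascending=False)
print(coefs.head(15))  # largest absolute coefficients = most influential characteristics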
H: Precision, accuracy and recall rate of a confusion matrix Assume you are solving a 4-class problem. Your test set is as follows: • 5 samples from class 1, • 10 samples from class 2, • 5 samples from class 3, • 10 samples from class 4. • Total Samples: 30 The decision made by your classifier is as follows: • 2 samples from class 1 are decided as class 1, 3 samples from class 1 are decided as class 2. • 2 samples from class 2 are decided as class 1, 5 samples from class 2 are decided as class 2, 1 sample from class 2 is decided as class 3, and 2 samples from class 2 are decided as class 4. • 4 samples from class 3 are decided as class 3 and 1 sample from class 3 is decided as class 4. • 2 samples from class 4 are decided as class 1, and 8 samples from class 4 are decided as class 4. Generate a confusion matrix. Using the confusion matrix, calculate accuracy, average precision, and average recall rate. I need help calculating the accuracy, recall rate and precision by hand using this confusion matrix. AI: Accuracy: the sum of the numbers on the diagonal divided by the sum of all numbers in the grid. Recall and precision depend on whether you want to take the micro or macro approach. See this blog post for more details (it gives a very similar example to your case): https://towardsdatascience.com/confusion-matrix-for-your-multi-class-machine-learning-model-ff9aa3bf7826
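A sketch that reproduces the numbers with sklearn, building y_true / y_pred from the counts stated in the question:
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score

# (true class, predicted class, count) taken from the question
counts = [(1, 1, 2), (1, 2, 3),
          (2, 1, 2), (2, 2, 5), (2, 3, 1), (2, 4, 2),
          (3, 3, 4), (3, 4, 1),
          (4, 1, 2), (4, 4, 8)]
y_true = np.concatenate([[t] * n for t, p, n in counts])
y_pred = np.concatenate([[p] * n for t, p, n in counts])

print(confusion_matrix(y_true, y_pred))
print(accuracy_score(y_true, y_pred))                    # diagonal sum / 30 = 19/30
print(precision_score(y_true, y_pred, average='macro'))  # macro-averaged precision
print(recall_score(y_true, y_pred, average='macro'))     # macro-averaged recall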
H: Which ML framework/library makes you do most of the work but still eases the process for you? This may sound like a stupid question, but trust me it's not. I am searching for an ML framework/library that makes you do most of the work yourself, so I can learn rather than depend on the framework/library, but STILL I do not want to code ALL ML algorithms from scratch, as that would be pretty painful. Hope that clears everything up; thanks in advance. AI: Sklearn allows you to build your own pipeline steps (and estimators if you so choose). If you get comfortable with how the Pipeline object framework works, it's infinitely configurable, and it still allows you to rely on pre-built algorithms where you want to. To operate this way you will need to be comfortable with object-oriented programming in Python rather than a script-based approach. Take a look at how to create custom pipeline steps in this helpful blog post: https://towardsdatascience.com/pipelines-custom-transformers-in-scikit-learn-the-step-by-step-guide-with-python-code-4a7d9b068156
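To give a flavour of what building your own pipeline steps looks like, here is a minimal custom transformer sketch (the scaling logic is just an example; you write the maths, sklearn handles the plumbing):
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression

class MyScaler(BaseEstimator, TransformerMixin):
    # A hand-rolled standard scaler
    def fit(self, X, y=None):
        self.mean_ = np.mean(X, axis=0)
        self.std_ = np.std(X, axis=0) + 1e-12
        return self
    def transform(self, X):
        return (X - self.mean_) / self.std_

pipe = Pipeline([('scale', MyScaler()), ('clf', LogisticRegression())])
# pipe.fit(X_train, y_train); pipe.predict(X_test)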
H: Machine learning algorithms for identification and classification of Microorganisms https://www.google.com/search?q=Viruses+images&tbm=isch&ved=2ahUKEwiB9-fsoL3sAhUJyHMBHWRZB-sQ2-cCegQIABAC&oq=Viruses+images&gs_lcp=ChJtb2JpbGUtZ3dzLXdpei1pbWcQA1AAWABgo_sCaABwAHgAgAEAiAEAkgEAmAEAwAEB&sclient=mobile-gws-wiz-img&ei=HL6LX4H5DImQz7sP5LKd2A4&bih=592&biw=360&client=ms-android-lava&prmd=inv https://www.google.com/search?q=bacteria+images&client=ms-android-lava&prmd=inv&sxsrf=ALeKk00Z4XiPIbFDVbfDqZyuAOCSzt6Izw:1602994167858&source=lnms&tbm=isch&sa=X&ved=2ahUKEwj4xs7Por3sAhW-7HMBHav0ChEQ_AUoAXoECCQQAQ&biw=360&bih=592&dpr=2 https://bio.libretexts.org/Bookshelves/Microbiology/Book%3A_Microbiology_(Boundless)/1%3A_Introduction_to_Microbiology/1.2%3A_Microbes_and_the_World/1.2A_Types_of_Microorganisms Can machine learning classification, computer vision, image processing algorithms assist in identification & segregation of viruses & bacteria microorganisms? Input Dataset : Images of all viruses and bacteria in gif, jpg formats. Output : Virus or Bacteria identification with the name, short description. New updates : Modify the existing images of all seven types of microorganisms database by adding proper label name with Adobe Photoshop web designing software or any other image editing software. Examples : COVID-19 virus, Tuberculosis bacteria etc Inputting images for all seven types of microorganisms viz viruses, bacteria, archaea, protozoa, algae, fungi and multicellular animal parasites. https://towardsdatascience.com/wtf-is-image-classification-8e78a8235acb https://www.google.com/search?q=diseases+caused+by+virus&oq=diseases+caused+by+&aqs=chrome.3.69i57j69i59j0i457j0l2.8949j0j7&client=ms-android-lava&sourceid=chrome-mobile&ie=UTF-8 https://www.google.com/search?q=diseases+caused+by+bacteria&oq=diseases+caused+by+bac&aqs=chrome.0.0i67i457j0j69i57j0j0i20i263.6814j0j7&client=ms-android-lava&sourceid=chrome-mobile&ie=UTF-8 https://www.google.com/search?q=diseases+spread+through+archaea&oq=&aqs=chrome.1.69i176j35i39i362l4...4.-1j0j7&client=ms-android-lava&sourceid=chrome-mobile&ie=UTF-8 https://www.google.com/search?client=ms-android-lava&sxsrf=ALeKk01JWGtJY1ucgsaljZXnB-ulqbL04w%3A1603026074551&ei=mjyMX6uHIZD59QOI7o6ICQ&q=diseases+caused+by+protozoa&oq=diseases+caused+by+protozoa&gs_lcp=ChNtb2JpbGUtZ3dzLXdpei1zZXJwEAMyBAgAEEcyBAgAEEcyBAgAEEcyBAgAEEcyBAgAEEcyBAgAEEcyBAgAEEcyBAgAEEdQAFgAYK42aABwAXgAgAEAiAEAkgEAmAEAyAEIwAEB&sclient=mobile-gws-wiz-serp https://www.google.com/search?client=ms-android-lava&sxsrf=ALeKk01eT0ai8hYeXrtRyEM2c2-WvqukPQ%3A1603025640805&ei=6DqMX-XMMMfJrQGc8IiAAw&q=diseases+caused+through+algae&oq=diseases+causedthrough+algaue&gs_lcp=ChNtb2JpbGUtZ3dzLXdpei1zZXJwEAEYADIECCEQCjoECAAQRzoECCMQJzoFCAAQyQM6AggAOgYIABAWEB46CQgAEMkDEBYQHjoFCCEQoAE6CAghEBYQHRAeOgcIIRAKEKABOgQIHhAKUM4tWN6HAWCtjwFoAnABeACAAZMCiAHOJpIBBjAuMTcuOZgBAKABAcgBCMABAQ&sclient=mobile-gws-wiz-serp https://www.google.com/search?q=diseases+spread+through+fungi&oq=diseases+spread+through+&aqs=chrome.2.69i57j35i39j69i59j0i457j0.14735j0j7&client=ms-android-lava&sourceid=chrome-mobile&ie=UTF-8 https://www.google.com/search?client=ms-android-lava&sxsrf=ALeKk03dcQw152ejskUp_AQElmXhqLJ3Xw%3A1603026257174&ei=UT2MX9eOCpnorQHc26jYCg&q=diseases+spread+through+multicellular+animal+parasites&oq=diseases+spread+through+multicellular+animal+parasites&gs_lcp=ChNtb2JpbGUtZ3dzLXdpei1zZXJwEAM6BAgAEEc6BAgjECc6BQgAEMkDOgIIADoFCCEQoAFQpA5Y9Ddg9zpoAHABeACAAZIHiAGbHZIBCTItMi4yLjYtM5gBAKABAaABAsgBCMABAQ&sclient=mobile-gws-wiz-serp AI: Yes they 
can. You would need a labelled dataset with the name of the virus or bacterium for each image. This can then be fed into a CNN with a softmax output. The model can predict the label of the image, and you could then use a database to look up a short description to accompany the prediction.
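A minimal sketch of such a model in tf.keras (the input size and number of classes are placeholders; a real microorganism classifier would need a labelled image set, augmentation and probably transfer learning):
import tensorflow as tf

num_classes = 7  # e.g. virus, bacteria, archaea, protozoa, algae, fungi, multicellular parasites

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# model.fit(train_images, train_labels, validation_split=0.2, epochs=10)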
H: What is momentum in a neural network? While using the "Two class neural network" in Azure ML, I encountered the "Momentum" property. As per the documentation, which is not clear, it says: "For The momentum, type a value to apply during learning as a weight on nodes from previous iterations." That is not very clear. Can someone please explain? AI: Momentum in neural networks is a variant of stochastic gradient descent. It replaces the gradient with a momentum, which is an aggregate of gradients, as very well explained here. It is also the common name given to the momentum factor, as in your case. Maths: the momentum factor is a coefficient that is applied to an extra term in the weights update, commonly written as $\Delta w_t = -\eta \nabla_w L + \alpha\, \Delta w_{t-1}$, where $\eta$ is the learning rate and $\alpha$ is the momentum factor multiplying the previous weight update (the original answer illustrated this with an image from a Visual Studio Magazine post). Advantages: besides others, momentum is known to speed up learning and to help avoid getting stuck in local minima. Intuition behind it: as it is really nicely explained in this Quora post, momentum comes from physics. Momentum is a physical property that enables a particular object with mass to continue in its trajectory even when an external opposing force is applied; this means overshoot. For example, if one speeds up a car and then suddenly hits the brakes, the car will skid and stop after a short distance, overshooting the mark on the ground. The same concept applies to neural networks: during training, the update direction tends to resist change when momentum is added to the update scheme. When the neural net approaches a shallow local minimum it's like applying brakes, but not enough to instantly affect the update direction and magnitude. Hence neural nets trained this way will overshoot past smaller local minima points and only stop in a deeper global minimum. Thus momentum in neural nets helps them get out of local minima points so that a more important global minimum is found. Too much momentum may create issues as well: systems that are not stable may produce oscillations that grow in magnitude, and in such cases one needs to add decay terms and so on. It's just physics applied to neural net training and numerical optimization. In video: this video shows backpropagation for different momentum values. Other interesting posts: How does the momentum term for the backpropagation algorithm work? Hope it helps.
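A tiny sketch of that update rule in numpy (the toy quadratic loss and starting point are made up):
import numpy as np

def grad(w):  # gradient of the toy loss 0.5 * ||w||^2
    return w

w = np.array([5.0, -3.0])
velocity = np.zeros_like(w)
lr, momentum = 0.1, 0.9

for _ in range(100):
    velocity = momentum * velocity - lr * grad(w)  # aggregate of past gradients
    w = w + velocity                               # momentum update
print(w)  # approaches the minimum at [0, 0]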
H: Correlation among features (e.g. doc length, punctuation, ...) in classifying spam emails I extracted some other features from my dataset regarding punctuation, capital letters and upper-case words. I got these values by looking at the correlation with my target variable (1 = spam, 0 = not spam), using .corr() in python. BT stands for binary text and BS stands for binary summary, where I assign 1 or 0 based on the presence of a capital letter, an upper-case word, etc. in the text/summary. Do you think that features like these can be useful in model building? I cannot see very strong correlations, but I would like to determine whether an email is spam or not based also on features like these (number of characters/text length; presence of "!", upper-case words, ...). I have around 1000 emails, but only 50 are spam (maybe too few to extract useful information). However, I had to extract this information myself, so it is a new dataset built on my own, and I could not get many more spam emails (and I would prefer not to use datasets from Kaggle, for instance). What do you think? AI: First, about the features: I think you could add some more, such as the time the email is received, the number of links in the email, the overall structure (does it follow a typical structure for an email), the number of words that contain digits, the overall mood of the email (sales, threats, info, ... - for this purpose you can use sentiment analysis), the number of attachments, the type of attachments and so on. After that, try feature selection (you can read more about it here). For the imbalanced data you need to resample: I would add copies of the spam emails (oversampling), or try to generate new spam emails (SMOTE). You can read more here. I hope my answer gives you some clarity.
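For the resampling part, a hedged sketch with the imbalanced-learn package, assuming the email features have already been turned into a numeric matrix X with labels y (remember to resample only the training split, not the test set):
from collections import Counter
from imblearn.over_sampling import RandomOverSampler, SMOTE

print(Counter(y))  # e.g. Counter({0: 950, 1: 50})

# Simple duplication of minority examples
X_ros, y_ros = RandomOverSampler(random_state=0).fit_resample(X, y)

# Synthetic minority examples (SMOTE)
X_sm, y_sm = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y_ros), Counter(y_sm))  # classes are now balanced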
H: Merging training and validation sets for better accuracy I am training an image classification model and my train-test split is 80-20. After training, my train-test loss curves (not shown here) indicate that the model converges after around 20-30 epochs and is not overfitting. Can I also try to train the same model by merging my training and testing sets, hoping it would become more accurate with the increased amount of training data? Is it a good idea to merge the train and test sets in this case? What are the disadvantages if I do so? AI: After you have finished with the model building process (in which it is assumed that you have used your test set once and only once for assessing the performance of your final model on unseen data), and before deploying your model, both common sense and standard practice say that you should re-train it on all the available data, including the portion that, until then, had been put aside as test. Leaving out available data is a luxury which normally we cannot afford; and, provided that there are no issues with your model building process, and your test set is qualitatively similar to your training one (an assumption implicitly always present), there is nothing to worry about. Qualitatively speaking, this approach is similar to what we do with cross validation, where afterwards we routinely re-train the model using all the available data. The following Cross Validated threads might be useful; although they address the cross-validation issue, the rationale is similar - at the end, use all the data: Training on the full dataset after cross-validation? How to choose a predictive model after k-fold cross-validation?
H: How to set a class_weight dictionary for Random Forest? I'm dealing with an unbalanced dataset, so I decided to use a weight dictionary for classification. The documentation says that a weight dict must be defined as shown below: https://imbalanced-learn.org/stable/generated/imblearn.ensemble.BalancedRandomForestClassifier.html weight_dict = [{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}] So, since I want to predict 12 classes (located in the last column), I assume that the setting would be like: weight_dict = [{0: 1, 1: 5.77390289e-01}, {0: 1, 1: 6.48317326e-01}, {0: 1, 1: 1.35324885e-01}, {0: 1, 1: 2.92665797e+00}, {0: 1, 1: 5.77858906e+01}, {0: 1, 1: 1.73193507e+00}, {0: 1, 1: 9.27828244e+00}, {0: 1, 1: 1.18766082e+01}, {0: 1, 1: 8.99009985e+01}, {0: 1, 1: 6.39833279e+00}, {0: 1, 1: 2.55347077e+01}, {0: 1, 1: 9.47015372e+02}] Honestly, I don't clearly understand the notation, i.e. the 0: 1 part of {0: 1, 1: 1}, or the value after 1:. Do they represent column position or label order? What is the right way to set it? I'll be grateful for your insights. AI: If you're just doing multiclass classification, you should specify the weights as a single dictionary, e.g. {0: 1.0, 1: 1.5, 2: 3.2} for a three-class problem. (Or use the convenience modes "balanced" or "balanced_subsample".) The list of dictionaries is used for multilabel classification (where each row can have multiple true labels). In that case, each dictionary is for one of the outputs, the keys being the classes for that output and the values the weights.
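A sketch of how to build that single dictionary, assuming y holds your 12 class labels (the 'balanced' heuristic is just one convenient way to compute weights; BalancedRandomForestClassifier is the class from the question's link):
import numpy as np
from sklearn.utils.class_weight import compute_class_weight
from imblearn.ensemble import BalancedRandomForestClassifier

classes = np.unique(y)
weights = compute_class_weight(class_weight='balanced', classes=classes, y=y)
class_weight = dict(zip(classes, weights))  # one weight per class label
print(class_weight)

clf = BalancedRandomForestClassifier(class_weight=class_weight, random_state=0)
# clf.fit(X_train, y_train)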
H: How to get the Date column from a dataframe in pandas without using CSV This is my code: import pandas as pd import pandas_datareader.data as web import datetime as dt start = dt.datetime(2019, 2, 1) end = datetime.today().strftime('%Y-%m-%d') slug = "tsla" df = web.DataReader(slug, 'yahoo', start, end) print(df) This prints this ---> High Low Open Close Volume Adj Close Date 2019-01-31 62.312000 58.799999 60.200001 61.403999 62846000.0 61.403999 2019-02-01 63.220001 60.700001 61.084000 62.442001 36417000.0 62.442001 2019-02-04 63.060001 60.375999 62.596001 62.577999 36760500.0 62.577999 2019-02-05 64.487999 62.450001 62.498001 64.269997 33714000.0 64.269997 2019-02-06 64.848000 63.124001 63.917999 63.444000 25192500.0 63.444000 ... ... ... ... ... ... ... 2020-10-12 448.739990 438.579987 442.000000 442.299988 38791100.0 442.299988 2020-10-13 448.890015 436.600006 443.350006 446.649994 34463700.0 446.649994 2020-10-14 465.899994 447.350006 449.779999 461.299988 48045400.0 461.299988 2020-10-15 456.570007 442.500000 450.309998 448.880005 35672400.0 448.880005 2020-10-16 455.950012 438.850006 454.440002 439.670013 32620000.0 439.670013 [433 rows x 6 columns] I only want to get the Date column from df. If I write df['Date'] it gives me an error, but if I use a CSV file, then everything works fine: df.to_csv('%s.csv' % slug) df = pd.read_csv('%s.csv' % slug) However, that creates a CSV file for every company, so I want to do it without CSV. AI: You need to reset the index using df.reset_index(). This will make Date a regular attribute/column instead of the index, as it is now. import pandas as pd import pandas_datareader.data as web import datetime as dt start = dt.datetime(2019, 2, 1) end = dt.datetime.today().strftime('%Y-%m-%d') slug = "tsla" df = web.DataReader(slug, 'yahoo', start, end) modified = df.reset_index() print(modified['Date'])
H: How does layer normalization work exactly? As far as I understand, layer normalization normalizes across all the features for fully connected layers. Does that mean that for each batch dimension we have to learn the normalization parameters? Also, what do we do at test time if our batch dimension is different? AI: To understand how it works, we can have a look at the paper related to the question: "Layer Normalization", J. Ba, J. Kiros and G. Hinton. Question: layer normalization normalizes across all the features for fully connected layers Not exactly. What layer normalization does is to compute the normalization of the term $a_i^l$ of each neuron $i$ of the layer $l$ within the layer (and not across all the features or activations of the fully connected layers). This term $a_i^l$ is given by the weighted sum of the activations of the previous layer: $$ a_i^l = (w_i^l)^T\, h^l$$ Where $h^l$ is the vector of activations from the previous layer and $w_i^l$ is the vector of weights that connects the activations of the previous layer with the neuron $i$. This normalization is computed within the layer because it is done using the mean ($\mu^l$) and standard deviation ($\sigma^l$) of the values of all the neurons in the same layer $l$, for the current sample: $$ \mu^l = \frac{1}{H}\sum_{i=1}^Ha_i^l\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\sigma^l=\sqrt{\frac{1}{H}\sum_{i=1}^H(a_i^l-\mu^l)^2}$$ Where $H$ is the number of neurons in the layer $l$. Hence, the normalization of the $a^l_i$ terms is given by: $$\bar{a}_i^l = \frac{g_i^l}{\sigma^l}(a_i^l -\mu^l)$$ Where $g_i^l$ is defined by the authors as a scale (or gain) value for each neuron $i$ of the layer $l$ which has to be learnt by the NN. Once we have obtained the normalized term $\bar{a}_i^l$, we compute the activation of each neuron, $h_i^{l+1}$, by: $$ h_i^{l+1} = f(\bar{a}_i^l + b_i^l)$$ Where $b_i^l$ is the bias term of the neuron, and $f(\cdot)$ is the activation function. Question: Does that mean that for each batch dimension we have to learn the normalization parameters? Yes, that's true. For each sample we would have to recompute the terms $\mu^l$ and $\sigma^l$ in order to do the normalization of the $a^l_i$ terms. Question: What do we do at test time if our batch dimension is different? At test time we have to compute only the predictions (forward propagation). We aren't learning the weights, biases and the $g_i^l$ terms, we are testing them. However, because the activations in each layer have to be normalized, we have to recompute the terms $\mu^l$ and $\sigma^l$ for each layer and sample (irrespective of the mini-batch size).
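To make the equations concrete, a minimal numpy sketch of layer normalization for one layer (the gain g and bias b would be learned by the network; here they are just initialized):
import numpy as np

def layer_norm(a, g, b, eps=1e-5):
    # a: pre-activations of one layer, shape (batch, H)
    mu = a.mean(axis=1, keepdims=True)      # per-sample mean over the H neurons
    sigma = a.std(axis=1, keepdims=True)    # per-sample std over the H neurons
    a_bar = g * (a - mu) / (sigma + eps)    # normalized, rescaled pre-activations
    return np.tanh(a_bar + b)               # activation f(.)

H = 4
a = np.random.randn(3, H)          # 3 samples, H neurons
g, b = np.ones(H), np.zeros(H)     # learnable gain and bias
print(layer_norm(a, g, b))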
H: How to print the f1-score with scikit-learn's accuracy_score or the accuracy of a confusion_matrix? I would like to print the f1-score. I got confused about the difference between the f1-score and the accuracy score. What is the difference between these 2 scikit-learn metrics, and how can I print the f1-score from this code? from xgboost import XGBClassifier from sklearn.metrics import accuracy_score, classification_report from sklearn.model_selection import train_test_split import pandas as pd import numpy as np from sklearn.model_selection import train_test_split df = pd.read...path X = df.drop('pricing_class', axis=1) y = df.pricing_class X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42) xgb_classification = XGBClassifier(random_state=42) xgb_classification.fit(X_train,y_train) xgb_results=xgb_classification.predict(X_test) print(classification_report(xgb_results,y_test)) print('accuracy_score',accuracy_score(xgb_results, y_test)) This is a test sample. Question 1: Under the f1-score column there is an accuracy row. Is this the f1-score of the scikit-learn classification report, or is it the accuracy score? I have the feeling that this value is the rounded value of the accuracy_score printed in the line below (when I use the full dataset). Question 2: How can I print the f1-score? AI: The "accuracy" row of classification_report is the overall accuracy; it is simply printed in the f1-score column, so it is not an F1 value. To get the f1-score itself, use f1_score instead of the classification report (note that the convention is to pass the true labels first, and that for more than two classes you need to set the average parameter, e.g. average='macro' or 'weighted'): from sklearn.metrics import f1_score ... print('f1_score', f1_score(y_test, xgb_results))
H: What does n mean in a neural network neuron's output? I've found this equation that explains the output of a neuron in an MLP network: $y(n) = f(\mathbf{w}^T \mathbf{x}(n) + b)$ I can understand the general context, but since I have no background in mathematical notation, I don't understand what the $(n)$ parameter means (e.g. $y(n)$, $x(n)$). Is it a sort of temporal or sample index? I've seen this notation in other machine learning subjects, but didn't find an explanation. AI: n is the dimension of the vector x (and also of y); as you can see, $\mathbf{w}^T$ is the transpose of $\mathbf{w}$, which has dimension (n, n). In the figure the original answer referred to (not shown here), z corresponds to y and a corresponds to x. And don't bother about l: it just indicates the index of the layer.
H: Does the output of the Sequence-to-Sequence encoder model exist in the same semantic space as the inputs (Word2vec)? Does the output generated by the LSTM encoder module exist in the same semantic space as the original word vectors? If so, say for example we have a sentence and we pass it through the encoder to get an encoded output, and we also calculate the average of the word vectors for the same sentence separately: will the two new vectors (encoded and average) be comparable? Will their Euclidean distance be relatively small? AI: No, assuming your input vectors are one-hot encodings. These input one-hot encodings live in an $n$-dimensional Euclidean vector space. The last hidden layer of an LSTM does not, due to the non-linear activation functions applied across the encoder. Therefore, an average of the inputs will not necessarily align well in a vector space with the model output, nor are you guaranteed any similarity in cosine/Euclidean distance.
H: How to choose between Tensorflow and Pytorch? Recently I've been working on a pretty vanilla ANN model in Python with sklearn (and its preprocessing pipeline), mostly in jupyterhub notebooks if that matters. I am considering changing the framework I work with as I need some more tools (dropout at least), but I am not so familiar with Python frameworks. The 2020 State of AI report points out (p. 14) that the two main frameworks are PyTorch and Tensorflow. What are the practical differences between those two frameworks? AI: As of October 2020... In terms of basic neural network functionality, they are pretty equivalent. Some differences: Stability: tensorflow 2.0 underwent a lot of changes from tensorflow 1.x, specifically in the very way it worked (they changed from a computational graph paradigm to an imperative paradigm). This caused a lot of friction and left many underlying problems in tensorflow. API coherence: the tensorflow API has evolved and changed a lot and, through its evolution, many pieces of functionality were duplicated, and later were deprecated and removed. Also, there is a framework called Keras, which at the beginning was a separate piece of code, but is now integrated into tensorflow. The co-existence of tensorflow and keras brought tension (see here). The Pytorch API has been more consistent over time. Documentation: the API of tensorflow has changed so much over time that many tutorials and stackoverflow questions are outdated. Pytorch has changed less and has kept good backward compatibility, so, while there are some tutorials that may include outdated practices, most of them should work. Deployment: tensorflow is known to be better suited for "production scenarios", e.g. it has tensorflow serving for exposing trained models through a service. This, however, is changing, and now there are official alternatives for model deployment for pytorch. Mobile support: tensorflow lite is the official way of having deep learning models use the GPU of Android mobile devices, so it's easier to deploy on Android if you use tensorflow. Note that, while there is a mobile version of Pytorch, it does not support GPU yet. So, how to choose? This is my opinion: use tensorflow if you have a specific reason for using it, e.g. you want to deploy on Android easily, you want to use tensorflow serving, or you want to reuse models that are only available for tensorflow. Use pytorch in any other case.
H: Scaling of variables considering the values of a single column or the whole dataset I have read many times that for machine learning and data mining algorithms the multi-dimensional input data should be scaled (e.g. normalized or standardized). Now my question is whether the average, min or max value should be calculated for each row (variable) or for the dataset as a whole. Let's say we have 3 variables X=[x_1, x_2,.., x_n] Y=[y_1, y_2,...,y_n] Z= [z_1, z_2,...,z_n] I use the following code in python def standardize (data, train_split): data_mean = data[:train_split].mean(axis=0) data_std = data[:train_split].std(axis=0) return (data - data_mean)/ data_std def normalize(data, train_split): data_max= data[:train_split].max(axis=0) data_min = data[:train_split].min(axis=0) return (data - data_min) / (data_max - data_min) If I saw it correctly, the code (that I copied from a tutorial) calculated the min, max, average and standard deviation for the whole dataset, meaning that the maximum value is the maximum of X, Y and Z. Is this correct to do, or should one only normalize and standardize a variable by considering its own values? In that case the min, max etc. would be calculated for every variable. I'd appreciate any comments. AI: Both can make sense depending on your data: Example 1: if you have a feature that varies between -100 and -50 on the whole dataset, you would want to normalize that column based on the values on the whole dataset. Example 2: if each sample is a list of values from different sensors and you only care about their relative values (for anomaly detection for instance), then you might want to normalize each sample. That being said, I think 9 times out of 10 you will want to normalize based on the statistics of the whole dataframe. Note also that normalizing each sample based on the statistics of the whole dataset is an operation that can be reversed using the normalized data and the dataset statistics, and therefore no information is lost: your data is basically the same but rescaled. A neural network could very well learn to reverse this operation. Normalizing each sample based on that sample's statistics can also be useful, but this operation actually transforms the representation of your data. As an example, let's say you normalize each sample to a norm of 1: you can't recover the initial data without knowing the initial norm of that specific sample.
H: Item-to-item recommendation using DNN I am new to DNNs and still learning. I need to build an item-to-item content-based recommendation using a DNN. For example, say I have a column of strings where each row represents a document; I need to compute the cosine similarity over this column and recommend similar documents. id document 1 "hi this document is about science" 2 "hi this document is about wars" 3 "This document is about peace" Now I need to train on the document column and recommend all the documents similar to a given document. I have solved this problem using cosine similarity in classical ML; now I want to achieve the same with a DNN (deep neural network). I have spent a lot of time on the net, but everything talks about movie ratings where userId, rating and title are used. The problem is I only have one column, document: how can I achieve this with a DNN? AI: To be able to compare strings, words or documents, the data needs to be converted to a format the computer understands: vectors. Google has a nice guide on the Universal Sentence Encoder for sentence similarity that you can follow, which explains how to generate a vector from the neural network that they have already trained. What they do, and what you can try, is to use cosine similarity to compare the vectors: compare one sentence vector with all other vectors and find the closest. I am adding Annoy here, because I find it quite intuitive to use and you can get the n closest items.
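A sketch of the sentence-encoder approach, assuming tensorflow_hub is installed and the Universal Sentence Encoder URL below is still current (if not, any sentence-embedding model will do):
import numpy as np
import tensorflow_hub as hub

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

documents = ["hi this document is about science",
             "hi this document is about wars",
             "This document is about peace"]
vectors = embed(documents).numpy()  # one fixed-length vector per document

# Cosine similarity of every document against every other
norm = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
similarity = norm @ norm.T
query_idx = 0
print(np.argsort(-similarity[query_idx]))  # documents ranked by similarity to document 0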
H: I want to run PCA on a data set that will be aggregated by country. Should I aggregate the data before or after I standardize the data, and why? Basically the title asks my question. I have the results of a survey that was filled out by people from different countries. I have been asked to analyze the data using PCA and see what findings I can get for each country from my analysis. Now of course this means I have to aggregate my data, but should I aggregate the data before or after I standardize it? From the results I am getting, it seems that aggregating first is the right way to go about it, but I am failing to understand why that would be the case. AI: Data should be standardized for calculating principal components. If data is standardized before aggregating by country, the resultant data after aggregation will no longer have mean 0 and standard deviation 1; hence, the results of PCA will be erroneous. Therefore, data should be aggregated and then standardized in order to obtain correct principal components of the data.
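A sketch of that order of operations with pandas and sklearn, assuming a DataFrame df with a 'country' column and numeric survey responses:
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# 1) Aggregate to one row per country
country_means = df.groupby('country').mean()

# 2) Standardize the aggregated data, then run PCA
scaled = StandardScaler().fit_transform(country_means)
pca = PCA(n_components=2)
components = pca.fit_transform(scaled)
print(pca.explained_variance_ratio_)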
H: Effect of weights on the Louvain communities detected The Louvain method for community detection aims to optimize modularity and hence detect communities in the given graph. In case of a weighted graph would it be valid to assume that an edge with a higher weight would be prioritized to be part of a community as against an edge with a lower weight while optimizing modularity ? AI: Yes, weights are interpreted as a strength of connection in the context of modularity. This is perhaps easiest to see using the interpretation of unweighted modularity as The fraction of edges within communities minus the expected such fraction in a random graph with the same degree sequence. Switching the adjacency matrix to the weighted adjacency matrix to get the weighted modularity just changes this to The fraction of total weights within communities minus the expected such fraction in a random weighted graph with the same total weights at each vertex. That said, you do have to be careful. There are community detection methods that do treat weights as distances: SO: meaning of weights in community detection algorithms It may be worth running a simple example when first using a specific algorithm implementation; e.g., with $G=C_4$, weight one perfect matching very much higher than the other, the communities should end up being two $K_2$s, and you want to know whether they contain the high- or low- weight edges.
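A sketch of such a check with networkx (louvain_communities requires networkx >= 2.8, and results can vary with the seed): a 4-cycle whose heavy edges form one perfect matching should split into two communities containing the heavy edges.
import networkx as nx

G = nx.Graph()
G.add_edge(0, 1, weight=10.0)  # heavy matching
G.add_edge(2, 3, weight=10.0)
G.add_edge(1, 2, weight=0.1)   # light matching
G.add_edge(3, 0, weight=0.1)

communities = nx.community.louvain_communities(G, weight='weight', seed=0)
print(communities)  # expected: the heavy edges end up inside communities, e.g. [{0, 1}, {2, 3}]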
H: Performing a linear regression with Perceptron I was wondering about the link between linear regression and the perceptron. Perceptrons were used as binary classifiers, i.e. to classify binary labels (0 or 1). My question is: how can you perform a linear regression with a perceptron, and what is the link? I think the point is hidden in the activation function which we apply to the perceptron. I would appreciate it if you could help me from a beginner's perspective on neural networks. AI: If you use a linear activation function on your perceptron, you are essentially creating a linear regression, where the weights connecting to your perceptron are analogous to the coefficients of your linear regression model. The only difference in this case is that you would normally fit your perceptron using backpropagation, whereas for the linear regression model you would likely use OLS. In either case you will end up with a series of coefficients multiplying your input variables in a linear fashion, plus some bias term. This of course is the most basic use of a perceptron. The advantage of the perceptron is that you can use non-linear activation functions to fit non-linearities in your data, and that you can stack perceptrons together to form a larger neural network.
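A sketch of that equivalence with made-up data: a single Dense unit with a linear activation, trained with gradient descent, recovers roughly the same coefficients as ordinary least squares.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.3 + 0.1 * rng.normal(size=500)

ols = LinearRegression().fit(X, y)

perceptron = tf.keras.Sequential([tf.keras.layers.Input(shape=(3,)),
                                  tf.keras.layers.Dense(1, activation='linear')])
perceptron.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.05), loss='mse')
perceptron.fit(X, y, epochs=100, verbose=0)

print(ols.coef_, ols.intercept_)
print(perceptron.layers[0].get_weights())  # weights and bias should be close to the OLS values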
H: Loading collections of datasets - Python code examples Sometimes you might want to check your ideas on multiple datasets. There are several places with dataset collections. Question: please share some Python scripts showing how to download multiple datasets from these (or other) dataset collections. Ideally one should be able to: 1) get a list of datasets, 2) select the desired ones by some conditions, 3) download those selected. But if you have something different please share anyway. For the "openml" database I have a script - see my own answer. But I do not have one for the other collections: Kaggle, UCI, ... Here are some examples of dataset collections: https://www.openml.org/ https://archive.ics.uci.edu/ml/index.php https://ieee-dataport.org/datasets Kaggle contains lots of datasets; there are also specific collections: graph collections (see the list here: https://mathoverflow.net/a/359449/10446), and much biological data is here: https://www.ncbi.nlm.nih.gov/gds AI: How to fetch Kaggle data from python code? (1) Install the kaggle package: C:\Users\TalgatHafiz> pip install kaggle (2) Log in to your Kaggle account, click on the icon in the upper right corner -> My Account, scroll down to the API section and click "Create New API Token"; a "kaggle.json" file is created and saved locally. (3) Create a ".kaggle" dir: C:\Users\TalgatHafiz>mkdir .kaggle and move "kaggle.json" into that directory. (4) See all active competitions by running the following command: C:\Users\TalgatHafiz>kaggle competitions list (5) Select one of the competitions that you signed up for, e.g. https://www.kaggle.com/c/contradictory-my-dear-watson/data# Scroll down: right before the "Data Explorer" section there should be an API line: "kaggle competitions download -c contradictory-my-dear-watson" - copy it. (6) Run these commands from the notebook: import kaggle !kaggle competitions download -c contradictory-my-dear-watson The zipped data file is downloaded into the same directory where your notebook is: C:\Users\TalgatHafiz\conda\contradictory-my-dear-watson.zip so now you can unzip and start using the data. If you still have questions please read https://medium.com/@jeff.daniel77/accessing-the-kaggle-com-api-with-jupyter-notebook-on-windows-d6f330bc6953
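For downloading several datasets programmatically rather than one competition by hand, a small sketch that drives the same kaggle CLI from Python (assuming the kaggle package is installed and kaggle.json is in place; the search term and the owner/dataset-name reference are placeholders):
import subprocess

# 1) List datasets matching a search term
listing = subprocess.run(["kaggle", "datasets", "list", "-s", "heart disease"],
                         capture_output=True, text=True)
print(listing.stdout)

# 2) Download and unzip a chosen dataset by its <owner>/<name> reference
subprocess.run(["kaggle", "datasets", "download", "-d", "owner/dataset-name",
                "-p", "data/", "--unzip"], check=True)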
H: Including column of indices as predictor for model? I have a small dataset consisting of 1000 observations (rows), 11 predictors + 1 response (12 columns). It is a toy dataset used for learning purposes in a machine learning class at university -- binary classification (heart disease vs. no heart disease). I have fitted a logistic regression, SVM, and KNN. However, I am using python and I was deleting some rows corresponding to missing values and outliers I did not want to consider -- doing it with df.drop() and then reset_index(). If you do not set the drop flag of reset_index to true you will get a column in your data frame with the indices you wanted to reset. I accidentally forgot to set the flag to true and, when fitting the 3 mentioned models, included the column with these indices. This gave me an f1-score of 99-100%. When I realized that I had accidentally included this column, I dropped the column, re-fit and achieved ~60-80 percent F1 score with the respective models. Is there a reason why including this column of indices gets you to 100% F1 score or even accuracy? Is this merely a coincidence that apparently works with the dataset? AI: Including column of indices as predictor for model? No, don't include indices. They don't provide any meaningful information about the problem. They are just an enumeration. Take them away. Is there a reason why including this column of indices gets you to 100% F1 score or even accuracy? You probably get that when evaluating on the train set; on the test set you won't get that. If you do, then you have a problem with your data (it might be sorted). What does the underlying algorithm do here? Something like this, for a decision tree: if index == 1: predict = 1; if index == 2: predict = 0; and so on. As you can see, there is no meaningful information about the problem here; the model just memorizes the training rows (so the training error is close to zero) and cannot generalize. Conclusion: don't use indices as a training feature.
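A quick demonstration of the effect with made-up data where the target is pure noise: a tree fed the row index scores perfectly on the rows it memorized and no better than chance on new rows.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
idx = np.arange(1000).reshape(-1, 1)  # the "index" feature
y = rng.integers(0, 2, size=1000)     # target unrelated to anything

X_train, X_test, y_train, y_test = train_test_split(idx, y, random_state=0)
clf = DecisionTreeClassifier().fit(X_train, y_train)
print(clf.score(X_train, y_train))  # ~1.0: pure memorization
print(clf.score(X_test, y_test))    # ~0.5: no generalization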
H: Is there a RandomForest implementation that handles categorical data without encoding in python? I am working on a binary classification project with both continuous and categorical features. I know that the R implementation of RandomForest can handle categorical data passed in as factor type data. Python's scikit-learn implementation, however, requires encoding of categorical data (e.g. one-hot). I'm curious about the difference in the results I receive using the two implementations, and I'm wondering if anyone knows of a python implementation of RandomForest that can handle categorical data without encoding. AI: Because sklearn uses the CART algorithm it cannot accept categorical data as-is (as you have pointed out). There is an existing ticket out to change this. The issue really should be rephrased as supporting categorical splitting on decision trees - not Random Forest, as Random Forest is simply the ensemble method using these decision trees as base learners. I don't know of a python package that supports this functionality, but I do suspect that it would help increase performance because it would avoid a common pitfall of using random forest on one-hot-encoded data; see One-Hot Encoding is making your Tree-Based Ensembles worse, here's why? In the meantime, I use TargetEncoder when I know I need to use Random Forest. It allows the categorical data to be encoded in a "smart" fashion without needing to use one-hot encoding.
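A sketch of the TargetEncoder approach with the category_encoders package (the column names are placeholders):
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline

cat_cols = ['colour', 'city']  # your categorical columns

pipe = Pipeline([
    ('encode', ce.TargetEncoder(cols=cat_cols)),  # replaces categories with smoothed target means
    ('rf', RandomForestClassifier(n_estimators=200, random_state=0)),
])
# pipe.fit(X_train, y_train); pipe.predict(X_test)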
H: Same confusion matrix when changing DecisionTreeClassifier parameters I'm trying to build my first Decision Tree Classifier using the Iris dataset in the sklearn library. This is my first sample code: from sklearn.datasets import load_iris from sklearn.metrics import accuracy_score from sklearn.metrics import f1_score from sklearn.model_selection import cross_val_score from sklearn.metrics import confusion_matrix from sklearn import tree import numpy as np import graphviz iris = load_iris() clf_ex1 = tree.DecisionTreeClassifier(criterion="entropy",random_state=300,min_samples_leaf=5, class_weight={0:1,1:10,2:10}) np.random.seed(0) indices = np.random.permutation(len(iris.data)) indices_training=indices[:-10] indices_test=indices[-10:] iris_X_train = iris.data[indices_training] iris_y_train = iris.target[indices_training] iris_X_test = iris.data[indices_test] iris_y_test = iris.target[indices_test] clf_ex1 = clf_ex1.fit(iris_X_train, iris_y_train) predicted_y_test = clf_ex1.predict(iris_X_test) print(confusion_matrix(iris_y_test, predicted_y_test)) print("Predictions:") print(predicted_y_test) print("True classes:") print(iris_y_test) print("--------") print(iris.target_names) # print some metrics results acc_score = accuracy_score(iris_y_test, predicted_y_test) print("--------") print("Accuracy score: "+ str(acc_score)) print("--------") f1=f1_score(iris_y_test, predicted_y_test, average='macro') print("F1 score: "+str(f1)) print("--------") scores = cross_val_score(clf_ex1, iris.data, iris.target, cv=5) print(scores) dot_data = tree.export_graphviz(clf_ex1, out_file=None, feature_names=iris.feature_names, class_names=iris.target_names, filled=True, rounded=True, special_characters=True) graph = graphviz.Source(dot_data) graph As you can see, in my DecisionTreeClassifier function I set the weight of the classes by giving a major value to the second one and the third one and I've given 300 to the random_state parameter. Then, I made another example by changing these parameters on this way: clf_ex2 = tree.DecisionTreeClassifier(criterion="entropy",random_state=300,min_samples_leaf=5, class_weight={0:1,1:1,2:10}) and on this way: clf_ex3 = tree.DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=2, max_leaf_nodes=None, min_samples_leaf=15, min_samples_split=5, random_state=100, splitter='best') The problem is that all the values that I print (the confusion matrix, the accuracy, the predicted_y_test and the f1 score) do not change between the three codes. The only value that gets affected is the Cross Validation Score. Why? AI: I ran your script and this is what was returned for all 3 confusion matrices: [[4 0 0] [0 3 1] [0 0 2]] This confusion matrix indicates to me that your model is working wonderfully on the testing data. There is a single exception where your model predicts the wrong class (the non-diagonal "1"). Since you are giving the same random seed every time, your training and testing indices are the same for all 3 models you are investigating. Meaning: they are all being trained on the same data and tested on the same data. It is quite possible that, in this test data set, 9 out of 10 samples are very easy to classify, while there is 1 case where it is much more non-trivial. The contrast between trivial and non-trivial might be so stark that adjusting hyperparameters in your DecisionTree isn't yielding any difference in performance. So this could explain why your output is the same for all those fields you mentioned. 
As for the cross_val_score - this method splits your entire dataset into 5 folds and runs 5 successive CV trials, cloning the estimator and re-fitting it on each training fold. Because these folds cover the whole dataset (not just your fixed 10-sample test set), the different hyperparameters of your three models (class weights, max_depth, min_samples_leaf) do make a difference here, which is why the cross-validation scores change while the other metrics don't. As a general note - be careful how you're currently using cross_val_score. In your current implementation you are feeding a pre-trained model to a method that expects an un-trained model, and asking it to cross validate. Internally cross_val_score clones the estimator and re-trains it on each fold, so your earlier fit is simply ignored - which may not be what you intend.
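For reference, a minimal example of the usual way to call cross_val_score, passing a fresh (unfitted) estimator:

from sklearn import tree
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score

iris = load_iris()

# pass an *unfitted* estimator; cross_val_score clones and fits it on each fold
clf = tree.DecisionTreeClassifier(criterion="entropy", random_state=300,
                                  min_samples_leaf=5, class_weight={0: 1, 1: 10, 2: 10})
scores = cross_val_score(clf, iris.data, iris.target, cv=5)
print(scores.mean(), scores.std())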
H: How to determine layers structures appropriate for a specific application? I'm playing around with Keras. I went through their example of MNIST data, cats vs dog etc. The structure of the network they suggest is different in each case. The MNIST stucture: # Build a simple model inputs = keras.Input(shape=(28, 28)) x = layers.experimental.preprocessing.Rescaling(1.0 / 255)(inputs) x = layers.Flatten()(x) x = layers.Dense(128, activation="relu")(x) x = layers.Dense(128, activation="relu")(x) outputs = layers.Dense(10, activation="softmax")(x) model = keras.Model(inputs, outputs) So basically preprocessing flattening the data 2 intermediate layers output layers. The cats vs dogs classifier has some preprocessing as well, but then many more intermediate layers of different types. I've played around with each, adding/remove layers and changing characteristics in each. Seems to influence loss/accuracy, but then many different variations also seems to have pretty similar results. My question is - for some lambda application, what is a good process to determine the basic architecture of the networks (how many layers, what types)? I understand that Keras has a tuner to set the parameters, however my understanding is that this is for the model parameters. I'm talking about one step back from that - how many layers? Which types? Are there good rules of thumbs for that, good resources of "standard" structure as a function of types of inputs? AI: From the structure you have outlined it looks like you're going for a dense fully-connected neural network. A general rule of thumb for practitioners on this topic is that increasing the size of a single hidden layer is more likely to yield a more performant network than adding successive layers. This is obviously not true in all cases - but as you are experimenting with architectures it is more useful to start by increasing the size of your one hidden layer before you jump to two of even three. See this for more details. I would recommend removing one of those hidden layers and increasing the number of nodes on the remaining. Something else to note: if you are looking at image data you might consider using convolutional layers. These are much more appropriate for problems with data that are spatially related (images etc..). If you do decide to go with this approach the previous advice does not apply as it relates only to simple dense multilayer perceptrons. Instead review this.
H: How to deal with NaN values after merging or joining two data frames? After merging two dataframes, I end up with NaNs in the new dataframe, because one csv does not have all the IDs that the other has (two dataframes of different sizes, for example). So some rows have NaN values in some columns. Should I deal with those blocks of missing values by removing them, or by replacing them with the mean or median? AI: You probably should conduct a missing values analysis to see what is the percentage of missing values per column (for example with the missing-values report of the dataprep package). Decide a threshold according to which you may want to completely drop a column or not (depending on how your analysis or model treats NaNs as well). For the columns that are not dropped, you should impute the missing values, experimenting with relevant techniques, e.g. mean, median, most frequent value, etc. (this also depends on the type of the data and feature). https://scikit-learn.org/stable/modules/impute.html
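A minimal sketch of that workflow with pandas and scikit-learn's SimpleImputer (the dataframe names, the key column and the 60% threshold are placeholders):

import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

merged = left_df.merge(right_df, on="ID", how="left")   # left_df / right_df are placeholders

# 1) inspect missingness per column
missing_pct = merged.isna().mean()
print(missing_pct.sort_values(ascending=False))

# 2) drop columns above a chosen threshold (e.g. 60% missing)
merged = merged.loc[:, missing_pct < 0.60]

# 3) impute the remaining numeric columns, e.g. with the median
num_cols = merged.select_dtypes(include=np.number).columns
merged[num_cols] = SimpleImputer(strategy="median").fit_transform(merged[num_cols])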
H: What is the next step after k fold CV? I came across this video lecture https://www.youtube.com/watch?v=wjILv3-UGM8 on k fold cross validation (CV). The algorithm given in the video lecture is presented below: for k = 1:5 train on all except k get model $M_{\tilde{k}}$ calculate accuracy on $k$ as $A_k$ end Calculate final cross validation accuracy: $A = \frac{1}{5}\sum_{k=1}^5 A_k$ This is quite clear to me. Here $M$ is, I guess, just a single type of ML algorithm. However, at time stamp 6:35 the presenter raises the question of what we do with all the 5 different models that were built. According to him, we either combine all the models and make a decision based on that, or take the best model out of the 5. Is this statement true? In many sites, including here (https://stats.stackexchange.com/questions/310953/doubt-about-k-fold-crossvalidation?noredirect=1&lq=1 and https://stats.stackexchange.com/questions/11602/training-on-the-full-dataset-after-cross-validation), and research papers I have understood that: -- for doing model training using k fold CV, we re-train on the entire dataset after the end of the CV loop and that is the final model. -- We do not select any model from inside the CV loop if the idea of doing CV training is to check the accuracy of the ML algorithm on the entire dataset. -- However, if we have multiple ML algorithms say random forest, neural network, SVM inside the CV loop then we select the algorithm with the highest accuracy. -- Another technique, nested cross-validation, is used for hyperparameter tuning. Is my understanding correct? AI: I didn't watch the linked video but based on your explanations: yes, your understanding is correct. A common confusion is to assume that cross-validation is similar to a regular training stage and therefore produces a model. This assumption is wrong: CV includes repeated training/testing for the purpose of evaluating the method/parameters. From this understanding it follows that: for doing model training using k fold CV, we re-train on the entire dataset after the end of the CV loop and that is the final model. Yes, since we want the final model to be as accurate as possible, we should use all the data. In this case the CV has been used to calculate a good estimate of the performance. We do not select any model from inside the CV loop if the idea of doing CV training is to check the accuracy of the ML algorithm on the entire dataset. Correct, otherwise there's no point using CV. However, if we have multiple ML algorithms say random forest, neural network, SVM inside the CV loop then we select the algorithm with the highest accuracy. Any case where multiple methods and/or parameters are being evaluated is a bit more complex than the regular case of a single method: evaluating multiple systems is by itself an additional layer of training, in the sense that we select some parameters (typically the best model) based on the data. This means that the selection itself is based on the whole data used in the CV stage, so the CV performance of the best model is akin to a performance obtained on a training set. This is why one needs another test set (or nested CV) in order to obtain the final performance of the model.
An intuitive way to understand this is to imagine evaluating say millions of models with CV: the only way to know if the best performance is due to chance or not is to evaluate the corresponding model on some fresh test set. Note: the case of combining the outputs of all the models is a different story, since this boils down to a single meta-model.
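A compact sketch of this workflow (X and y stand for your full feature matrix and labels; the SVM is just an example method):

import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def make_model():
    return SVC(kernel="rbf", C=1.0)   # any fixed method/parameters being evaluated

# 1) cross-validation: only used to *estimate* the performance of the method
scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=5).split(X, y):
    m = make_model()
    m.fit(X[train_idx], y[train_idx])
    scores.append(m.score(X[test_idx], y[test_idx]))
print("estimated accuracy:", np.mean(scores))

# 2) final model: re-train the same method on the entire dataset
final_model = make_model()
final_model.fit(X, y)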
H: How much data do you need to build a classifier? I would like to ask you what a good size of dataset would be for building a classifier. I know that there are datasets of 1000 obs and datasets of 1m obs. But I also read papers where classifiers were built on datasets of 300 obs. I think the size may affect the accuracy/precision of a classifier, but I am not sure 100% of that. AI: It's impossible to answer this question in general, because the answer strongly depends on the content of the data. More precisely it depends if the relations between the features and the target class in the training data are sufficient for the parameters of the model to be estimated as accurately as possible. In the most simple cases a few instances might be enough, for example a linear regression model with one single feature would need only two "perfect" instances. Practically the following factors have a strong impact on the number of instances needed: the number of parameters to estimate and the complexity of the model: a more fine-grained model needs more detailed information, i.e. more instances the number of classes, because there are more possible combinations for the model to learn and because it usually implies a higher number of parameters as well (more complex model) the number of features, for the same reason the amount of noise in the data, because finding the general patterns is more difficult if there are lots of inconsistencies/errors in the data, so statistically more instances are needed to distinguish the effect of chance from the real patterns. So the only way to check how much data is needed for a particular task and dataset is to do an ablation study, i.e. a series of experiments in which a model is trained every time with a different number of instances. For example if one has 1000 instances, they could try to train a model with 100, 200, 300,...,1000 instances. Then by plotting the performance of every model one can observe whether the curve becomes stable at some point: if yes, this point shows how many instances are needed. If not (i.e. the performance keeps increasing significantly), then more than 1000 instances are needed. I think the size may affect the accuracy/precision of a classifier, but I am not sure 100% of that. Definitely.
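A sketch of such an ablation study using scikit-learn's learning_curve (the classifier and the scoring metric are just examples; X and y are your data):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 10), cv=5, scoring="f1_macro")

for n, s in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n} instances -> mean CV macro-F1 = {s:.3f}")
# if the curve is still rising at the largest size, more data would probably help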
H: Correctness of a ROC Curve I've built a Decision Tree Classifier to practice with the sklearn library. My first task was to shuffle the iris dataset and split it keeping only the last 10 elements for the test. Then, after the training part I predicted the class of these elements and printed other useful metrics to understand what I'm doing. My doubt comes after building the roc curve, since I find it is really different from the other I'v seen in some example. This is the code: from sklearn.datasets import load_iris from sklearn.metrics import accuracy_score from sklearn.metrics import f1_score from sklearn.metrics import roc_curve, auc from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix from sklearn.model_selection import cross_val_score import matplotlib.pyplot as plt from sklearn import tree import numpy as np import graphviz iris = load_iris() clf_ex1 = tree.DecisionTreeClassifier(criterion="entropy",random_state=300,min_samples_leaf=5, class_weight={0:1,1:10,2:10}) np.random.seed(0) indices = np.random.permutation(len(iris.data)) indices_training=indices[:-10] indices_test=indices[-10:] iris_X_train = iris.data[indices_training] iris_y_train = iris.target[indices_training] iris_X_test = iris.data[indices_test] iris_y_test = iris.target[indices_test] clf_ex1 = clf_ex1.fit(iris_X_train, iris_y_train) predicted_y_test = clf_ex1.predict(iris_X_test) print("Predictions:") print(predicted_y_test) print("True classes:") print(iris_y_test) # print some metrics results acc_score = accuracy_score(iris_y_test, predicted_y_test) print("--------") print("Accuracy score: "+ str(acc_score)) print("--------") f1=f1_score(iris_y_test, predicted_y_test, average='macro') print("F1 score: "+str(f1)) print("--------") scores = cross_val_score(clf_ex1, iris.data, iris.target, cv=5) print("Cross validation scores: "+str(scores)) # Confusion Matrix print("--------") print("Confusion matrix:") print(confusion_matrix(iris_y_test, predicted_y_test)) # Building the ROC Curve y_test_prob = clf_ex1.predict_proba(iris_X_test) # Calculating the roc curve for each class changing the pos_label value fpr_cl0, tpr_cl0, _ = roc_curve(iris_y_test, y_test_prob[:,1], pos_label = 0) roc_auc_cl0 = auc(fpr_cl0, tpr_cl0) fpr_cl1, tpr_cl1, _ = roc_curve(iris_y_test, y_test_prob[:,1], pos_label = 1) roc_auc_cl1 = auc(fpr_cl1, tpr_cl1) fpr_cl2, tpr_cl2, _ = roc_curve(iris_y_test, y_test_prob[:,1], pos_label = 2) roc_auc_cl2 = auc(fpr_cl2, tpr_cl2) # Building the Plot for each class plt.figure() lw = 2 plt.plot(fpr_cl0, tpr_cl0, color='darkorange', lw=lw, label='ROC curve class 0 (area = %0.2f)' % roc_auc_cl0) plt.plot(fpr_cl1, tpr_cl1, color='cornflowerblue', lw=lw, label='ROC curve class 1 (area = %0.2f)' % roc_auc_cl1) plt.plot(fpr_cl2, tpr_cl2, color='aqua', lw=lw, label='ROC curve class 2 (area = %0.2f)' % roc_auc_cl2) plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic example') plt.legend(loc="lower right") plt.show() And these are the results: Are they consistent with the predictions? If I change the weights of the classes in my DecisionTreeClassifier, but I get the same predictions, is it normal that the final plot do not changes? 
AI: The first issue seems to be in the following block of code: # Calculating the roc curve for each class changing the pos_label value fpr_cl0, tpr_cl0, _ = roc_curve(iris_y_test, y_test_prob[:,1], pos_label = 0) roc_auc_cl0 = auc(fpr_cl0, tpr_cl0) fpr_cl1, tpr_cl1, _ = roc_curve(iris_y_test, y_test_prob[:,1], pos_label = 1) roc_auc_cl1 = auc(fpr_cl1, tpr_cl1) fpr_cl2, tpr_cl2, _ = roc_curve(iris_y_test, y_test_prob[:,1], pos_label = 2) roc_auc_cl2 = auc(fpr_cl2, tpr_cl2) You are feeding y_test_prob[:,1] to each roc_auc calculator (when you should be feeding column 0, 1 and 2). In this case the predict_proba method outputs one column per class. Additionally you are feeding iris_y_test into this calculator which is numerically encoded with distinct values 0, 1 and 2. This function wants one column with the probabilities of each class on it's own. See a possible alteration: import pandas as pd # Calculating the roc curve for each class changing the pos_label value fpr_cl0, tpr_cl0, _ = roc_curve(pd.get_dummies(iris_y_test).loc[:, 0], y_test_prob[:, 0]) roc_auc_cl0 = auc(fpr_cl0, tpr_cl0) fpr_cl1, tpr_cl1, _ = roc_curve(pd.get_dummies(iris_y_test).loc[:, 1], y_test_prob[:, 1]) roc_auc_cl1 = auc(fpr_cl1, tpr_cl1) fpr_cl2, tpr_cl2, _ = roc_curve(pd.get_dummies(iris_y_test).loc[:, 2], y_test_prob[:, 2]) roc_auc_cl2 = auc(fpr_cl2, tpr_cl2) Secondly - looking at your output probabilities they are all either distinctly 0 or 1. Your decision tree seems to have "perfectly" classified each example into a class with 100% certainty. The ROC AUC curve will only show interesting results if there are multiple possible thresholds to classify your samples into different classes. In your case - any threshold you pick between 0 and 1 exclusively will yield the exact same False Positive and True Positive rate. The ROC AUC curve helps to determine thresholds for classification - and in your case selecting thresholds is quite trivial.
H: How do I deal with additional input information other than images in a convolutional neural network? I try to convert a game state of a board game into the input for a convolutional neural network. A convolutional neural network is useful because the players have to place items on the board, and the convolutional neural network can take advantage of that spatial structure. Therefore I can describe the board well with a binary feature plane for each player (1 if there is an item on the board and 0 if not). The players can not only place objects but also collect cards. A player can own a maximum of 19 of five different card types. How many cards a player has and what type they are, is important information for the neural network, but I cannot describe this with another feature plane. This has nothing to do with spatial structure. So how do I give the convolutional neural network such additional information, for example that the player has 6 cards of type A, as input? There are also "places" where a player can place his items. These places have a number (so to speak how good the place is) from 2 - 12. I can describe these places again with a feature plane. But I wonder if the network distinguishes well enough between two numbers like 6 and 7? I could imagine that it can distinguish much better between ones and zeros. AI: I think there are three questions here: How to incorporate non-spatial information into the network? When combining different information modalities, a typical approach is to do it at the internal representation level, that is: the point where you lose the spatial information (normally with a flatten operation) after the convolutions. You can have your extra information be processed by an MLP and the result be combined with the representations obtained by the convolutional layers by concatenating both. How to represent the cards as input to the network? In order to represent the card a user has, you can represent them as discrete elements (i.e. tokens), just like text is usually handled in neural networks. This way, you can use an embedding layer, which would receive as input the index of the card. As the user can have any number of cards, you could use an LSTM. In order to represent the "end of the card collection", you can have a special token, and yet another one to represent "padding", which would be useful to create minibatches with different number of cards. How to represent the places? You should decide if these are better represented as discrete or continuous values. Or maybe just try both options and choose the best performing one. For continuous features, you could add, as you suggested, another feature place. For discrete features, you would just have an embedding layer, and then concatenate the output to the other channels. Update: Some clarifications: A "token" is a term used in NLP to refer to a value which is discrete, that is, the number of values it can take is finite, normally small. In your case, the different values that a card token can take is 19. Usually, we refer to tokens by the index they occupy in the list of all possible values. In order to represent discrete values in neural networks, we normally represent each different value as a fixed-size vector. An embedding table is just a table with the fixed-size vectors used to represent your discrete elements. The embedding layer is normally the first in the network architecture. It receives as inputs token indexes and outputs their associated vectors. 
The entries of the embedding table are updated during the backpropagation process. You don't "concatenate an MLP with a convolutional network", you concatenate their outputs. Specifically, once the output of the last convolutional layer is computed, you normally "flatten" it, meaning you remove the spatial information and just place the output tensor elements in a single-dimension vector. That vector is what you concatenate with the output of the MLP, which is also a single-dimension vector (apart from the minibatch dimension).
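To make these points concrete, here is a rough Keras sketch of such an architecture; the board size, the embedding dimensions and the output head are assumptions, not part of the original question:

from tensorflow import keras
from tensorflow.keras import layers

# board: 2 binary feature planes on an assumed 11x11 grid
board_in = keras.Input(shape=(11, 11, 2))
x = layers.Conv2D(32, 3, activation="relu", padding="same")(board_in)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
x = layers.Flatten()(x)

# cards: a variable-length sequence of card-type indices
# (5 card types + special "end" and "padding" tokens -> vocabulary of 7)
cards_in = keras.Input(shape=(None,), dtype="int32")
c = layers.Embedding(input_dim=7, output_dim=8, mask_zero=True)(cards_in)
c = layers.LSTM(16)(c)

# combine the flattened board representation with the card representation
merged = layers.Concatenate()([x, c])
h = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(1, activation="sigmoid")(h)   # e.g. a value head for the position

model = keras.Model([board_in, cards_in], out)
model.summary()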
H: What does this formula in Glorot & Bengio mean? In this paper, on page 5, we find the formula $$Var(z^i)=Var(x)\prod_{i'=0}^{i-1}n_{i'}Var(W^{i'})$$ I am really struggling to understand what is meant by this formula. I think at least some of the following are true: We're dealing with a linear neural network, i.e. no activation functions. $z^i$ is the output of one layer of the network, so for the very first hidden layer (closest to the input) of this network we would have: $$Var(y)=nVar(x)Var(W)$$ where $y$ is the output vector of the first hidden layer, $x$ is the input vector and $W$ is the matrix of weights connecting that first layer to the input. However, it's entirely unclear to me what these variances are. I thought maybe $Var(y)$ just meant the empirical variance of the vector $y$, i.e. the sum of the squares of the differences of the elements of $y$ from the mean of $y$, and likewise for $Var(x)$ and $Var(W)$, where the latter is just the variance of all of the entries of $W$. But under this interpretation the formula turns out to be false numerically, so I'm at a bit of a loss to understand what this equation is supposed to mean. AI: 1. Meaning of the formula $$Var(z^i)=Var(x)\prod_{i'=0}^{i-1}n_{i'}Var(W^{i'})$$ This formula expresses the variance of the values of the activation for each neuron $k$ of the layer $i \rightarrow z^i_k$. This value, under the assumptions that are going to be mentioned along this post, is the same for all the neurons in the layer $i$. This is why the authors express it as $Var(z^i)$ instead of a particular $Var(z^i_k)$. Just to simplify the notation Note that this value is scalar, as well as the other variances involved in the equation: $Var(x)$ and $Var(W^i)$. So, just as a summary, these variances have the next meanings: $Var(W^i)\rightarrow$ The variance of the random initialization of the weights. $Var(x)\rightarrow$ The variance of each feature (which is asummed to be the same for every feature as we are going to see in the next lines). $Var(z^i)\rightarrow$ As said, this is the variance of the activation of each neuron of the layer $i$. So now let's see how to reach this formula. 2. Reaching the formula 2.1 Analysing some parts of the paper Let's analyse some parts of that page of the article that answer the other questions and that are going to be also useful to understand the assumptions that are being made. For a dense artificial neural network using symmetric activation function $f$ with unit derivative at $0$ (i.e. $f'(0) = 1$), if we write $z^i$ for the activation vector of layer $i$, and $s^i$ the argument vector of the activation function at layer $i$, we have: $$ s^i = z^iW^i + b^i\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,z^{i+1}=f(s^i)$$ We can draw some conclusions given that paragraph: The authors are considering an activation function subject to $f'(0) = 1$ in that part of the article. The fact that $f'(0) = 1$ means that locally the function $f(s^i)$ will behave linearly at the neighborhood of $s^i_k=0\rightarrow f'(s^i_k) \approx 1$, where $k$ is the index of the neuron. As you said $\rightarrow z^i$ represents the activation output of one layer The variances will be expressed with respect to the input, outpout and weight initialization randomness This means that the variances that they are going to use are going to be $Var(x)$, $Var(z^i)$ and $Var(W^i)$ respectively, and during the initialization (the weights are only spread randomly at the initialization). 
The variance of the biases is not being considered as they are assumed to be initialized to the same value at the initialization $ \Rightarrow Var(b^i)=0$. Consider the hypothesis that we are in a linear regime at the initialization, that the weights are initialized independently and that the inputs features variances are the same ($=Var(x))$. Given this, we know that: Linear regime $\Rightarrow f'(s^i_k) \approx 1$ as we saw above. Weights are initialized independently $\Rightarrow Var(W^iW^{i+1})=Var(W^i)Var(W^{i+1})$ Inputs features variances are the same $\Rightarrow Var(x_k) = Var(x_{k+1}) \,\,\forall\,\, k$ where $k$ is the index of the input 2.2 Example So now, let's compute for example the Variance of the activations of the first hidden layer ($i=1$) i.e. $Var(z^1_k)$: $$ \begin{align} Var(z^1_k) &= Var(f(s^0_k)) && (1)\\\\ &\approx (f'(\mathbb{E}(s^0_k))^2\,\,Var(s^0_k) && (2)\\\\ &= (f'(0))^2\,\,Var(s^0_k) && (3)\\\\ &= Var(s^0_k) && (4) \\\\ &= Var(W^0_k x + b_k^0) && (5)\\\\ &= Var(W^0_k x) = Var(w^0_{k1} x_1 + w^0_{k2} x_2 +... ) && (6)\\\\ &= Var(w^0_{k1} x_1) + Var( w^0_{k2} x_2) +... && (7)\\\\ &= n_0Var(w^0_{kj})Var(x_j) && (8)\\\\ &= n_0Var(W^0)Var(x) && (9) \end{align}$$ Note that we would end up with the same expression of $Var(z^i_k)$ for every neuron $k$ of the layer $i \rightarrow $ Now we understand why $Var(z^i)$ represents the variance of the activation for each neuron in the layer $i$ i.e. $Var(z^i) = Var(z^i_k)$. 2.3 Justifications The justifications for each step are: $z^{i+1}=f(s^i) \Rightarrow z^{1}=f(s^0)$ The justification to this can be found in the Wikipedia post "Approximating the variance of a function" $\mathbb{E}(s^0)= \mathbb{E}(W^0x) = \mathbb{E}(W^0)\mathbb{E}(x)=0$. This is because $x$ and $W^i$ are independent and $W^i$ is assumed to be randomly initialized with mean $0$. Linear regime $\Rightarrow f'(s^i_k) \approx 1$. $s^i = W^ix + b^i$ Bias values are initialized independently and $Var(b^i)=0$ as we mentioned above. This is true because $Cov(w^0_{k1} x_1, w^0_{k2} x_2)=0$. Something that is proved at the end of the post. (and 9) This is because the weights and inputs are independent, and because of the assumptions made by the authors, which are: $$Var(x_j)=Var(x)\,\,\forall\,k \,\,\,\,\,\,\,\text{ and }\,\,\,\,\,\,\, Var(w_{jk})=Var(W)\,\,\forall\,\,k,j$$ 3. Conclusion Extending this reasoning to all the layers we end up with the equation given by the authors: $$Var(z^i)=Var(x)\prod_{i'=0}^{i-1}n_{i'}Var(W^{i'})$$ Edit regarding the comments The reasoning holds even if the features are not independent. This is because, we can proof that the covariance of the $z^{i+1}_k$ terms in the same layer $i$ is $0$. This also serves as an explantion to the step $(7)$ of the example above. 
To see this let's compute $Cov(w_k^iz^i,w_j^iz^i)$ where $w_k^i$ and $w_j^i$ represent the vector of weights related to the neurons $k$ and $j$ of the layer $i$, with $k\neq j$: $$\begin{align} Cov(w_k^iz^i,\,\,w_j^iz^i) &= \mathbb{E}\left[(w_k^iz^i-\mathbb{E}(w_k^iz^i))(w_j^iz^i-\mathbb{E}(w_j^iz^i))\right]\\ &=\mathbb{E}\left[(w_k^iz^i)(w_j^iz^i)\right]\\ &=\mathbb{E}\left[w_k^iz^i\,w_j^iz^i\right]\\ &=\mathbb{E}\left[w_k^i\right]\mathbb{E}\left[z^i\,w_j^iz^i\right]\\ &=0^T\,\mathbb{E}\left[z^i\,w_j^iz^i\right]=0 \end{align}$$ Note that we can do $\mathbb{E}\left[w_k^iz^i\,w_j^iz^i\right]=\mathbb{E}\left[w_k^i\right]\mathbb{E}\left[z^i\,w_j^iz^i\right]$ because $w_k^i$ is independent of $z^i\,w_j^iz^i$ Then by extending this to the other neurons of the layer, we can confirm that: $Var(W^iz^i) = n_iVar(W^i)Var(z^i)\rightarrow$ We can reach the equation given by the authors.
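As a quick numerical sanity check of the formula for a single linear layer (zero-mean independent weights and inputs, no activation - i.e. the assumptions stated above), one can compare the empirical variance of the pre-activation with $n_0 Var(W^0) Var(x)$:

import numpy as np

rng = np.random.default_rng(0)
n0, n_samples = 500, 20000        # fan-in and number of sampled inputs
var_x, var_w = 2.0, 0.01          # Var(x) and Var(W^0)

x = rng.normal(0.0, np.sqrt(var_x), size=(n_samples, n0))
W = rng.normal(0.0, np.sqrt(var_w), size=(n0,))   # weights of one neuron

s = x @ W                          # pre-activation of that neuron over the samples
print("empirical Var(s):     ", s.var())
print("n0 * Var(W) * Var(x): ", n0 * var_w * var_x)
# the two values should agree closely, matching Var(z^1) = n_0 Var(W^0) Var(x)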
H: How to achieve better accuracy of 90+ on a 3 class highly skewed dataset? I have a 3 class dataset with very high imbalance classes: class 1: 75000 class 2: 27000 class 3: 3000 With simple learning algorithms, accuracy is 84.6% but as expected mostly predicts class 1, few class 2 and no class 3. With techniques, like oversampling, SMOTE, undersampling, XGBoost, Adaboost, showed some increase in F1 score, but the accuracy overall either stays at ~84% or drops. Any promising technique that I can explore for improving accuracy atleast above 90%? I am not bothered about improving the accuracy of only class 3, but the accuracy of the overall classifier. thanks. AI: First be careful, looking only at accuracy in a multiclass problem can be misleading: with almost 75% of the data in the majority class, a dummy model which always predict the majority class achieves almost 75%. Measuring performance with micro or macro F1-score would be more informative. Now about designing your experiments: currently you seem to be trying various methods at random, including sampling techniques and classification algorithms. Why not, but in this way you rely entirely on luck to improve performance. In particular what strikes me is that you don't mention anything about the task or the features (btw it's probably the reasons why some people downvoted the question). The type and nature of the features, their number and their relation to the class can be important to understand why certain methods work and others don't. There might be some feature engineering to do. In particular using feature selection methods sometimes brings great improvement. It would also be useful to get an idea of the performance obtained with simple methods (like decision trees, SVM, logistic regression). Finally you could investigate in more detail which kind of cases which get misclassified and/or study how stable the model is with respect to varying the number of instances or features.
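A small sketch of this kind of evaluation, comparing against a majority-class baseline with macro-F1 (X_train, X_test, y_train, y_test and model are placeholders for your own split and fitted classifier):

from sklearn.dummy import DummyClassifier
from sklearn.metrics import classification_report, f1_score

# baseline: always predict the majority class
dummy = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
print("baseline macro-F1:", f1_score(y_test, dummy.predict(X_test), average="macro"))

# your actual model, evaluated with the same metric
y_pred = model.predict(X_test)
print("model macro-F1:", f1_score(y_test, y_pred, average="macro"))
print(classification_report(y_test, y_pred))   # per-class precision/recall/F1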
H: model.predict() accuracy extremely low on training dataset This question is similar to this. I'm new to ML, and I am trying to classify breast cancer histology images using EfficientNets with Transfer Learning. The dataset is small (400 images in total - there are 4 classes and all classes are equally balanced) and I am using ImageNet weights, and fine-tuning the model by freezing the first two blocks. I've implemented a model with Keras that reaches a training accuracy of ~90% after 30 epochs. When trying to use model.predict on the training dataset (to understand the results of the predict), I expect the results to be good since the prediction is being done on data that the model has already seen but the results I get are extremely low. The prediction accuracy in the report created by sklearn.metrics.classification_report is 27.81%. After trying various methods to change this, I still have not been able to improve or understand the result. The labels I use: {0: 'Benign', 1: 'InSitu', 2: 'Invasive', 3: 'Normal'} After playing around with code, I found that the expected labels and the results were as follows: expected: [0, 3, 1, 0, 0, 3, 2, 0, 2, 2, 1, 0, 3, 1, 2, 3, 2, 1, 3, 2, 3, 3, 1, 0, 3, 1, 3, 2, 2, 1, 1, 3, 0, 1, 3, 0, 1, 2, 0, 3, 2, 3, 2, 3, 1, 0, 1, 0, 1, 1, 2, 1, 2, 2, 2, 1, 1, 0, 3, 3, 0, 3, 1, 1, 3, 1, 1, 0, 2, 1, 0, 0, 1, 1, 1, 2, 2, 1, 3, 1, 1, 3, 3, 3, 0, 1, 1, 0, 2, 3, 0, 3, 1, 2, 2, 3, 0, 3, 3, 0, 0, 0, 2, 0, 1, 0, 2, 1, 3, 1, 0, 2, 2, 0, 1, 3, 1, 1, 3, 2, 3, 0, 2, 1, 2, 3, 3, 0, 0, 1, 3, 0, 0, 3, 2, 3, 1, 2, 0, 0, 0, 3, 3, 3, 0, 0, 0, 0, 3, 0, 0, 3, 2, 1, 3, 2, 1, 1, 2, 2, 3, 3, 1, 2, 1, 0, 3, 3, 0, 1, 2, 2, 1, 3, 1, 3, 0, 1, 3, 2, 3, 3, 0, 2, 1, 3, 2, 0, 2, 0, 3, 3, 2, 3, 0, 2, 2, 0, 3, 2, 2, 1, 1, 3, 2, 3, 0, 2, 0, 3, 1, 0, 1, 2, 0, 2, 0, 0, 2, 2, 2, 1, 1, 0, 3, 0, 1, 1, 2, 2, 0, 0, 3, 0, 1, 0, 1, 1, 1, 3, 2, 2, 3, 2, 1, 2, 0, 1, 2, 1, 1, 1, 3, 1, 2, 3, 1, 2, 0, 2, 3, 1, 2, 1, 2, 1, 2, 2, 2, 0, 3, 3, 1, 3, 3, 3, 0, 1, 1, 2, 2, 3, 3, 2, 0, 0, 2, 1, 0, 1, 3, 0, 2, 0, 0, 3, 2, 1, 2, 2, 1, 0, 0, 0, 2, 3, 3, 0, 2, 3, 1, 0, 0, 0, 0, 3, 0, 3, 2, 2] result: [3 0 3 0 3 3 3 1 0 2 3 3 2 1 3 2 1 1 3 1 2 3 2 1 0 3 3 0 2 1 2 2 2 1 3 3 0 2 0 2 1 1 2 0 0 0 0 0 2 3 0 1 3 0 1 1 2 0 0 2 3 3 0 2 2 1 2 3 2 2 1 2 1 3 3 3 1 0 1 3 0 1 2 3 2 0 1 3 0 3 3 0 0 3 3 3 1 3 0 2 3 3 3 0 3 0 0 1 3 1 1 3 0 0 0 3 1 1 0 1 3 2 0 3 2 3 0 1 0 2 1 1 2 3 2 2 1 2 1 2 1 1 0 1 1 1 3 2 2 1 3 1 1 0 2 1 1 1 0 1 0 0 0 0 2 2 0 0 3 2 3 3 2 3 3 3 2 1 2 3 2 3 2 2 1 1 0 1 0 2 3 2 1 3 0 1 0 1 2 0 3 1 2 0 1 1 2 0 3 1 3 2 1 2 2 1 3 0 1 2 3 3 1 3 2 1 2 3 0 3 0 2 2 3 0 1 3 1 0 2 2 1 0 2 3 1 0 1 3 3 2 2 0 0 2 1 0 0 3 2 2 3 1 0 2 1 0 0 0 2 0 2 1 3 1 0 3 1 1 1 1 1 2 0 3 3 0 3 0 0 2 0 3 1 1 3 1 0 2 2 3 2 2 2 2 2 1 0 3 3 2 0 0 0 1 2 3 2 1 2] The code I use to predict: training_data.reset() predicts = model.predict(training_data, steps=20, verbose=1) predict_class = np.argmax(predicts, axis = 1) # could this be the problem? 
errors = np.where(predict_class != truth)[0] print("No of errors = {}/{}".format(len(errors),training_data.samples)) Where training_data is an image generator defined as follows: training_gen = ImageDataGenerator( rescale=1./255, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.15, horizontal_flip=True, vertical_flip=True, preprocessing_function = preprocess_img ) training_data = training_gen.flow_from_dataframe( dataframe=df_train, directory = train_dir, x_col="ID", y_col="Truth", target_size=(height,width), batch_size=batch_size, # shuffle=True, class_mode="categorical", seed=seed_augment ) Am I misunderstanding when I assume that the predict function will give me good accuracy on training data? Does this imply that my model is actually really bad? Any direction would be appreciated. AI: The problem with the way the prediction is done in the code found in the question is that the generator is shuffling the data (even though the shuffle=True is commented out, True is the default). Additionally, the batch_size in the code is set to 16. Setting batch_size = 1 produced the correct output for me, and the resulting accuracy increased to what I was expecting. Oh, and don't use the same generator to test the predict function! Hope this helps someone! :)
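An alternative to batch_size=1 that is often used is a separate, non-shuffling and non-augmenting generator just for prediction, reusing the names from the question above (df_train, train_dir, preprocess_img, etc.):

eval_gen = ImageDataGenerator(rescale=1./255, preprocessing_function=preprocess_img)

eval_data = eval_gen.flow_from_dataframe(
    dataframe=df_train, directory=train_dir,
    x_col="ID", y_col="Truth",
    target_size=(height, width),
    batch_size=batch_size,
    class_mode="categorical",
    shuffle=False)                        # keep the order aligned with the labels

predicts = model.predict(eval_data, verbose=1)
predict_class = np.argmax(predicts, axis=1)
truth = eval_data.classes                 # labels in the same (unshuffled) order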
H: Where are the 60 million params of AlexNet? On the abstract of the AlexNet paper, they claimed to have 60 million parameters: The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. When I implement the model with Keras, I get ~25 million params. model = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(96, 11, strides=4, activation="relu", input_shape=[227,227,3]), tf.keras.layers.MaxPooling2D(pool_size=(3,3), strides=(2,2)), tf.keras.layers.Conv2D(256, 5, activation="relu", padding="SAME"), tf.keras.layers.MaxPooling2D(pool_size=(3,3), strides=(2,2)), tf.keras.layers.Conv2D(384, 3, activation="relu", padding="SAME"), tf.keras.layers.Conv2D(384, 3, activation="relu", padding="SAME"), tf.keras.layers.Conv2D(256, 3, activation="relu", padding="SAME"), tf.keras.layers.Dense(4096, activation="relu"), tf.keras.layers.Dense(4096, activation="relu"), tf.keras.layers.Dense(1000, activation="softmax"), ]) Note that I removed the normalization and set an input of 227*227 instead of 224*224. See this question for details. Here is the summary from Keras: Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 55, 55, 96) 34944 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 27, 27, 96) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 27, 27, 256) 614656 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 13, 13, 256) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 13, 13, 384) 885120 _________________________________________________________________ conv2d_3 (Conv2D) (None, 13, 13, 384) 1327488 _________________________________________________________________ conv2d_4 (Conv2D) (None, 13, 13, 256) 884992 _________________________________________________________________ dense (Dense) (None, 13, 13, 4096) 1052672 _________________________________________________________________ dense_1 (Dense) (None, 13, 13, 4096) 16781312 _________________________________________________________________ dense_2 (Dense) (None, 13, 13, 1000) 4097000 ================================================================= Total params: 25,678,184 Trainable params: 25,678,184 Non-trainable params: 0 _________________________________________________________________ I'm really far from 60 million. So, how did they sum 60 million params? For reference, here is the architecture of the model as described in Sec. 3.5 of the paper: The first convolutional layer filters the 224x224x3 input image with 96 kernels of size 11x11x3 with a stride of 4 pixels (this is the distance between the receptive field centers of neighboring neurons in a kernel map). The second convolutional layer takes as input the (response-normalized and pooled) output of the first convolutional layer and filters it with 256 kernels of size 5x5x48. The third, fourth, and fifth convolutional layers are connected to one another without any intervening pooling or normalization layers. The third convolutional layer has 384 kernels of size 3x3x256 connected to the (normalized, pooled) outputs of the second convolutional layer. 
The fourth convolutional layer has 384 kernels of size 3x3x192, and the fifth convolutional layer has 256 kernels of size 3x3x192. The fully-connected layers have 4096 neurons each. AI: I forgot to flatten between the last Conv2D layer and the first fully-connected layer. model = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(96, 11, strides=4, activation="relu", input_shape=[227,227,3]), tf.keras.layers.MaxPooling2D(pool_size=(3,3), strides=(2,2)), tf.keras.layers.Conv2D(256, 5, activation="relu", padding="SAME"), tf.keras.layers.MaxPooling2D(pool_size=(3,3), strides=(2,2)), tf.keras.layers.Conv2D(384, 3, activation="relu", padding="SAME"), tf.keras.layers.Conv2D(384, 3, activation="relu", padding="SAME"), tf.keras.layers.Conv2D(256, 3, activation="relu", padding="SAME"), tf.keras.layers.Flatten(), # <-- This layer tf.keras.layers.Dense(4096, activation="relu"), tf.keras.layers.Dense(4096, activation="relu"), tf.keras.layers.Dense(1000, activation="softmax"), ]) Once added, I get the 62 million params: Model: "alex_net" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) multiple 34944 _________________________________________________________________ conv2d_1 (Conv2D) multiple 614656 _________________________________________________________________ conv2d_2 (Conv2D) multiple 885120 _________________________________________________________________ conv2d_3 (Conv2D) multiple 1327488 _________________________________________________________________ conv2d_4 (Conv2D) multiple 884992 _________________________________________________________________ max_pooling2d (MaxPooling2D) multiple 0 _________________________________________________________________ flatten (Flatten) multiple 0 _________________________________________________________________ dense (Dense) multiple 37752832 _________________________________________________________________ dense_1 (Dense) multiple 16781312 _________________________________________________________________ dense_2 (Dense) multiple 4097000 ================================================================= Total params: 62,378,344 Trainable params: 62,378,344 Non-trainable params: 0 _________________________________________________________________ Even if it's a mistake from me, I leave it here for understanding purposes.
H: What's the most effective way to measure the accuracy of my multi-class classification NN? I'm relatively new to data science, and am confused about how to measure the accuracy of a multi-class neural network. The model I'm building is attempting to predict the outcome of a given event with 20 different outcomes - there is some imbalance in the labels, the most frequency label is around 15% of the total, and the least frequent is around 1%. The event I'm attempting to classify is difficult to predict, and I'll be happy with around 20% accuracy over all events - however I'm worried that when I achieve that, then my model will simply be predicting the most frequent class all the time, and just getting lucky on enough of the others to bring overall accuracy up to 20%. I'm using keras' 'categorical cross-entropy' loss function and 'accuracy' metric - are these the most appropriate for the problem? Which other metrics are best suited to a problem like this? Very happy to be referred to literature to help build my understanding of this problem. I understand the confusion matrix for binary classification, however for multi-class classification, however I'm at a loss (no pun intended) as to how to apply these principles to a multi-class problem, whilst ensuring my model isn't just predicting the most popular class and getting lucky on enough others to meet my target. Thanks in advance for any assistance! AI: Keras has some accuracy metrics for multiclass neural networks. CategoricalAccuracy Top K Categorical Accuracy Sparse Top K Categorical Accuracy There are some others if you want to implemented from scratch. Taken from: https://www.sciencedirect.com/science/article/abs/pii/S0306457309000259 These work for most problems rather nicely combined with a good cross validation approach and leak prevention. Having an imbalanced dataset always makes it challenging but picking a metric that doesn't punish results too much like Top K Categorical Accuracy combined with n-fold cross validation like this article suggests, is often enough to have a solid evaluation procedure that you can work on.
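A short sketch of both ideas - a per-class breakdown with scikit-learn and a top-k metric in Keras (y_true, y_prob and model are placeholders for your labels, softmax outputs and network):

import numpy as np
import tensorflow as tf
from sklearn.metrics import classification_report, confusion_matrix

y_pred = np.argmax(y_prob, axis=1)
print(confusion_matrix(y_true, y_pred))        # 20x20 matrix, one row per true class
print(classification_report(y_true, y_pred))   # per-class precision/recall/F1

# in Keras, track top-k accuracy alongside plain accuracy during training
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy", tf.keras.metrics.TopKCategoricalAccuracy(k=3)])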
H: Interpreting Learning Curves I'm curious how to interpret the following learning curves: which of the three can be considered an actually good learning curve? I'm not sure when to conclude there is high variance or high bias, and likewise the presence of underfitting or overfitting, respectively. Thank you in advance. 1. 2. 3. AI: Bias and variance, and their effect on overfitting and underfitting, are summarized in one illustration: high bias shows up as both curves converging to a poor score (underfitting), while high variance shows up as a persistent gap between the training and validation curves (overfitting). Therefore, I think you have a well-fitted model, with reasonable variance and bias.
H: Reuters Dataset Labels description The list of Reuters dataset labels is provided with the dataset and is also available in various online resources. Here is an example. But I couldn't find what each of these labels actually means. I was able to find out that 'acq' is short for 'Acquisitions', but I still don't know these ones: dlr, gnp, bop. Is there a description available of what these labels represent? AI: Apparently the paper introducing the dataset mentions a list of "online appendices" (table 1, p. 363) which seems to contain details about the categories. However I wasn't able to find these appendices in the additional material. [edit] Since it appears that there is no existing source for the original list of topics, I think your best bet is to rely on an acronym dictionary. Based on Acronym Finder, and given that the data is mostly about economic news, I think the most likely meanings for the three topics you mention are: DLR: dollar GNP: Gross National Product BOP: Balance Of Payments You could check whether these are correct by looking at a few documents labelled with these topics in the dataset.
H: Pandas loads a csv file incorrectly, but without throwing an error So I tried loading some data through pandas to practice manipulating it, but I ran into a slight problem. Basically, the pandas load the data improperly. And it does so consistently. Let me show you what I mean. This is a snapshot of the dataset (a csv file) I think you can more or less see how it looks here. However, when I load it in Pandas, I get this: As you can see there's a clear distortion. For one thing, the Date column has been pushed from the first column in the spreadsheet to the second in the CSV. Its values have also been replaced with what looks to me like a copy of the "Travels" column. Moving on, you can also see that the column I'm interested in ("Total Direct Remittances") is totally incorrect. In the spreadsheet, it starts with 342, but when loaded it starts with 23423, and the rest of its values are completely off. Here's the code: import pandas as pd data = pd.read_csv("/home/user/Downloads/IntPayments26102020.csv") You can get a copy of the data here. Any ideas guys? AI: You need to read the file with the index_col=False argument: # wrong way: df = pd.read_csv("IntPayments26102020.csv") print(df.head()) # result : Date Travels ... Debt Services And Payments Total 10/12/2003 234234.00 23424.0 ... 3.574570e+05 NaN 1/12/2005 12.00 2.0 ... 1.160000e+02 NaN 4/28/2006 7119901.48 9950000.0 ... 3.887479e+08 NaN 5/2/2006 346.00 345435.0 ... 4.632060e+05 NaN 5/12/2006 152204.22 11500000.0 ... 4.743224e+08 NaN [5 rows x 11 columns] # correct way: df = pd.read_csv("IntPayments26102020.csv", index_col=False) print(df.head()) # result : Date Travels ... Debt Services And Payments Total 0 10/12/2003 234234.00 ... 23423.00 3.574570e+05 1 1/12/2005 12.00 ... 21.00 1.160000e+02 2 4/28/2006 7119901.48 ... 45602245.83 3.887479e+08 3 5/2/2006 346.00 ... 34534.00 4.632060e+05 4 5/12/2006 152204.22 ... 71560790.20 4.743224e+08 [5 rows x 11 columns] From the docs: index_col : int, str, sequence of int / str, or False, default None Column(s) to use as the row labels of the DataFrame, either given as string name or column index. If a sequence of int / str is given, a MultiIndex is used. Note: index_col=False can be used to force pandas to not use the first column as the index, e.g. when you have a malformed file with delimiters at the end of each line. From where you can arguably guess why the default setting index_col=None will not work with your data - your rows are also separated by comma ,: "Date","Travels","Inward Money Transfers","Cash Sales to BDC And Banks","Letters of Credit","Total Direct Remittances","Remittances to Travelex","Remittances to Amex","WDAS","Debt Services And Payments","Total" "10/12/2003","234234","23424","23423","2342","342","23423","3423","23423","23423","357457", "1/12/2005","12","2","12","12","21","12","12","12","21","116",
H: How to normalize the data correctly in spam dataset I'm working on the spam dataset to classify the inputs into binary classes. My problem is that the observations in the dataset are float small numbers in the first 53 columns, and the 54 is float larger numbers, while the last two columns are integers. My Question: How to Normalize this dataset correctly, so all the observations have the same importance? import pandas as pd spam = pd.read_table("spambase.data",sep=',',header=None) one proposed approach which didn't seem very convenient to me, because it normalizes the whole row input is that: #======================== # Normalization Function #======================== def Normalize(x): ''' ================================== Normalization Function ================================== ----------- Parameters: ----------- @Parameter x: Vector --------- Returns: --------- Normalized Vector. ================================ ''' norm=0.0 for e in x: norm+=e**2 for i in range(len(x)): x[i]/=sqrt(norm) return x AI: Normalizing so that "all the observations have the same importance" is kinda ambiguous and ill-defined. In any case, it would be strongly advised to avoid re-inventing the wheel, and use one of the several scalers available out there (e.g. in the sklearn.preprocessing module). Here is an example using MinMaxScaler, which will re-scale your data in [0, 1] column-wise: import pandas as pd df = pd.read_csv("spambase.data", header=None) print(df.head()) # result: 0 1 2 3 4 5 ... 52 53 54 55 56 57 0 0.00 0.64 0.64 0.0 0.32 0.00 ... 0.000 0.000 3.756 61 278 1 1 0.21 0.28 0.50 0.0 0.14 0.28 ... 0.180 0.048 5.114 101 1028 1 2 0.06 0.00 0.71 0.0 1.23 0.19 ... 0.184 0.010 9.821 485 2259 1 3 0.00 0.00 0.00 0.0 0.63 0.00 ... 0.000 0.000 3.537 40 191 1 4 0.00 0.00 0.00 0.0 0.63 0.00 ... 0.000 0.000 3.537 40 191 1 [5 rows x 58 columns] from sklearn.preprocessing import MinMaxScaler sc = MinMaxScaler() # define the scaler df_scaled = pd.DataFrame(sc.fit_transform(df)) # fit & transform the data print(df_scaled.head()) # result: 0 1 2 3 ... 54 55 56 57 0 0.000000 0.044818 0.125490 0.0 ... 0.002502 0.006007 0.017487 1.0 1 0.046256 0.019608 0.098039 0.0 ... 0.003735 0.010012 0.064836 1.0 2 0.013216 0.000000 0.139216 0.0 ... 0.008008 0.048458 0.142551 1.0 3 0.000000 0.000000 0.000000 0.0 ... 0.002303 0.003905 0.011995 1.0 4 0.000000 0.000000 0.000000 0.0 ... 0.002303 0.003905 0.011995 1.0 [5 rows x 58 columns] Keep in mind that normalization depends also from your choice of a model to use: it is practically necessary for neural networks and k-nn (and for k-means clustering), but is is completely redundant for decision trees and tree-ensemble models (Random Forest, GBM etc).
H: Understanding the generality of the NER problem Named-entity recognition (NER) is a well-known problem in the NLP literature. It typically addresses the problem to locate and classify named entities in text, e.g. Organizations and Products. I'm trying to solve a similar problem but, in my view, a bit more general. Given an input text, I want to be able to comprehensively annotate the whole text; not only specific entities like Actors and Organizations but also higher-level concepts like Conditions of Applicability and Temporal Conditions, e.g.: The added difficulty is that we have nested "entities", e.g. (from above): <denotic> must, <temporal> within the specified period </temporal>, notify ... </deontic> Can this still be formulated as a NER problem? If so, what would be the best type of model to solve this task assuming a dataset of ~ 50 K examples? AI: The problem described is not a more general version of Named Entity Recognition, it is a different problem called parsing. Parsing consists in extracting the syntactic structure of a text, usually in the goal to better capture its semantics. There are various approaches: Shallow parsing only identifies the constituents of the sentences (based on your example this could be sufficient in your case) Statistical parsing and in particular Dependency parsing represent the full structure of the sentence, including the links between its constituents. There are various libraries and datasets for parsing: one of the most famous is probably the Stanford parser, but there are many others often included in NLP toolkits such as OpenNLP. The Universal Dependencies project is a vast multilingual collection of annotated text which can be used to train parsers. Semantic Role Labeling (SRL) is a closely related task which consists in identifying the semantic relations between a predicate (verb) and its related constituents (e.g. subject, object).
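For a quick look at what a parser gives you in practice, here is a small sketch with spaCy (assuming the en_core_web_sm model has been downloaded; the sentence is an invented example in the spirit of the question):

# pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The operator must, within the specified period, notify the authority.")

# dependency parse: each token, its syntactic role and its head
for token in doc:
    print(f"{token.text:<10} {token.dep_:<10} head={token.head.text}")

# shallow constituents (noun chunks) come almost for free as well
print([chunk.text for chunk in doc.noun_chunks])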
H: Why does my model produce too good to be true output? I am trying to run a binary classification problem on people with diabetes and non-diabetes. For labeling my datasets, I followed a simple rule. If a person has T2DM in his medical records, we label him as positive cases (diabetes) and if he doesn't have T2DM, we label him as Non-T2DM. Since there are a lot of data points for each subject, meaning he has a lot of lab measurements, a lot of drugs taken, a lot of diagnoses recorded, etc, I end up with 1370 features for each patient. In my training, I have 2475 patients and in my testing, I have 2475 patients. (I already tried 70:30. Now am trying 50:50 still the same result (as 70:30)) My results are too good to be true as shown below Should I reduce the number of features? Is it overfitting? Should I retain only the top features like top 20 features, top 10 features etc? can help me understand why is this happening? Detailed Update We check for the presence of T2DM by a set of diagnosis codes (like icd9,10 codes for T2DM and its complications etc). For ex: let's say if a patient has an icd9 code of 250 in his diagnosis records, we know that he was diagnosed with Type 2 diabetes. Let's not worry about the accuracy of this labeling approach at this point. Similarly, we label all other patients as T2DM and Non-T2DM. But when we extract features, all his medical records are treated as features. The frequency of the drug/condition/lab tests will be used as a feature value. So, basically, the same diagnosis code (250) will be an input feature as well. Does it mean I should drop the diagnosis codes which were used to label a dataset from being used as features? But those are really very good features that can help me find out whether a patient is diagnosed with T2DM or not (when I apply my model on a totally different dataset). My objective is not to find out whether a patient will develop diabetes in the future or not but my objective is only to find out whether a patient is diabetic or not (from his records). So, I label my dataset with an imperfect heuristic as stated above and build a model. Once I build this model, I would like to validate this model at another site and find out how good is the model built using this imperfect heuristic in identifying whether a patient is diabetic or not. Hope this helps AI: Assuming that these results are obtained on a valid test set with no data leakage, these results don't show overfitting because overfitting would cause great performance on the training set but significantly lower perfomance on the test set. Make sure that your instances between the training and test set are truly distinct: there might be some data leakage, for example if there are duplicate records for the same patient. Another potential flaw is the gold standard labeling: if the patient has this T2DM in their medical record, it means that they are already diagnosed right? And since the features are also based on the medical record, it's likely that this medical record contains direct indications about this diagnosis (for example in the drugs prescribed to the patient). There are two interpretations about this: either the task is purposefully defined by this T2DM label, and in this case you can just enjoy the great performance but it's not technically about detecting diabetes in general. or the goal is to detect patients with diabetes including the ones who are not diagnosed yet, but then it's likely that your gold standard is incorrect for this task. 
[edit following updated question] Your update clarifies which exact task you're targeting, and it corresponds exactly to my first interpretation above: given that your goal is to predict which instances satisfy this T2DM criterion and that the features contain direct indications about it: I think you're right to keep these features, if a very useful information is available it would be absurd not to exploit it (assuming it's also available in the same form in any future dataset you plan to use, of course) The very high performance you obtain makes perfect sense for this task, it's not a bug. It simply happens that the task is easy, so the system is able to predict the label very well. However this also means that you could do even better without any ML: currently the ML method gives you around 99% F-score because it doesn't perfectly represent the criterion used for the gold. But since the gold standard label is based entirely on the features, the most direct way to "predict" the label is to apply the criterion. There's no point using a complex ML method to predict an information that you can obtain from the same data more accurately with a deterministic method.
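To make the deterministic alternative concrete, here is a minimal sketch (the DataFrame layout and the column names patient_id and icd9_code are hypothetical, and the single code 250 stands in for the full T2DM criterion):

import pandas as pd

# Hypothetical diagnosis records: one row per (patient, diagnosis code)
records = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3],
    "icd9_code":  ["250", "401", "272", "250", "530"],
})

# Apply the labeling criterion directly: a patient is labeled T2DM
# if any of their records contains a qualifying code
t2dm_codes = {"250"}  # extend with the complication codes used for labeling
labels = (
    records.assign(is_t2dm=records["icd9_code"].isin(t2dm_codes))
           .groupby("patient_id")["is_t2dm"]
           .any()
)
print(labels)

By construction this "predicts" the label in perfect agreement with the gold standard, which is exactly the point made in the last paragraph above.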
H: Model Architecture Design I have two different model architectures, and I am just curious if one would be better than the other. I have tested both and they vary in performance each time I train them, but the second model generally performs a bit better, although the difference is not major. Question: Are these models going to behave or perform in a similar fashion despite having different architectures? Or is there an advantage of one over the other? Model 1: Model 2: AI: Indeed, you can have two different model architectures that yield similar output, but that does not generalize to all input data. For example, an SVM with a linear kernel and logistic regression can give you similar results. For neural networks, an RCNN and a CNN-LSTM can also have similar results, but it depends entirely on the input. For the two architectures you have provided, you need to code and test them yourself on your own data.
H: How to represent genre or artist name in a neural network I am writing a music recommendation system using machine learning. I'm attempting to make sense of ensemble networks to allow the system to learn from both the content-based features, as well as the global 'meta' features, such as the genre, year, artist etc. However, I do not know how I should represent the genre and artist information. Making them integers would imply some sort of order. If Kanye West is 24, what should 25 be? 26? Or is there something fundamentally wrong with my approach? AI: You can represent this kind of discrete information by means of embeddings. An embedding is simply a table of vectors. It is defined by the number of vectors in the table and the length of each vector (i.e. dimensionality). You need to define a priori all the elements you want to support, e.g. for genres, you should list all the genres you will support, and list them. The index of each item is how you represent it. You can have an embedding for each of the discrete features you need (genre, artist, etc). In Keras, there is a layer for Embeddings.
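A minimal Keras sketch of this idea (the vocabulary sizes, embedding dimensions and the binary target are arbitrary placeholders, not a recommendation):

from tensorflow import keras
from tensorflow.keras import layers

n_genres, n_artists = 50, 10000      # number of distinct genres / artists you support

genre_in = keras.Input(shape=(1,), name="genre_id")
artist_in = keras.Input(shape=(1,), name="artist_id")

# Each discrete feature gets its own embedding table (index -> dense vector)
genre_emb = layers.Flatten()(layers.Embedding(input_dim=n_genres, output_dim=8)(genre_in))
artist_emb = layers.Flatten()(layers.Embedding(input_dim=n_artists, output_dim=32)(artist_in))

x = layers.Concatenate()([genre_emb, artist_emb])
x = layers.Dense(64, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)   # e.g. "will this user like the track"

model = keras.Model([genre_in, artist_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()

The embedding vectors are learned together with the rest of the network, so artists that behave similarly in your data end up with nearby vectors, which avoids the artificial ordering that plain integer encoding would impose.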
H: Bug in sentiment analysis and classification for unlabeled text I'm working on the transcript of Trump and Biden's debate and want to analyze the sentences and classify negative, positive, or neutral comments, but I ran into one problem. I used both TextBlob and the transformers pipeline to analyze the sentiment but unfortunately in both ways, there are some very disastrous flaws! For example,I found that TextBlob recognizes -0.70 polarity in "fewer people are dying every day" (negative comment) or the transformers pipeline recognizes "The audience here in the hall has promised to remain silent." as a negative comment with 0.99 percent certainty! Why do you think it's happening? Is there any way we can prevent this? Is there any way better than this for analyzing the sentiment of unlabeled text? Also, I'm not comfortable with sentences like "Oh, Really?!" being classified as neutral. It's more of a sarcastic or negative comment I think. Here's my colab notebook, I've added one "Problem" markdown where I've observed these examples. AI: First, congratulations for thinking to do a qualitative analysis of the results :) I know it should be obvious, but so many people just assume that the system works and don't bother checking their output. Now, strictly speaking what you're seeing is not a bug. These are errors made by a statistical system. A statistical system is not meant to get everything right, it's only meant to label the input "to the best of its knowledge", and its knowledge is limited primarily by (1) the data it was trained with and (2) the assumptions made in the design of the model itself. I don't know the exact characteristics of the systems that you used, but I can make an educated guess about the errors that you mention: "fewer people are dying every day" is likely to be predicted as negative because it contains the word "dying". Probably there were no (or very few) examples in the training data which contain the word "die" and are labelled positive. As a consequence the system assumes that any sentence containing "die" is likely negative. One may notice that the positive semantics of "fewer people dying" is completely lost on the system, because it focuses on simple clues (individual words), it's not able to parse more complex phrases. "The audience here in the hall has promised to remain silent." would be a similar case: the word "silent" or perhaps the two words "remain silent" likely were found only in negative examples during training, so the system just generalizes wrongly that a sentence containing these words is negative. The sarcastic "Oh really?!" is an even more complex concept for the system to properly identify. The task of sarcasm detection is studied on its own because it's such a difficult task for a machine. I don't follow this field closely so I could be wrong, but I don't think the task has reached any satisfying level of maturity yet, let alone been integrated with standard sentiment analysis systems. Nonetheless these errors don't mean that the results are useless. If you annotate manually a random sample and evaluate the performance of the system on this sample, hopefully you'll see that overall the system performs decently. That's what is expected of a statistical system: it's not reliable on an individual basis, but normally it's doing a good job in average. More generally, all these errors show that the problem of Natural Language Understanding is far from being solved yet... and it might never be. 
The good news is that there's still a lot of interesting problems to solve for NLP scientists ;)
H: Hyperparameter tuning XGBoost I'm trying to tune hyperparameters with Bayesian optimization. It is a regression problem with the objective function: objective = 'reg:squaredlogerror' $\frac{1}{2}[\log(pred+1)-\log(true+1)]^2$ My dataset consists of 20k vectors, each vector has length 12 (twelve features). Every vector has a corresponding Y value. I want to find the set of hyperparameters that minimizes the loss function. This is how it is implemented in code: def evaluate_model(learning_rate, max_depth, nr_estimators, min_child_weight, min_split_loss, reg_lambda): model = get_model(learning_rate, max_depth, nr_estimators, min_child_weight, min_split_loss, reg_lambda) model.fit(X_train, Y_train) pred = model.predict(X_val) error = np.array([]) for i in range(len(pred)): prediction = np.maximum(pred[i],1) error = np.append(error, (1/2)*(np.log(prediction+1)-np.log(Y_val[i]+1))**2) err = np.mean(error) return -err My question is if anyone has any problem with how I've constructed the evaluate_model function. Does this optimize the squared log error when the Bayesian hyperparameter optimization is run? The maximum(pred[i],1) is there in case a negative prediction is produced. Also, I get bad results even after the hyperparameter optimization. These are the hyperparameters I evaluate: pbounds = {'learning_rate': (0,1), 'max_depth': (3,10), 'nr_estimators': (100, 5000), 'min_child_weight': (1,9), 'min_split_loss': (0,10), 'reg_lambda': (1,10)} The optimization is run for 100 iterations and 10 init points. The package I've used for the Bayesian optimization is bayes_opt AI: Another way is to use mean_squared_log_error from sklearn.metrics. First clip the negative values in the predictions to 1, then compute the mean squared log error: from sklearn.metrics import mean_squared_log_error pred = np.clip(pred, 1, None) err = mean_squared_log_error(Y_val, pred)
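For reference, a vectorized version of the evaluate_model function from the question using this metric could look like the sketch below (it assumes get_model, X_train, Y_train, X_val and Y_val are defined exactly as in the question; the constant 1/2 factor of the objective is dropped, which does not change which hyperparameters are optimal):

import numpy as np
from sklearn.metrics import mean_squared_log_error

def evaluate_model(learning_rate, max_depth, nr_estimators,
                   min_child_weight, min_split_loss, reg_lambda):
    model = get_model(learning_rate, max_depth, nr_estimators,
                      min_child_weight, min_split_loss, reg_lambda)
    model.fit(X_train, Y_train)
    pred = model.predict(X_val)
    # clip negative predictions before taking logs
    pred = np.clip(pred, 1, None)
    # mean((log(1 + y_true) - log(1 + y_pred))^2)
    err = mean_squared_log_error(Y_val, pred)
    return -err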
H: Is active learning able to detect challenging cases? Let's say we have a set of data points that need to be labelled for a classification task. In the pool-based active learning, if we go with the uncertainty measure, is the AL approach able to detect challenging cases? By challenging cases I mean samples that receive a high prediction score for $\hat{y}$ (e.g. >90%) but, most probably, $\neg\hat{y}$ is the correct prediction. The rationale behind my question is: does adding more samples to the training set always improve the performance of a classifier? AI: In general it depends on the exact method used to select instances and of course on the data. Assuming that the selection is based solely on the uncertainty measure of a single classifier, then by definition the method will prioritize instances predicted with a probability around 50%, i.e. where the classifier is "unsure". As a consequence an instance predicted with a high probability is unlikely to be selected for annotation. However the iterative training process will make the classifier re-estimate the probability of all the instances, so it's possible that an instance wrongly classified with 90% probability at a particular iteration will later be assigned a lower probability, or even the true class. But overall there's no guarantee: like with any statistical system there can be instances misclassified with a high probability. The rationale behind my question is: does adding more samples to the training set always improve the performance of a classifier? In active learning, the performance depends more on how many instances end up being manually annotated than on the size of the unlabelled sample. But as usual the performance strongly depends on the data itself.
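For concreteness, a minimal sketch of uncertainty sampling for a binary classifier (the classifier, the labelled seed set and the unlabelled pool below are toy placeholders):

import numpy as np
from sklearn.linear_model import LogisticRegression

def select_most_uncertain(clf, X_pool, n_queries=10):
    # Indices of the pool instances whose predicted probability is closest to 0.5
    proba = clf.predict_proba(X_pool)[:, 1]
    uncertainty = 1.0 - 2.0 * np.abs(proba - 0.5)   # 1 at p=0.5, 0 at p=0 or p=1
    return np.argsort(uncertainty)[-n_queries:]

rng = np.random.RandomState(0)
X_seed, y_seed = rng.randn(50, 4), rng.randint(0, 2, 50)   # small labelled seed
X_pool = rng.randn(500, 4)                                 # unlabelled pool

clf = LogisticRegression().fit(X_seed, y_seed)
to_annotate = select_most_uncertain(clf, X_pool)
print(to_annotate)   # these indices would be sent to the annotator next

As described above, an instance predicted with 90% probability gets a low uncertainty score here and therefore would not be selected at this iteration, even if its prediction happens to be wrong.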
H: Handling dimensions for RGB data with Keras CNN I'm trying to make work the code from Keras' documentation getting started. There is something I do not understand about handling RGB data. I made one work with MNIST data (which is greyscaled), but I can't seem to figure out the CIFAR10. HEIGHT = 200 WIDTH = 200 def build_model(): inputs = keras.Input(shape=(HEIGHT, WIDTH, 3)) # making things simple, I don't CenterCrop, just set the input to whatever I'm feeding it # I do rescale to 0-1 values x = Rescaling(scale=1.0 / 255)(inputs) # this basically the doc's architecture x = layers.Conv2D(filters=32, kernel_size=(3, 3), activation="relu", )(x) x = layers.MaxPooling2D(pool_size=(2, 2))(x) x = layers.Conv2D(filters=32, kernel_size=(3, 3), activation="relu",)(x) x = layers.MaxPooling2D(pool_size=(2, 2))(x) x = layers.Conv2D(filters=32, kernel_size=(3, 3), activation="relu",)(x) x = layers.GlobalAveragePooling2D()(x) outputs = layers.Dense(10, activation="softmax")(x) model = keras.Model(inputs=inputs, outputs=outputs) return model # fake data, easier to play with shapes than with the actual CIFAR10 data to debug # shape is (500, 200, 200, 3), cifar10's is (60000, 32, 32, 3) data = np.random.randint(0, 255, size=(500, HEIGHT, WIDTH, 3)).astype("float32") # shape is (500,1) cifar10's is (60000,1). Just 10 categories to match the output layer labels = np.random.randint(0,9, size=(500,1)).astype("int8") print("got fake data... ") model = build_model() print("model built... ") model.summary() model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3), loss=keras.losses.CategoricalCrossentropy()) print("model compile...") model.fit(data, labels) print("done") For some reason, I get: ValueError: Shapes (None, 1) and (None, 10) are incompatible Strangely, if change the label's fake data to size=(500, 10) then it "works". But obviously, that makes no sense, since it would mean I have 10 labels for each sample. The MNIST dataset that works which I refer to can also be found in the linked documentation. The architecture of the network is much simpler (only Dense layers linked together). What am I missing here? Why would it be so different for a colored image than it is for a greyscale? I also tried the work with the data from train, test = cifar10.load_data(), but it yields the same results. AI: Strangely, if change the label's fake data to size=(500, 10) then it "works". But obviously, that makes no sense, since it would mean I have 10 labels for each sample. It would most certainly not mean that; it simply means that, for the type of softmax classification you are attempting here with CategoricalCrossentropy loss, your labels should be one-hot encoded, and not single digits, as here (hence the error). You should Either convert your single-digit labels to one-hot encoded ones using to_categorical (docs) Or change your loss to sparse categorical cross-entropy (docs) Changes in your code for the first approach: labels_ohe = tf.keras.utils.to_categorical(labels, num_classes=10) # [...] model.fit(data, labels_ohe) Changes in your code for the second approach: model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3), loss=keras.losses.SparseCategoricalCrossentropy()) The MNIST dataset that works which I refer to can also be found in the linked documentation. The MNIST labels, like the CIFAR10 ones, are also single-difit integers and not one-hot encoded; your MNIST model from the example you refer to you works because of the loss="sparse_categorical_crossentropy" setting.
H: accuracy at a false positive rate of 1% I need to calculate the accuracy but at a false positive rate of 1%. I am not sure if it is the normal accuracy that we can calculate with sklearn or I need a customized formula? AI: Calculate this by finding the threshold at which the false positive rate is 1%. Your model outputs a probability, not a category. You get a category by seeing on which side of some threshold, probably 0.5, the probability lies. Test some other thresholds. You can use the ROC curve for this. Once you have that threshold, calculate the accuracy. However, be cautious about accuracy (or any threshold-based scoring rule, such as F1 score) as a scoring rule. https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models/312787#312787 https://stats.stackexchange.com/questions/464636/proper-scoring-rule-when-there-is-a-decision-to-make-e-g-spam-vs-ham-email https://stats.stackexchange.com/questions/494023/proper-scoring-rule-in-optical-character-recognition (Frank Harrell’s blog that gets linked is a good read, too.)
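A minimal sketch of the threshold-then-accuracy procedure with scikit-learn (the dataset and model are toy placeholders; with real data you would use your own model's predicted probabilities):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

y_score = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

# roc_curve returns the false positive rate for every candidate threshold
fpr, tpr, thresholds = roc_curve(y_te, y_score)
threshold = thresholds[fpr <= 0.01][-1]   # lowest threshold still keeping FPR <= 1%

y_pred = (y_score >= threshold).astype(int)
print("threshold:", threshold)
print("accuracy at <=1% FPR:", accuracy_score(y_te, y_pred))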
H: what is criterion in flcuster of scipy package? Could some one explain what does criterion of fcluster indicate? I tried to read the documentation but I am unable to understand. What does maxclust criterion indicate? AI: Welcome to the community! You may want to refer to a tutorial on Agglomerative Hierarchical Clustering before reading this answer. My explanation is more practical. Assume the data below: from scipy.cluster.hierarchy import ward, fcluster from scipy.spatial.distance import pdist import numpy as np import matplotlib.pyplot as plt from matplotlib.text import TextPath X = [[0, 0], [0, 1], [1, 0], [0, 4], [0, 3], [1, 4], [4, 0], [3, 0], [4, 1], [4, 4], [3, 4], [4, 3]] X = np.array(X) Z = ward(pdist(X)) plt.plot(X[:,0],X[:,1],'o') We have 12 points in 2 dimensions. This is how they are distributed: Now let's see how maxclust affects the clustering. Please note that I named clusters by numbers and plot the name of cluster each data belongs to, instead of its point. for tt in [1, 3, 5, 9]: plt.figure(figsize=(8,8)) plt.title('t={}'.format(str(tt))) plt.xlim((-1,5)) plt.ylim((-1,5)) memberships = fcluster(Z, t=tt, criterion='maxclust') for ii in range(len(memberships)): path = TextPath((X[ii,0],X[ii,1]), str(memberships[ii])) plt.plot(X[ii,0],X[ii,1],marker=path,c='b',markersize=50) plt.show() Results are shown below. I explain each: t is the parameter which limits number of clusters i.e. you can have maximum t clusters in output. If t=1, any other criteria will be ignored as all data points have to be in the same cluster. So far so good. Let's say t=3. Now the fun starts. maxclust finds an optimal distance between each pair of points which are going to be in the same cluster. If t=3, without that distance, you get unbalanced clusters. In the figure below we have 4 obvious clusters and if I want 3 clusters out, two of them will merge and cause a cluster which is "wrong" (let's say wrong for simplicity). The maxclust distance threshold, prevents the algorithm to do so as it rejects far points being in the same cluster. So you see the data is clustered to 2 clusters instead of 3 which makes perfect sense. Please note that, yes, we have bigger clusters. But they are "right". It is like we zoomed out and see the clusters from a higher perspective. Let's go with t=5. Now the algorithm is allowed to find 4 clusters. Topology tells you that finding "right clusters" i.e. clusters in which each pair of points have a relatively small distance (in compare with distances to other clusters), is possible. This means that maxclust can find an optimal distance threshold and t lets us have 4 clusters. So it works! The last but not least is t=9. No we can have clusters more than the actual number needed (which was 4). Now maxclust tries to find its minimum distance between pairs so that points in same clusters are closer to each other than to other points in other clusters. As you see in each real cluster, The distance between center points to other two is smaller than the distance between those two diagonal points (as this distance is identical, clustering algorithms choose members randomly). So you see that each real cluster is divided to two, due to the effect of combination of maxclust and t Hope it helped. Good Luck!
H: Normal distribution and Random Forest I have a big table in a dataframe (600k rows) which has a y column (the variable I want to predict) and 4 other columns that are the X. I have run an RF regressor and got a score of 0.87 when I run it on the train and test sets. However, when I tried to predict another set of data (which is very similar, with 1M rows) I got a score of 0.65, so I assumed that it is overfitting. When I tried to understand why this happens, I went back to the distribution of the y column, which looks like this: My question is, can it be that my model's performance is bad because my data does not have a normal distribution (or is very skewed)? Do all variables need to have a normal distribution? How is the score of the random forest regression calculated? If the true value is 0.25 and the prediction is 0.26, does it count as a correct prediction? AI: If you use tree-based algorithms like random forests, the data distribution should not be an issue. Linear algorithms are more dependent on the distribution of your variables. To check if you overfit, you can try to predict your training data and compare the result with the test data. The score depends on your evaluation metric. If you use scikit-learn you get R^2 as your metric. The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum().
H: Stemmer or dictionary? I have recently ported a stemmer from Java to Python for a highly inflectional language. The stemmer learns how to change suffixes from the dictionary of words and their inflected forms. It basically builds a stemming table with learned stemming rules. As I was porting the algorithm I decided to train it on a larger dictionary. As a result, the learned stemming table got bigger, and stemming accuracy got higher as well. Then I thought this actually make no sense as the stemming table size gets closer and closer to the size of the dictionary. Why build or train stemming algorithms if you can simply lookup a dictionary? I can understand that in old times storing large files could be a problem, but now? And for some languages there might be no proper dictionary resources. But is there any other reason? AI: There is another reason: words which don't appear in the dictionary. Of course a dictionary approach will correctly stem all the forms which are known in the dictionary, and depending on the language this may indeed lead to better accuracy. However the dictionary approach cannot do anything about unknown words, whereas a generic stemmer can try to apply its generic rules. This can be particularly important with texts which are either very domain-specific (e.g. medicine), which often contain technical words which are not in a general dictionary, or recent user-generated texts such as social media posts where people may use neologisms or words borrowed and sometimes transformed from another language.
H: Machine Learning validation data returns 100% accuracy I'm testing a machine learning model and the validation data returns 100% correct answers. Is it overfitting, or does the model work extremely well? Do I need to continue training on more data? I'm not sure how to interpret the result, any guidance please? AI: I'm assuming this is a classification task as you mentioned accuracy: 100% on the validation set is usually a red flag. Check that there is no data leakage in your code (inspect it line by line). Make sure your validation set has a mixture of both classes (for example: is this a very imbalanced dataset where your validation set contains only the negative class?). There is really no way to tell if or what the problem is without some description of the data or the code - but 100% accuracy is usually a strong indication to dig into the details and find out whether there's a bias you are adding to the system.
H: DecisionTreeRegressor under the hood of GradientBoostingClassifier I'm inspecting the weak estimators of my GradientBoostingClassifier model. This model was fit on a binary class dataset. I noticed that all the weak estimators under this ensemble classifier are decision tree regressor objects. This seems strange to me intuitively. I took the first decision tree in the ensemble and used it to predict independently on my entire dataset. The unique answers from the dataset were the following: array([-2.74, -1.94, -1.69, ...]) My question is: why and how does the gradient boosting classifier turn the weak estimators into regressor tasks (instead of classification tasks) that are not bound by 0 and 1? Ultimately the GradientBoostingClassifier outputs a pseudo-probability between 0 and 1: why aren't the ensemble of weak estimators doing the same? AI: After reading through more of the documentation I found the section that covers the classification case. It can be found here. Additionally this statquest was very useful. 1.11.4.5.2. Classification Gradient boosting for classification is very similar to the regression case. However, the sum of the trees is not homogeneous to a prediction: it cannot be a class, since the trees predict continuous values. The mapping from the value to a class or a probability is loss-dependent. For the deviance (or log-loss), the probability that $x_i$ belongs to the positive class is modeled as $p(y_i = 1 | x_i) = \sigma(F_M(x_i))$, where $\sigma$ is the sigmoid function. For multiclass classification, K trees (for K classes) are built at each of the M iterations. The probability that $x_i$ belongs to class $k$ is modeled as a softmax of the $F_{M,k}(x_i)$ values. Note that even for a classification task, the sub-estimator is still a regressor, not a classifier. This is because the sub-estimators are trained to predict (negative) gradients, which are always continuous quantities.
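A quick way to see this on a toy problem is to check that the weak learners are regression trees and that the sigmoid of the raw ensemble score reproduces predict_proba (binary case, default log-loss):

import numpy as np
from scipy.special import expit                      # the sigmoid function
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, random_state=0)
gbc = GradientBoostingClassifier(random_state=0).fit(X, y)

print(type(gbc.estimators_[0, 0]).__name__)          # DecisionTreeRegressor

raw = gbc.decision_function(X)                       # sum of the trees + initial estimate
proba = gbc.predict_proba(X)[:, 1]
print(np.allclose(expit(raw), proba))                # True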
H: Can I fine-tune the BERT on a dissimilar/unrelated task? In the original BERT paper, section 3 (arXiv:1810.04805) it is mentioned: "During pre-training, the model is trained on unlabeled data over different pre-training tasks." I am not sure if I correctly understood the meaning of the word "different" here. different means a different dataset or a different prediction task? For example if we pre-train the BERT on a "sentence-classification-task" with a big dataset. Then, should I fine-tune it again on the same "sentence-classification-task" task on a smaller and task-specific data-set or I can use the trained model for some other tasks such as "sentence-tagging"? AI: The sentence "During pre-training, the model is trained on unlabeled data over different pre-training tasks." means that BERT was pre-trained on normal textual data on two tasks: masked language model (MLM) and next sentence prediction (NSP). There were no other classification/tagging labels present in the data, as the MLM predicts the text itself and the NSP label is derived from the textual data itself. Both tasks were trained simultaneously from a single textual dataset that was prepared to feed the input text and the expected outputs for both tasks. Therefore "different" here refers to the two pre-training tasks I mentioned: MLM and NSP. When fine-tuning, you do not need to train again on the same sentence classification task, you just simply train it on the task you need. It is perfectly fine to fine-tune BERT on a sentence tagging task on your own dataset.
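For example, with the Hugging Face transformers library you can reuse the same pre-trained weights and attach a token-classification (tagging) head instead of a sentence-classification head; only the small head is initialized from scratch and learned during fine-tuning (a sketch; the checkpoint name and label count are placeholders):

from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=9,          # e.g. the number of tags in your tagging scheme
)

inputs = tokenizer("BERT can be fine-tuned on a tagging task.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)   # (batch, sequence_length, num_labels)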
H: How to train a model on top of a transformer to output a sequence? I am using huggingface to build a model that is capable of identifying mistakes in a given sentence. Say I have a given sentence and a corresponding label as follows -> correct_sentence = "we used to play together." correct_label = [1, 1, 1, 1, 1] changed_sentence = "we use play to together." changed_label = [1, 2, 2, 2, 1] These labels are further padded with 0s to an equal length of 512. The sentences are also tokenized and are padded up(or down) to this length. The model is as follows: class Camembert(torch.nn.Module): """ The definition of the custom model, last 15 layers of Camembert will be retrained and then a fcn to 512 (the size of every label). """ def __init__(self, cam_model): super(Camembert, self).__init__() self.l1 = cam_model total_layers = 199 for i, param in enumerate(cam_model.parameters()): if total_layers - i > hparams["retrain_layers"]: param.requires_grad = False else: pass self.l2 = torch.nn.Dropout(hparams["dropout_rate"]) self.l3 = torch.nn.Linear(768, 512) def forward(self, ids, mask): _, output = self.l1(ids, attention_mask=mask) output = self.l2(output) output = self.l3(output) return output Say, batch_size=2, the output layer will therefore be (2, 512) which is same as the target_label. To the best of my knowledge, this method is like saying there are 512 classes that are to be classified which is not what I want, the problem arises when I try to calculate loss using torch.nn.CrossEntropyLoss() which gives me the following error (truncated): File "D:\Anaconda\lib\site-packages\torch\nn\functional.py", line 1838, in nll_loss ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), igno re_index) RuntimeError: multi-target not supported at C:/w/1/s/tmp_conda_3.7_100118/conda/conda-bld/p ytorch_1579082551706/work/aten/src\THCUNN/generic/ClassNLLCriterion.cu:15 How am I supposed to solve this issue, are there any tutorials for similar kinds of models? AI: I think you should treat this problem as a binary classification problem. For each word in the changed sentence, you will have a binary label: correct or incorrect. I would recommend relabeling so that "correct" words will have a label of 0 and "incorrect" words will have a label of 1. In your example you would have: correct_sentence = "we used to play together" changed_sentence = "we use play to together" labels = [0, 1, 1, 1, 0] And instead of padding with some special value, pad with the "correct" label (which would be 0 if you use my suggestion above). Conventionally, class labels always start at index 0, so this labeling scheme will match what PyTorch expects for binary classification problems. Next, you will need to change the activation function for your final Linear layer. Right now, your model ends with just a Linear layer, meaning the output is unbounded. This doesn't really make sense for classification problems, because you know that the output should always be in the range [0, C-1], where C is the number of classes. Instead, you should apply an activation function to make your outputs behave more like class labels. For a binary classification problem, a good choice for the final activation is torch.nn.Sigmoid. You would modify your model definition like this: class Camembert(torch.nn.Module): """ The definition of the custom model, last 15 layers of Camembert will be retrained and then a fcn to 512 (the size of every label). 
""" def __init__(self, cam_model): super(Camembert, self).__init__() self.l1 = cam_model total_layers = 199 for i, param in enumerate(cam_model.parameters()): if total_layers - i > hparams["retrain_layers"]: param.requires_grad = False else: pass self.l2 = torch.nn.Dropout(hparams["dropout_rate"]) self.l3 = torch.nn.Linear(768, 512) self.activation = torch.nn.Sigmoid() def forward(self, ids, mask): _, output = self.l1(ids, attention_mask=mask) output = self.l2(output) output = self.l3(output) output = self.activation(output) return output Your output will now have dimension (batch_size, 512, 1). Each of the 512 outputs will be a number between 0 and 1. You can treat this as the probability of each particular token being "incorrect". If the output is greater than 0.5, then the label becomes "incorrect". Otherwise, the label is "correct". Finally, since you're treating the problem as a binary classification problem, you will want to use binary cross-entropy loss (torch.nn.BCELoss). Note that you will have to unsqueeze the labels to make their dimension match the dimension of your output. model = Camembert(cam_model) optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) input = <tokenized, padded input sequence> labels = torch.tensor([0, 1, 1, 1, 0, . . . , 0]) output = model(input) loss = criterion(output, labels.unsqueeze(1)) optimizer.zero_grad() loss.backward() optimizer.step()
H: add labels bar chart There is this dataframe: print(df) df.plot.bar(figsize=(10,5),fontsize=14,) plt.title('counts',fontsize=15) plt.show() barh: df.plot(kind='barh') How can I add value labels on this bar and barh chart? AI: Unfortunately, adding text to the plot must be done manually. See the example below. fig, ax = plt.subplots(figsize=(10,5)) df.plot.bar(ax=ax, fontsize=14) for i, v in enumerate(df['counts']): ax.text(i + .25, v + 3, str(v)) Similarly, you can do for the horizontal barplot. fig, ax = plt.subplots(figsize=(10,5)) df.plot.barh(ax=ax, fontsize=14) for i, v in enumerate(df['counts']): ax.text(v + 3, i + .25, str(v))
H: SMOTE train test split with validation data Would like to ask in which way to use SMOTE. My dataset is imbalanced and it is a multiclass problem. As I read in many posts, the SMOTE method should be used only on the training dataset (X_train and y_train), not on the test dataset (X_test and y_test). But I also include validation data. How do you handle SMOTE with validation data? df = pd.read_excel... X=df.drop('column1',axis=1) y=df.column1 #Training part X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=True, random_state=42) smote = SMOTE(random_state=42) X_train_smote, y_train_smote = smote.fit_resample(X_train, y_train) #validation part X_train_smote, X_val, y_train_smote, y_val = train_test_split(X_train_smote, y_train_smote, test_size=0.5, random_state=42) Is this correct? And is it right that the validation datasets (X_val and y_val) then both contain SMOTE samples? Or should I make them from the normal train test split: X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.5, random_state=42)? I'm confused. AI: The problem with applying SMOTE to the data and then doing the split (test or validation does not matter) is that you could suffer from data leakage, meaning that some information from the training data could spill over into the validation/test data and falsely give good predictions. I would advise keeping the SMOTE data generation separate from all 3 datasets: do the splits first, then apply the data generation to the training set only.
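A sketch of that order of operations, with SMOTE applied only to the final training portion (variable names follow the question; it assumes X, y and the imblearn package are available):

from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# 1) carve out the test set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=42, stratify=y)

# 2) carve out the validation set from the remaining data
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.25, random_state=42, stratify=y_train)

# 3) oversample ONLY the training set; X_val/y_val and X_test/y_test stay untouched
X_train_smote, y_train_smote = SMOTE(random_state=42).fit_resample(X_train, y_train)

This way the validation and test sets contain only real samples, so any performance you measure on them is not inflated by synthetic points generated from data they share.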
H: Books about statistical inference I'm currently taking a course "Introduction to Machine Learning" which covers the following topics: linear regression, overfitting, classification problems, parametric & non-parametric models, Bayesian & non Bayesian models, generative classification, neural networks, SVM, boosting & bagging, unsupervised learning. I've asked the course stuff for some reading material about those subjects but I would like to hear some more recommendations about books (or any other material) that give more intuition about the listed topics to start with and also some books that go deeper into the theory of those subjects (affter I'll gain some intuition)? (I guess every subject has many books to cover the theory of course, but any recommendations will be great) Thank you. AI: For theory Tibshirani: The elements of statistical learning https://web.stanford.edu/~hastie/Papers/ESLII.pdf Also Andrew NG and other books from deeplearning.ai: Machine Learning Yearning https://d2wvfoqc9gyqzf.cloudfront.net/content/uploads/2018/09/Ng-MLY01-13.pdf of course the applied machine learning books on computer languages: An introduction into statistical learning in R http://faculty.marshall.usc.edu/gareth-james/ISL/ or deep learning with python: http://faculty.neu.edu.cn/yury/AAI/Textbook/Deep%20Learning%20with%20Python.pdf Of course there are a lot of free public pdf books in internet. However in my opinion for the beginning Tibshirani and Andrew NG are good. (and of course take your own project and ask questions on stackoverflow. That helps me a lot).
H: How to train with cross validation? and which f1 score to choose? I got similar results in 2 models which consists of similar algorithms. Model 1 with cv=10 has a f1'micro' of 0.941. See code below. Model 2 only train test split (no cv) has f1'micro' 0.953. Now here is my understanding problem. Before I did a Grid-Search to find best hyperparameters. Now I would like to do just a cross validation to train the dataset. Like the red marked in the picture. In the code there is still the Grid Search inside. Question 1: Is this code doing that what I want? (is that a cross validation to train the dataset?) Question 2: when I have 2 models like in the picture, model 1 with cross validation (red marked) and model 2 with train validation test data -> what are the reasons to choose model 1 with cross validation and why? X = df.drop('columnA', axis=1) y = df.columnA X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42) X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.20, random_state=42) xgb_params = { 'max_depth': [6], 'learning_rate': [0.3]} xgb_clf = GridSearchCV(XGBClassifier(random_state=42), xgb_params, cv=10, n_jobs=-1, verbose=True) xgb_clf.fit(X_train, y_train) xgb_pred = xgb_clf.predict(X_test) print(accuracy_score(xgb_pred, y_test)) print(f1_score(xgb_pred, y_test, average='micro')) Im sorry if my point of view is strange, but I have a lack of knowledge and Im confused about Cross Validation and Kfold and how to use it. AI: I think you confused some technical names. Cross-Validation is the name of the procedure, and it has some techniques or approaches such as k-fold cross-validation, train test split, etc. All are techniques to measure the performance of a model. In your case, you have the first model that is assessed using 10-fold cross-validation and has an f1-score of 0.941, and the second model is assessed using the train test split approach and has an f1-score of 0.953. In this case, choosing the better model depends on what you want to give the privilege. In other words, whether you focus more on False Predictions or True Predictions, or False Negatives of False Positives. For this purpose, check the confusion matrix below. There are several scenarios which mainly are: Using Recall metric You can use it if False Negatives are more important for you. In the medical analysis, for example, False Negatives are tried to be minimized and scientist measures model's performance essentially by using Recall. Because, for example, it is a more acceptable case to predict a person cancer but actually that person is healthy while compared to the case where a person is labeled as healthy but in actual that person has cancer. Using Precision metric It is preferred when False Positives are more important than False Negatives. Using F1-score It helps to identify the state of incorrectly classified samples. In other words, False Negative and False Positives are attached more importance. Using Accuracy score It is mostly used when True Positive and True Negatives are prioritized. So back to your question, you should not choose the model that best 'fits' (or performs) in a test, but you should choose a model that fits your demands most and the test must be chosen depending on the demands. If you want to have balanced False Negatives and False positives use the f-score to choose a model, which is the model2 in the above-mentioned case. However, if you focus on a different target, make your choice according to a performance metric.
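If it helps, the different metrics discussed above can all be computed from the same predictions, e.g. (continuing from the question's y_test and xgb_pred; for a multiclass problem 'micro' aggregates over all samples while 'macro' averages the per-class scores):

from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

print("accuracy :", accuracy_score(y_test, xgb_pred))
print("precision:", precision_score(y_test, xgb_pred, average='macro'))
print("recall   :", recall_score(y_test, xgb_pred, average='macro'))
print("f1       :", f1_score(y_test, xgb_pred, average='macro'))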
H: Impossible to increase model accuracy I'm building binary classification models on my company's dataset. The problem I'm having is that I haven't been able to increase the accuracy of my models. I have trained, tuned, and cross-validated models such as logistic regression, kNN, and neural networks, and they all have results with the same accuracy. I feel that I have tried everything. The dataset is about 100 data points with 21 features. I'm aware of the curse of dimensionality so I have tried using only subsets of the features, but the accuracy is the same no matter which subsets of features I use. Is it possible that with this dataset, no further improvements can be made? Should I just tell my boss that this is the maximum accuracy possible given the amount of data we have? AI: 100 data points is a very low number of points. I am quite surprised that the algorithm managed to learn something meaningful (unless you have very obvious relationships between the features and the target). I'd suggest you look at what the logistic regression learned exactly (which coefficients are deemed significant). There is an important risk that you are overfitting on your very small dataset, and you can't really catch that, so you are just left with checking whether the rationale of your model makes sense. Yes, you should probably tell your boss that these are not enough data points.
H: Emotion detection on audio I'm trying to apply emotion detection on audio. Basically, given an audio file, it must generate emotion labels for segments of any duration, say every 5s chunk, like happy, sad, angry, etc. Is there a working example with pretrained models available to test? I was looking for a sample on GitHub, but had problems running it. I need a relatively accurate model. Any working sample would be helpful. AI: Sure, here is the GitHub repository and here is the explanation.
H: Dividing data set into (almost) equal batches based on characteristics I have a large dataset containing 6 characteristics (all numerical). I need to split this dataset into multiple batches to be processed in parallel, and ideally, the batches should be as equal in size as possible. The catch is: I can only split based on the 6 characteristics, so I have to specify ranges of values for each batch. Simplified example: [1,1,1,2,2,3,3,3,3,4] will be split in two by specifying numbers 1-2 go to batch 1 and 3-4 go to batch two and I end up with [1,1,1,2,2] and [3,3,3,3,4]. I managed to make a simple division algorithm based on one characteristic, unfortunately some of the characteristics might not be suitable to split on, and I have no way to reliably predict which characteristic will create a good split. I remember from Uni that a classification algorithm might do the trick here, but I am not sure how to implement the requirement for similar size classes. here is the simple code I wrote for one characteristic, it may help to understand the situation: def divide_into_groups(lst: list, num_groups=2): length = len(lst) sorted_list = sorted(lst) optimal_group_size = length // num_groups initial_group_markers = [sorted_list[i * optimal_group_size] for i in range(1, num_groups)] marker_edges = [(sorted_list.index(marker), length - 1 - sorted_list[::-1].index(marker)) for marker in initial_group_markers] final_group_markers = [] for i, marker in enumerate(initial_group_markers): if i - marker_edges[i][0] < marker_edges[i][1] - i: final_group_markers.append(marker - 1) else: final_group_markers.append(marker) distances = [abs(initial_group_markers[i] - final_group_markers[i]) for i in range(len(initial_group_markers))] efficiency = (length - sum(distances)) / length print('distances from optimals: %s' % distances) print('efficiency of division is %s %%' % (efficiency * 100)) return final_group_markers P.S. this is my first question in this forum. If I am missing some information or have made any mistake, please comment and I will fix as soon as possible. AI: If I got this right you want to split the dataset into train and test sets in a way that preserves the same proportions of examples in each class as observed in the original dataset? This is called a stratified train-test split. See the stratify argument here sklearn split
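A minimal sketch of a stratified split with scikit-learn (the dataset here is synthetic; X would be your 6 characteristics and y the class whose proportions you want preserved):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, weights=[0.8, 0.2], random_state=0)

# stratify=y keeps the class proportions (almost) identical in both parts
X_a, X_b, y_a, y_b = train_test_split(X, y, test_size=0.5,
                                      stratify=y, random_state=0)
print(y_a.mean(), y_b.mean())   # roughly equal class proportions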
H: Calculate the number of params of a neural network I'm not clear how the number of params on my Convolutional Network in keras is estimated. Could you help me to understand this? Model: "sequential_7" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_7 (Conv2D) (None, 26, 26, 64) 640 _________________________________________________________________ max_pooling2d_7 (MaxPooling2 (None, 13, 13, 64) 0 _________________________________________________________________ flatten_7 (Flatten) (None, 10816) 0 _________________________________________________________________ dropout_2 (Dropout) (None, 10816) 0 _________________________________________________________________ dense_14 (Dense) (None, 128) 1384576 _________________________________________________________________ dense_15 (Dense) (None, 10) 1290 ================================================================= Total params: 1,386,506 Trainable params: 1,386,506 Non-trainable params: 0 AI: There is not enough information to compute forward the number of parameters of each layer, as we don't know the number of input channels to conv2d_7 or its kernel width. Nevertheless, we can try to guess the missing pieces of information with reasonable assumptions from the number of parameters in each layer, which maybe helps you understand the number of parameter computations: conv2d_7 has an output shape with 64 channels. This means that the filter tensor is $(c \times w \times w + 1) \times 64$ (the "+ 1" is for the bias), where $c$ is the number of input channels and $w$ the kernel size. The number of parameters in this layer is 640, which means that $w \times w \times c + 1= 10$. I would guess that $c=1$ and $w = 3$. max_pooling2d_7, flatten_7 and dropout_2 don't have trainable parameters (= 0) dense_14 has 1384576 parameters and, from the output shape, we know that the projection matrix is $? \times 128$. The previous layer has 10816 as output dimensionality. From this, we know that dense_14 has a projection matrix of dimensionality $10816 \times 128$ and a bias vector of 128 dimensions (totaling 10816 * 128 + 128 = 1384576) dense_15 has 1290 parameters and, from the output shape, we know that the projection matrix is $? \times 10$. The previous layer has 128 as output dimensionality. From this, we know that dense_15 has a projection matrix of dimensionality $128 \times 10$ and a bias vector of 10 dimensions (totaling 128 * 10 + 10 = 1290). Adding up the three computed number of parameters gives the total: $1384576 + 640 + 1290=1386506$.
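As a sanity check, the counts can be reproduced with a few lines of arithmetic (the 3x3 kernel and the single input channel are inferred from the 640 figure; they are consistent with e.g. a 28x28x1 grayscale input):

# conv2d_7: (kernel_h * kernel_w * in_channels + 1 bias) * out_channels
conv = (3 * 3 * 1 + 1) * 64          # 640
# dense_14: flattened input (13 * 13 * 64 = 10816) * units + bias
dense_14 = 10816 * 128 + 128         # 1384576
# dense_15: 128 inputs * 10 units + bias
dense_15 = 128 * 10 + 10             # 1290

print(conv, dense_14, dense_15, conv + dense_14 + dense_15)   # total 1386506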
H: Confusion matrix in sklearn If you look at this: >>> y_true = ["cat", "ant", "cat", "cat", "ant", "bird"] >>> y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"] >>> confusion_matrix(y_true, y_pred, labels=["ant", "bird", "cat"]) array([[2, 0, 0], [0, 0, 1], [1, 0, 2]]) I suppose fist row of array means "predicted ant" and first column is "actually is ant" second column is "actually is bird" etc. So first row first col 2 i read like "predicted ant, is ant", first row second col 0 i read as "precited ant is bird" is 0 which fits, and third column is "predicted ant is cat" is 0 but should be 1. What i am doing wrong while understanding the confusion matrix. Another example is this >>> from sklearn.metrics import confusion_matrix >>> y_true = [2, 0, 2, 2, 0, 1] >>> y_pred = [0, 0, 2, 2, 0, 2] >>> confusion_matrix(y_true, y_pred) array([[2, 0, 0], [0, 0, 1], [1, 0, 2]]) Where is not even clear, what is the order of classes. Source: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html edit: Unless it is swapped. First row is "is ant" not "predicted ant". Only that on wikipedia the system is that row is the prediction. AI: You just confused the actual and predicted. Every row represents actual values of distinct elements in your array and columns represent predicted values of them. That is, First row: There are 2 ants, and 2 samples are predicted as ant. Seconds row: There are 1 bird and 1 sample is predicted as cat. Third row: There are 3 cats, 1 sample is predicted as ant, 2 samples are predicted as cat.
H: SelectFromModel vs RFE - huge difference in model performance Note: I have already looked at Difference between RFE and SelectFromModel in Scikit-Learn post and my query is differnt from that post Expectation: SelectFromModel and RFE have similar/comparable performance in the model built using their recommendations. Doubt: Is there any known use-case where RFE will fare better? As a student of data science (just starting learning) its a weird observation for me Code: # RecursiveFeatureElimination_ExtraTreesClassifier from sklearn.feature_selection import RFE from sklearn.ensemble import ExtraTreesClassifier rfe_selector = RFE(estimator=ExtraTreesClassifier(), n_features_to_select=20, step=10) rfe_selector.fit(x_raw, y_raw) [x[0] for x in pandas.Series(rfe_selector.support_, index=x_raw.columns.values).items() if x[1]] # returns ['loan_amnt','funded_amnt','funded_amnt_inv','term','int_rate','installment','grade','sub_grade','dti','initial_list_status','out_prncp','out_prncp_inv','total_pymnt','total_pymnt_inv','total_rec_prncp','total_rec_int','recoveries','collection_recovery_fee','last_pymnt_amnt','next_pymnt_d'] # SelectFromModel_ExtraTreesClassifier from sklearn.ensemble import ExtraTreesClassifier from sklearn.feature_selection import SelectFromModel selector = SelectFromModel(ExtraTreesClassifier(n_estimators=100), max_features=20) selector.fit(x_raw, y_raw) [x[0] for x in pandas.Series(selector.get_support(), index=x_raw.columns.values).items() if x[1]] # prints ['loan_amnt','funded_amnt','funded_amnt_inv','term','installment','out_prncp','out_prncp_inv','total_pymnt','total_pymnt_inv','total_rec_prncp','total_rec_int','recoveries','collection_recovery_fee','last_pymnt_d','last_pymnt_amnt','next_pymnt_d'] Code for Model train and test # internal code to select what variables I want x_train, y_train, x_test, y_test = get_train_test(var_set_type=4) model = ExtraTreesClassifier() model.fit(x_train, y_train) # then just print the confusion matrix ExtraTreesClassifier Model from SelectFromModel variables ExtraTreesClassifier Model from RFE variables My confusion matrix is powered by this Open Source project: DTrimarchi10 / confusion_matrix AI: To start with, let me repeat here what I have already answered to another OP wondering if Recursive feature selection may not yield higher performance?: There is simply no guarantee that any kind of feature selection (backward, forward, recursive - you name it) will actually lead to better performance in general. None at all. Such tools are there for convenience only - they may work, or they may not. Best guide and ultimate judge is always the experiment. Apart from some very specific cases in linear or logistic regression, most notably the Lasso (which, no coincidence, actually comes from statistics), or somewhat extreme cases with too many features (aka The curse of dimensionality), even when it works (or doesn't), there is not necessarily much to explain as to why (or why not). Having clarified that, let's see your case in more detail. None of the feature selection procedures here takes into account the model performance; in classification settings, the sole criterion by which features are deemed as "important" or not is the mean decrease in the Gini impurity achieved by splitting in the respective feature; for some background, see the following threads (although they are about Random Forests, the rationale is identical): How is the 'feature_importance_' value calculated in sklearn random forest regressor? 
Relative importance of a set of predictors in a random forests classification in R How are feature_importances in RandomForestClassifier determined? Although it is often implicitly assumed that a reduction of the features using this importance as a criterion may lead to gains in the performance metric, this is by no means certain and far from straightforward (I am actually repeating my intro here). Given that, it would seem that the actual question here should be why the two methods end up selecting different features, for which the thread you have linked yourself, Difference between RFE and SelectFromModel in Scikit-Learn, is arguably relevant. In any case, the expectation that they should offer similar results is arguably not well-founded; the relative feature importance changes when features are removed from the model (RFE case), hence it cannot be directly compared with the SelectFromModel approach (use all features, remove those with importance below a threshold). Similarly, the question "Is there any known use case where RFE will fare better?" is ill-posed - you have not shown that RFE is consistently inferior, and the results of a single experiment with a single dataset and a single parameter setting (such as the no. of required features and the threshold involved in SelectFromModel) should not be generalized light-heartedly. In any case, selecting features based on their Gini importance (Mean Decrease in Impurity - MDI) has started falling out of fashion, mainly because the calculated importance is spurious in cases of categorical features with high cardinality (see the academic paper The revival of the Gini importance?); in fact, there is already a relevant warning in the scikit-learn classifiers incorporating this "classical" feature_importances attribute: Warning: impurity-based feature importances can be misleading for high cardinality features (many unique values). See sklearn.inspection.permutation_importance as an alternative. See the scikit-learn vignette Permutation Importance vs Random Forest Feature Importance (MDI) for a concrete example. Irrelevant to the exact question, and if the classes in your dataset are imbalanced (as they seem to be), you could (and should) inform your models about this, using the class_weight argument of ExtraTreesClassifier (docs), i.e. change the model in both cases to ExtraTreesClassifier(class_weight='balanced') (h/t to Ben Reiniger for a constructive comment that helped to improve the answer)
H: How to maximize recall score for specific label in multiclass classification? Thought I had solved the problem but I'm having inconsistent issues with it so reaching out here. I have a multilabel classification problem with four labels ['--','-','+','++'] and with a basic random forest model, I have significant performance issues with one label '-', while the other three labels are performing pretty decently. model = RandomForestClassifier(random_state=42) model_name = 'RFC_base' grid_params={} pipe = GridSearchCV(make_pipeline(model),scoring='recall_weighted',cv=5,param_grid=grid_params,n_jobs=-1) recall avg recall (++) recall (+) recall (-) recall (--) RFC_base 0.848485 0.840909 0.871795 0.771429 0.893617 I'm well aware that GridSearchCV is over engineering, but I like to keep code consistent between tinkers while optimising. I wanted to improve the recall score for '-', so created a custom scoring function that I thought would be maximising the recall score for '-'. recall_neg_scorer = make_scorer(recall_score,average=None,labels=['-'],greater_is_better=True) I've been tinkering with this to optimise it, hence trying to define the 'greater_is_better' param. So I do an actual GridSearchCV now changing some of the params. model = RandomForestClassifier(random_state=42) scaler = StandardScaler() grid_params = { 'randomforestclassifier__n_estimators': [81], #np.arange(start=60,stop=90,step=1), 'randomforestclassifier__max_depth': np.arange(1,10,step=1), 'randomforestclassifier__criterion': ['gini'], #,'entropy'], 'randomforestclassifier__max_features': [7], #np.arange(1,X.shape[1]+1,step=1), 'standardscaler': [StandardScaler(),MinMaxScaler(),None] } model_name = 'RFC_gscv_recall_neg' pipe = GridSearchCV(make_pipeline(scaler,model),scoring=recall_neg_scorer,param_grid=grid_params,cv=5,n_jobs=-1) What I'm finding is my scoring function doesn't actually optimise '-' when 'greater_is_better = True' For instance, when GSCV runs on 'max_depth' it chooses 9 which gives : recall avg recall (++) recall (+) recall (-) recall (--) RFC_gscv_9 0.836364 0.818182 0.871795 0.742857 0.893617 Which is performing worse on '-' than even the base case. When I force max_depth = 8 it gives : recall avg recall (++) recall (+) recall (-) recall (--) RFC_force_8 0.842424 0.818182 0.871795 0.8 0.87234 When I choose 'greater_is_better = False' it actively tries to minimise the score. Is there something I'm screwing up here or is there a known issue that I'm missing out? Also I'm a bit new to stack so let me know if there's something I'm missing. AI: So I've just figured this out and I feel like a bit of an idiot. I didn't post this in the question - which is part of why I need to be better at stack. So all of the scoring data was based on "test" scores. The optimised scoring data for the GSCV is obviously based on just the TRAINING data. So I guess I was making some data leakage for my model when optimising based on test score data, and not the train score data.
H: How to study the effect of eps in sklearn.cluster.DBSCAN? I posted this question on stackoverflow.com and have not received any answer. In case I get an answer on one of them, I will update the other. I have a dataset and have been asked by my professor to study the effect of eps in sklearn.cluster.DBSCAN. Could you please suggest some ways to measure its effect? I generated eps = np.arange(start = .1, stop = 1, step = .1), but I don't know what I should compare. import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.cluster import DBSCAN %config InlineBackend.figure_format = 'svg' # Change the image format to svg for better quality don = pd.read_csv('https://raw.githubusercontent.com/leanhdung1994/Deep-Learning/main/donclassif.txt.gz', sep=';') kmeans = DBSCAN(eps = .2) y_kmeans = kmeans.fit_predict(don) plt.figure(figsize = 1.5 * np.array(plt.rcParams['figure.figsize'])) # This is to have bigger plot plt.scatter(don['V1'], don['V2'], c = y_kmeans, cmap = 'viridis') eps = np.arange(start = .1, stop = 1, step = .1) AI: The meaning of $\epsilon$ is that of the neighbourhood size. The neighbourhood of a point $p$, denoted by $N_{\epsilon}(p)$, is defined as $N_{\epsilon}(p) = \{q \in D \mid dist(p,q) \leq \epsilon \}$. Here $D$ is a database of $n$ objects (points) and $q$ a query point. So what your professor probably wants you to do is to evaluate the goodness of the clustering algorithm (DBSCAN) as a function of its input parameter epsilon. This goodness of fit can be measured, for example, with the silhouette score.
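One way to make the comparison concrete is to loop over the eps grid and record, for each value, the number of clusters, the number of noise points and the silhouette score (a sketch that reuses the don DataFrame loaded in the question):

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

for eps in np.arange(start=.1, stop=1, step=.1):
    labels = DBSCAN(eps=eps).fit_predict(don)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    n_noise = int(np.sum(labels == -1))
    # the silhouette score needs at least 2 clusters; noise points are excluded here
    mask = labels != -1
    sil = silhouette_score(don[mask], labels[mask]) if n_clusters > 1 else float("nan")
    print(f"eps={eps:.1f}  clusters={n_clusters}  noise={n_noise}  silhouette={sil:.3f}")

Plotting these quantities against eps usually shows very clearly how a too-small eps fragments the data into noise while a too-large eps merges everything into one cluster.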
H: Unable to make CNN model with same output and input dimensions I'm trying to develop a simple CNN model that takes in a RGB images and returns the same 3 channel as output. Eg: 3 x 128 x 128 is what I'm giving as input K.set_image_data_format('channels_first') def resBlock(x, channels, kernel_size=[3, 3], scale=0.1): tmp = Conv2D(channels, kernel_size, kernel_initializer='he_uniform', padding='same')(x) tmp = Activation('relu')(tmp) tmp = Conv2D(channels, kernel_size, kernel_initializer='he_uniform', padding='same')(tmp) tmp = Lambda(lambda x: x * scale)(tmp) return Add()([x, tmp]) feature_size = 128 input_shape = (3, 128,128) inputs = Input(shape=input_shape) # x = Concatenate(axis=1)([inputs]) x = Conv2D(feature_size, kernel_size=(3,3), activation='relu', input_shape=input_shape, padding='same')(inputs) for i in range(6): x = resBlock(x, feature_size) x = Conv2D(feature_size, (3, 3), kernel_initializer='he_uniform', padding='same')(x) x = Add()([x, inputs]) model = Model(inputs=inputs, outputs = x) This is what I have currently, however, it throws an error: ValueError: Operands could not be broadcast together with shapes (128, 128, 128) (3, 128, 128) It fails in this line: x = Add()([x, inputs]) AI: The error indicates that you are trying to add tensors of incompatible dimensions, as x has 128 channels and inputs has 3 channels. The reason why x has 128 channels is just in the line above, where you pass feature_size with value 128 as the number of output channels. Change the number of output channels in that line to 3 and it should work, like this: x = Conv2D(3, (3, 3), kernel_initializer='he_uniform', padding='same')(x)
H: Extract names from email address Say I have two email addresses and I would like to see if it is likely that they belong to the same person. For example, [email protected] and [email protected] is likely to be from the same person (it doesn't have to be certain, providing the likeliness would be sufficient). I had two directions in mind to achieve this, one is a string comparison between the two email addresses and the other is to first extract the names from the email addresses then compare if they might be the same person. Like in the example above, the names extracted should be Cameron M Thompson and c thompson. I am also wondering if given that one of the email addresses is guaranteed to contain the full name (usually company email addresses have the full name), would that help the extraction of name in the other email address (personal email addresses might not always contain the full name), or would that help on the comparison of the two email addresses. I have had a hard time trying to figure out if any of the above two directions would be feasible. Especially when email addresses might not have separators and names can vary a lot that a listing might not be sufficient to find a match. How should I proceed in solving this problem? Would machine learning / deep learning help or I should go with something else simple like regex and fuzzy string match? UPDATE: I have a dataset that has two columns, email address and name, and about 2k rows there. I believe this could be used for the second direction (name extraction). For the first direction (string comparison similarity), I am thinking of modifying the dataset to three columns (email address 1, email address 2, label of whether they are the same person), which should give about 1k rows of data. AI: Before talking about the solution, why don't you focus on the content instead? I think it would be more helpful to solve your problem, considering that most of the email addresses end with the sender's sign, Name Surname. Also, the probability of failing to obtain this information from an email address is much higher than the probability of failing to get it from the content. Especially, this is the case with company email addresses which might not contain the whole name in the email address (first letter of name and surname e.g. John Travolta - [email protected]), but it must contain the author's full name (at least the name) at the end. Furthermore, consider that plenty of email addresses will contain only name or surname or neither of them, but substitutive words like superboy122133@+++.com :D. But most of the email apps contain a default sign that includes name and surname. In addition, you can combine these two techniques. That is, combine the email address data with email content data so that, if it is infeasible or impracticable to obtain data from one of these, then you can use another one. However, if let's say you have to do it with nothing but an email address I think using Machine Learning techniques would be overrating or overestimating the problem. Also, using non-machine learning techniques does not mean you are simplifying the solution, all these techniques give the best outcome when they are applied in the correct context. Let's imagine a simple situation: if you know or can easily infer that [tax] = 0.2 * [salary] + 20 $, why would you find (or fit) this equation using Machine Learning? Unless you have data in the format of |email address, fullname|, you shouldn't start with using Machine Learning. 
(If you would have |email address, fullname| data, as an option, you would train a model to learn the general relationship between the email address and full name, thus you would identify similar email addresses). However, in this current situation, one approach would be finding all possible patterns in the email addresses. Which can be first letter of the name and surname jtravolta@+++.com name, special characters and surname john_travolta@+++.com name and numbers john1954@+++.com ... Then these features that are extracted from email addresses using identified patterns can be compared with other emails either hashing or using string distance algorithms. One alternative approach would be having a hashed dictionary of all available names and surnames, then you can cut pieces(substrings) from the email address then hash them to find the names and surnames from the address (Of course, vice versa would be highly inefficient). The email addresses that have the most similar, properties would be matched. Another solution would be, using the above-mentioned patterns, you can generate a bunch of artificial email addresses. Considering that it is highly probable that there is not a dataset that includes the name and surname of people and their one or more email addresses, data augmentation is the first order of business. (I am not sure whether the data augmentation term fits this situation. If it does not then let's say data generation). So your input would be Name Surname (you can include middle name, number, etc.), and output would be randomly generated email addresses based on the pre-defined patterns. The number of emails that are generated for a single input should be randomly selected also, but be careful about the generation of the same email address more than once. E.g. input -> John Travolta -> output -> j_travolta12@+++.com, john.t.99@+++.com, john.travolta@+++.com (Lets suppose for this example we randomly choose 3 emails to be generated). Then after you created, email addresses with all possible (almost) patterns you can get help from Machine Learning techniques. So the model might give you a probability with the relevant name and surname. (Also, you can configure the output so that it would give you top n name and surnames which have higher probability) Another thing that is needed to be considered is the possibility of two different persons having the same name and surname. Lastly, independently from using which approach your solution cannot be perfect because, for example, it is not possible to understand whether the character 'j' stands for John or Jake in the email address. Thus, if you can integrate the email content into your solution, that will increase the performance drastically. Update Accordingly: Check this answer which does not exactly answer your problem, but the context is the same.
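To make the simple, non-ML route concrete, here is a rough sketch using only the standard library (the separators, the digit-stripping and any threshold you put on the score are assumptions you would tune on your 2k labelled pairs, not a definitive recipe):

import re
from difflib import SequenceMatcher

def name_tokens(email):
    # keep only the local part, drop digits, split on common separators
    local = re.sub(r'\d+', '', email.split('@')[0].lower())
    return [t for t in re.split(r'[._\-+]', local) if t]

def same_person_score(email_a, email_b):
    a = ' '.join(name_tokens(email_a))
    b = ' '.join(name_tokens(email_b))
    return SequenceMatcher(None, a, b).ratio()

print(same_person_score('cameron.m.thompson@company.com', 'c.thompson85@gmail.com'))

A score close to 1 suggests the same person; where exactly to cut the score off is something your labelled pairs can tell you.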
H: how to create multiple subplots (scatterplot) in a for loop I want to create a scatterplot of the target variable against all the other variables in a dataframe. It is just like a pairplot, but since a pairplot with a large number of variables takes a lot of time and memory, I only need the scatter plots of every variable against the target variable. So I tried implementing it in a for loop, but only the last one got rendered properly. count=1 for i in df.columns: plt.subplot(n,1,count) plt.figure(figsize=[5,5]) sns.scatterplot(df["cnt"],df[i]) count+=1 output: only last plot is rendered properly AI: Just get rid of the plt.figure(figsize=[5,5]) (here n is the number of columns in df, i.e. the number of subplot rows): count=1 for i in df.columns: plt.subplot(n,1,count) sns.scatterplot(df["cnt"],df[i]) count+=1 plt.show() or use this to make it prettier (adjust the first two parameters of the subplot function, i.e. 3, 2, if you have more columns; the first parameter specifies the number of rows of subplots and the second one the number of columns): count=1 plt.subplots(figsize=(10, 8)) for i in df.columns: plt.subplot(3,2,count) sns.scatterplot(df["cnt"],df[i]) count+=1 plt.show()
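A slightly more idiomatic variant (assuming df and a target column named "cnt" as in the question) is to create the grid up front with plt.subplots and iterate over the flattened axes array, which avoids tracking the counter by hand:

import matplotlib.pyplot as plt
import seaborn as sns

cols = [c for c in df.columns if c != 'cnt']
fig, axes = plt.subplots(nrows=(len(cols) + 1) // 2, ncols=2, figsize=(10, 8))
for ax, col in zip(axes.flat, cols):
    sns.scatterplot(x=df['cnt'], y=df[col], ax=ax)
fig.tight_layout()
plt.show()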
H: Convert Neural network to Keras Classifier I am training a Neural Network for Multi-Class classification. After successfully training it and validating the model through cross-validation, I would like to use this network inside a voting Classifier. In order to perform cross-validation on my trained network, I convert it to a Keras Classifier and then calculate its validation score. However, when I parse the same exact "Keras Classifier" in the Voting Classifier method, I get the following error: ValueError: The estimator KerasClassifier should be a classifier The code can be seen below: import random random.seed(42) from keras.layers import Dense from keras.models import Sequential from keras.wrappers.scikit_learn import KerasClassifier import tensorflow as tf from sklearn.ensemble import VotingClassifier def NeuralNetwork(): model = Sequential() # define first hidden layer and visible layer model.add(Dense(600, input_dim=k, activation='relu')) model.add(Dense(20, activation='relu')) # define output layer model.add(Dense(3, activation='softmax')) # define loss and optimizer model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=["accuracy"]) return model NN = KerasClassifier(build_fn=NeuralNetwork , epochs=100, batch_size=100, verbose=0) cross_val_score(NN, X_new_train,y_train_no_id, cv=3) votingC = VotingClassifier(estimators=[ ('LR1', LR1),('LR2', LR2), ('XGB',XGB),('NN',NN)], voting='hard', n_jobs=4) #LR1 and LR2 are some other Logistic Regression estimators defined in another section votingC = votingC.fit(X_new_train, y_train_no_id) votingC.predict(X_new_test) AI: There are a couple of Github issues on this - see here and here. In short, while scikit-learn native models include an _estimator_type attribute: from sklearn.linear_model import LogisticRegression clf = LogisticRegression() clf._estimator_type # 'classifier' this is not the case with a KerasClassifier; using your own NN gives NN = KerasClassifier(build_fn=NeuralNetwork , epochs=100, batch_size=100, verbose=0) NN._estimator_type --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-4-a2fd193c154f> in <module>() ----> 1 NN._estimator_type AttributeError: 'KerasClassifier' object has no attribute '_estimator_type' As reported in a comment here, setting the attribute manually seems to work: NN._estimator_type = "classifier" so add this line immediately after your NN definition.
H: Why the accuracy of my bagging model heavily affected by random state? The accuracy of my bagging decision tree model reach up to 97% when I set the random seed=5 but the accuracy reduce to only 92% when I set random seed=0. Can someone explain why the huge gap and should I just use the accuracy with highest value in my research paper or takes the average with random seed=None? AI: Can someone explain why the huge gap It simply means that there's a quite high variance depending which random set of instances is picked. How many times do you re-sample the instances in the bagging process? Probably increasing the number of runs will decrease the variance. As mentioned in a comment, the most common reason for variance in performance is a sample which is too small (and/or a number of features/classes which is too high). It's likely to cause your models to overfit. and should I just use the accuracy with highest value in my research paper or takes the average with random seed=None? Never ever use the highest performance across random runs, this is cherry-picking and it doesn't reflect the true performance. The possibility to select a random seed is for reproducibility purposes, and selecting the one which gives the best results is the opposite of the principle of reproducibility. Since you're using bagging, you should decrease the variance (that's the whole point) by increasing the number of runs. If you can't do that for any reason, then don't use bagging: simply repeat the regular process splitting-training-evaluating $N$ times (with a fixed proportion training/testing data) or use cross-validation, and report the average performance (preferably report the variance as well, e.g. standard deviation).
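As an illustration (X and y stand for your features and labels, and the model below is only a generic stand-in for your bagged decision tree), repeated cross-validation gives you a mean and a spread instead of a single seed-dependent number:

from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

model = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring='accuracy')
print(f'accuracy: {scores.mean():.3f} +/- {scores.std():.3f}')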
H: Running scikit-learn with large volume I need to run a Random Forest process with scikit-learn. To train the model, I have a database table with 10 million rows of features. The question is: what is the best way to approach this, should I load the 10 million rows into memory, for example with numpy or pandas, or is there a better way to load the data progressively in chunks? AI: There are multiple possibilities, from Dask to other models, etc. Here are my two favourites, so as not to lose you in the number of options: www.h5py.org/ "It lets you store huge amounts of numerical data, and easily manipulate that data from NumPy. For example, you can slice into multi-terabyte datasets stored on disk, as if they were real NumPy arrays. Thousands of datasets can be stored in a single file, categorized and tagged however you want." Try online learning with a cousin model of random forest (LightGBM), which has incremental-learning capabilities.
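If you do want to stream the table instead of loading it all at once, here is an illustrative sketch of chunked reads combined with incremental learning; note that RandomForestClassifier itself has no partial_fit, so SGDClassifier stands in here, and the connection string, table and column names are placeholders:

import pandas as pd
from sklearn.linear_model import SGDClassifier
from sqlalchemy import create_engine

engine = create_engine('postgresql://user:password@host/db')  # hypothetical connection
clf = SGDClassifier(random_state=0)
classes = [0, 1]  # assumed label set, must be declared for partial_fit

for chunk in pd.read_sql('SELECT * FROM features_table', engine, chunksize=100_000):
    X = chunk.drop(columns=['target'])
    y = chunk['target']
    clf.partial_fit(X, y, classes=classes)

For a random forest proper, 10 million rows of modest width often still fit in memory (a few GB with down-cast dtypes), so loading with pandas/NumPy is frequently the simplest answer.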
H: finding similarity of a new datapoint I have built a recommendation engine using cosine similarity. When I want to find all the records similar to a given record that is already present in the dataset, it works. Consider the case where a user enters a new record which does not exist in the similarity matrix: to find and recommend similar records I am adding the new data point to the dataset, rebuilding the similarity matrix and then picking the similar records. Is there a better way to handle this problem? Adding more details data=["document1", "document2", "document", ... "documentN"] count_matrix = count.fit_transform(data) cosine_sim = cosine_similarity(count_matrix, count_matrix) AI: Instead of transforming and recalculating the whole cosine similarity matrix every time you receive a new data point, store your existing data, and whenever you get a new sample compute its cosine similarity against the existing data, then append and store it. That is, you need two functions: First function - calculate the similarity matrix among the N existing documents, store it as an [N x N] matrix, and also store the transformed data, count_matrix. Second function - when you get a new data point, calculate its cosine similarity to all existing data points in the count_matrix, append the resulting [1 x N+1] row to the existing cosine similarity matrix, update the count_matrix, and store both. Simply put, instead of recreating the whole [N+1 x N+1] matrix, store the data and, when new data arrives, calculate only the [1 x N+1] row and combine it with the existing matrix.
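A minimal sketch of those two functions (names are illustrative; note that words unseen at fit time are simply ignored, since the vectorizer's vocabulary is not refit):

from scipy.sparse import vstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

data = ["document one", "document two", "document N"]
count = CountVectorizer().fit(data)
count_matrix = count.transform(data)  # store this

def similarities_for_new_doc(new_doc, count_matrix):
    new_vec = count.transform([new_doc])                # 1 x vocabulary
    new_row = cosine_similarity(new_vec, count_matrix)  # 1 x N similarities
    count_matrix = vstack([count_matrix, new_vec])      # now (N+1) x vocabulary
    return new_row, count_matrix

sims, count_matrix = similarities_for_new_doc("another document", count_matrix)
print(sims)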
H: Need to kickstart learning rates I was just looking at the docs on Pytorch for different available schedulers and I found one that I am having some trouble understanding here. The others seem to make sense: As training progresses, the learning rate gradually decreases. But in my study so far, I am yet to come across a model that needs such a "kickstart" mechanism. Could someone please help me figure out why we need this? AI: This technique of gradually increasing the learning rate linearly during early phase of training is called learning rate warmup. This is typically needed in order to prevent overfitting during the initial phase of training where a certain subset of observations may significantly skew your model towards some features and push you towards a bad local optimum. The rectified adam paper suggests that this warmup heuristic serves as a variance reduction technique. Due to the lack of samples in the early stage, the adaptive learning rate has an undesirably large variance, which leads to suspicious/bad local optima They perform a few experiments that validates their hypothesis.
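As a tiny, hedged illustration of the warmup idea in PyTorch (the linear ramp over five epochs and the toy model are arbitrary choices, not a prescription):

import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

warmup_epochs = 5
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda epoch: min(1.0, (epoch + 1) / warmup_epochs))

for epoch in range(10):
    # ... forward pass, loss.backward() would go here ...
    optimizer.step()
    scheduler.step()
    print(epoch, scheduler.get_last_lr())  # ramps up, then stays at the base lr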
H: Scaling continuous data to discrete range Edit: the context is as follows: I've trained some ML model that predicts some feature vector. Thats a. But I know that a can take some values from discrete range, but the model outputs continuous values. After applying predict I want to enforce the outputs to take the discrete values they are allowed to be in. I've also edited my example to be more informative. I have this continuous data: a = [0.003 0.994 1.334 3.2 1.464 2.454 2.941 999.999] (outputs) How can I easily convert it to the following discrete array: b = [0 1 2 3 3] (allowed values) so that every element of a will be mapped to its closest counterpart in b: a_scaled = [0 1 1 3 1 2 3 3] (what I need) I know how to implement this, but I want to stick with "don't invent the wheel". So is there any nice function from some library that does this? AI: Update Accordingly: Your question was not clear before, therefore, sorry for the irrelevant solution. To achieve that, I don't think there is a public library function, however, you can build your solution using some beautiful "ready" functions. Two solutions come into my mind: First one's time complexity is O(N*M) N is your prediction (list a in your case) size, and M is your dictionary (list b in your case) size. import numpy as np def findClosest(dictionary, value): idx = (np.abs(dictionary - value)).argmin() return dictionary[idx] #[findClosest(b, elem) for elem in a] print([findClosest(b, elem) for elem in a]) This just subtracts your prediction value from the values in your dictionary and takes the absolute value of them. Then in the resulting array, it looks for the location of value that is smallest. Second one's time complexity is O(N*log(M)) N and M denote the same thing as the first solution. from bisect import bisect_left def findClosestBinary(myList, myNumber): pos = bisect_left(myList, myNumber) if pos == 0: return myList[0] if pos == len(myList): return myList[-1] before = myList[pos - 1] after = myList[pos] if after - myNumber < myNumber - before: return after else: return before #[findClosestBinary(b, elem) for elem in a] print([findClosestBinary(b, elem) for elem in a]) Note: To save time I didn't implement myself but took the findClosestBinary() function from here. This one is a better algorithmic approach in terms of time complexity. This does the same thing but uses a binary search to efficiently find the closest value in the dictionary. However, it assumes your dictionary (list b) is sorted. Since your dictionary is a predefined list, you can improve the performance by providing a sorted array. However, if your dictionary that you will map predictions to is not a very big one, then you can just use the first one. In the case of the dictionary being small, these two functions will behave the same in terms of time.
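If you prefer a fully vectorised one-liner over an explicit loop, broadcasting does the same thing (assuming a and b are 1-D NumPy arrays; memory is O(len(a) * len(b)), which is fine for a small dictionary like this one):

import numpy as np

a = np.array([0.003, 0.994, 1.334, 3.2, 1.464, 2.454, 2.941, 999.999])
b = np.array([0, 1, 2, 3])

a_scaled = b[np.abs(a[:, None] - b[None, :]).argmin(axis=1)]
print(a_scaled)  # [0 1 1 3 1 2 3 3]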
H: Compare cross validation and test set results I am having a hard time understanding the results of a cross validation test and a test run on a test set. First I made the following pipeline: pipe=Pipeline([('clf',DecisionTreeClassifier(random_state=0))]) Then I use cross validation on a scaled training set (75% of the original dataset): >>> cross_val_score(pipe, X_train_scaled,Y_train,cv=7).mean() 0.7257796129913106 I then fit the pipeline with the training data and run the classifier on the training data. >>> pipe.fit(X_train_scaled,Y_train) >>> pipe.score(X_train_scaled,Y_train) 0.7734339749126584 Finally, I checked the model's performance on the test set: pipe.score(X_test_scaled, Y_test) 0.941353836876225 Question 1: have I done the right steps, do I even need to run the pipeline on the training data for the training data score? Question 2: why is the test data so much more accurate than the cross validated one? Is the data underfitted, or is it okay for this to happen? AI: At first sight the steps seem to be correct. Nevertheless: you did not say how you split your dataset into training and test sets, and this can influence your final score values. A test score that is noticeably higher than the cross-validation score is not necessarily a problem in itself; it can simply happen when the test set is small or happens to contain easier examples, so make sure the split is random (and stratified if the classes are imbalanced) and, ideally, repeat it a few times to see how much the gap varies. You might also be interested in the sklearn GridSearchCV option, with which you carry out cross-validation similar to what you did, but you do not need to manually refit the model on the whole train dataset, because the refit parameter takes care of it (choose the scoring metric used to select the best model from the grid search cross-validation; that model is then refit on the whole train dataset without splits), as follows. Basically, the steps for your example could be: split your dataset to leave some rows out for a final validation; apply grid search cross-validation on the rest of the data (your train set), something like: dt_clf = DecisionTreeClassifier(random_state=0, class_weight="balanced") hiperparams_search_space = {'criterion': ["gini", "entropy"], 'max_depth': [4, 5, 6, 8], 'min_samples_leaf': [2, 3, 5]} dec_tree_cross_val_clf = GridSearchCV(dt_clf, hiperparams_search_space, cv=10, scoring=['accuracy', 'recall', 'precision', 'roc_auc'], refit='recall',return_train_score=True, n_jobs=-1) dec_tree_cross_val_clf.fit(X_train, y_train) where for each scoring metric, you would have train and test values across your k-folds in a dataframe via dec_tree_cross_val_clf.cv_results_; then use clf.best_estimator_ to make predictions on your validation set to calculate a final score metric: dec_tree_cross_val_clf_best_est = dec_tree_cross_val_clf.best_estimator_
H: R code that gives results like Wolfram Alpha for the expectation of a function of a random variable? When I ask Wolfram Alpha to calculate $E[f(X)]$ where $f(x) = e^{-x^2}$ and $X \sim \mathcal{N}(1,4)$, it gives the result $$ E[f(X)] = \frac{1}{3\sqrt[9]{e}} \approx 0.29828, $$ and the following plot which appears to be based on taking a number of simulations: How can I generate the same type of plot using R? AI: You can simulate one of these lines like this: library(ggplot2) X<-rnorm(1000, 1, sqrt(4)) f<-function(x) exp(-x*x) df<-data.frame(sample_size=seq_len(1000), sample_means=cumsum(f(X))/seq_len(1000)) ggplot(df, aes(x=sample_size, y=sample_means)) + geom_line() + theme_minimal() + labs(x=NULL, y=NULL)
H: Should I include all dummy variables or N-1 dummy variables (keep one as reference) in neural networks I have a categorical variable with N factor levels (e.g. gender has two levels) in a classification problem. I have converted it into dummy variables (male and female). I have to use a neural network (nnet) to classify. I have two options - Include any N-1 dummy variables in the input data (e.g. include either male or female). In statistical models, we use N-1 dummy variables. Include all N dummy variables (e.g. include both male and female) Can someone please highlight the pros and cons of both options in terms of predictive power and interpretability? AI: I will answer the question in the neural network context (i.e. I won't talk about how regularization handles this problem in regression algorithms). Not dropping one of the encoded variables causes multicollinearity, that is, one of the variables can be estimated from the others. E.g. if you have a variable that says whether a person is female or not, why would you need another variable which says whether a person is male or not? However, the main thing that people forget to mention while answering this question is that multicollinearity is not actually a big problem unless you need to interpret your variables. Multicollinearity makes your coefficients unstable and hard to interpret, but it won't affect your predictions. Thus, it should not be a big problem unless you need to explain your model in terms of your variables. Another possibility is that, if the covariance between your variables in the training set and test set were different, then your predictions would be affected and you would have incorrect results. However, if you shuffle your dataset well enough (which is the usual assumption) and split the train and test set (also validation) correctly, then the covariance should be the same in both. In other words, the relation between those correlated variables can be assumed to stay the same in the train and test set. Thus, you can safely assume that your predictions are correct.
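A quick illustration of the two encodings with pandas (the column and category names are made up):

import pandas as pd

df = pd.DataFrame({'gender': ['male', 'female', 'female', 'male']})

print(pd.get_dummies(df['gender']))                   # N dummy columns (female, male)
print(pd.get_dummies(df['gender'], drop_first=True))  # N-1 columns, 'female' becomes the reference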
H: Model inclines because of imbalanced data In life, some events are rare and most cases are normal. So I am wondering, to detect the rare cases, shall we use an imbalanced dataset with more historical rare cases? Taking the German Credit Data as an example: it contains data on 20 variables and the classification of whether an applicant is considered a Good or a Bad credit risk for 1000 loan applicants. 70% are Good, 30% Bad. With this original data set, I assume the model would tend to recognize the normal cases better (because there are more normal cases in the data). If a balanced dataset is used, i.e. the number of good credits equals that of bad credits, the final model will be good at predicting both ‘Good’ and ‘Bad’. But if we want to use machine learning to recognize the rare events, e.g. in this case the bad credit customers, shall we use an imbalanced dataset (for example 70% Bad credits, 30% Good in a total of 1000 records) that contains many more bad credit customers than good ones, so the final model is good at recognizing the bad customers? Or is a balanced dataset always necessary (and the only right way)? Can anybody please shed some light on this? Thank you. Link to data: https://online.stat.psu.edu/stat857/sites/onlinecourses.science.psu.edu.stat857/files/german_credit/index.csv AI: Impact of imbalanced datasets: first, I would say that the impact of an imbalanced dataset depends on the type of model you are using. For instance: Gaussian Naive Bayes should not be impacted much, provided you have enough data for each class to approximate its Gaussian distribution (and your data are normally distributed). Neural networks use the error of your predictions to update the model, so an imbalanced dataset leads to imbalanced learning (for example, 70% of the weight/bias updates are driven by the class 'Good'). You usually don't want this to happen. How to deal with an imbalanced dataset? There might be other approaches, but you can do at least the following. Use an algorithm that is not much impacted by imbalanced datasets. Use the class_weight parameter that some algorithms have: it penalizes errors on the minority class more during the learning process, so the model is forced to pay more attention to the minority class observations; see this post for more details: How does the class_weight parameter in scikit-learn work? Resample your dataset so it becomes balanced, either by undersampling the majority class or by oversampling the minority one. Further reading: How to Deal with Imbalanced Data; 8 Tactics to Combat Imbalanced Classes in Your Machine Learning Dataset. Hope it helps.
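To make the class_weight option concrete, here is a small sketch on synthetic 70/30 data (a stand-in, not the credit file itself); the weighting penalises errors on the rare class more without changing the data at all:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, weights=[0.7, 0.3], random_state=0)

for cw in [None, 'balanced']:
    clf = RandomForestClassifier(class_weight=cw, random_state=0)
    recall = cross_val_score(clf, X, y, cv=5, scoring='recall').mean()
    print(f'class_weight={cw}: recall on the minority class = {recall:.3f}')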
H: Training, Testing and Validation Dataset I'm training a Unet model for tumor segmentation. I have a dataset of 400 patients for that. The images used are CT scans (3D images) that I divide into 2D images (for a total of 30k 2D images). I am currently splitting the dataset into: 10% test data, 18% validation data, 72% actual training data. I'm dividing the test and training data over patients (i.e. the patients used for testing are not the same as the ones used for training). Afterwards, I shuffle the 2D images and split them into training/validation datasets (i.e. the same patients can be found in the training dataset and the validation dataset, but not the same slices from the stack). I have two questions: Should I split the train/validation dataset according to patients too? Are the division percentages in train/test/validation suited to my problem? AI: Generally the exact numbers (percentages) do not matter. What matters is that your splitting (train/test/validation) does two things: it represents the real-world situation, and it makes sure the model can generalise given that it is evaluated on the holdout sets. So what does that mean here exactly? You have 30k images and 400 patients. Most likely the patients (scans) will differ from each other, so you should also split the train/validation data according to patients, to make sure the model can generalise to slightly different distributions of images. As for the percentages: you need to make sure that what you find in train, test and validation represents your problem. This can mean splitting by patient, splitting by some other feature, checking the distribution of the data, etc. What it does not mean is that having, say, 12% of the data in one set automatically makes the split a good one. What does that mean? Let's say you have 1000 rows of data and you split 90%/10%, so the holdout has 100 data points. But in the train set, the majority of the 900 rows are very similar to each other, and they differ from the 100 points in the holdout. Is this a good split? Obviously not, because the model is learning nothing useful for the holdout.
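A patient-wise train/validation split is easy to express with scikit-learn's group-aware splitters; in this sketch images, masks and patient_ids are assumed to be arrays with one entry per 2D slice:

from sklearn.model_selection import GroupShuffleSplit

gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, val_idx = next(gss.split(images, masks, groups=patient_ids))

# no patient ends up on both sides of the split
assert set(patient_ids[train_idx]).isdisjoint(patient_ids[val_idx])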
H: How to sample with guaranteed minimum spacing? I have a timeseries stored as a pandas.Series, and have computed the autocorrelation to be negiligable for 12 hours lag. Therefore, I want to test the approach of sampling data points randomly, but with at least 12 hours between them, and then treating them as IID data. How do I do this algorithmically, in an efficient way? ps if I want at most one data point per 12 hour window, I could do like below: import pandas as pd import sklearn import numpy as np ndata = 2*24*60*60 s = pd.Series(data=np.random.random(ndata),index=pd.date_range(start=pd.Timestamp("2000-01-01"), periods=ndata, freq='S'),name='data') window = pd.Series((s.index - s.index[0]).total_seconds()//(12*60*60),index=s.index,name='window_id') df = pd.concat([s,window],axis=1) random_datapoints = sklearn.utils.resample(df,stratify=df.window_id,n_samples=df.window_id.unique().size,replace=False) However, most probably (this happens almost surely as $n\to\infty$, Ill get two samples that are less than 12 hours apart, even though they will be 12 hours apart on average.... AI: Let's say you have very frequent data across a period of time T and you want to sample N points. Instead of sampling points directly you could sample the time that separate them. You then just need to add 12 hours to all gaps to enforce your constraints. To do so you could sample N points uniformly in [0, T - (N-1) x 12h] and then compute the difference between consecutive points. Sampling N + 1 points in [0, T - N x 12h] allows you to also sample the first timestep. Afterwards you can match thoses times with the closest samples available in your data. import numpy as np T = 500 N = 200 min_diff = 0.5 t_diffs = min_diff + np.diff(np.sort(np.random.uniform(0, T - N * min_diff, N+1))) times = np.cumsum(t_diffs) import matplotlib.pyplot as plt plt.figure() plt.scatter(np.arange(len(times)), times) plt.ylabel("Time") plt.xlabel("Point index") plt.savefig("fig.png") plt.show()
H: Maximum Likelihood with Gradient Descent or Coordinate Descent blows up Context The maximum likelihood estimators for a Normal distribution with unknown mean and unknown variance are $$ \widehat{\mu} = \frac{1}{n}\sum_{i=1}^n x_i \qquad \text{and} \qquad \widehat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2 $$ These can be found (for example) by taking derivatives of the average log-likelihood $$ \frac{1}{n}\sum_{i=1}^n \log p(x_i) = -\frac{1}{2}\log(2\pi) - \frac{1}{2n\sigma^2}\sum^n_{i=1} (x^{(i)} - \mu)^2 - \log \sigma $$ Question: What if I want to use a gradient-based method? Yes, I know I can just use the estimators found above. However, I want to find such estimators using a gradient-based method such as coordinate descent or gradient descent. These are the gradients with respect to $\mu$ and with respect to $\sigma$ (which you can set equal to zero to find the estimators above) $$ \begin{align} \frac{\partial}{\partial \mu} \frac{1}{n} \sum^n_{i=1} \log p(x^{(i)}) &= \frac{\overline{x}}{\sigma^2} - \frac{\mu}{\sigma^2} \\ \frac{\partial}{\partial \sigma} \frac{1}{n}\sum^n_{i=1} \log p(x^{(i)}) &= \frac{1}{n\sigma^3}\sum^n_{i=1}(x^{(i)} - \mu)^2 - \frac{1}{\sigma} \end{align} $$ I tried using them in gradient descent $$ \begin{align} \mu_{t+1} &\longleftarrow \mu_t + \gamma \left(\frac{\overline{x}}{\sigma^2_t} - \frac{\mu_t}{\sigma^2_t}\right) \\ \sigma_{t+1} &\longleftarrow \sigma_t + \gamma\left(\frac{1}{n\sigma^3_t}\sum^n_{i=1}(x^{(i)} - \mu_{t+1})^2 - \frac{1}{\sigma_t}\right) \end{align} $$ or in coordinate ascent (where I would keep, say $\sigma_t$ fixed and optimize $\mu_t$ for $n_{\text{inner}}$ times and then switch: keep $\mu_t$ fixed and optimize $\sigma_t$ for $n_{\text{inner}}$ times. All this for $n_{\text{outer}}$ times. However it seems to blow up for some reason and not give me the obvious answer. You can run the code here. What am I doing wrong? AI: You correctly computed the gradients, but the optimization step is incorrect. Your gradient is computed at point $(\mu,\sigma)$, and, therefore, it should be used to update $\mu$ and $\sigma$ simultaneously. Correct update rule is $$ \begin{align} \mu_{t+1} &\longleftarrow \mu_t + \gamma \left(\frac{\overline{x}}{\sigma^2_t} - \frac{\mu_t}{\sigma^2_t}\right) \\ \sigma_{t+1} &\longleftarrow \sigma_t + \gamma\left(\frac{1}{n\sigma^3_t}\sum^n_{i=1}(x^{(i)} - \mu_{t})^2 - \frac{1}{\sigma_t}\right) \end{align} $$ The formula that you have in the question implements the alternating optimization which may or may not converge (it is guaranteed to converge if the gradient norm along the optimization path is at most one). Therefore, I corrected your code as follows: # Loop through and update mu and sigma mus = [mu] sigmas = [sigma] for i in range(n_iter): # Compute gradients mu_grad = (np.mean(x) - mu) / (sigma**2) sigma_grad = (np.mean((x - mu)**2)/sigma**3 - 1 / sigma) # Update mu and sigma mu, sigma = mu + gamma_mu*mu_grad, sigma + gamma_sigma*sigma_grad # Store mu and sigma mus.append(mu) sigmas.append(sigma) The result converges quickly.
H: How to create a model to suggest similar words in realtime? I have a huge database of job titles, and I want to build a system where if you enter something like "jav " it should suggest similar job titles like (java developer, java engineer) etc. How should one approach this problem? How can I build something like this? Latency is the biggest concern because it has to be real time, and we have to integrate this into the UI at the end. Any suggestions on how to proceed further? AI: Autocompletion algorithms usually build on either a tree or a hash structure for the sake of efficiency. One famous approach is the Ternary Search Tree; use this source to get an initial idea. It is also a comparatively space-efficient solution, and you can find more complex systems built on top of Ternary Search Trees. Another good solution is the Trie. The Trie data structure is a good alternative to Ternary Search Trees; for an example implementation check this. Both are handy at the front end of the application since they are simple but powerful approaches, and they have a good balance of time and space complexity. Moreover, a fast-autocomplete library may help you: since, in your case, you may have two names for the same job, it has synonym functionality. In general, tree structures are what you are looking for; you can do this task even with binary trees (a toy trie is sketched below). However, I would suggest you consider that in reality spelling errors appear quite often, and in that case you either will not be able to suggest a word or the suggestion will be incorrect. For such problems you can also add a spell-checking step that first checks whether the typed word exists; if it does not, the spell checker proposes candidate words and you suggest completions based on those. E.g. instead of 'Java' one may type in 'Jsva' and your algorithm will find the correct word and make the suggestion accordingly. This is one of the best algorithms that does auto-completion using ML techniques; you can also build a simple one using Levenshtein distance. (However, spelling-error handling is often dropped in such systems because it is costly to get right.)
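For a feel of how little code a basic prefix tree needs, here is a toy trie with prefix completion (pure Python; a production system would add ranking, a cap on the number of suggestions and the spell-checking step mentioned above):

class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_title = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, title):
        node = self.root
        for ch in title.lower():
            node = node.children.setdefault(ch, TrieNode())
        node.is_title = True

    def suggest(self, prefix):
        node = self.root
        for ch in prefix.lower():
            if ch not in node.children:
                return []
            node = node.children[ch]
        out = []
        self._collect(node, prefix.lower(), out)
        return out

    def _collect(self, node, prefix, out):
        if node.is_title:
            out.append(prefix)
        for ch, child in node.children.items():
            self._collect(child, prefix + ch, out)

trie = Trie()
for title in ['java developer', 'java engineer', 'javascript developer', 'data scientist']:
    trie.insert(title)
print(trie.suggest('jav'))  # ['java developer', 'java engineer', 'javascript developer']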
H: Which probability distribution will you use to model outliers? I was asked this question in a recent interview for the position of a Data Scientist: Which probability distribution will you use to model outliers ? I told him outliers are like rare events which can be modelled by a Poisson distribution. I pretty sure I'm wrong and the interviewer seemed to think the same. But I don't know the answer to this. Please advise. AI: I think the answer is Gaussian distribution. This is a famous approach that is used in Anomaly Detection. What you do is to fit your feature to the Gaussian distribution and the samples which have the probability below the specific threshold are labeled as an outlier. Quoting from the paper Modeling Outlier Score Distributions: Many existing unsupervised outlier detection algorithms calculate some kind of score per data object which serves as a measure of the degree of outlier. Scores are used in ranking data points such that the top n points are considered as outliers. For example, the statistical-based approach proposed in [4], uses a Gaussian mixture model to represent normal behavior and each datum is given a score on the basis of changes in the model. A high score indicates a high possibility of being an outlier. Example of usage - 1. Example of usage - 2
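A minimal univariate sketch of that idea (the injected outliers and the 1e-4 density threshold are arbitrary choices for illustration):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 1000), [8.0, -7.5]])  # two injected outliers

mu, sigma = x.mean(), x.std()
density = norm.pdf(x, loc=mu, scale=sigma)

threshold = 1e-4
print(x[density < threshold])  # typically flags just the two injected points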
H: Is the result of feature extraction a feature representation? If I use a feature extraction method on images, do I then get a feature representation, or is there a different meaning behind "feature representation"? To my understanding, when I use a CNN on an image the convolutional layers perform feature extraction. Can I then also say that I get a new feature representation after each of these layers? Thanks in advance! I am relatively new to machine learning... AI: Yes, you are right. As a result of the extraction of features, new feature representations are created. For example, SIFT and Haar are algorithms that extract the crucial features from images and create a feature representation. A CNN does it automatically with the help of its convolution layers. In an image, smooth areas, edges and corners are considered the features that may represent it best, and their importance is usually ranked in that ascending order (there are other kinds of features too). In a simple CNN, if you visualize the layers you can see that what it extracts from the image is mainly edges and corners. It is also able to learn features more complex than the ones we know how to define by hand. So, as you said, the convolution layers create a representation of features by extracting them. These feature representations (the important information from the image) are then used in, let's say, the prediction process.
H: Where is the rotated angle actually located in fitEllipse method? Main task: I fit the ellipse using the fitEllipse() method and then I'd like to count the rotation angle between the horizontal axis and the major axis of the generated ellipse. I'm going to do this using the 3rd returned argument from the fitEllipse() method - $\theta$ (rotation angle). Main issue: I can't find exact information about which axes this angle is located between. Other: If I'm right the length of minor axis and the length of major axis in ellipse it's the same lengths as two sides in a rotated rectangle. Sources: From documentation here (section nb. 9) it seems that this angle is between horizontal axis and the first side how it's written in CvBox2D section. So, it means that the angle can be between the horizontal axis and minor or major axis. But: In this article (section 3) the first example is good but in the 2nd example the angle should be between horizontal axis and height instead of width (referring to my aforementioned first point). In this article it shows that the angle is between vertical axis and one side of rectangle. So, where is the rotated angle located in fitEllipse() method? Any hints how it works are welcome. AI: I've implemented the following simple code: import cv2 import numpy as np nr_im = 9876 font = cv2.FONT_HERSHEY_SIMPLEX fontScale = 1 colorText = (0, 0, 255) thickness = 2 img = cv2.imread('testing/' + str(nr_im) + '.jpg') original = img.copy() blured_img = cv2.GaussianBlur(img,(17,17),5) image = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) lower = np.array([0, 0, 140], dtype="uint8") upper = np.array([0, 0, 255], dtype="uint8") mask = cv2.inRange(image, lower, upper) # Morphological Closing: Get rid of the noise inside the object mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25))) # Find contours cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) print(len(cnts)) cntsElps = [] for num_cnt, cnt in enumerate(cnts): genEllipse= cv2.fitEllipse(cnt) cntsElps.append(genEllipse) cv2.ellipse(original,genEllipse,(0,255,0),2) cv2.putText(original, str(num_cnt+1), (int(genEllipse[0][0]),int(genEllipse[0][1])), font, fontScale, colorText, thickness, cv2.LINE_AA) print("Ellipse nb: " + str(num_cnt+1) + " has angle: " + str(genEllipse[2]) + "\n") cv2.imwrite('testing/' + str(nr_im) + '_' + 'trash2' + '.png', original) And I used this image as example: I've got the following image result: And the rotation angle for each ellipse was: Ellipse nb: 1 has angle: 55.63788986206055 Ellipse nb: 2 has angle: 108.58539581298828 Ellipse nb: 3 has angle: 170.23861694335938 Ellipse nb: 4 has angle: 73.59089660644531 So, my conclusion is that an angle between vertical axis and major side of rectangle(=major ellipse axis) is the rotation angle in fitEllipse() method. Addendum If you look at this question from the point of view of how opencv-python defines axes (positive x-axis to the right, positive y-axis downwards), the angle is defined between the horizontal axis and the minor ellipse diameter. 
To demonstrate this, different angles are plotted on a white canvas import math import cv2 import numpy as np # create white canvas img = np.zeros([512, 512, 3], dtype=np.uint8) img.fill(255) xc = 256 yc = 256 angles = list(range(0, 360, 30)) # radii = np.linspace(30, 200, len(angles)) radii = [175] * len(angles) for idx, (angle, radius) in enumerate(zip(angles, radii)): xtop = xc + math.cos(math.radians(angle)) * radius ytop = yc + math.sin(math.radians(angle)) * radius cv2.line(img, (int(xtop), int(ytop)), (int(xc), int(yc)), (0, 0, 255), 1) # Put the contour index in the ellipse cv2.putText(img, f'{round(math.radians(angle) / math.pi, 2)} pi', (int(xtop), int(ytop)), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2, cv2.LINE_AA) cv2.imwrite('opencv_angles.jpg', img) cv2.imshow('Definition of angle', img) cv2.waitKey(0) cv2.destroyAllWindows() The text in the image shows the angles for that line in radians The angle can also be plotted for the fitted ellipses, using the angle returned by fitEllipse. (xc, yc), (width, height), angle = genEllipse rminor = min(width, height) / 2 xtop = xc + math.cos(math.radians(angle)) * rminor ytop = yc + math.sin(math.radians(angle)) * rminor cv2.line(result, (int(xtop), int(ytop)), (int(xc), int(yc)), (0, 0, 255), 3) Here you can see that the angle is between the horizontal axis and minor diameter. Remember the rotation angle for each ellipse: Ellipse nb: 1 has angle: 55.63788986206055 Ellipse nb: 2 has angle: 108.58539581298828 Ellipse nb: 3 has angle: 170.23861694335938 Ellipse nb: 4 has angle: 73.59089660644531 Conclusion Angles in opencv, where positive x is to the right and positive y is downwards in images, means that the rotation angle for ellipses is best seen as between positive x-axis and minor ellipse diameter (downwards towards the positive y-axis). However, if you flip the axes (in your mind), so that positive y-axis is upwards and positive x-axis is to the right in the image, then you can also interpret the ellipse rotation angle as between the positive y-axis and major ellipse diameter (to the right towards the positive x-axis)
H: What is a distribution-wise asymmetric measure? I was trying to understand KL-Divergence, $$D_{KL} \langle P(X) \Vert P(Y) \rangle,$$ and was going through its Wikipedia article. It says the following In contrast to variation of information, it is a distribution-wise asymmetric measure and thus does not qualify as a statistical metric of spread - it also does not satisfy the triangle inequality. What is the meaning of distribution-wise asymmetric measure? Is there a symmetric measure? What are the rules that a quantity should follow to be qualified as a statistical metric of spread? AI: What is the meaning of distribution-wise asymmetric measure? The (forward) KL-divergence is distribution-wise asymmetric because if you calculate it as $$D_{KL} \langle P(X) \Vert P(Y) \rangle$$ where $P(X)$ and $P(Y)$ are two different probability distributions with the latter being the reference distribution, then $$D_{KL} \langle P(Y)\Vert P(X)\rangle \neq D_{KL}\langle P(X)\Vert P(Y)\rangle.$$ In other words, the reverse KL-divergence is not equal to the forward KLD. If the forward KLD were symmetric, then the above would be an equality, not an inequality. Is there a symmetric measure? A distribution-wise symmetric measure would, for example, be mutual information: $$I(X;Y) = H(X)+H(Y)-H(X,Y) = D_{KL} \langle P(X,Y) \Vert P(X) \cdot P(Y) \rangle,$$ where $H(X)$ is the entropy of the variable $X$'s probability distribution, since $I(Y;X) = I(X;Y)$. Mutual information is a special case of the KLD in which the joint distribution is measured against the product of the marginal distributions. What are the rules that a quantity should follow to be qualified as a statistical metric of spread? The three axioms that a distance metric should meet are: identity of indiscernibles, symmetry, and sub-additivity (the triangle inequality). Since mutual information does not obey the triangle inequality, it does not fit the full criteria for being a distance metric. Instead, variation of information does meet all of the above requirements and is a true metric: $$VI(X;Y) = H(X,Y) - I(X;Y)$$ where $H(X,Y)$ is the joint entropy.
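A small numeric check of the asymmetry (scipy's entropy(p, q) computes the KL divergence of p from q when a second argument is given):

import numpy as np
from scipy.stats import entropy

p = np.array([0.1, 0.4, 0.5])
q = np.array([0.8, 0.1, 0.1])

print(entropy(p, q))  # D_KL(P || Q)
print(entropy(q, p))  # D_KL(Q || P), a different number, hence "asymmetric"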
H: Problem importing imblearn I am trying to do: import imblearn in my jupyter notebook, but when I run the cell I get the following error message: SyntaxError: invalid syntax even if I have installed the package. I am using python 2. What could the problem be, and how do I solve it? AI: If you check the PyPI page of the imblearn source package you can see that it is most likely deprecated, and they advise you to install imbalanced-learn instead. Also note that recent versions of imbalanced-learn only support Python 3, so importing the package under Python 2 can fail with exactly this kind of SyntaxError; switching to Python 3 (or, if you are stuck on Python 2, pinning an old Python-2-compatible release) should resolve it.
H: What is Deep supervision? I'm interested in segmentation models for medical imaging purposes. When I looked at the state of the art, I fell on a paper on a new architecture, Unet++: UNet++: A Nested U-Net Architecture for Medical Image Segmentation from Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang at Arizona State University Like Unet, it is has an encoder/decoder architecture with skip connections (adding fine-grained feature maps of the encoder to the decoder). However, in Unet++, the skip connections are nested and dense so that a model can improve its ability to capture fine-grained details. The second difference with Unet is the use of deep supervision. The papers says that deep supervision enables: "the model to operate in two modes: 1) accurate mode wherein the outputs from all segmentation branches are averaged; 2) fast mode wherein the final segmentation map is selected from only one of the segmentation branches, the choice of which determines the extent of model pruning and speed gain" I didn't understand how deep supervision works and what are the benefits in the case of Unet++. Can you explain how it works? Thank you in advance for your help. AI: Partial answer: Quoting a paper "The advantage of such deep supervision is evident: (1)for small training data and relatively shallower networks, deep supervision functions as a strong “regularization” for classification accuracy and learned features; (2) for large training data and deeper networks deep supervision makes it convenient to exploit the significant performance gains that extremely deep networks can bring by improving otherwise problematic convergence behavior"
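To make the idea more tangible, here is a hedged PyTorch sketch of the training-loss side of deep supervision (the module is a toy stand-in for the nested decoder, and all names and sizes are illustrative, not the UNet++ reference code): every supervised branch gets its own 1x1 segmentation head, each head gets its own loss against the same ground truth, and the losses are averaged.

import torch
import torch.nn as nn

class TinyNestedDecoder(nn.Module):
    def __init__(self, channels=16, n_classes=1):
        super().__init__()
        self.block1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.block2 = nn.Conv2d(channels, channels, 3, padding=1)
        # one 1x1 segmentation head per supervised branch
        self.head1 = nn.Conv2d(channels, n_classes, 1)
        self.head2 = nn.Conv2d(channels, n_classes, 1)

    def forward(self, x):
        x1 = torch.relu(self.block1(x))
        x2 = torch.relu(self.block2(x1))
        return [self.head1(x1), self.head2(x2)]  # one output per branch

model = TinyNestedDecoder()
criterion = nn.BCEWithLogitsLoss()

features = torch.randn(2, 16, 64, 64)
target = torch.randint(0, 2, (2, 1, 64, 64)).float()

outputs = model(features)
loss = sum(criterion(out, target) for out in outputs) / len(outputs)  # deep supervision
loss.backward()

# "accurate mode": average all branch outputs; "fast mode": keep only one head
pred_accurate = torch.sigmoid(sum(outputs) / len(outputs))
pred_fast = torch.sigmoid(outputs[0])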
H: Fitting probabilities in scikit-learn RandomForestClassifier This is a newbie question, so please bear with me. Given this random forest model: from sklearn.ensemble import RandomForestClassifier X = [ [2,1,1,1], [2,0,2,1], [3,1,1,1] ] y = [ 0, 1, 2 ] regr = RandomForestClassifier(n_estimators=200, max_depth=5) regr.fit(X, y) X_test = [ [3, 1, 1, 1] ] y_result = regr.predict_proba(X_test) print('y_result:' , y_result ) The result is: y_result: [[0.26 0.05 0.69]] I understand that these are the probabilities of the first, second and third value, or 0 = 26%, 1 = 5% and 2 = 69%. Question: if the test set is [3, 1, 1, 1] and it fits to the value 2, why do I get 69% probability of 2 instead of 100%? AI: Well, based on how this algorithm works, here is why: you build 200 trees with a max depth of 5 levels, and each tree is a decision tree trained on a bootstrap subset of your data. To answer your question, the prediction doesn't snap to the value 2 because that's not how the algorithm works. What happened here is that roughly 52 trees predicted the value 0, 10 predicted the value 1, and 138 predicted the value 2, so most of the trees predicted the value 2. The trees disagree because each one sees a different random bootstrap sample of the three training rows: some trees never saw the example [3, 1, 1, 1] with label 2 at all, so they can only fall back on the rows they did see. You asked for the probabilities (using predict_proba(X_test)). If you want only one value, don't use predict_proba but predict(X_test); it will simply return the class with the highest of the probabilities you got.