H: Using MinMaxScaler on Training Set... Do I need to scale the input for a prediction as well? I know this is a rookie question, but I'm having trouble getting predictions out of a model. I use a MinMaxScaler() on the training set as seen below...

scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

Now that I have trained and saved the model, I'm trying to run the model using this code...

filename = 'cb1.sav'
loaded_model = pickle.load(open(filename, 'rb'))
predicted_y = loaded_model.predict_proba(sys.argv[1:])
print(predicted_y)

I pass all the values for the prediction in as arguments like...

#python3 ./test.py 43 28 0.2348 0.00002934 8

Resulting in an array in sys.argv where sys.argv[1:] represents all the X values in the correct order, skipping the first element since it is the name of the file (test.py). Question: Do I need to perform any scaling or transformation on the sys.argv[1:] values? AI: Yes. Any preprocessing that you did manually before the model object was trained needs to be applied to prediction data; the model is expecting the inputs in the same format as when you trained it. The easiest way to do that, if you used sklearn preprocessors, is to also save all the preprocessing steps and load those in your prediction script. This is made even easier by putting them together with your model into a Pipeline and just fitting and saving the entire pipeline as a single object.
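A minimal sketch of that pipeline approach (the RandomForestClassifier and the filename are placeholders, and X_train/y_train are assumed to exist as in the question; note the sys.argv values also need converting from strings to numbers):

import pickle
import sys
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Training script: the scaler and the model are fitted and saved together.
pipe = Pipeline([
    ("scaler", MinMaxScaler()),
    ("clf", RandomForestClassifier()),
])
pipe.fit(X_train, y_train)
with open("cb1.sav", "wb") as f:
    pickle.dump(pipe, f)

# Prediction script: the loaded pipeline rescales the raw inputs by itself.
with open("cb1.sav", "rb") as f:
    loaded_model = pickle.load(f)
x = np.array(sys.argv[1:], dtype=float).reshape(1, -1)  # one row of features
print(loaded_model.predict_proba(x))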
H: Should I try to predict the probability of being 1 or zero based on recall score? I built a random forest model to classify if a men's NCAA basketball game would go over or under the total. The model was given training data that specified over as 1 and under as 0. I then used a classification report to calculate the precision, recall, f1-score, and support. These were my results: for a model like this when my goal is to accurately predict whether the test data will have an outcome of 1 or 0 should I use recall in my decision for choosing whether to try to predict if the outcome will be a 0 or 1? Also I had my model using predict_proba and using the second value from the array return which would give me the probability of the test data outcome being a 1, but from this classification report it looks like I should be trying to predict the probability of the outcome being 0 as it is more descriptive when it comes to recall. # Example: # less accurate way ynew = model.predict(Xnew) results = ynew[0][1] # predict probability of Xnew being 1 # Example 2: # more accurate way ynew = model.predict(Xnew) results = ynew[0][0] # predict probability of Xnew being 0 Is my assumption correct? AI: I have some bad news: your model is doing nothing useful. From the confusion matrix you can see that the model predicts class 0 for 68% of the instances and class 1 for 32%: (1446+1355)/4093=0.68 and (676+616)/4093=0.32. You can also see that when the true class is 0 the model predicts 68% of the instance as class 0 and 32% as class 1: 1446/2122=68% and 676/2122=32%. Almost same result when the true class is 1: 1355/1971=69% and 616/1971=31%. This means that for any instance (no matter the true class) the model randomly predicts the label: class 0 with 68% chance, class 1 with 32% chance. You can also see this problem from the fact the sum of the top-left to bottom right diagonal (correctly classified cases) is 2062, almost the same as the bottom-left to top-right diagonal (errors) which is 2031. A good model would have a much larger difference between correct cases and incorrect cases. This is why the accuracy is only 50.4%, barely more than a random baseline with two classes, and this is why the recall for each class corresponds exactly to the arbitrary probability that the model assigns to them. should I use recall in my decision for choosing whether to try to predict if the outcome will be a 0 or 1? This doesn't make sense: the recall is an evaluation score, it can only be calculated after the predictions have been made. Also I had my model using predict_proba and using the second value from the array return which would give me the probability of the test data outcome being a 1, but from this classification report it looks like I should be trying to predict the probability of the outcome being 0 as it is more descriptive when it comes to recall. There are at least two misunderstandings here: By itself, the fact that the model predicts class 0 more often than class 1 is neither good or bad. It would be completely wrong to choose to evaluate the performance using only the recall on the most frequent class: it's true that it's a higher value, but it's not at all a good performance indicator. For example, if one uses a model which always predicts class 0 (100% of the time) then by definition this classifier will have recall 100% for class 0, even though this model is pointless. In binary classification, the probability of predicting the two classes are mathematically bound together: $p(C=0)+p(C=1)=1$. 
So choosing to predict the outcome 0 instead of 1 doesn't change anything about what the model does, it would be exactly the same model with the same performance. What I suggest you should try is to work on your features: normally the features should be indicators that the model uses to predict the class, but unfortunately your current features are terrible at their job. Assuming that the features "make sense" for the task, it should be possible to "present them" to the model so that it can actually use them efficiently. This is called feature engineering: it's often the hard part of the job but also the most likely to really make things work :)
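A small sketch of that last point (assuming model and Xnew are the fitted classifier and the new data from the question): the two columns of predict_proba are complementary, so basing the decision on the probability of class 0 instead of class 1 gives exactly the same predictions.

import numpy as np

proba = model.predict_proba(Xnew)        # shape (n_samples, 2), columns ordered as in model.classes_
p0, p1 = proba[:, 0], proba[:, 1]
print(np.allclose(p0 + p1, 1.0))         # True: p(C=0) + p(C=1) = 1 for every instance
pred_from_p1 = (p1 >= 0.5).astype(int)   # predict 1 when P(class 1) >= 0.5
pred_from_p0 = (p0 <= 0.5).astype(int)   # predict 1 when P(class 0) <= 0.5: identical decisions
print(np.array_equal(pred_from_p1, pred_from_p0))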
H: High accuracy on test-set, what could go wrong? You are given a pre-trained binary ML classification model with 99% accuracy on the test-set (assume the customer required 95% and that the test-set is balanced). We would like to deploy our model in production. What could go wrong? How would you check it? My answer was that the model could be biased towards our test set and could fail to generalize on yet to be seen data. We can check this by running the model on multiple unrelated test sets that it hasn't seen before. Is this the right angle? AI: There are many possible ways to make errors that give you a really huge score on the test set in data science. Here are a few examples: Your test set is also in your train set: imagine you have data from 2010 to 2020 and use 2019 as the test set. If you trained on all of 2010-2020 without excluding 2019, then you'll test on data that is well known by the model, since it was used to train it. Moreover, if the model tends to overfit (i.e. it fits "too perfectly and precisely" to the training set), you could achieve 99% accuracy. Data leakage: this is a phenomenon in which your test set contains information that you wouldn't have for real new cases. Example: you're using the Titanic dataset, predicting who dies and who survives. Imagine now the dataset has an attribute called "Hour of death", empty if the person survived, and filled with an hour if they died. Then your model will just learn "if this attribute is empty then the person survived, else they died". On your test set you'll apply your model knowing this information, which you wouldn't know if you had to predict truly new cases. Whatever happens, a 99% accuracy has to make you wonder and look for errors; this is almost impossible to achieve unless your problem is REALLY easy (and might not need a data science model at all).
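A minimal sketch of guarding against the first pitfall (train/test overlap) with a time-based split; the dataframe and column names are hypothetical:

import pandas as pd

# df is assumed to have a 'year' column and a 'target' column.
train = df[df["year"] < 2019]
test = df[df["year"] == 2019]

# Sanity checks before training: no shared indices and no duplicated rows across the two sets.
assert len(train.index.intersection(test.index)) == 0
print(pd.merge(train, test, how="inner").shape[0], "identical rows in both sets")  # should be 0

X_train, y_train = train.drop(columns=["target"]), train["target"]
X_test, y_test = test.drop(columns=["target"]), test["target"]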
H: Is the PR AUC invariant under label flip? The ROC-AUC curve is invariant under a flip of the labels. I don't know if it's a well-known result so I will give the proof below. My question is whether the PR-AUC curve also has this property. I have not been able to prove or disprove it yet. The reason this is important is that when data sets are highly imbalanced, ROC-AUC curves look good almost mechanically and it is better to look at PR-AUC curves. This obviously raises the question of whether a better PR-AUC for one class implies a better PR-AUC for the other class as well. === Proof that ROC-AUC is invariant under label flip ==== $$ \begin{array}{|c|c|c|} \hline &predicted~positive & predicted~negative \\ \hline \hline Positives & TP(\xi) & FN(\xi) \\ \hline Negatives & FP(\xi) & TN(\xi) \\ \hline \end{array} $$ where $\xi \in [0,1]$ is the output of a classifier. For a fixed dataset we have $$ TP(\xi) + FN(\xi) = P \\ FP(\xi)+TN(\xi) = N $$ where $P$ and $N$ are the actual numbers of positives and negatives. The ROC-AUC defined wrt this definition of positives and negatives is $$ A = \frac{1}{PN} \int_0^1 TP(\xi)~d FP(\xi) $$ Now let's look at what happens under a label flip. We get $$ \begin{eqnarray} \tilde A &=& \frac{1}{PN} \int_1^0 TN(\xi)~d FN(\xi) \\ &=& - \frac{1}{PN} \int_1^0 (N-FP(\xi))~d TP(\xi) \\ &=& \frac{1}{PN} \int_0^1 (N-FP(\xi))~d TP(\xi) \\ &=& \frac{1}{P} TP(\xi) |_0^1 - \frac{1}{PN} (TP(\xi))(FP(\xi))|_0^1 + A \\ &=& A \end{eqnarray} $$ where the last equality follows from $TP(0)=FP(0)=0$, $TP(1)=P$ and $FP(1)=N$. AI: I propose to look at this a bit differently. The different classification statuses of the two classes are related as follows: $TP_1=TN_0$ $FP_1=FN_0$ $FN_1=FP_0$ $TN_1=TP_0$ From there we have: $$FPR_0=\frac{FP_0}{FP_0+TN_0}=\frac{FN_1}{FN_1+TP_1}=1-\frac{TP_1}{FN_1+TP_1}=1-TPR_1$$ and naturally $FPR_1=1-TPR_0$ for the same reasons. Therefore when one switches the labels on a ROC curve, one obtains a mirror image of the curve using the top-left to bottom-right diagonal as the symmetry axis. It follows that the AUC is identical. However this does not hold with a PR curve, because there is no such symmetry between precision and recall of the two classes. A counter-example should be enough to prove this: Suppose a test set with 100 instances, 90/10 of which have true class 0/1. Let's assume the classifier predicts all the instances as class 0 except 2 instances as class 1, one true 0 and one true 1. $$ \begin{array}{|c|c|c|} \hline &predicted~0 & predicted~1 \\ \hline \hline true~0 & 89 & 1 \\ \hline true~1 & 9 & 1 \\ \hline \end{array} $$ Thus we have: $P_0=0.91, R_0=0.99, P_1=0.5, R_1=0.1$ [edited] Assuming there is only this point on the PR curve and the two following extreme cases: Predict every instance as class 0: $P_0=0.9, R_0=1, P_1=NaN, R_1=0$ Predict every instance as class 1: $P_0=NaN, R_0=0, P_1=0.1, R_1=1$ Note: I'm not totally sure about the proper way to deal with $P=NaN$ but I think it's ok to assume $P=1$ in order to put these points on the curve (this is not essential to the point anyway). It's easy to see that the AUC is very different depending on whether the focus class is 0 or 1: in the former case the curve is entirely in the top part of the graph and the area is almost the maximum; in the latter the curve is mostly in the bottom part of the graph and the area is less than 0.5.
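A quick empirical check of both claims with sklearn, using random scores on an imbalanced label vector (purely illustrative):

import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(0)
y = (rng.random(1000) < 0.1).astype(int)   # ~10% positives
scores = rng.random(1000) + 0.5 * y        # weakly informative scores

# Label flip: class 1 becomes the negative class and the score direction is reversed.
print(roc_auc_score(y, scores), roc_auc_score(1 - y, -scores))                        # identical
print(average_precision_score(y, scores), average_precision_score(1 - y, -scores))   # different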
H: Does statsmodels compute R2 and other metrics on a validation-/test- set? Does statsmodels compute R2 and other metrics on a validation set? I am using the OLS from the statsmodels.api when printing summary, an r2 and r2_asjusted are presented. I did not trust those 0.88 and computed an own adjusted R2 with scikit-learn r2_score and the adjusted r2 function from this answer resulting in 0.88 as well. So the question arose. AI: No. There's another question over on Stack Overflow concerning wrapping statsmodels models to use the sklearn conventions, so that you can use sklearn cross-validation, hyperparameter searches, etc.
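To get a genuinely out-of-sample R2 with statsmodels, one option is a sketch along these lines (X and y are assumed to be your features and target):

import statsmodels.api as sm
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = sm.OLS(y_train, sm.add_constant(X_train)).fit()
print(model.rsquared)  # in-sample R2, the value reported by summary()

y_pred = model.predict(sm.add_constant(X_test))
print(r2_score(y_test, y_pred))  # out-of-sample R2 on the held-out set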
H: Does Gradient Boosting perform n-ary splits where n > 2? I wonder whether algorithms such as GBM, XGBoost, CatBoost, and LightGBM perform more than two splits at a node in their decision trees. Can a node be split into 3 or more branches instead of merely binary splits? Can more than one feature be used in deciding how to split a node? Can a feature be re-used in splitting a descendant node? AI: Gradient boosting can be applied to any base model, so doing it with a Quinlan-family decision tree (which allows such higher-arity splits for categorical features) should make this possible. However, all implementations of gradient boosted trees that I know of (and certainly XGBoost, CatBoost, LightGBM) use CART as their tree model, so you won't get anything but binary trees. (These GBMs do modify CART a little, e.g. by using histogram binning to reduce the split searches, but nothing as drastic as n-ary splits for categoricals.)
H: Maximum Likelihood estimation Given a sample $X_1,X_2 \dots X_{100}$ and the density function $f(x;\theta) = \frac{1}{\pi \cdot \left(1+\left(x-\theta \right)^2\right)}$, find an approximate solution for $\hat{\theta}_{MLE}$. My attempt: I have found the joint likelihood $L(\theta;x_1,x_2\dots x_{100}) = \prod _{i=1}^{100}\left(\frac{1}{\pi \cdot \left(1+\left(x_i-\theta \right)^2\right)}\right)$, so $l = \log(L) = -100\ln(\pi)-\sum^{100}_{i=1}\ln\left(1+(x_i-\theta)^2\right)$. I'm not sure of this step: $\frac{\partial }{\partial \theta}\left(\log(L)\right) = \sum_{i=1}^{100}\frac{2(x_i-\theta)}{1+(x_i-\theta)^2}$. Then I used Newton's method to find the maximum. This is the script I used to calculate it:

# derivative of log(L).
fun1 <- function(theta){
  y1 <- 0
  for(i in 1:length(x)){
    y1 <- y1 + (2*(theta-x[i]))/(1+(x[i]-theta)^2)
  }
  return(y1)
}

# derivative of fun1.
fun1.tag <- function(theta){
  y <- 0
  for(i in 1:length(x)){
    y <- 2*(theta^2+(x[i]^2)-20*x[i]-1)/((1+(x[i]-theta)^2)^2)
  }
  return(y)
}

# The Newton's method.
guess <- function(theta_guess){
  theta2 <- theta_guess - fun1(theta_guess)/fun1.tag(theta_guess)
  return(theta2)
}

theta1 <- median(data$x)
epsilon <- 1
theta_before <- 0
while(epsilon > 0.0001){
  theta1 <- guess(theta1)
  epsilon <- (theta_before - theta1)^2
  theta_before <- theta1
}

What I got was $\hat{\theta}_{MLE} = 5.166$. I'm now trying to plot the data (in my case x) and check if $\hat{\theta}_{MLE} = 5.166$ is actually a maximum. AI: You have a typo in your formula

# derivative of fun1.
fun1.tag <- function(theta){
  y <- 0
  for(i in 1:length(x)){
    y <- y + 2*(theta^2+(x[i]^2)-20*x[i]-1)/((1+(x[i]-theta)^2)^2)
  }
  return(y)
}

There is y + missing inside the loop.
H: Scaling the activation function It is obvious that I have to scale the output data if the range of values is between say [-10;10] and the activation function of the output layer takes values in the interval [-1;1]. But I could also scale the activation function by multiplying it with the factor 10 instead. It seems to me that it is more common to scale the data and not the function. Is there a reason for it? AI: All it does is make a difference in the step size of the learning updates. Scaling the outputs down makes the gradients smaller, so the gradient descent updates are smaller. We want that so we do not jump over a good solution with bigger steps. Let's say the activation function is $a =\sigma(x)$ where $x$ is the input to the activation function and $a$ is the output. The range of true outputs $y$ is on the order of 10 times the range of $\sigma$. We can either scale $\sigma$ up or scale $y$ down. Scaling the activation function: let's scale the activation output by 10: $y' = 10a$. The loss function $L$ compares predictions vs. truth: $L(y', y)$. We need the gradient w.r.t. $x$ for backpropagation, that is, to find how the loss changes with respect to the input so we can change the input: $$ \frac{\partial L(y', y)}{\partial x} = \frac{\partial L(y', y)}{\partial y'} \times \frac{\partial y'}{\partial a} \times \frac{\partial a}{\partial x} \\= \frac{\partial L(y', y)}{\partial y'} \times 10 \times \frac{\partial a}{\partial x} \\ = \frac{\partial L(10a, y)}{10\cdot \partial a} \times 10 \times \frac{\partial a}{\partial x} \\= \frac{\partial L(10a, y)}{\partial a} \times \frac{\partial a}{\partial x} \\= \frac{\partial L(10 \cdot a,\ 10 \cdot 0.1y)}{\partial a} \times \frac{\partial a}{\partial x} $$ Scaling the output data: on the other hand, let's scale the data down, so $y_s = 0.1y$. This means we do not need to scale $a$. The loss function gradient now is: $$ \frac{\partial L(a, y_s)}{\partial x} = \frac{\partial L(a, 0.1y)}{\partial a} \times \frac{\partial a}{\partial x} $$ Now for both cases, note the final forms of the gradient. The only difference is the arguments of $L$: in the case where the output data was scaled down, the arguments of the loss function are 10 times smaller. This means that the gradient will be smaller, which means that the step update made to $x$ will be smaller. We usually want small updates so we can converge to an optimal solution. But also note that we can make the step size smaller anyway by reducing the learning rate. So scaling the output down instead of the activation up is a nice rule of thumb to get better convergence; it is not a hard rule.
H: Is there a fundamental difference from creating a model for each value in a category? I am creating a few models based on service requests. The services being requested are not distributed equally, some services being used sparingly, whereas others are quite common. I had these services as categorical variables and built pipelines to incorporate them through one-hot encoding. I got to thinking that it may make more sense to train a model per service(at least for the common ones). Or does it make more sense to lump in the less common ones in a special category? I am struggling with the regression model, coming in at 0.41 for my R2 value. AI: Is there a fundamental difference from creating a model for each value in a category? Yes there is. If a model is trained for each specific value of a variable (a category), then only the subset of data for this category can be used to train and test the model. As a consequence each model has a smaller number of instances to be trained from. Consequences: In the case of a small category, there might not be enough instances to obtain a reliable model. Every model is independent. This can be good or bad depending on whether this independence is also true in the data or not, or to what extent: If the features behave in a completely different way depending on the category, then it's better to create individual models since each can really exploit the specific patterns for this category. If the features have a very similar behavior across the categories, then independent models by category would potentially lose a lot of information. In conclusion the choice often depends on: How much data is available for each category. How independent are the other features with respect to the category.
H: Pandas replace column values by condition with averages based on a value in another column I have a dataframe with people's CV data. Among others, there's a column with years of experience, and a column with age. Some people stated their age and experience in a way that experience > age. For example age=30 and experience=40, which is a problem. I am trying to perform the following task: if experience > age, replace the value of experience with the average experience of people of the same age. For example if age=30 and experience=40, replace experience with the average experience of all 30-year-olds. The problem is that I don't know how to obtain the age of a specific person in the slice df[df['experience'] > df['age']]. AFAIK it could be done by something like this:

mask = (df['experience'] > df['age'])
df['experience_cleaned'] = np.where(mask, df[df['age'] == age]['experience'].mean(), df['experience'])

I would appreciate it if you could show me how it can be done, because I could use the technique to also preprocess more data in a similar way. AI: Build a dictionary with the average experience per age, using only the rows where experience < age:

mapping = df.loc[df['age'] > df['experience']].groupby('age')['experience'].mean().to_dict()

Then use this dictionary to replace the values where experience > age:

mask = df['age'] < df['experience']
df.loc[mask, 'experience'] = df.loc[mask, 'age'].map(mapping)
H: Which definition of Likelihood function is correct? In the online version of the Deep Learning book on chapter 5 the estimator for likelihood function is defined as: That is the product of individual probabilities. After taking the log it arrives at the log-likelihood funciton (Eq.1): It then rescales the above by dividing it by m to obtain a version expressed as the expected value (Eq.2): OK. Here is my first question: The expected value is defined as And so I think the expected value of log will be the below expression which is not the same as Eq.1 in the book. The only way for this to be true(that is to divide the Eq.1 by m and arrive at Eq.2 as claimed by the book author) is for the probabilities of p(data) to be uniform. But this is also not a valid assumption. So I don't see what I am missing here? Next, the book argues that maximizing the above log-likelihood function (Eq.2) is same as minimizing the KL divergence: Or more simply just minimizing the second term. And so the author says that either way we arrive at the same function as Eq.2. On the other hand, from the Wikipedia page the cross entropy of two probability is defined as : I can understand this definition as the expected value of log(q) which is same as the expression in the Eq.2. but not Eq.1 From the same Wiki page the likelihood definition is given as below which is different than the likelihood function definition from the book(above). Here the probability of q (model) has been raised to the number of occurrences; which then on taking log it is understandable to see it as the expected value. So I am confused that first of all which definition of likelihood function is the correct one? Given the definition from the Wikipedia I can understand that maximizing the log-likelihood function is same as minimizing the cross-entropy function. However I cannot arrive at the same conclusion from the definition of the likelihood and the log-likelihood function given in the book, for the reasons I explained above. AI: 1 - Equivalence between $\mathbb{E}_{x\sim \hat{p}_{data}}\log(p_{model}(x;\theta))$ and $\frac{1}{m}\sum_{i=1}^m \log(p_{model}(x;\theta))$: Yes, as you mentioned in the question, the key issue here is to realize that the following expectation is with respect to the empirical distribution defined by the training data ($\hat{p}_{data}$): $$ \theta_{ML} = \arg\max_{\theta}\mathbb{E}_{x\sim \hat{p}_{data}} \log(p_{model}(x;\theta)) $$ By definition, an empirical distribution assigns a probability of $1/m$ on each of the $m$ points $\mathbb{X} = \{x^{(1)},x^{(2)},...,x^{(m)}\}$ $\to$ Thereby the value of $\hat{p}_{data}(x_i)=\frac{1}{m}f(x_i)$ where $f(x_i)$ represents the ocurrence frequency of a value $x_i$ in our dataset. Note that a value $x_i$ may appear more than once in our dataset $\mathbb{X}$ $\Rightarrow f(x_i)$ can be $> 1$. 
This is why we can express the previous expectation as: $$\begin{align} \mathbb{E}_{x\sim \hat{p}_{data}} \log(p_{model}(x;\theta)) &= \sum_{i=1}^m \frac{1}{m}\log(p_{model}(x^{(i)};\theta))\\ \end{align}$$ Hence, the maximization of both expressions leads to the same value of $\theta_{ML}$. 2 - Equivalence between the minimization of $D_{KL}(\hat{p}_{data} \Vert p_{model})$ and $H(\hat{p}_{data}, p_{model})$ As said in the question, the KL divergence of the probability distributions $\hat{p}_{data}$ (empirical distribution of $\mathbb{X}$) and $p_{model}$ (our statistical model that we are using to fit $\mathbb{X}$) is given by the next expression: $$D_{\text{KL}}(\hat{p}_{data} \parallel p_{model}) = \mathbb{E}_{x\sim \hat{p}_{data}} [ \log(\hat{p}_{data}(x)) - \log(p_{model}(x;\theta))]$$ Here, we can see that $\mathbb{E}_{x\sim \hat{p}_{data}} \log(\hat{p}_{data}(x))$ does not depend on $\theta$ (it only depends on the data generating process), so it can be treated as a constant. Hence we can address the same problem of finding the value of $\theta$ that minimizes the cross entropy by minimizing this KL divergence, because it is the same as minimizing: $$ \mathbb{E}_{x\sim \hat{p}_{data}} [- \log(p_{model}(x;\theta))] = - \sum_{i=1}^m \frac{1}{m}\log(p_{model}(x^{(i)};\theta))$$ which is the definition of the cross entropy $H(\hat{p}_{data}, p_{model})$. 3 - Equivalence between the minimization of $H(\hat{p}_{data}, p_{model})$ and the maximization of $\sum_{i=1}^m \log(p_{model}(x^{(i)};\theta))$ Here, we have to prove that $$\arg\max_{\theta}\sum_{i=1}^m \log(p_{model}(x^{(i)};\theta))$$ gives the same value of $\theta_{ML}$ as: $$\begin{align} \arg\min_{\theta}H(\hat{p}_{data}, p_{model}) &= \arg\min_{\theta}-\frac{1}{m}\sum_{i=1}^m \log(p_{model}(x^{(i)};\theta))\\ &= \arg\max_{\theta} \frac{1}{m}\sum_{i=1}^m \log(p_{model}(x^{(i)};\theta)) \end{align}$$ Both expressions are equivalent (they give the same value of $\theta_{ML}$) because a scale factor (such as $1/m$) does not affect the solution of a minimization/maximization problem. 4 - Equivalence between $\prod_{i=1}^m p_{model}(x^{(i)};\theta)$ and $\prod_{j} p_{model}(x_j;\theta)^{\#\text{ occurrences of }j}$ This is because $i$ and $j$ represent different things: $i$ represents each individual datapoint of a dataset $\mathbb{X}\Rightarrow x^{(i)}$ is a datapoint. $j$ represents a certain value present in our dataset $\mathbb{X}\Rightarrow x_j$ is a certain value. Given this, if $n$ datapoints share the same value $x_j$, then we can express the likelihood of these datapoints in two equivalent ways: $$ \prod_{i=1}^n p_{model}(x^{(i)};\theta) = p_{model}(x_j;\theta)^n$$ So, if we extend this reasoning to all the possible values ($x_j$) that our dataset can have, we reach the equivalence given by Wikipedia: $$\prod_{i=1}^m p_{model}(x^{(i)};\theta) = \prod_{j} p_{model}(x_j;\theta)^{\#\text{ occurrences of }j}$$
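A small numeric sketch of point 3, using a toy Bernoulli model (not from the original question, purely for illustration): the sum of log-likelihoods and its $1/m$-scaled version (the negative cross entropy) peak at the same $\theta$.

import numpy as np

x = np.array([1, 0, 1, 1, 0, 1, 1, 1])        # toy Bernoulli dataset, m = 8
thetas = np.linspace(0.05, 0.95, 181)

log_lik = np.array([np.sum(x * np.log(t) + (1 - x) * np.log(1 - t)) for t in thetas])
avg_log_lik = log_lik / len(x)                # equals -H(p_data_hat, p_model)

print(thetas[np.argmax(log_lik)], thetas[np.argmax(avg_log_lik)])  # both ~0.75, the sample mean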
H: Building a summary string in a Pandas groupby (Possibly cross-tab or pivot-table question) I am a novice in Python pandas. I would like to do a particular aggregation / grouping / cross-tab but I know so little of the terminology that I do not even know how to look this up. But here is what I would like. Say I have a table like this:

Bob, Oranges, 5
Bob, Apples, 10
Bob, Bananas, 12
Tim, Oranges, 3
Tim, Apples, 20
Tim, Bananas, 5

I would like to groupby fruit to find the total for each type of fruit, but produce another field containing a string which has the details sorted by their value in another column. So I would like an output something like this:

Oranges, 8, "Bob(5), Tim(3)"
Apples, 30, "Tim(20), Bob(10)"
Bananas, 17, "Bob(12), Tim(5)"

Where the string aggregates the values from the names column in a list sorted by the associated numeric value. I know that there isn't something out of the box to do this, but what is this kind of operation or aggregation or pivoting (where you take one of the columns and turn it to be "horizontal", if that makes any sense) actually called? How would I go about implementing it in pandas functions? AI: Say you have a dataframe of your data in this format:

df = pd.DataFrame({
    'name': ['Bob', 'Bob', 'Bob', 'Tim', 'Tim', 'Tim'],
    'fruit': ['Oranges', 'Apples', 'Bananas', 'Oranges', 'Apples', 'Bananas'],
    'num': [5, 10, 12, 3, 20, 5]
})

You can perform a groupby on fruit and aggregate the sum of the num field. After that you can apply a function on your aggregated dataframe that leverages values from your non-aggregated data like so:

df_agg = df.groupby('fruit')[['num']].sum()
df_agg.reset_index(inplace=True)

def desc(x):
    d = []
    for idx, row in df.loc[df['fruit']==x].sort_values(['num'], ascending=False).iterrows():
        d.append(f"{row['name']}({row['num']})")
    return ', '.join(d)

df_agg['desc_str'] = df_agg['fruit'].apply(lambda x: desc(x))

This gives you the following df_agg that you are looking for:

     fruit  num          desc_str
0   Apples   30  Tim(20), Bob(10)
1  Bananas   17   Bob(12), Tim(5)
2  Oranges    8    Bob(5), Tim(3)
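As an alternative sketch (not part of the original answer), the same string can be built with a single groupby instead of filtering df once per fruit:

desc_by_fruit = (
    df.sort_values('num', ascending=False)
      .groupby('fruit')
      .apply(lambda g: ', '.join(f"{n}({v})" for n, v in zip(g['name'], g['num'])))
)
df_agg['desc_str'] = df_agg['fruit'].map(desc_by_fruit)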
H: Why isn't all data periodic? Let's say we have 2 classes (-1 and 1), 1 feature (x), and an arbitrary amount of data points. Why can't we always find a frequency and phase that fit a sine wave or a square wave to our data points? When graphed, as frequency is increased, the sine function almost seems to cover the entire [-1≤y≤1] domain, so it isn't intuitive why we can't always find a periodic function that fits our data perfectly. Is there a way to prove that not all data is periodic? AI: In order for data to be periodic, there needs to be a "time" independent variable over which the data depends. Not always there is a time variable in the data so, in those cases, you cannot approximate the data with trigonometric functions. For the cases where there is time in the data, you can use the Discrete Sine Transform, as well as the more popular Discrete Cosine Transform, to approximate the signal in terms of a sum of sine/cosine functions oscillating at different frequencies. You should take into account, however, that discontinuities affect the DCT accuracy; from the wikipedia page for the DCT: In particular, it is well known that any discontinuities in a function reduce the rate of convergence of the Fourier series, so that more sinusoids are needed to represent the function with a given accuracy. The same principle governs the usefulness of the DFT and other transforms for signal compression; the smoother a function is, the fewer terms in its DFT or DCT are required to represent it accurately, and the more it can be compressed.
H: Regressing Elbow-like Plot I have data consisting of light wavelength and some coefficient. The wavelength in the data I have is in steps of 10s and I need the coefficient for a wavelength that is between the values I have (eg. 432.5 nm). Data: wavelen coeff 380 0.982 390 0.564 400 0.378 ... ... I was thinking of regressing the data I have to find a function that represent the data, but not quite sure what type of function I should choose. I tried exponential and power functions, but the results not accurate for the first few wavelengths due to the coefficient drops extremely fast for earlier wavelengths. What would be the best type of regression for this data? I was also thinking of splitting the regression for the first few data points, like a piecewise function. Is this a valid method? AI: Answer Thought I'd elaborate a little on my comments with a demo of different curves. The main thing is that short of fitting a curve backed by theory (which we've discussed and I know isn't an option), I don't think there's any one method which is more 'valid' than the others. With just a single value at each wavelength, using a notion such as error doesn't help, as we can just choose splines so that they pass through the original data points. You could try to use theory to suggest the curve should be smooth (if that is appropriate) but each of the interpolation methods I demonstrate below provide smooth curves, so this is unlikely to be fruitful as you would need to argue some degree of smoothness and it does not sound like that argument can sensibly be made. Illustration of Interpolation Curves To illustrate some of the options for interpolating this data, I've taken the first few data points provided in the question and added a few estimated from the provided plot. Using that data, I fitted a smoothing spline (smooth.spline), loess regression (loess), cubic spline (spline) and polynomial (lm with poly). The result can be seen in this plot: I haven't tried to tune any of the methods but you can still see the main differences. The splines are all quite similar. Obviously, they have the advantage of passing through the given data points. You could argue that the 'anchoring' of the spline curves would make interpolating at values close to the original wavelengths more likely to be closer to the 'true' value (but there is no guarantee that would be the case unless some theory can be applied to explain why the curve should be smooth like that). On the other hand, you could consider the polynomial to give some degree of 'regularisation' and so may postulate that, were many data points to be taken, the true values may 'average out' to a smoother curve of this shape. Without more data, it really is a case of choosing the assumptions that you wish to make rather than choosing an objectively better way to interpolate this data. It sounds like gathering more data (repeat readings to average and/or intermediate values) would be the only way to really hone in on a method but I realise that may well not be practicable. Below is the R code I used to generate the plot, in case it's of use. 
library(ggplot2)
library(magrittr)
library(tidyr)
theme_set(theme_light())

wavelen <- seq(380, 440, 10)
coeff <- c(0.982, 0.564, 0.378, 0.220, 0.175, 0.160, 0.150)
dat <- data.frame(wavelen = wavelen, coeff = coeff)

# Create a new vector of wavelengths with a finer resolution to view the results
new_wavelen <- seq(380, 440, 1)

# Smoothing spline
smooth_spline <- smooth.spline(dat$wavelen, dat$coeff)
spline_pred <- predict(smooth_spline, new_wavelen)
spline_pred <- as.data.frame(spline_pred)

# Loess Regression
loess_reg <- loess(coeff ~ wavelen, dat)
loess_pred <- predict(loess_reg, new_wavelen)

# Third degree polynomial
poly_mod <- lm(coeff ~ poly(wavelen, 3, raw=TRUE), dat)
poly_pred <- predict(poly_mod, data.frame(wavelen = new_wavelen))

# Cubic spline
cubic_spline <- spline(x = dat$wavelen, y = dat$coeff, method = "fmm", xout = new_wavelen)

# Plot results
plot_dat <- cbind(new_wavelen, spline_pred$y, loess_pred, poly_pred, cubic_spline$y)
colnames(plot_dat) <- c("wavelen", "Smooth", "Loess", "Poly", "Cubic")
plot_dat <- plot_dat %>% as.data.frame() %>% pivot_longer(-wavelen, names_to = "Method", values_to = "Coeff")

dat$Method <- "Original_Data" # hacky way to get the original data in the plot and legend.

fig <- ggplot(data = plot_dat, aes(x = wavelen, y = Coeff, colour = Method, linetype = Method)) + geom_line()
fig <- fig + geom_point(data = dat, mapping = aes(x = wavelen, y = coeff))
fig <- fig + labs(title = "Demo of Different Curve Fitting Methods", x = "Wavelength")
fig
H: Calibrating probability thresholds for multiclass classification I have built a network for the classification of three classes. The network consists of a CNN followed by two fully-connected layers. The CNN consists of convolutional layers, followed by batch normalization, a RELU activation, max pooling and drop out. The three classes are imbalanced (as can be seen in the confusion matrix below). I have optimized the parameters of the network to maximize AUC. I'm calculating the AUC using macro- and micro-averaging. As can be seen in the ROC plot, the AUC is not that bad. On the other hand, the confusion matrix looks pretty bad, especially the first (low) class is badly predicted. The network tends to predict the majority class. As output of the network I'm getting a probability for each class. Then, I'm just taking the class according to the maximum probability for creating the confusion matrix. I have tried to use balanced class weights while training the network (in the fit method of Keras). This helped that the network also predicts more often the minority class(es) but on the other hand the AUC was decreasing. Is there a way to infer probability thresholds from the ROC plot? I think for two classes the optimal probability threshold can be inferred from the ROC plot by taking the max(TPR - FPR) but here I have three classes... Or is there another method? AI: Disclaimer: This answer describes the thoughts I had about this problem, I don't offer any guarantee about their validity so use at your own risks ;) There are two distinct parts in this problem: Finding an optimal threshold over three classes Improving performance with respect to the minority class First part: I might be wrong but as far as I know there's no way to select a particular probability threshold when there are three classes. And if there were a way, it probably wouldn't be from the ROC plot since the curves for the different plots are independent, so picking a point on the curve would correspond to different thresholds for different classes and I don't see how this would be manageable. The only way that I know is what you did: label with the class which has the maximum probability. Second part: if you want to force the model to take care of all the classes, you could also try optimizing with macro-AUC, but that would probably lead to the same problem of decreasing the micro-AUC since there would be more errors of true neutral or high predicted as low. The way I see it, the three way model doesn't work well: the imbalance of the minority class is not that bad, it's only 2 or 3 times less than the other classes. So the fact that the model almost completely dismisses this class is a bit strange. I also see that instances of the true class low are almost as often predicted as class high than class neutral, even though I would expect that the vast majority of errors on class low to be predicted as neutral. So my vague intuition is that maybe the system could be designed in a way which avoids the three-way problem for the model. I can think of two options in this perspective: A two-steps system where the first model classifies between low and neutral+high then the second one between neutral and high (or first with low+neutral vs. high and then low vs. neutral). This way each model is binary classification so you have more control over the thresholds at each step. Normally this is not recommended in classification, but here I assume that the classes are not truly categorical. 
Pushing the same idea further: there seems to be an order between the classes low < neutral < high, so maybe it would be possible to treat the task as a regression problem. This might help the model avoiding these "big" errors between low and high. Importantly you would also have flexibility with the predicted values: there would be two thresholds to determine and these could be found to optimize any appropriate evaluation measure.
H: Methods of disaggregating data to smaller units? I have a relatively straightforward question that I know poses some difficult challenges. Let's say I have a state-level rate of X. I would like to disaggregate the state-level rate to the county level. I realize this can be dangerous (ecological fallacy), but I have seen some studies use the technique with a set of assumptions. For example, if I know that each county is a certain proportion of the entire state population, I could take that proportion and multiply it by the state-level rate of X to get an (incredibly) naive county-level rate of X. I'm trying to find more information on ways to make this approach 'less' naive, but I can't seem to get any momentum. I've tried using the terms 'disaggregating' and 'weights', but I can't seem to tap into the right body of literature. Does anyone know of any methods/body of work that have attempted to handle this problem? AI: From a statistical point of view this is impossible if one doesn't have any data at the fine-grained level. Any statistical inference must be based on a sample from which specific patterns can be observed. If there is no data at the fine-grained level, any calculation is based on assumptions. For example one may assume that a variable is proportional to population (linear relation). But why not assume that the variable is a polynomial function of the temperature? Or that it is related to the prevalence of a particular gene? The main issue is that without any data there's no way to test any of these assumptions, so there can be no reliable conclusion.
H: Trying to figure out which the training set is Can someone help me on this one? As can be seen in the screenshot, it says the loss is 1/2. Where's that 1/2 coming from? How can I replace the values in the h(s) function? Source PDF AI: As can be seen in the screenshot, it says the loss is 1/2. Where's that 1/2 coming from? What does the predictor $h_S(\mathbf{x})$ actually do? In simple English, it predicts the label from the training set if $\mathbf{x}$ is a member of the training set, and otherwise it predicts 0. Obviously, this predictor will perform quite well when evaluated on the training set. It will achieve a loss of 0. But what happens if we evaluate this predictor not just on the training set, but on the entire distribution $\mathcal{D}$? Notice that the predictor will correctly predict any instance outside of the dotted blue square. Because this is a continuous area, there are an infinite number of points that $h_S$ will predict correctly. Also notice that there are an infinite number of points that $h_S$ will predict incorrectly. Any point that is inside the blue square but was not found in the training set will incorrectly be predicted 0 instead of 1. Since there are an infinite number of points where $h_S$ is incorrect, and an infinite number of points where $h_S$ is correct, the authors say that the loss is 1/2. How can I replace the values in the h(s) function? $h_S(\mathbf{x})$ is a memorization classifier. If it has already "seen" a data point during training, then it will regurgitate the classification of that point. Otherwise it just predicts 0. So if you give $h_S$ a red point, it will predict 0. If you give it a blue point, it will predict 1. If you give it any other point in the square, it will predict 0.
H: XGBoost feature importance has all features but decision tree doesn't I have used XGBoost to train a model with 400 features. My understanding is that since the max_depth is default at only 6, and 2^6 < 400, not all features will end up in the tree. How come when I output the feature importance chart, it shows every single feature with above 0 importance? The decision tree output clearly shows that not every feature has been used in the final tree. AI: XGBoost iteratively trains many trees (=boosting). So there is not only one tree. For example you can plot a single tree, see: https://machinelearningmastery.com/visualize-gradient-boosting-decision-trees-xgboost-python/ A basic decision tree algorithm creates just one tree. If you apply pruning to the tree not all features would be present in the tree. The first split would be the one with the highest importance, ...
H: Hyperparameter tuning for Random Forest - choose the best max depth I'm trying to choose the best parameters for a random forest model. For that goal I have run my model in a loop with only one parameter, each time changing the value of max depth. I have created two charts: one for the model score and one for the MAE. This is how the charts look: My question is what is the right number to take for that parameter? I feel like I want to choose a really low number where the training set and the test set are close to each other and the accuracy is high (the second point, when the accuracy is above 90): But I see that it stabilizes much later and I'm afraid that this might lead to overfitting. Is that true? Should I choose a bigger number from the stable zone, or is it OK to take a low number as long as the training and test data have the same accuracy and the same error? AI: In general, the max depth parameter should be kept at a low value in order to avoid overfitting: if the tree is deep it means that the model creates more rules at a more detailed level using fewer instances. Very often some of these rules are due to chance, i.e. they don't correspond to a real pattern in the data. Overfitting is visible in your graphs from the quite high difference between training and test set performance. One can observe that the performance on the test set increases with the first few values of depth (I'm guessing until around 5 or so), and it starts decreasing after that. So the optimal point (performance and no overfitting) is the point that you mention; it's after this point that the model is overfit. The stable part of the graph is probably due to another parameter (e.g. the minimum number of instances per leaf) which stops the model from overfitting even more. If the model was free to overfit as much as it wants, it would probably reach maximum performance on the training set and very low performance on the test set. Note: I'm confused why you're using both accuracy and MAE; normally the former is for classification and the latter for regression.
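A sketch of making the same choice with cross-validation instead of a single train/test comparison (X and y are placeholders for your features and target):

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import validation_curve

depths = np.arange(2, 21)
train_scores, val_scores = validation_curve(
    RandomForestRegressor(n_estimators=100, random_state=0),
    X, y,
    param_name="max_depth",
    param_range=depths,
    cv=5,
    scoring="neg_mean_absolute_error",
)
# Pick the depth with the best (least negative) mean cross-validated score.
best_depth = depths[val_scores.mean(axis=1).argmax()]
print(best_depth)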
H: Why decreasing the number of convolutional layers inside a CNN increases the number of parameters? I am building a CNN from scratch, and I am trying to change the number of convolutional layers to see what happens. I have noticed that decreasing the number of convolutional layers increases the number of parameters. This fact really surprised me, since I was expecting to see a decrease in the number of parameters. Why decreasing the number of convolutional layers inside a CNN increases the number of parameters? AI: Generally removing the number of convolutional layers indeed decreases the number of parameters in a network, but given your situation I suspect you are using fully connected layers after the convolutional layers in your network. When removing some convolutional layers it means that the image/tensor size before being passed to your fully connected layers is larger and contains more pixels. Since each neuron in the fully connected layer is connected to all pixels in the layer before it the increase in the number of parameters caused by the larger image size is larger than the decrease in the number of parameters caused by the removal of the convolutional layers (leading to an overall increase in the number of parameters in your network).
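A small sketch with hypothetical shapes illustrating the effect described above: dropping one conv/pooling block leaves a larger feature map in front of the dense layer, so the total parameter count goes up.

from tensorflow.keras import layers, models

def build(n_conv_blocks):
    m = models.Sequential([layers.Input(shape=(64, 64, 3))])
    for _ in range(n_conv_blocks):
        m.add(layers.Conv2D(32, 3, padding="same", activation="relu"))
        m.add(layers.MaxPooling2D())
    m.add(layers.Flatten())
    m.add(layers.Dense(128, activation="relu"))
    m.add(layers.Dense(10, activation="softmax"))
    return m

print(build(3).count_params())  # Flatten sees an 8x8x32 map -> fewer parameters overall
print(build(2).count_params())  # Flatten sees a 16x16x32 map -> more parameters overall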
H: Lower training accuracy than testing accuracy (MLP/Dropout) I am working on a problem of multi-class classification by MLP. I have set dropout to each middle layer. Now I observe the training accuracy is around 10% less than the testing accuracy. My guess is, dropout is active only during training but inactive during testing. So part of the neurons are reset at training (leading to low accuracy), but it is not happening for testing. My questions: Is my understanding correct? In other words, if I remove the dropout part, will the training accuracy increase but the testing accuracy will remain the same? When reporting the MLP accuracy, should I report training acuracy or testing accuracy? AI: Yes, your understanding is correct: dropout can be the cause. Nevertheless, you can only be sure if you evaluate on the training data with dropout disabled. This depends on the context, but normally you would report the test accuracy.
H: Train and test the model using 2 separate datasets I have a big data-set (14K subjects) and a small data-set (100 subjects). Both have same number and similar features (20). They are not overlapped. I used the big data-set to train a regression model and validate it. Then I identified the most significant features in the big data-set. Now I want to identify most significant features in the small data-set. My question is, should I train the model on the small or on the big data-set? If I train the model on the big and then test on the small, this might results in high MAE which is reasonable, but I am afraid will not give me the right list of significant features. If I train the model on the small data-set and then test on the same data-set, this might results in low MAE, but this might provide the right list of significant features. What do you think? AI: In general, the idea is that the model should be trained on data that is similar to the one you will use in production. Otherwise, you would be training your model to be good at something other than what you would be evaluating it on.
H: Which phrase should be returned in case of multiple matches when comparing text? I want to compare one sentence to some other sentences using the Bag of Words model. Suppose that my comparing sentence is: I am playing football and there are three more sentences that I want to compare my comparing sentence with. They are: 1. and I am playing Cricket 2. Why do you play Cricket 3. I love playing Cricket when I am at school Now, if I compare my comparing sentence to the above three sentences by counting words, the number 1 and number 2 sentences have the same number of words that the comparing sentence has. and that is 3 (I, am , playing). Now the question is, Which sentence is more related to my comparing sentence in this case? there are no semantic meanings involved at all. In some places I saw, they say, it is less convoluted to return the shortest sentence in this case. What are your thoughts? AI: This is usually done by carefully choosing two things: The sentence representation. Word count is the most simple option but there can be many others: TFIDF weights, with/without removing stop words, with/without lemmatization, etc. In a DL approach the sentence would be represented as a sentence embedding. The similarity measure between two sentences. Again there are many options, in BoW approaches the standard ones would include counting words in common (e.g. Jaccard) and cosine TFIDF. So the answer is: it depends on the similarity score. A complex similarity score like cosine TFIDF rarely produces ties so the highest score can be selected. More simple methods give ties, and then the logical answer is to return all the tied sentences.
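A minimal sketch of the cosine-TFIDF option using the sentences from the question:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

query = "I am playing football"
candidates = [
    "and I am playing Cricket",
    "Why do you play Cricket",
    "I love playing Cricket when I am at school",
]

vec = TfidfVectorizer().fit(candidates + [query])
sims = cosine_similarity(vec.transform([query]), vec.transform(candidates))[0]
print(sorted(zip(candidates, sims), key=lambda t: -t[1]))  # ranked by similarity; exact ties are rare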
H: Which script can be used to finetune BERT for SQuAD question answering in Hugging Face library? I have gone through lot of blogs which talk about run_squad.py script from Hugging Face, but I could not find it in the latest repo. So, which script has to be used now for fine tuning? AI: A recent PR changed the location of the scripts you are looking for to examples/legacy/question-answering
H: Replace value of a column if the value of another column is a duplicate I have a Pandas dataframe that contains three columns: ID, name and date. The name column is not unique and may contain duplicates. I want to change the date of the duplicated name to the earliest date. For example: Input: id name date 1 Rachel 3/10/2020 2 Dave 1/5/2020 3 Rachel 1/10/2020 4 Rachel 1/1/2020 5 Jason 4/15/2020 6 Dave 7/1/2020 Output: id name date 1 Rachel 1/1/2020 2 Dave 1/5/2020 3 Rachel 1/1/2020 4 Rachel 1/1/2020 5 Jason 4/15/2020 6 Dave 1/5/2020 In the sample data above, there are three dates associated to Rachel, the earliest date is 1/1/2020, therefore, in the output, all the dates associated to Rachel are changed to 1/1/2020. AI: A cleaner way would be: df["date"] = df.groupby("name").date.transform("min") Outputs:
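One caveat (an assumption about the dtype on my part): if the date column is stored as strings, "min" is taken lexicographically, so it is safer to convert to datetimes first:

import pandas as pd

df["date"] = pd.to_datetime(df["date"], format="%m/%d/%Y")
df["date"] = df.groupby("name")["date"].transform("min")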
H: How to reconstruct a scikit-learn predictor for Gradient Boosting Regressor? I would like to train my datasets in scikit-learn but export the final Gradient Boosting Regressor elsewhere so that I can make predictions directly on another platform. I am aware that we can obtain the individual decision trees used by the regressor by accessing regressor.estimators_[].tree_. What I would like to know is how to fit these decision trees together to make the final regression predictor. AI: There are two kinds of estimators involved, i.e. the initial predictor and the sub-estimators:

init_: the estimator that provides the initial predictions. Set via the init argument or loss.init_estimator.
estimators_: ndarray of DecisionTreeRegressor of shape (n_estimators, 1). The collection of fitted sub-estimators.

The contribution of each sub-estimator after the first (i.e. init) estimator is controlled by the learning rate. You can reproduce the prediction as done in the code below:

trees = model.estimators_
x = x_test.iloc[10, :].values                     # a sample X to be predicted

y_pred = model.init_.predict(x.reshape(1, -1))    # prediction from the init estimator
for tree in trees:
    pred = tree[0].predict(x.reshape(1, -1))      # prediction from each sub-estimator
    y_pred = y_pred + model.learning_rate * pred  # summed, scaled by the learning rate
y_pred
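As a quick sanity check (a sketch assuming model is a fitted GradientBoostingRegressor with the default squared-error loss), the reconstruction should match the library's own prediction:

import numpy as np

assert np.allclose(y_pred, model.predict(x.reshape(1, -1)))
# model.staged_predict(X) can also be used to inspect the running sum after each added tree.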
H: Understanding projection layer for BLSTM In many research papers there are 'projection layers' related to BLSTM layers. For example, from here: "we trained an 8-layer BLSTM encoder including 320 cells in each layer and direction, and the linear projection layer with 320 units followed by each BLSTM layer" I can't understand what this means and how it works. Any help on this topic would be appreciated. Thanks in advance! AI: A "projection" is a simple linear/dense layer, that is, a matrix multiplication and a bias vector addition. It is called projection because you "project" a representation of dimensionality $M$ into a representation space of dimensionality $N$. Sometimes, especially for sequences or 2D data, these projections are implemented as a convolution of size 1, which is equivalent to the computations I described above.
H: Order of hyperparameter tuning I'm trying to do hyperparameter tuning for a random forest regression model. My question is: is there any order in which I should do it? Like starting with a specific parameter and then moving on to the others? Should I check the model each time with one parameter only, or for each parameter I'm checking should I add the ones I have already found? Example: if I run the model with only n_jobs=-1 and min_samples_leaf, and I got the best value for min_samples_leaf, and now I want to check max_depth and after that more parameters, should I already put in the best value I found for min_samples_leaf? Or should I check each one without any other parameter and in the end create a model with the best values from each? AI: Have a look at this blog post: https://towardsdatascience.com/hyperparameter-tuning-the-random-forest-in-python-using-scikit-learn-28d2aa77dd74 Ideally you should optimise the hyperparameters jointly and not one after the other. More importantly, you should be doing cross validation. Consider also the RandomizedSearchCV described in the post.
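A minimal sketch of the joint, cross-validated search recommended above (the parameter ranges are arbitrary placeholders, and X, y are the training data):

from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

param_distributions = {
    "n_estimators": [100, 200, 500],
    "max_depth": [None, 5, 10, 20],
    "min_samples_leaf": [1, 2, 5, 10],
    "max_features": ["sqrt", 0.5, 1.0],
}
search = RandomizedSearchCV(
    RandomForestRegressor(n_jobs=-1, random_state=0),
    param_distributions,
    n_iter=50,
    cv=5,
    scoring="neg_mean_absolute_error",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)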
H: What is the meaning of pct_trend_adjusted in FiveThirtyEight's polling data? I'm working on polling data from fivethirtyeight. What is the meaning of the pct_trend_adjusted column here? I understand that pct stands for percentage, but I don't get what the trend is that they are adjusting for, and how these figures are obtained. AI: Looking at the methodology provided by FiveThirtyEight, it seems as though this figure (i.e. polling figures on state level) is adjusted for trends in national polls. See differentiator 1 on the linked page for more info: Basically, in states that haven’t been polled recently, we make inferences about what’s going on there using national polls or polls from other states that have been surveyed recently
H: Does shuffling the training data matter in a K Nearest Neighbors Classifier model? I am new to machine learning and I have a couple of questions about a project. So, I created a classifier using the MNIST data set for a ML project that I was working on. I augmented the data by shifting each original point 2 pixels up, down, left, and right (so I have 240,0000 new images in addition to the 60,000 original ones). I was wondering if I should bother retraining my model with a shuffled training set, since I think (please correct me if I am very wrong) that the order of the training data in a KNN classifier does not play a factor in the final model, since predictions are made on nearest neighbors and shuffling the dataset will not change these neighbors. *As a side note, I am using the classifier/model on user drawn images, and the one with the augmented data set (unsurprisingly) performs better than the model trained only with the original data. However, I seem to be getting the same accuracy score (the exact same to the 4th decimal place) on my test set with my augmented data set (30,000 total images) as I did with my original data set (60,000 total images). Is this something to be concerned about/did I do something wrong? Thanks so much for your help! AI: If you are talking about K-nn algorithm per se, then, you do not have any advantage in shuffling your training set. Indeed, as you said, K-nn is just looking for the K nearest neighbors and does not care at all about the order of the samples, the algorithm will scan the entire training set for a single prediction, no matter what (unless you are using some efficient implementation like ball-tree or kd-tree). Regarding the second point of your question, I am wondering what your pipeline is: are you augmenting the dataset before splitting in train-validation-test? If so, I think you are injecting some bias in the evaluation of you algorithm, because you are evaluating your algorithm in samples that do not exist. You should augment only the training set and then validate/test your algorithm on real data.
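A sketch of the pipeline point in the answer (augment is a placeholder for whatever shifting routine is used): split first, then augment only the training portion.

import numpy as np
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)

# augment(X, y) is assumed to return the shifted copies of the images and their labels.
X_shift, y_shift = augment(X_train, y_train)
X_train_aug = np.concatenate([X_train, X_shift])
y_train_aug = np.concatenate([y_train, y_shift])
# X_test / y_test stay untouched: evaluation is done on real, unseen images only.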
H: Is it advisable to merge similar datasets to improve model accuracy? I'm trying to build a classifier that would help me classify whether a statement collected from Reddit is bullish, bearish or neutral. To this end, I have hand-labelled a fairly small dataset of 2500 entries, each with max 280 characters. Unfortunately, the data is unbalanced (60% neutral, 25% bullish and 15% bearish) and my initial attempts are returning poor results for the bullish and bearish classes. I've managed to obtain 10,000 entries of similar data from StockTwits, each labelled bullish or bearish (no neutral...) and 280 characters max. I tested supplementing my Reddit dataset with bullish/bearish data from this dataset to balance out the classes. In other words, I added data from the StockTwits dataset until the number of bullish and bearish entries in matched the neutral. With this model, I'm getting much better results. Is merging similar datasets in this way advisable? My gut instinct says "no" but I haven't found anything suggesting this is not a good idea. AI: Ideally if you mix data from different sources (different distributions), the mix should be uniform for each class. Otherwise if for example class A gets more of source S and class B gets more of source T, there is a risk that the model will learn to associate class A with source S and B with source T and that it won't generalize across sources. To validate such an approach, keep a separate test set with both sources and compute several accuracies, one for each (class, source).
H: High confidence on non-trained class in TensorFlow model I have trained a 4-class (apple_nature, apple_disease, apple_blacrot, apple_healthy) image classification model using TensorFlow, and after training I get a model with good accuracy. The problem I am facing is that when I predict on tomato images, the model outputs high confidence for the apple classes. In other words, it produces confident predictions for classes it was never trained on. How can I fix this issue? AI: This is expected: the model has never seen such data, so the probabilities it assigns to unseen kinds of images are essentially arbitrary and can be high or low. You should add all other images (like the tomato ones) to a new dedicated class OTHER, containing everything which is not in the 4 original classes. That way, for an image of a tomato your model will give a high probability to the OTHER class and low probabilities to the apple classes.
H: Fill missing values (NaN) based on the previous row that contains a specific value? I would like to fill the missing data using the value from the previous trading day of the same stock. In this example, the AAPL stock should be 100. I've tried with fillna but I am not able to pick a specific row based on the stock. My data:

date      stock  price
22/12/20  MSFT   87
22/12/20  AAPL   99
22/12/20  FCA    81
23/12/20  MSFT   90
23/12/20  AAPL   100
23/12/20  FCA    80
24/12/20  MSFT   91
24/12/20  AAPL   NaN
24/12/20  FCA    78

AI: You can groupby by 'stock', forward fill missing values with ffill and use its result in fillna. For example:

date      stock  price
22/12/20  MSFT   87
22/12/20  AAPL   99
22/12/20  FCA    81
23/12/20  MSFT   NaN
23/12/20  AAPL   100
23/12/20  FCA    80
24/12/20  MSFT   91
24/12/20  AAPL   NaN
24/12/20  FCA    NaN

df.fillna(df.groupby('stock').ffill())

Result:

   date      stock  price
0  22/12/20  MSFT    87.0
1  22/12/20  AAPL    99.0
2  22/12/20  FCA     81.0
3  23/12/20  MSFT    87.0
4  23/12/20  AAPL   100.0
5  23/12/20  FCA     80.0
6  24/12/20  MSFT    91.0
7  24/12/20  AAPL   100.0
8  24/12/20  FCA     80.0
H: ValueError: y should be a 1d array, got an array of shape (1045, 5) instead I have just started with Python and am working on training models. The task I have been assigned is to train a model on a dataset named "austin_Weather". Original Dataset y attribute After having done some manipulations (following the article), these are my new datasets for training and testing: X Train y Train Now, I tried to apply the Naive Bayes algorithm to these two datasets, but I got the following error: ValueError: y should be a 1d array, got an array of shape (1045, 5) instead. So I tried searching online, and I found one suggestion that I should reshape these arrays, so I tried the following code:

X_train = X_train.reshape(-1, 1)
y_train = y_train.reshape(-1, 1)
X_test = X_test.reshape(-1, 1)
y_test = y_test.reshape(-1, 1)

But then I got this error: AttributeError: 'DataFrame' object has no attribute 'reshape'. I tried to fix it by searching online, but I couldn't understand most of the answers due to my lack of knowledge of Python syntax. If anyone can help resolve this and tell me what exactly the problem is here, I would be very thankful. AI: Your data is multilabel: each row can have more than one label. But sklearn's Naive Bayes doesn't support that format of problem. You can use MultiOutputClassifier to wrap the Naive Bayes classifier, effectively training one model for each of the labels. See the User Guide, especially the MultiOutput classification section. You may also want to consider the ClassifierChain (in the next section). The last error you see is because you're using pandas structures, but reshape is a numpy method. Anyway, as @Oxbowerce observed, reshaping won't help; you need to deal with having multiple labels.
H: Identify outliers for annotation in text data I read the book "Human-in-the-Loop Machine Learning" by Robert (Munro) Monarch about Active Learning. I don't understand the following approach to get a diverse set of items for humans to label:
1. Take each item in the unlabeled data and count the average number of word matches it has with items already in the training data
2. Rank the items by their average match
3. Sample the item with the lowest average number of matches
4. Add that item to the 'labeled' data and repeat 1-3 until we have sampled enough for one iteration of human review
It's not clear how to calculate the average number of word matches. AI: The idea is to find the documents which are not well represented in the current labeled data. The first point is indeed a bit vague and can probably be interpreted in different ways. My interpretation would be something like this:
For every document $d_u$ in the unlabeled data, count the number of words in common with every document $d_l$ in the labeled data. This value is the "match score" between $d_u$ and $d_l$. Note: I think that this value should be normalized, for example using the overlap coefficient. Note that other similarity measures could be used as well, for instance cosine-TFIDF.
As output from the above step, for a single document $d_u$ one obtains a "match score" for every labeled document. The average across the labeled documents gives the "average match" for $d_u$.
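One possible implementation of that interpretation (a sketch, not the book's code), using the overlap coefficient mentioned above; the documents are assumed to be plain strings:

def word_set(doc):
    return set(doc.lower().split())

def overlap(a, b):
    # overlap coefficient: |A intersect B| / min(|A|, |B|)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

def least_matched(unlabeled, labeled):
    """Return the unlabeled document with the lowest average match to the labeled pool."""
    labeled_sets = [word_set(d) for d in labeled]
    def avg_match(doc):
        s = word_set(doc)
        return sum(overlap(s, l) for l in labeled_sets) / len(labeled_sets)
    return min(unlabeled, key=avg_match)

# at each iteration: pick the least-matched item, send it to the annotators,
# move it to the labeled pool, and repeat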
H: Classify tweets by topic I am approaching machine learning for the first time because of my studies. I have been given a bunch of tweets and the goal is to classify them per topic. I really have no clue on how this should be done. Is there a particular way to follow? Until now, I have only found topics and was thinking about making a DTM-like dataframe for the training data containing not only the number of times not-sparse words occur but also the number of times particular N-grams occur and a ground truth column with the topic. Is this totally wrong? How else could I train a classifier without having features? AI: Since there are no predefined topics, the task is unsupervised: the goal is to group tweets which are semantically similar together (as opposed to classification, which requires training a model to predict among specific classes). The standard unsupervised approach is topic modelling. In the traditional LDA approach, a topic model groups the documents into clusters and also provides the probability of a word given a topic, so a list of "top words" by topic can be extracted from the model. LDA requires the number of topics as input parameter but Hierarchical Dirichlet Processes can be used to avoid this issue (it's less common however).
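A minimal topic-modelling sketch with scikit-learn (gensim or the R topic-modelling packages work similarly); the tweets list is a placeholder for your cleaned tweets:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = ["..."]  # placeholder: your cleaned tweets

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=10, random_state=0)
doc_topics = lda.fit_transform(dtm)   # one row per tweet: distribution over topics

# top words per topic, useful for manually naming the topics
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = weights.argsort()[-10:][::-1]
    print(k, [terms[i] for i in top])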
H: What's the difference between data classification and clustering (from a Data point of view) What are the differences and the similarities between data classification (using dedicated distance-based methods) and data clustering (which has certain defined methods such as k-means) Is data classification a sub-topic of data clustering ? AI: Classification is a problem where your input data consists of elements with 2 parts: Some data features that reflect the traits of an entity A label that assigns the entity to a group or class. With that kind of data, you can train a model that receives the data features (first part) and generates the label (second part). This kind of training, where you train a system to generate some output when it receives a specific input is called "supervised learning". On the other hand, in Clustering, your dataset only has the data features, that is, your dataset does not have the labels. Clustering methods allow you to group the entities in classes without having any labels, normally by defining a priori how many groups you want, and then grouping the entities by their similarity. This kind of training, where there are no labels and you have to learn just from the entity data features is called "unsupervised learning"
H: PCA, covariance, eigenvector matrix and rotation I am following the Coursera NLP specialization, and in particular the lab "Another explanation about PCA" in Course 1 Week 3. From the lab, I recovered the following code. It creates 2 random variables, rotates them to make them dependent and correlated, and then runs PCA on them:

import numpy as np
from sklearn.decomposition import PCA
import math

std1 = 1     # The desired standard deviation of our first random variable
std2 = 0.333 # The desired standard deviation of our second random variable

x = np.random.normal(0, std1, 1000) # Get 1000 samples from x ~ N(0, std1)
y = np.random.normal(0, std2, 1000) # Get 1000 samples from y ~ N(0, std2)

# PCA works better if the data is centered
x = x - np.mean(x) # Center x
y = y - np.mean(y) # Center y

# Define a pair of dependent variables with a desired amount of covariance
n = 1 # Magnitude of covariance
angle = np.arctan(1 / n) # Convert the covariance to an angle
print('angle: ', angle * 180 / math.pi)

# Create a rotation matrix using the given angle
rotationMatrix = np.array([[np.cos(angle), np.sin(angle)],
                           [-np.sin(angle), np.cos(angle)]])

# Create a matrix with columns x and y
xy = np.concatenate(([x], [y]), axis=0).T

# Get covariance matrix of xy
print("Covariance matrix of xy")
covmat = np.cov(xy, rowvar=False)
print(f"{np.sqrt(covmat[0,0]):.3f} = {std1}")
print(f"{np.sqrt(covmat[1,1]):.3f} = {std2}")

# Transform the data using the rotation matrix. It correlates the two variables
data = np.dot(xy, rotationMatrix)

# Get covariance matrix of data
print("Covariance matrix of data")
covmat = np.cov(data, rowvar=False)
print(f"{np.sqrt(covmat[0,0]):.3f} = {std1}")
print(f"{np.sqrt(covmat[1,1]):.3f} = {std2}")
print(f"{covmat[0,1]:.3f} = {n}")

# Apply PCA
pcaTr = PCA(n_components=2).fit(data)

# In theory, the eigenvector matrix must be the
# inverse of the original rotationMatrix
print("** These two matrices should be equal **")
print("Eigenvector matrix")
print(pcaTr.components_)
print("Inverse of original rotation matrix")
print(np.linalg.inv(rotationMatrix))

I get the following output:

angle:  45.0
Covariance matrix of xy
1.031 = 1
0.325 = 0.333
Covariance matrix of data
0.764 = 1
0.765 = 0.333
0.479 = 1
** These two matrices should be equal **
Eigenvector matrix
[[ 0.70632393  0.70788877]
 [ 0.70788877 -0.70632393]]
Inverse of original rotation matrix
[[ 0.70710678  0.70710678]
 [-0.70710678  0.70710678]]

My questions:
1. Why does n=1 define the magnitude of the covariance? Why can we obtain the angle between the variables as angle = np.arctan(1 / n)?
2. Why don't I obtain this covariance when I take element (0,1) of the covariance matrix (the second time I run np.cov, i.e. the line covmat = np.cov(data, rowvar=False))?
3. Why does the rotation change the variables' variance from the initial std1 and std2?
4. Why, "in theory, the eigenvector matrix must be the inverse of the original rotationMatrix"? Why is this not the case here?
AI: Having 2 uncorrelated normal variables $x_1 \sim N(0, 1)$ and $x_2 \sim N(0, 1)$, one can correlate them through a linear transformation: $$A = \begin{bmatrix}a & b \\ c & d\end{bmatrix}$$ $$X' = AX$$ The covariance matrix of the transformed correlated variables $X'$ is given by: $$\Sigma' = A A^T$$ For a rotation by an angle $\theta$, combined with a scaling that sets the individual standard deviations, the transform is (with the same sign convention as the code): $$A = \begin{bmatrix}\cos(\theta) & \sin(\theta) \\ -\sin(\theta) & \cos(\theta)\end{bmatrix} \begin{bmatrix}\sigma_1 & 0 \\ 0 & \sigma_2\end{bmatrix}$$ This produces the covariance matrix $\Sigma'$ as: $$\Sigma' = \begin{bmatrix}\sigma_1\cos(\theta) & \sigma_2\sin(\theta) \\ -\sigma_1\sin(\theta) & \sigma_2\cos(\theta)\end{bmatrix} \begin{bmatrix}\sigma_1\cos(\theta) & -\sigma_1\sin(\theta) \\ \sigma_2\sin(\theta) & \sigma_2\cos(\theta)\end{bmatrix}$$ The cross-covariance component is given by: $$\Sigma'_{12} = (\sigma_2^2-\sigma_1^2)\cos(\theta)\sin(\theta)$$ This is how the cross-covariance depends on the angle $\theta$. Setting $\theta = 45^\circ$ we get: $$\Sigma'_{12} = \frac{\sigma_2^2-\sigma_1^2}{2}$$ Setting $\sigma_1 = 0.333$ and $\sigma_2 = 1$ gives $\Sigma'_{12} = 0.445$, close to the empirical value printed by the code. Note that $n = \frac{1}{\tan(\theta)}$ is not equal to $\Sigma'_{12}$: it does not represent the covariance component, as shown by the analysis above. PCA decomposes the covariance matrix into uncorrelated components. In other words it undoes the rotation (the scaling remains), since the rotation correlated the components in the first place. Explicitly, PCA decomposes the covariance matrix into: $$\Sigma' = U S U^T$$ where $U$ is an orthogonal matrix of eigenvectors and $S$ is a diagonal matrix which represents the individual variances. Going back to $\Sigma' = A A^T$ and representing the rotation by $R$ we have: $$\Sigma' = R \begin{bmatrix}\sigma_1^2 & 0 \\ 0 & \sigma_2^2\end{bmatrix} R^T$$ Thus the orthogonal matrix of eigenvectors $U$ corresponds to the (orthogonal) rotation matrix $R$, so the eigenvectors should (exactly in theory, approximately in practice) correspond to the rotation matrix, up to some sign factor and permutation which remains arbitrary. For further info see:
Bivariate normal distribution
Understanding the Covariance Matrix
Interesting Properties of the Covariance Matrix
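A quick numerical check of the derivation above (a sketch using the column-vector convention, so the sign of the cross term can differ from the notebook's row-vector code):

import numpy as np

theta = np.pi / 4          # 45 degrees
s1, s2 = 0.333, 1.0        # the sigma_1, sigma_2 used above

R = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
S = np.diag([s1, s2])
A = R @ S                   # rotation combined with scaling

Sigma = A @ A.T             # covariance of X' = A X when Cov(X) = I
print(Sigma[0, 1], (s2**2 - s1**2) * np.cos(theta) * np.sin(theta))  # should match

# eigenvectors of Sigma recover the rotation, up to sign and permutation
eigvals, eigvecs = np.linalg.eigh(Sigma)
print(eigvecs)
print(R)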
H: How to apply MultiOutputClassifier to a dataset for a Naive Bayes algorithm I have a dataset which is as follows (it's taken from an article online and I have been trying to apply a Naive Bayes algorithm to it): Original Dataset y attribute After having done some manipulations (following the article), these are my new datasets for training and testing: X Train y Train Now, the target is multilabel and I have been asked to look at MultiOutput classification for the problem. I have been trying to understand this kind of classification and tried to implement it myself too, but I couldn't get it done. First of all, I tried following this sample code given on the website:

from sklearn.datasets import make_classification
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import shuffle
import numpy as np

X, y1 = make_classification(n_samples=10, n_features=100, n_informative=30, n_classes=3, random_state=1)
y2 = shuffle(y1, random_state=1)
y3 = shuffle(y1, random_state=2)
Y = np.vstack((y1, y2, y3)).T
n_samples, n_features = X.shape # 10,100
n_outputs = Y.shape[1] # 3
n_classes = 3
forest = RandomForestClassifier(n_estimators=100, random_state=1)
multi_target_forest = MultiOutputClassifier(forest, n_jobs=-1)
multi_target_forest.fit(X, Y).predict(X)

But, since I am new to all this, I didn't understand it at all. I didn't understand why he did the make_classification call, and then shuffled the data, etc. I tried to implement it on my y_train variable and then placed it in my model.fit for the Naive Bayes algorithm:

from sklearn.naive_bayes import GaussianNB
model = GaussianNB()
Yt = np.vstack(y_train).T
n_samples, n_features = X_train.shape # 10,100
n_outputs = Yt.shape[1] # 3
n_classes = 3
forest = RandomForestClassifier(n_estimators=100, random_state=1)
multi_target_forest = MultiOutputClassifier(forest, n_jobs=-1)
model.fit(X_train, multi_target_forest)

But it gave the same error I was receiving previously, which means that I didn't do the MultiOutput classification properly: ValueError: y should be a 1d array, got an array of shape () instead. Can anyone help me understand how to actually implement this classification, so that the Y variable can be used with Naive Bayes? AI: The example's make_classification, shuffles, and stack are just there to generate a multilabel dataset. You've already processed your data into y_train and don't need any of that. Then, you want the base estimator to be Naive Bayes, not a random forest as in the example. And all sklearn models should be fit as model_instance.fit(X, y, sometimes_optional_parameters); the error being raised is because you passed a model object instead of y (and you are also using the wrong model object). So the following should do what you want:

nb = GaussianNB()
multi_target_nb = MultiOutputClassifier(nb, n_jobs=-1)
multi_target_nb.fit(X_train, y_train)
H: Output of evaluation metric for XGBoost - is it cumulative? On the 10th boosting round for XGBoost, I get an MAP of 0.32 on the test data. Does that reflect the performance of just that 10th tree? Or the performance of all 10 trees combined that have been created so far? AI: That reflects the performance of all 10 boosted trees together. I recommend this tutorial, where gradient boosting is explained well: https://www.youtube.com/watch?v=sRktKszFmSk&t=4s Using only the 10th tree as a predictor would give you weird results, since it is fitted on the residual errors of the previous 9.
H: Why do machine learning engineers insist on training with more data than validation set? Among my colleagues I have noticed a curious insistence on training with, say, 70% or 80% of data and validating on the remainder. The reason it is curious to me is the lack of any theoretical reasoning, and it smacks of influence from a five-fold cross-validation habit. Is there any reason for choosing a larger training set when attempting to detect overfitting during training? In other words, why not use $n^{0.75}$ for training and $n - n^{0.75}$ for validation if the influence really is from cross-validation practices carried over from linear modeling theory as I suggest in this answer? I posted a similar question on stats.stackexchange.com but based on the response thought I might have a more interesting discussion here. The concept of training for multiple epochs is, in my opinion, inherently Bayesian and thus the concept of cross-validation may be ill-suited at worst, unnecessary at best, for reasons I suggest in that post. AI: The reasoning is: "the more data for training, the better". You also have to keep in mind that the validation/hold-out set has to resemble the data the model will see in production/testing. The theory is that the larger the training set, the better the model should generalize. The validation set can be much smaller; on extremely big datasets it can even be 0.01% of the data without causing problems. In basic cases you don't even need K-fold cross-validation: it makes training more expensive and is mainly needed for hyperparameter search, where it should be done inside the training set. For your case, you can use whichever split you want; just keep the training set as large as possible and make the validation data resemble the production environment as closely as possible.
H: Pytorch Luong global attention: what is the shape of the alignment vector supposed to be? I am looking at the Luong paper on attention models and global attention. I understand how the alignment vector is computed from a dot product of the encoder hidden state and the decoder hidden state. So that all makes sense. My question is, what should the dimensions of the alignment score tensor be? If I have my data in batches, I basically compute a single score for each timestep in the hidden state, right? So should the alignment vector be of dimension [sequence length, 1], or something like that? I would then softmax this alignment vector and multiply it by every element in the batch to compute the context vector, right? Again, my key question is what the dimensions of the alignment score vector or tensor should be. Thanks. AI: You answered it yourself: [sequence length, 1] is correct, assuming you work with a single sentence. (Whether the trailing 1 dimension is present actually depends on the implementation.) In practice, the data is typically batched, so it will be [batch, sequence length, 1]. This can be element-wise multiplied with the encoder states of dimension [batch, sequence length, hidden size] and summed over the middle dimension to get a context vector of dimension [batch, hidden size]. Note also that if the sentences do not have the same length, you need to deal with padding, i.e., set the padded positions to -inf before the softmax.
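A shape-checked sketch of the batched dot-score version in PyTorch (all names and sizes are illustrative):

import torch
import torch.nn.functional as F

B, T, H = 32, 15, 256                       # batch, source length, hidden size
encoder_states = torch.randn(B, T, H)       # h_s for every source position
decoder_state = torch.randn(B, H)           # h_t at the current target step

# dot score: one scalar per source position -> [B, T, 1]
scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2))

# alignment vector: softmax over the source-length dimension
align = F.softmax(scores, dim=1)            # [B, T, 1]

# context vector: weighted sum of encoder states -> [B, H]
context = (align * encoder_states).sum(dim=1)
print(scores.shape, align.shape, context.shape)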
H: Recommender/Clustering data to support a hypothesis. Is this a valid use-case for unsupervised ML? I have a dataset where some items have been labelled (categorized into 4 classes [A,B,C,D]). However, there is a vast majority of the dataset which has not been labelled. My hypothesis is that there are some characteristics which influence which category is applied to each item. Would clustering or maybe even a recommender system be able to suggest where each item should be placed? On a practical level, would I provide the "labels" within the model? Or would I keep it apart until the end and then overlay those labels on whatever the model managed to group together? The above example seems like a clustering use case. However, can I spin the problem into a recommender system? As in, you labelled item X as A, and it has characteristics 1,2,3... item Y has similar characteristics, maybe you should label it A as well? AI: Clustering and recommendation are similar tasks, however in recommendation you usually want to recommend several items while clustering usually assigns each sample to only one cluster. Anyway for your problem a clustering or even a classifier might help. If labels are assigned on the basis of a similarity metric (and you have a good guess of what this metric might be) you could use a clustering algorithm to impute the missing labels (clustering is based on similarity between samples). However if it's reasonable that samples can be similar according to different metrics (and you are uncertain about it) then a classifier might help - provided you have enough labeled data to train the classifier. Which approach works best will also depend on the number of labeled samples and number of features. Whichever strategy you choose to use the imputed labels will be noisy (some samples will be mislabeled). This is important to take into account especially if you are going to further process those labels.
H: Dropping features after final evaluation on test data Would you please let me know if I am committing a statistical or machine learning malpractice in this procedure? I want to estimate a meteorological variable y1 from the variables ${x_1, ..., x_{10}}$. I use data from different weather stations. I keep some weather stations as test sites/data. I do feature selection and hyperparameter tuning with cross-validation on the training data. My model is Random Forest (RF) and two other tree-based models. Before evaluating my models on the test sites I was skeptical about keeping one of the features: the elevation of the weather station, $x_{10}$. This is a static feature that would be present/same in all rows of data related to a station. Knowing a tiny bit about RF made me worried that the model will use this as a kind of "site_id" and possibly overfit to this feature. It wouldn't make me worried if I was using linear/nonlinear regression models. So I train my models once with and once without $x_{10}$ as a feature. Then I evaluate my models on the test sites and it turns out that the models without $x_{10}$ do significantly better on the test sites. Even before testing this hypothesis about the static feature, I wanted to do similar tests with dropping other features as well, say $x_{9}$. Now my question is: now that I know that $x_{10}$ hurts my model, I would like to retrain my models without $x_{10}$ and test model performance with and without $x_{9}$ in the filtered feature set. To me it seems like I'm using my test data to kind of filter my features, so it is not right. But then, I have this information, and if $x_{10}$ is hurting my models in the end, why should I go on testing hypotheses and preparing my models with $x_{10}$ in them? AI: What you're doing is manual feature selection based on the test set. You're right that it's not correct to proceed this way: in theory, feature selection should be done using only the training set and a validation set, not the final test set. The risk is data leakage: you're modifying the model using information from the test set. Maybe the performance is better without these features on the test set because they happen to be bad for this particular test set by chance. As a result the model could be overfit, and you wouldn't be able to detect this problem on this test set since it's the source of the overfitting. So in principle it's always better to separate the data first, keep the test set aside until the final evaluation and use a validation set for intermediate evaluation until the final model (including the set of features) is determined. In practice, it sometimes happens that we realize that we should have done something differently after applying the model on the final test set. It's a mistake but it's not the end of the world; usually the risk of bias is low. As you said, obviously we don't have to ignore information which is important for the performance of the model. However, if you know that you're going to repeat this procedure with several features, you should definitely do it using a separate validation set (taken from the training data), not the test set: the more you use the test set like this, the higher the risk of data leakage and bias.
H: Build a corpus for machine translation I want to train an LSTM with attention for translation between French and a "rare" language. I say rare because it is an African language with little digital content, and especially few databases in a seq-to-seq-like format. I found a dataset somewhere, but in terms of quality both the French and the native-language sentences were awfully wrong. When I used this dataset, of course my translations were damn funny... So I decided to do some web scraping to build my parallel corpus myself, and it might be useful for research in the future. It worked well and I managed to collect some good articles from a website containing articles in both languages (monthly, since 2016). Now the tricky part is putting everything into sentence-to-sentence format. I did a trial with a text and its translation just by tokenizing into sentences, and I noticed that, for example, I had 23 sentences for French and 24 for the native language. Further checking showed that some small differences were present between the two languages, like a sentence where a comma was replaced in the other language by a dot. So my question is: is it mandatory to put my articles into sentence-French to sentence-native-language format? Or can I leave it as text / paragraphs? AI: What you would typically do in your case is to apply a sentence alignment tool. Some popular options for that are:
hunalign: a classical tool that relies on a bilingual dictionary.
bleualign: it aligns based on the BLEU score similarity.
vecalign: it is based on sentence embeddings, like LASER's.
I suggest you take a look at the preprocessing applied for the ParaCrawl corpus. In the article you can find an overview of the most popular methods for each processing step. A different option altogether, as you suggest, is to translate at the document level. However, most NMT models are constrained in the length of the input text they accept, so if you go for document-level translation, you must ensure that your NMT system can handle such very long inputs. An example of an NMT system that can be used for document-level NMT out of the box is Marian NMT with its gradient-checkpointing feature.
H: Tweet Classification into topics - What to do with data Good evening, First of all, I want to apologize if the title is misleading. I have a dataset made of around 60000 tweets, their date and time as well as the username. I need to classify them into topics. I am working on topic modelling with LDA, getting the right number of topics (I guess) thanks to this R package, which calculates the value of three metrics ("CaoJuan2009", "Arun2010", "Deveaud2014"). Since I am very new to this, I just thought about a few questions that might be obvious for some of you, but whose answers I can't find online. I have removed, before cleaning the data (removing mentions, stopwords, weird characters, numbers etc.), all duplicate instances (having all three columns in common), in order to avoid them influencing the results of topic modelling. Is this right? Should I, for the same reason mentioned before, also remove all retweets? Until now, I thought about classifying using the "per-document-per-topic" probability. If I get rid of so many instances, do I have to classify them based on the "per-word-per-topic" probability? Do I have to divide the dataset into testing and training? I thought that is a thing only in supervised training, since I cannot really use the testing dataset to measure the quality of classification. Another goal would be to classify twitterers based on the topic they are most passionate about. Do you have any idea about how to implement this? Thank you all very much in advance. AI: As far as I'm aware there is no correct/standard way to apply topic modelling; most decisions depend on the specifics of the case. So below I just give my opinion about these points: I have removed, before cleaning the data (removing mentions, stopwords, weird characters, numbers etc.), all duplicate instances (having all three columns in common), in order to avoid them influencing the results of topic modelling. Is this right? Should I, for the same reason mentioned before, also remove all retweets? In general there is no strict need to deduplicate the data; doing it or not would depend on the goal. Duplicate documents would affect the proportion of the words which appear in these documents, and in turn the probability of the topic these documents are assigned to. If you want the model to integrate the notion of popularity/prominence of tweets/words/topics, it would probably make sense not to deduplicate and to keep retweets. However, if there is a large amount of duplicates/retweets, the imbalance might cause less frequent tweets/words to be less visible, possibly causing less diverse topics (the smallest topics might get merged together, for instance). Until now, I thought about classifying using the "per-document-per-topic" probability. If I get rid of so many instances, do I have to classify them based on the "per-word-per-topic" probability? I'm not sure what is called the "per-document-per-topic" probability in this package. The typical way to use LDA in order to cluster the documents is to use the posterior probability of topic given document (this might be the same thing, I'm not sure): for any document $d$, the model can provide the conditional probability of every topic $t$ given $d$. These values sum to 1 across topics (it's a distribution over topics for $d$), and for classification purposes one can just select the topic which has the highest probability given $d$. Do I have to divide the dataset into testing and training?
I thought that is a thing only in supervised training, since I cannot really use the testing dataset to measure the quality of classification. You're right, you don't need to split into training and test set since this is unsupervised learning. Another goal would be to classify twitterers based on the topic they are most passionate about. Do you have any idea about how to implement this? The model gives you the posterior probability distribution over topics for every tweet. From these values I think you can obtain a similar distribution over topics for every tweeter, simply by marginalizing over the tweets by this author $a$: if I'm not mistaken, this probability $p(t|a)$ can be obtained simply by calculating the mean of $p(t|d)$ across all the documents/tweets $d$ by author $a$.
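A sketch of that marginalization, assuming you already have the per-tweet posteriors in a matrix doc_topics (one row per tweet, rows summing to 1) and an authors list aligned with it (both names are placeholders):

import pandas as pd

# doc_topics: array of shape (n_tweets, n_topics), e.g. the document-topic matrix from your LDA tool
# authors: list of tweet authors, in the same order as the rows of doc_topics
df = pd.DataFrame(doc_topics, columns=[f"topic_{k}" for k in range(doc_topics.shape[1])])
df["author"] = authors

author_topics = df.groupby("author").mean()      # p(t|a) as the mean of p(t|d) over the author's tweets
favourite_topic = author_topics.idxmax(axis=1)   # the topic each author is most "passionate" about
print(favourite_topic.head())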
H: How to deal with ternary Output neurons in the Output classification layer of a simple feedforward Neural Net? I was looking into the multi-label classification on the output layer of a Neural Network. I have 5 Output Neurons where each Neuron can be 1, 0, or -1. independent of other Neurons. So for example an Output would look like : Output 1 0 -1 0 1 I used to take the tanh- activation function and partition the neuron into 3 ( y<-0.5, -0.5<y<0.5, y>0.5) to decide the class in each of those neurons after the prediction. See Figure below. Question: Are there any better alternatives on how to approach this ternary Output activation? I stumbled upon this blog post regarding n-ary activation functions which I found very interesting. I think the newly suggested would make the partitioning mentioned above much more meaningful! Or do you think I should one-hot-encode my whole system such that a neuron can only have a value of 0 or 1 and then just use sigmoids activation with a threshold of 0.5? AI: The typical approach for this kind of scenario is to handle the output as a discrete element and have each output be a probability distribution over the discrete output space, that is, for each output you generate 3 numbers between 0 and 1 where the sum of them adds up to 1. You normally obtain this by generating for each position a vector of 3 elements in $\mathbb{R}$ instead of a single element and then passing it through a softmax function. The typical loss function for that is the categorical cross-entropy, which receives the probability vector (although depending on the implementation and the parameters, it may receive the vector before the softmax, which are called the "logits"), and the index of the expected discrete output. This is the usual approach for multi-class classification that you will find in most tutorials, like this one.
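A minimal PyTorch sketch of that approach: 5 outputs, each a softmax over the 3 classes {-1, 0, 1}, trained with categorical cross-entropy (the input size and hidden layer are illustrative, and targets are mapped to indices {0, 1, 2}):

import torch
import torch.nn as nn

n_outputs, n_classes = 5, 3

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, n_outputs * n_classes),   # one 3-way logit block per output neuron
)

x = torch.randn(8, 20)                                   # dummy batch
targets = torch.randint(0, n_classes, (8, n_outputs))    # class index per output (0 -> -1, 1 -> 0, 2 -> +1)

logits = model(x).view(-1, n_classes)                    # [batch * n_outputs, 3]
loss = nn.CrossEntropyLoss()(logits, targets.view(-1))   # softmax + cross-entropy per output
loss.backward()

pred_idx = model(x).view(-1, n_outputs, n_classes).argmax(dim=-1)  # [batch, 5], values in {0, 1, 2}
pred = pred_idx - 1                                                # map back to {-1, 0, 1}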
H: Is there a difference if input nodes have discrete or range values? If you have some input nodes containing fruit values like {apple=0, pear=1, oranges=2} vs temperature values like {5, 10, 30, 50}, is there any difference in how you set up the neural network to learn the output? I'm guessing in the first case the neural network only learns about the inputs you used to train. For example, if you then try with banana the result will be something random. But in the case of temperature, if I input 35 I would expect a result similar to 30. Also, I'm guessing the first case would need much more data to learn than the second case. So I am wondering whether I have to take different considerations into account in each case. AI: If categorical values are one-hot encoded, then there is a node for each item in the category. Each of these input nodes indicates the presence or absence of one categorical value: there would be an apple node, a pear node, and an orange node. If apple is present, then the apple input node would take a value of 1, and 0 otherwise. Numerical values have one node per feature. The activation of the node is higher if the numerical value is higher. Typically, these inputs are rescaled between 0 and 1, so the activation of such an input node can take on any value between 0 and 1. In the case of temperature, it depends on how temperature is measured. If temperature is measured continuously, then it can be encoded as a numerical value. If temperature can only take a limited subset of values, then it cannot be encoded as a numerical value; it would be measured on an ordinal scale.
H: Interpreting vertical and horizontal parts of ROC curve It's not clear to me how I can interpret the vertical and horizontal parts of the ROC curve. What important information can I gain from this? This is a text from the book "Human-in-the-Loop Machine Learning" by Robert Monarch: In this example, we can see that the line of the ROC curve is near-vertical for the first 20%. This tells us that for the 20% most confident predictions, we are almost 100% accurate. The ROC curve is almost horizontal at 1.0 for the final 30%. This tells us that by the time we get to the 30% least most confident predictions for a label, there are very few items with that label remaining. AI: The explanation follows from understanding what a ROC curve is made of. First, a reminder: a ROC curve represents the performance of a soft binary classifier, i.e. a classifier which predicts a numerical value (usually a probability) which represents the likelihood of the instance being positive. The points of a ROC curve correspond to what happens when splitting the instances at different thresholds of their predicted value/probability. For example, let's say we have 10 instances with the following predictions and gold status, ordered by decreasing predicted value:

predicted value    gold status
1.0                P
0.95               P
0.95               P
0.85               N
0.70               P
0.55               P
0.52               N
0.47               P
0.26               N
0.14               N

If the threshold is set at 0.60, we have 4 TP, 1 FP, 2 FN and 3 TN. If the threshold is set at 0.75, we have 3 TP, 1 FP, 3 FN and 3 TN. Etc. If the classifier does a decent job, the top part should contain only gold positive instances and the bottom part only negative instances, since the classifier predicts a very high (resp. very low) value for these instances. By contrast, the middle part is more likely to contain errors. This means that with a decent classifier, we encounter only positive instances until a certain threshold from the top: in my example, there are 3 P instances above threshold = 0.85, so any threshold chosen between 0.85 and 1 will produce no FP instances. This implies FPR=0 with TPR ranging from 0 to 3/6 (there are 6 positive instances in total), hence a vertical line on the ROC curve from the origin to (0, 0.5). Applying the same observation to the lowest values in my example, we see that when the threshold is between 0 and 0.47 (excluded) we have no FN, so we have a perfect TPR=1 and therefore a horizontal line from (0.5, 1) to (1, 1): with 4 negative instances in total, the two negatives at 0.26 and 0.14 make the FPR range from 0.5 to 1 along this segment. These two extreme parts of the ranking are what the author calls "most confident predictions" and "least most confident predictions". Btw the latter is not a very good term because the bottom part is actually where the classifier is very confident about the instance being negative. If anything, the part where the classifier is the least confident is around the middle of the ranking. [edit: forgot to answer this question] What important information can I gain from this? These two straight lines show how far the classifier can go: without making any false positive error (bottom-left vertical line) and without making any false negative error (top-right horizontal line). This can be useful in applications where one wants to have perfect precision at the cost of recall, or the converse. However, in general (optimizing F1-score for instance) the optimal threshold is somewhere in the middle of the curve. Of course, the longer the line, the better the classifier in general. Geometrically, it's clear that when these two lines are long then the area under the curve (AUC) is large. Note that in theory it's possible for a classifier to have a few errors close to the extremes (i.e.
short lines) while still performing well in the middle range, but in practice it's quite rare.
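The example above can be reproduced directly with scikit-learn to see the two straight segments (a sketch using the same 10 instances):

import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([1, 1, 1, 0, 1, 1, 0, 1, 0, 0])                        # P/N from the table
y_score = np.array([1.0, .95, .95, .85, .70, .55, .52, .47, .26, .14])   # predicted values

fpr, tpr, thresholds = roc_curve(y_true, y_score)
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold={th:.2f}  FPR={f:.2f}  TPR={t:.2f}")
# FPR stays at 0 while TPR climbs to 3/6 (the vertical segment),
# and TPR stays at 1 from FPR=0.5 onwards (the horizontal segment)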
H: Inference order in BERT masking task In BERT, multiple words in a single sentence can be masked at once. Does the model infer all of those words at once or iterate over them in either left to right or some other order? For example: The dog walked in the park. Can be masked as The [mask] walked in [mask] park. In what order (if any) are these tokens predicted? If you have any further reading on the topic, I'd appreciate it as well. AI: All the tokens are inferred at once, independently from one another.
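A sketch with the Hugging Face transformers library showing that both masks are filled from a single forward pass (the checkpoint name is the usual public bert-base-uncased; treat the exact outputs as illustrative):

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

text = f"The {tok.mask_token} walked in {tok.mask_token} park."
inputs = tok(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # one forward pass, predictions for all positions at once

mask_positions = (inputs["input_ids"][0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
for pos in mask_positions:
    best = int(logits[0, pos].argmax())
    print(pos.item(), tok.decode([best]))    # each mask is predicted independently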
H: Strings/ features in Turicreate decision tree I am trying to create a prediction model by using a decision tree with Turicreate. While my problem does involve numbers, it also involves strings and ultimately I want it to return the string 'true/false'. Are Turicreate decision trees able to process strings as input and output? AI: I'm not familiar with the package, but the documentation suggests that strings as input will be handled internally by dummy-encoding (with a reference level). The linked section is for linear regression, but is linked to from the Advanced Features section of the decision tree page, so I assume that applies.
H: How to specify a location for text in a graph plotted by Python? I would like to plot a graph of actual and predicted values with Python after doing a regression. I used the following code. However, the text "R^2=0.91" is placed on the right-hand side, crossing the second Y axis. Is it possible to change its location? Thanks a lot.

text = 'R^2=0.91'
fig, ax = plt.subplots()
colorsMap = 'jet'
cm = plt.get_cmap(colorsMap)
cNorm = matplotlib.colors.Normalize(vmin=min(yhat), vmax=max(yhat))
scalarMap = cmx.ScalarMappable(norm=cNorm, cmap=cm)
ax.scatter(Y_test, yhat, c=scalarMap.to_rgba(yhat), s=2, alpha=0.7)
scalarMap.set_array(yhat)
fig.colorbar(scalarMap)
ax.annotate(s=text, xy=(max(Y_test)-2, max(yhat)-1))
ax.set_xlabel('Actual')
ax.set_ylabel('Predicted')
# add diagonal line
ident = [np.min(Y_test), np.max(Y_test)]
plt.plot(ident, ident, color='red')

AI: At the moment you are setting the location of the annotation in ax.annotate using the xy argument. If you want to change the location, you can simply change the value of that argument to something different (e.g. xy=(min(Y_test) + 2, min(yhat) + 1)).
H: Grouping by similarity I would like to find a way/algorithm to group people into, say, four groups by their answer similarity to yes/no questions. So, each pair of people in one group would have given the same answers for a big part of the questions – (mostly) bigger than compared to people from the other three groups. Would appreciate any pointers to the right direction. AI: I would start with the simple option: represent every person as a boolean vector in which every position represents the answer to a particular question (the length is the total number of questions). Then you can apply any standard clustering algorithm, such as K-means (hierarchical clustering would probably also work with data like this).
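A minimal sketch of that baseline, assuming answers is an n_people x n_questions 0/1 matrix (the random data is a placeholder):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
answers = rng.integers(0, 2, size=(200, 30))        # placeholder yes/no answers

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(answers)
print(np.bincount(labels))                           # size of each of the four groups

# Alternative: hierarchical clustering on a Hamming or Jaccard distance matrix,
# e.g. scipy.cluster.hierarchy.linkage over scipy.spatial.distance.pdist(answers, metric="hamming")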
H: Which visualization tool should I use? We consider the below function: $f(x,y,z,a) = x*y*z*a$, where $x,y \in \mathbb{Q}\cap[0,1000]$, $a\in\{2,3\}$ and $z=z(x)$, taking values from the below table based on the level-range that $x$ belongs to. For instance, if $x=150$, then $z=5$. I am looking for the proper visualization tool that can produce all possible 2-dim and 3-dim graphs of the above function. For instance, if I fix $x=150$ and $a=2$, I want the 2-dim graph $(y,f(150,y,5,2))$; if I fix $a=2$, I want the 3-dim graph $(x,y, f(x,y,z(x),2))$; and if I fix $y=10.5$, I want the two 2-dim graphs $(x,f(x,10.5,z(x),2))$ and $(x,f(x,10.5,z(x),3))$. I would like your suggestions on the above. Would Tableau do the job here? I am not familiar with any tool and I am trying to choose the most suitable one to start with. Should I consider some other tool maybe (Power BI or ...)? In addition, please advise on the functionality and the advanced options provided by the above tools. AI: Based on your explanation, $z$ is not a free parameter, so your function can be simplified like this: $$f(x,y,a) = x*y*z(x)*a\text{, where } x,y \in \mathbb{Q}\cap[0,1000], a\in\{2,3\}$$ Note that $z(x)$ is a piecewise function. Also $a$ can have only two values, so $f$ can be divided into $f_2$ and $f_3$, each of these having only two arguments, which simplifies things. I don't know Tableau or Power BI so I can't advise about that. I use R and I played a bit with your function. For a global visualization I came up with a kind of heat map like this: In case you're interested, I did it like this with R (the thresholds for $z$ are just an example, since the exact table is not reproduced here):

library(ggplot2)

myrandomdata <- function(n) {
  a2 <- data.frame(x=runif(n,0,1000), y=runif(n,0,1000), z=20, a=rep(2,n))
  a3 <- data.frame(x=runif(n,0,1000), y=runif(n,0,1000), z=20, a=rep(3,n))
  df <- rbind(a2, a3)
  # z depends on the range that x falls in
  df$z[df$x<=500] <- 10
  df$z[df$x<=200] <- 5
  df$z[df$x<=100] <- 2
  df$f <- df$x * df$y * df$z * df$a
  df
}

df <- myrandomdata(50000)
ggplot(df, aes(x, y, colour=f)) + geom_point() + facet_grid(a~.) + scale_color_gradient(low="blue", high="red")

(using random sampling is probably not the standard way but it was the easy way)
H: Why do the silhouette score and optimal number of clusters change when using 2D and 3D data? I am experimenting with K-means clustering. My data (vectors) are in 300 dimensions, which I am converting into 2D and 3D using PCA. Now, to find the optimal number of clusters, I used the silhouette score. However, for 2D the best silhouette score is obtained with 3 clusters (silhouette score = 0.45), and for 3D it is obtained with 9 clusters (silhouette score = 0.3861). I want to know whether this is normal. If yes, what is the reason for this? What should I choose, 2D or 3D? Also, the reason for experimenting with 2D and 3D is that I wanted to plot the 3D graph using seaborn. AI: Yes, it can happen. In fact it is quite normal, since the clusters found in 2D are different from the ones found in 3D: more or less information is added to the data by having more dimensions. This is a by-product of the curse of dimensionality. Adding as much relevant information as possible would make clusters closer to the underlying groups, so 3D would be better than 2D. This is a general observation. There are of course cases where projecting data onto a low-dimensional manifold is indeed better, since it can eliminate noise and/or capture specific attributes better than clustering on all (possibly irrelevant) dimensions (another by-product of the curse of dimensionality). If the relevant information in your data has low dimensionality but this information is correlated along many dimensions in the original data, then a feature extraction method is needed in order to capture the low-dimensional relevant information from the original data (e.g. PCA, ICA, ...). For some references along this direction see for example:
How to cluster in High Dimensions
An investigation of K-means clustering to high and multi-dimensional biological data
How do I know my k-means clustering algorithm is suffering from the curse of dimensionality?
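If you want to compare the two settings with the same procedure, a sketch like the following can help (X is assumed to be the original 300-dimensional data); keep in mind that silhouette values computed in spaces of different dimensionality are not strictly comparable, which is one more reason the "best k" can differ between 2D and 3D:

from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

for dims in (2, 3):
    Xp = PCA(n_components=dims).fit_transform(X)
    for k in range(2, 11):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xp)
        print(dims, k, round(silhouette_score(Xp, labels), 3))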
H: Optimal selection of k in K-NN I am currently reviewing some concepts related to machine learning, and I started to wonder about the hyperparameter selection of the K-NN classifier. Suppose you need to solve a classification task with a number of classes equal to M. I was thinking that the best choice for the parameter K of our classifier should be K > M. In this way, we are avoiding all the pathological cases in which a sample may be in the middle of all the M classes and then have a tie. For instance, consider the following example in which we have M=3 and each geometrical shape represents a class: assume that K<=M; for sure you will have a tie for a sample in the middle of samples 1, 2 and 3. This tie could be avoided if K > M. Clearly this is just a toy example, but I think it is sufficient to illustrate my thoughts. I have tried to look for an answer but I wasn't able to find any resource mentioning this, so am I wrong in some way, or is this reasoning sound? AI: Your idea isn't wrong; however, in k-NN there can always be a case where you have the same number of votes for 2 or more classes (e.g. you have $k=6$ and 3 samples of one class vs 3 of another). With your solution you are only covering one very small type of tie, and the $k$ that you choose might not be the optimal $k$ for classification, which is your primary objective (i.e. you might hurt your classification performance with a sub-optimal $k$ just so that you avoid one type of tie). Because there are also other types of ties, you always need a tiebreaker condition. I have seen a lot of strategies for this, like reducing $k$ by one until you break the tie, selecting the class with the minimum distance, selecting the class with the fewest samples overall, etc. Since you need such a condition anyway, you can also use it to handle your situation, without selecting a sub-optimal $k$.
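A toy sketch of one of those tiebreakers (shrink k until the vote is unambiguous); distances are computed by brute force for clarity, and X_train/y_train are assumed to be numpy arrays:

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k):
    order = np.argsort(np.linalg.norm(X_train - x, axis=1))   # neighbours sorted by distance
    while k > 0:
        votes = Counter(y_train[order[:k]]).most_common()
        if len(votes) == 1 or votes[0][1] > votes[1][1]:
            return votes[0][0]                                # clear winner
        k -= 1                                                # tie: drop the farthest neighbour and revote
    return y_train[order[0]]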
H: How to version data science projects with large files I am working on a project with large data files (~300MB). I want to version my work along with the data files so that it is always available online. I tried using git-lfs but it has a 1GB/month bandwidth limit, beyond which you're blocked for a month. What are versioning tools using by data scientists for projects that use >100MB data files (both static and generated)? AI: I have used dvc. It has data versioning though I do not use that often. I tend to use the makefile (pipeline) features.
H: Classic sport match prediction So, I am currently learning machine learning and data analysis. I have created for my self a problem that is: Who will win a match of soccer Now, I have narrowed it down as being a Binary Classification problem as I only want to figure out who will be the winner of a match. For this I have some data containing the following data points: Team One Team Two Duration Goal (Team One) Goal (Team Two) Winner Now, this is where the trouble begins (for me at least) I am unsure which features to choose for my model and also kind of off as to how to manage the data. Say that I have two teams playing (Liverpool vs Chelsea) now I do have around 5000 data points for all matches played however only around 82 points where it is Liverpool against Chelsea. Which dataset should I use? Also sometimes their position in the dataset changes meaning that sometimes Liverpool is Team One and sometimes Chelsea is Team One does this matter or should I process the dataset to always match one team at a certain position? In general, what is the best way to train my model? Should I use the small dataset containing only the matches between these two teams or should I go for all matches in general? I am sorry for the beginner question I really hope someone can help me out :) AI: When making decisions about which data to use in a model you have to be aware of several pitfalls. One of them is information leakage i.e. including data that contains information that you shouldn't have at the time of prediction. Both Duration and Goals are data points that you do not have at the time of prediction (that is before a match) and therefore should not be used in a prediction model. This leaves us with only one information point: the teams involved. You could still make a ML model but it would only tell you something trivial: which of these teams won more head 2 head games in the past because that is the only information it would use to determine the likely winner (therefore the model would also always predict the same outcome because the input does not change). This means you need much more data to actually make a model. Modern models for predicting the outcomes of sport events use data such as recent performance of the teams, time (e.g. time of day, weekday, season, etc.), recent performance of all involved players, context (e.g. weather, etc.). It does not seem like your dataset contains this information so I am afraid you will be unable to create a model. I would recommend to start with a different problem that has a better dataset available and is easier to crack for a beginner. The IRIS dataset is a wellknown beginner ML problem and teaches you how to model a classification algorithm.
H: Can I pass path to a dataset in train_test_split My Directory C: --Dataset --image1.png --image2.png --image3.png --image4.png --image5.png lst = [C:\Dataset\image1.png, C:\Dataset\image2.png, C:\Dataset\image3.png ...] Can I pass this list to train_test_split instead of loading and input the dataset to the method. If yes, is this a preferable method or should I load the dataset first. AI: You can pass in a list of file paths into scikit-learn's train_test_split. Here is an example: from pathlib import Path from sklearn.model_selection import train_test_split file_locations = [Path("images1.png"), Path("images2.png")] X_train, X_test = train_test_split(file_locations) The choice between loading data all-at-once vs batch-by-batch is context dependent. If you have small amount of data relative to RAM, then loading in the data into memory is preferable because it will be faster and easier to handle. If you do not have enough RAM available, then taking the pointer-to-the-data approach will enable training to happen. The training in this scenario might be slower and require custom code to load the data at the moment it is needed.
H: How to interpret results of clustering the Heart Failure dataset? I am doing an analysis of this dataset: click In this dataset there are 13 features, 12 of which are inputs and 1 is the target variable, called "DEATH_EVENT". I tried to predict the survival of the patients in this dataset using the features. However, now I am trying to do a cluster analysis to see if the patients are grouped in clusters. This is the code I have written:

from sklearn.cluster import KMeans

Features = ['ejection_fraction','serum_creatinine'] # the features most correlated with death_event
X = heart_data[Features]

wcss = []
for i in range(1, 11):
    kmeans = KMeans(n_clusters=i, init='k-means++', max_iter=300, n_init=10, random_state=0)
    kmeans.fit(X)
    wcss.append(kmeans.inertia_)
plt.plot(range(1, 11), wcss)
plt.title('Elbow Method')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.show()

From this graph I can observe that there are 2 clusters. Now:

kmeans = KMeans(n_clusters=2, init='k-means++', max_iter=300, n_init=10, random_state=0)
pred_y = kmeans.fit_predict(X)
plt.scatter(X["ejection_fraction"], X["serum_creatinine"])
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s=300, c='red')
plt.show()

And I obtained this chart: Now, what can I say from this chart? I think that it is not useful, right? I used only the features ejection_fraction and serum_creatinine because these are the only ones I used for the prediction. Or do I have to use all the variables except DEATH_EVENT? In this way: X = heart_data.iloc[:, :11] But in this case I obtain this: I am not able to understand these charts; I think that I am doing something wrong, but what? Where are the clusters? How do I interpret these results? UPDATE: I am not able to use umap-learn; my Mac cannot install it, I received a lot of errors. However, I did something related to your advice. Here is all the code: https://pastebin.com/RdJb0ydu The first 2 parts are the code that you have written here. In the 3rd part I use KMeans(n_clusters=2) because from the silhouette I saw that the best result was with 2 clusters. Then I did the prediction, concatenated the results to the original dataset and printed out the column of DEATH_EVENT and the column with the results of clustering. From this column, what can I say? How can I understand whether the 0 of the prediction refers to the patients who survived or to the patients who died? AI: I would use all the features and see how the separateness of the clusters behaves according to some metric, for example the silhouette score. Additionally, it is very important to scale your data prior to clustering since k-means is a distance-based algorithm.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import silhouette_score

heart_data = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/00519/heart_failure_clinical_records_dataset.csv")

Features = heart_data.drop(["DEATH_EVENT"], axis = 1).columns
X = heart_data[Features]

sc = []
for i in range(2, 25):
    kmeans = Pipeline([("scaling", StandardScaler()),
                       ("clustering", KMeans(n_clusters=i, init='k-means++', max_iter=300, n_init=10, random_state=0))]).fit(X)
    score = silhouette_score(X, kmeans["clustering"].labels_)
    sc.append(score)

plt.plot(range(2, 25), sc, marker = "o")
plt.title('Silhouette')
plt.xlabel('Number of clusters')
plt.ylabel('Score')
plt.show()

You could also try different combinations of features so that the score is maximal. For visualization purposes you can use a decomposition technique:

from sklearn.decomposition import PCA
plt.style.use("seaborn-whitegrid")

pca = Pipeline([("scaling", StandardScaler()),
                ("decompositioning", PCA(n_components = 2))]).fit(X)
X2D = pca.transform(X)
plt.scatter(X2D[:,0], X2D[:,1], c = kmeans["clustering"].labels_, cmap = "RdYlBu")
plt.colorbar();

Last but not least, I recommend using a manifold projection such as UMAP on your data. It might help your task by generating "well-defined" clusters (it might, but not necessarily; nonetheless it is worth trying). Look, by using UMAP the results seem to improve. Code:

# pip install umap-learn
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans, DBSCAN
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import silhouette_score
from umap import UMAP

heart_data = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/00519/heart_failure_clinical_records_dataset.csv")

Features = heart_data.drop(["DEATH_EVENT"], axis = 1).columns
X = heart_data[Features]

sc = []
for i in range(2, 25):
    kmeans = Pipeline([("scaling", StandardScaler()),
                       ("umap", UMAP()),
                       ("clustering", KMeans(n_clusters=i, init='k-means++', random_state=0))]).fit(X)
    score = silhouette_score(X, kmeans["clustering"].labels_)
    sc.append(score)

plt.plot(range(2, 25), sc, marker = "o")
plt.title('Silhouette')
plt.xlabel('Number of clusters')
plt.ylabel('Score')
plt.show()

from sklearn.decomposition import PCA
plt.style.use("seaborn-whitegrid")

kmeans = Pipeline([("scaling", StandardScaler()),
                   ("umap", UMAP()),
                   ("clustering", KMeans(n_clusters=3, init='k-means++', random_state=0))]).fit(X)
pca = Pipeline([("scaling", StandardScaler()),
                ("umap", UMAP()),
                ("decompositioning", PCA(n_components = 2))]).fit(X)
X2D = pca.transform(X)
plt.scatter(X2D[:,0], X2D[:,1], c = kmeans["clustering"].labels_, cmap = "RdYlBu")
plt.colorbar();

The plot shows the first and second principal components of the UMAP projection (it is simply a projection of how all the features would look in 2D space). The colours are the cluster ids, i.e. for every colour we see to which cluster the algorithm (k-means) assigned each observation.
H: OneHotEncoding target variable? I'm working on a multiclass classifier with 6 classes in the target column and I was thinking about one-hot encoding the classes, thus having 6 target columns. Will this improve efficiency? I am using sklearn. Later edit: improve efficiency compared to just label encoding. AI: It would be a bad idea to use one-hot encoding because this would make the problem multi-label. This would make the problem much more complex for the model and would very likely lead to lower performance, or it would require much more data in order to reach the same performance as regular classification. This is because many more combinations are possible in the multi-label problem: every instance can have any number of classes, from zero to all of them. Your training data would not have these cases, but the algorithm would assume that they can happen. So in your case this higher level of complexity is pointless, since there can be only one class. With scikit-learn, the standard way to encode the target in a multiclass task is to use the LabelEncoder.
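As a minimal sketch of that scikit-learn approach (variable names are illustrative; y stands for your 6-class target column):
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
y_encoded = le.fit_transform(y)        # integer labels 0..5, a single column, as classifiers expect
print(le.classes_)                     # the original class names, in encoded order
# le.inverse_transform(predictions) maps predictions back to the original labels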
H: How to determine if a dataset is a suitable representation of the context? How can I determine if the data I have collected is a good enough representation of the context? For example, I am working on an object detection system and have been building an image dataset. How can I know if my dataset represents the task? For example, I need to account for instances where the object is close up and far away. But what about contexts/situations I have missed, or not even considered? Is there an art to building datasets? AI: Before collecting the data, domain knowledge is used to determine the plausible variations the task may present. It is not necessary to capture in data all the variations experts may point out (for example, some may be synthetically constructed). But domain knowledge indeed dictates the variations a certain task may have to account for. That being said, one then gathers the data, trying to match the plausible variations in the proportions that domain experts dictate. Again, some may even be synthetically constructed a posteriori. Then we have a representative dataset. In case the dataset proves inadequate, this means the domain knowledge needs to be updated with new information about the task.
H: What will happen if features have almost constant values? For an epidemiology dataset, is it desirable to keep features that have almost constant values? For example, the feature type_of_residence is Large for 97 percent and Small for 2.7 percent of subjects. Is it okay to keep this feature? My target variable is the outcome of the patients, and this dataset is unbalanced. Just as there are oversampling and undersampling techniques for the class imbalance problem, is there any process in ML to address this kind of imbalance in the predictor features? I am not doing feature selection using ML at this point to know the importance of this feature, but I would like to know whether there are any general rules regarding keeping or not keeping these kinds of features in a dataset. AI: If a feature is truly constant across all the instances, then it's useless for classification and it can be removed. But in any other case, as in your example, a feature can be useful even if it takes the same value for most instances. In your example it might turn out that the target class is strongly correlated with type_of_residence, so it would be a mistake to get rid of it, since your model can take advantage of it. Unless you have a good reason to get rid of a feature (expert knowledge), keep all of them. If you must reduce the number of features (for instance to avoid overfitting), then the selection should be based on information about which features are the least informative (e.g. using some correlation measure). Ideally, the best way to know which features are useful is to train a classifier and evaluate it with every possible subset of features (but that is rarely possible in practice).
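If you want a quick check of whether such a near-constant feature is associated with the target before deciding anything, a simple contingency-table test is one option. A minimal sketch, assuming your dataframe is called df and the columns are named type_of_residence and outcome:
import pandas as pd
from scipy.stats import chi2_contingency

table = pd.crosstab(df["type_of_residence"], df["outcome"])   # counts of outcome per residence type
chi2, p_value, dof, expected = chi2_contingency(table)
print(table)
print(p_value)   # a small p-value suggests the feature still carries signal despite being nearly constant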
H: Is a BiLSTM layer required if we use BERT? I am new to deep-learning-based NLP and I have a question: I am trying to build an NER model and I found some papers where people rely on a BERT-BiLSTM-CRF model for it. As far as I know, BERT is a language model that scans the context in both directions and embeds words according to that context. Now my question is: if context is already captured during word embedding with BERT, why do we need another BiLSTM layer? AI: That layer isn't required, indeed, as it also encodes the sequence, albeit in a different way than BERT. What I assume is that in a BERT-BiLSTM-CRF setup, the BERT layer is either frozen or difficult to fine-tune due to its sheer size, which is likely why the BiLSTM layer has been added there.
H: Microsoft Custom Vision vs TensorFlow model? I am planning to implement my own image classifier model using TensorFlow instead of a custom vision platform. What is the biggest difference between Custom Vision (https://www.customvision.ai/) and TensorFlow? AI: There are many differences, as these are inherently completely different products with different goals.
Custom Vision
[+] cloud deployment comes out of the box (including a REST API)
[+] labelling tool to add data and label it
[-] you have no control over the learning algorithm
[-] difficult to run your model locally/completely for free
TensorFlow (or any framework really)
[-] you need to deploy your model yourself
[-] you need to manage your data yourself
[+] you have full control over your network and how you train it
[+] you can embed your model into your code, run it locally, whatever you feel like
H: Getting prediction labels from TensorFlow 2 ImageGenerator I have created Image Generators that I used for training on labeled data. Now I want to make predictions on unlabeled data using the generators. I created a test generator as follows: test_generator = gen_test.flow_from_directory( test_path, target_size=IMAGE_SIZE, class_mode=None, shuffle=False, batch_size=batch_size) The outputs I get seem to be one hot encoded like this array([[0., 0., 1., 0., 0.], [0., 0., 0., 0., 1.], [0., 0., 0., 0., 1.], ..., [0., 1., 0., 0., 0.], [0., 1., 0., 0., 0.], [0., 0., 0., 0., 1.]], dtype=float32) I would like to know the correct method of getting the actual class labels for each of the images. For instance, the classes in my training image generator is something like this array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4], dtype=int32) I am using this for the first time, so I there is probably a simpler way to get this information. Any pointers are much appreciated. AI: Using argmax with model.predict is the way to go. # testdata is the dataframe of Generator paths = testdata.filenames # Your files path y_pred = model.predict(testdata).argmax(axis=1) # Predict prob and get Class Indices classes = testdata.class_indices # Map of Indices to Class name from keras.preprocessing import image a_img_rand = np.random.randint(0,len(paths)) # A rand to pick a rand image img = image.load_img(paths[a_img_rand]) img = image.img_to_array(img) from google.colab.patches import cv2_imshow cv2_imshow(img) print(f'Class Predicted ---- {list(classes)[y_pred[a_img_rand]]}')
H: AttributeError: 'numpy.ndarray' object has no attribute 'nan_to_num' I'm trying to run a Random Forest model from sklearn but I keep getting an error: ValueError: Input contains NaN, infinity or a value too large for dtype('float32'). I tried following the steps in "ValueError: Input contains NaN, infinity or a value too large for dtype('float32')". fillna(0) on my pandas dataframe still gave the ValueError. So I tried working with my numpy array:
val = setTo.ravel().nan_to_num(0)
But I keep getting an error: 'numpy.ndarray' object has no attribute 'nan_to_num' I'm wondering how I can deal with the NaN values if I have an ndarray? Update Thanks so much to @Beniamin H for all the help; as suggested, I rescaled the data, which I based on https://stackoverflow.com/questions/34771118/sklearn-random-forest-error-on-input and it worked! AI: You are using the right function but in the wrong way :) nan_to_num is a function of the numpy module, not a method of numpy.ndarray. So instead of calling nan_to_num on your data, call it on the numpy module, giving your data as a parameter:
import numpy as np
data = np.array([1,2,3,np.nan,np.nan,5])
data_without_nan = np.nan_to_num(data)
prints:
array([1., 2., 3., 0., 0., 5.])
In your example:
import numpy as np
val = np.nan_to_num(setTo.ravel())
H: IterativeImputer - Returning -0 and other weird results I am using IterativeImputer to impute my dataset.
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
imp = IterativeImputer(random_state=0, max_iter=100, verbose=10)
imp.fit(hosp)
hosp_imputed = pd.DataFrame(imp.transform(hosp), columns=cols)
I have a boolean column "ICU" that had 8 missing values, but after imputation it outputs very weird results. Please see the attached screenshot. Can you please let me know what I am doing wrong? AI: Although I'm not sure why it happens, it seems to be normal behaviour. Please see the example from the sklearn documentation: https://scikit-learn.org/stable/modules/impute.html#multivariate-feature-imputation
import numpy as np
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
imp = IterativeImputer(max_iter=10, random_state=0)
imp.fit([[1, 2], [3, 6], [4, 8], [np.nan, 3], [7, np.nan]])
X_test = [[np.nan, 2], [6, np.nan], [np.nan, 6]]
# the model learns that the second feature is double the first
print(np.round(imp.transform(X_test)))
They use np.round at the end to get the results:
array([[ 1., 2.], [ 6., 12.], [ 3., 6.]])
Running this code without rounding gives results similar to yours:
array([[ 1.00007297, 2. ], [ 6. , 12.00002754], [ 2.99996145, 6. ]])
Try adding np.round before casting to pd.DataFrame to see if it helps.
H: What is the number of neurons for the input layer of the BERT? I think it is the vocab size. However I am not sure and I appreciate your help. AI: It is not very clear what you are referring to with "number of input neurons". The input layer in BERT is an embedding layer, which is a table of vectors. Each of those vectors has dimensionality 768, and each vector is associated to one of the tokens in the vocabulary (so the number of vectors in the embedding table is the vocabulary size).
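If by "number of input neurons" you mean the size of that embedding table, you can inspect it directly. A minimal sketch with the transformers library ("bert-base-uncased" is just an example checkpoint; any BERT model exposes the same attributes):
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")
emb = model.get_input_embeddings()                       # a torch.nn.Embedding holding the table of vectors
print(emb.weight.shape)                                  # (vocab_size, hidden_size), e.g. torch.Size([30522, 768])
print(model.config.vocab_size, model.config.hidden_size)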
H: CNN inference is slow on Jetson Nano I'm running what I believe is a pretty lightweight CNN on an nVidia Jetson Nano with Jetpack 4.4. nVidia claims the Nano can run a ResNet-50 at 36fps, so I expected my much smaller network to run at 30+ fps with ease. Actually though, each forward pass takes 160-180ms, so I score 5-6 fps at best. At production predictions have to be made real-time on a live camera stream, so improving the per-sample-performance by using batches > 1 is not an option. Is there something fundamentally wrong with my inference code? Am I wrong thinking the network architecture should be pretty fast to compute compared to e.g. ResNet-50? What can I do to find out what exactly takes so much time? My CNN: Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= lambda (Lambda) (None, 210, 848, 3) 0 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 210, 282, 3) 0 _________________________________________________________________ conv2d (Conv2D) (None, 102, 138, 16) 2368 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 51, 69, 16) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 24, 33, 32) 12832 _________________________________________________________________ max_pooling2d_2 (MaxPooling2 (None, 12, 16, 32) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 4, 6, 64) 51264 _________________________________________________________________ max_pooling2d_3 (MaxPooling2 (None, 2, 3, 64) 0 _________________________________________________________________ flatten (Flatten) (None, 384) 0 _________________________________________________________________ dropout (Dropout) (None, 384) 0 _________________________________________________________________ dense (Dense) (None, 64) 24640 _________________________________________________________________ dropout_1 (Dropout) (None, 64) 0 _________________________________________________________________ elu (ELU) (None, 64) 0 _________________________________________________________________ dense_1 (Dense) (None, 1) 65 ================================================================= Total params: 91,169 Trainable params: 91,169 Non-trainable params: 0 _________________________________________________________________ Code: import numpy as np import cv2 import time import tensorflow as tf from tensorflow import keras model_name = 'v9_small_FC_epoch_3' loaded_model = keras.models.load_model('/home/jetson/notebooks/trained_models/' + model_name + '.h5') loaded_model.summary() frame = cv2.imread('/home/jetson/notebooks/frame1.jpg') test_data = np.expand_dims(frame, axis=0) for i in range(10): start = time.time() predictions = loaded_model.predict(test_data) print(predictions[0][0]) end = time.time() print("Inference took {}s".format(end-start)) Result: 4.7763316333293915 Inference took 10.111131191253662s 4.7763316333293915 Inference took 0.1822071075439453s 4.7763316333293915 Inference took 0.17330455780029297s 4.7763316333293915 Inference took 0.18085694313049316s 4.7763316333293915 Inference took 0.16646790504455566s 4.7763316333293915 Inference took 0.1703803539276123s 4.7763316333293915 Inference took 0.1788337230682373s 4.7763316333293915 Inference took 0.17131853103637695s 4.7763316333293915 Inference took 0.1660606861114502s 
4.7763316333293915 Inference took 0.18377089500427246s Edit: To make sure I'm not just underestimating my network, I replaced it with one that just consists of a single output and a single output neuron. As expected the initial loading of the model is significantly faster, but after that, inference is almost as slow. Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= lambda (Lambda) (None, 1, 1, 1) 0 _________________________________________________________________ dense (Dense) (None, 1, 1, 1) 2 ================================================================= Total params: 2 Trainable params: 2 Non-trainable params: 0 _________________________________________________________________ 2021-01-06 20:44:22.361558: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.10 Inference took 1.9230175018310547s Inference took 0.17112112045288086s Inference took 0.16610288619995117s Inference took 0.1768038272857666s Inference took 0.16962003707885742s Inference took 0.16416263580322266s Inference took 0.17536258697509766s Inference took 0.16603755950927734s Inference took 0.16376280784606934s Inference took 0.16828060150146484s On my Desktop (i5-2500k, GTX 1070Ti) even the first prediction takes just around 26ms: Inference took 0.02569293975830078s Inference took 0.026061534881591797s Inference took 0.023118019104003906s Inference took 0.023060083389282227s Inference took 0.02504444122314453s Inference took 0.02664470672607422s AI: Seems like converting to TensorRT improves performance by more than 10x (!) for me, which I didn't expect at all. The downside is, loading the TensorRT model takes >2min now and for reasons I can't grasp the script eats 2.2G of memory. Also getting the conversion process to work was beyond painful. I'm gonna open a new Q&A on that topic, because it seems a lot of people eventually give up on it. The TensorRT model seems to need a bit of warmup (~100 passes) and then settles at the final inference speed, which in my case is ~15-17ms (68-66fps). Pretty amazing improvement I have to say. Inference took 100.2991828918457s Inference took 0.2558176517486572s Inference took 0.04433894157409668s Inference took 0.037764787673950195s Inference took 0.03640627861022949s Inference took 0.04129934310913086s Inference took 0.024821043014526367s Inference took 0.0219266414642334s ... Inference took 0.0170745849609375s Inference took 0.016851186752319336s Inference took 0.016122817993164062s Inference took 0.01502084732055664s Inference took 0.015442371368408203s Inference took 0.01560211181640625s Without TensorRT, not only did inference take much longer in general, there were also occasional passes that took even longer, up to 750ms in some cases. For real-time applications that's a deal breaker. With TensorRT the inference times are pretty stable, I didn't see more than 15% variation within 10K passes.
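For reference, one possible route to such a conversion is TF-TRT, which wraps TensorRT-optimized sub-graphs inside a SavedModel. This is only a rough sketch (the directory names are placeholders, and the exact converter arguments vary between TensorFlow/JetPack versions), not the exact steps used above:
from tensorflow.python.compiler.tensorrt import trt_convert as trt

loaded_model.save("saved_model_fp32")           # export the Keras model as a SavedModel first
converter = trt.TrtGraphConverterV2(input_saved_model_dir="saved_model_fp32")
converter.convert()                             # replaces supported sub-graphs with TensorRT ops
converter.save("saved_model_trt")               # later: tf.saved_model.load("saved_model_trt")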
H: Calculate the similarity between pairs of time series data I have 5 pieces of time series data: the weekly sales of 5 different stores (A, B, C, D, E). There are no missing values. A quick visual inspection shows that these 5 time series have similar trend & seasonality. I would like to calculate/quantify how similar Stores B, C, D, E are to Store A, respectively. I know how to calculate the simple cosine distance and Euclidean distance, and I have experience dealing with time series data (e.g. ARIMA, Prophet), but I have never had to find (i.e. quantify) the similarity / distance between time series. Some google searching suggests that "dynamic time warping" could be the method to go with. I had not heard of this algorithm before, so I am not sure if this is indeed the right direction. Additionally, can I still use cosine distance and Euclidean distance on time series data? Is that doable? How do I approach this issue? Any insights are appreciated. AI: The Euclidean distance metric is unsuitable for raw time series... In short, it is not invariant to time shifts: it compares the series point by point and therefore ignores the alignment along the time dimension. If two time series are highly correlated, but one is shifted by even one time step, Euclidean distance would erroneously measure them as further apart than they really are. You might still use it to compare your time series if you first extract features from them. You don't need to go straight to PCA: you can use simple descriptive stats such as mean, min, max, kurtosis, etc., and then compare those feature vectors with Euclidean distance. I advise you to give DTW (dynamic time warping) a try. ;) How does it work? http://alexminnaar.com/2014/04/16/Time-Series-Classification-and-Clustering-with-Python.html (you can check that blog post, it explains it nicely) Good luck to you!
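If you want to try DTW without pulling in an extra dependency, here is a minimal sketch of the classic dynamic-programming formulation for two univariate series (a and b stand for the weekly sales arrays of two stores); dedicated libraries such as dtaidistance or tslearn implement faster versions of the same idea:
import numpy as np

def dtw_distance(a, b):
    """Classic DTW with absolute difference as the local cost."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # best of: insertion, deletion, match
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# e.g. compare dtw_distance(sales_A, sales_B) with dtw_distance(sales_A, sales_C): smaller = more similar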
H: Xgboost : A variable specific Feature importance I have a data set something like this: data = [['Alex',10,13,1,0],['Bob',11,14,12,0],['Clarke',13,15,13,1],['bob',12,15,1,1]] df = pd.DataFrame(data,columns = ["dealer","x","y","z","loss"]) I am trying to predict binary column loss, I have done this xgboost model. I got Overall feature importance. Now I need top 5 most important features dealer wise. How to do that? I have tried to use lime package but it is only working for Random forest. If I get Feature importance for each observation(row) then also I can compute the feature importance dealer wise. kindly help Desired Output : AI: If you use a per-observation explanation, you could just average (or aggregate in some other way) the importances of features across the samples for each Dealer. For example, using shap to generate the per-observation explanation: import pandas as pd from sklearn.linear_model import LogisticRegression import shap data = [['Alex',10,13,1,0],['Bob',11,14,12,0],['Clarke',13,15,13,1],['Bob',12,15,1,1]] df = pd.DataFrame(data, columns=["dealer","x","y","z","loss"]) lr = LogisticRegression() lr.fit(df[['x', 'y', 'z']], df['loss']) # Whatever explainer you prefer: explainer = shap.explainers.Permutation(lr.predict_proba, df[['x', 'y', 'z']]) shap_values = explainer(df[['x', 'y', 'z']]) # get just the explanations for the positive class shap_values = shap_values[...,1] shap_df = pd.DataFrame(abs(shap_values.values)) shap_df.columns = ['x_shap', 'y_shap', 'z_shap'] shap_df['dealer'] = df['dealer'] shap_df.groupby('dealer').mean() produces dealer x_shap y_shap z_shap Alex 0.260427 0.140054 0.075176 Bob 0.106593 0.069035 0.091146 Clarke 0.268328 0.083706 0.085807
H: PyTorch: LSTM for time-series failing to learn I'm currently working on building an LSTM network to forecast time-series data using PyTorch. I tried to share all the code pieces that I thought would be helpful, but please feel free to let me know if there's anything further I can provide. I added some comments at the end of the post regarding what the underlying issue might be. From the univariate time-series data indexed by date, I created 3 date features and split the data into training and validation sets as below. # X_train weekday monthday hour timestamp 2015-01-08 17:00:00 3 8 17 2015-01-12 19:30:00 0 12 19 2014-12-01 15:30:00 0 1 15 2014-07-26 09:00:00 5 26 9 2014-10-17 20:30:00 4 17 20 ... ... ... ... 2014-08-29 06:30:00 4 29 6 2014-10-13 14:30:00 0 13 14 2015-01-03 02:00:00 5 3 2 2014-12-06 16:00:00 5 6 16 2015-01-06 20:30:00 1 6 20 8256 rows × 3 columns # y_train value timestamp 2015-01-08 17:00:00 17871 2015-01-12 19:30:00 20321 2014-12-01 15:30:00 16870 2014-07-26 09:00:00 11209 2014-10-17 20:30:00 26144 ... ... 2014-08-29 06:30:00 9008 2014-10-13 14:30:00 17698 2015-01-03 02:00:00 12850 2014-12-06 16:00:00 18277 2015-01-06 20:30:00 19640 8256 rows × 1 columns # X_val weekday monthday hour timestamp 2015-01-08 07:00:00 3 8 7 2014-10-13 22:00:00 0 13 22 2014-12-07 01:30:00 6 7 1 2014-10-14 17:30:00 1 14 17 2014-10-25 09:30:00 5 25 9 ... ... ... ... 2014-09-26 12:30:00 4 26 12 2014-10-08 16:00:00 2 8 16 2014-12-03 01:30:00 2 3 1 2014-09-11 08:00:00 3 11 8 2015-01-15 10:00:00 3 15 10 2064 rows × 3 columns # y_val value timestamp 2014-09-13 13:00:00 21345 2014-10-28 20:30:00 23210 2015-01-21 17:00:00 17001 2014-07-20 10:30:00 13936 2015-01-29 02:00:00 3604 ... ... 2014-11-17 11:00:00 15247 2015-01-14 00:00:00 10584 2014-09-02 13:00:00 17698 2014-08-31 13:00:00 16652 2014-08-30 12:30:00 15775 2064 rows × 1 columns Then, I transformed the values in the datasets by using MinMaxScaler from the sklearn library. scaler = MinMaxScaler() X_train_arr = scaler.fit_transform(X_train) X_val_arr = scaler.transform(X_val) y_train_arr = scaler.fit_transform(y_train) y_val_arr = scaler.transform(y_val) After converting these NumPy arrays into PyTorch Tensors, I created iterable datasets using TensorDataset and DataLoader classes provided by PyTorch. 
from torch.utils.data import TensorDataset, DataLoader from torch.autograd import Variable train_features = torch.Tensor(X_train_arr) train_targets = torch.Tensor(y_train_arr) val_features = torch.Tensor(X_val_arr) val_targets = torch.Tensor(y_val_arr) train = TensorDataset(train_features, train_targets) train_loader = DataLoader(train, batch_size=64, shuffle=False) val = TensorDataset(val_features, val_targets) val_loader = DataLoader(train, batch_size=64, shuffle=False) Then, I defined my LSTM Model and train_step functions as follows: class LSTMModel(nn.Module): def __init__(self, input_dim, hidden_dim, layer_dim, output_dim): super(LSTMModel, self).__init__() # Hidden dimensions self.hidden_dim = hidden_dim # Number of hidden layers self.layer_dim = layer_dim # Building your LSTM # batch_first=True causes input/output tensors to be of shape # (batch_dim, seq_dim, feature_dim) self.lstm = nn.LSTM(input_dim, hidden_dim, layer_dim, batch_first=True) # Readout layer self.fc = nn.Linear(hidden_dim, output_dim) def forward(self, x): # Initialize hidden state with zeros h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_() # Initialize cell state c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_() # We need to detach as we are doing truncated backpropagation through time (BPTT) # If we don't, we'll backprop all the way to the start even after going through another batch out, (hn, cn) = self.lstm(x, (h0.detach(), c0.detach())) # Index hidden state of last time step out = self.fc(out[:, -1, :]) return out def make_train_step(model, loss_fn, optimizer): # Builds function that performs a step in the train loop def train_step(x, y): # Sets model to TRAIN mode model.train() # Makes predictions yhat = model(x) # Computes loss loss = loss_fn(y, yhat) # Computes gradients loss.backward() # Updates parameters and zeroes gradients optimizer.step() optimizer.zero_grad() # Returns the loss return loss.item() # Returns the function that will be called inside the train loop return train_step Finally, I start training my LSTM model in mini-batches with AdamOptimizer for 20 epochs, which is already long enough to see the model is not learning. 
import torch.optim as optim input_dim = n_features hidden_dim = 64 layer_dim = 3 output_dim = 1 model = LSTMModel(input_dim, hidden_dim, layer_dim, output_dim) criterion = nn.MSELoss(reduction='mean') optimizer = optim.Adam(model.parameters(), lr=1e-2) train_losses = [] val_losses = [] train_step = make_train_step(model, criterion, optimizer) n_epochs = 20 device = 'cuda' if torch.cuda.is_available() else 'cpu' for epoch in range(n_epochs): batch_losses = [] for x_batch, y_batch in train_loader: x_batch = x_batch.unsqueeze(dim=0).to(device) y_batch = y_batch.to(device) loss = train_step(x_batch, y_batch) batch_losses.append(loss) training_loss = np.mean(batch_losses) train_losses.append(training_loss) with torch.no_grad(): batch_val_losses = [] for x_val, y_val in val_loader: x_val = x_val.unsqueeze(dim=0).to(device) y_val = y_val.to(device) model.eval() yhat = model(x_val) val_loss = criterion(y_val, yhat).item() batch_val_losses.append(val_loss) validation_loss = np.mean(batch_val_losses) val_losses.append(validation_loss) print(f"[{epoch+1}] Training loss: {training_loss:.4f}\t Validation loss: {validation_loss:.4f}") And this is the output: C:\Users\VS32XI\Anaconda3\lib\site-packages\torch\nn\modules\loss.py:446: UserWarning: Using a target size (torch.Size([1, 1])) that is different to the input size (torch.Size([64, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. return F.mse_loss(input, target, reduction=self.reduction) [1] Training loss: 0.0505 Validation loss: 0.0315 [2] Training loss: 0.0317 Validation loss: 0.0315 [3] Training loss: 0.0317 Validation loss: 0.0315 [4] Training loss: 0.0317 Validation loss: 0.0315 [5] Training loss: 0.0317 Validation loss: 0.0315 [6] Training loss: 0.0317 Validation loss: 0.0315 [7] Training loss: 0.0317 Validation loss: 0.0315 [8] Training loss: 0.0317 Validation loss: 0.0315 [9] Training loss: 0.0317 Validation loss: 0.0315 [10] Training loss: 0.0317 Validation loss: 0.0315 [11] Training loss: 0.0317 Validation loss: 0.0315 [12] Training loss: 0.0317 Validation loss: 0.0315 [13] Training loss: 0.0317 Validation loss: 0.0315 [14] Training loss: 0.0317 Validation loss: 0.0315 [15] Training loss: 0.0317 Validation loss: 0.0315 [16] Training loss: 0.0317 Validation loss: 0.0315 [17] Training loss: 0.0317 Validation loss: 0.0315 [18] Training loss: 0.0317 Validation loss: 0.0315 [19] Training loss: 0.0317 Validation loss: 0.0315 [20] Training loss: 0.0317 Validation loss: 0.0315 Note 1: Looking at the warning given, I'm not sure if that's the real reason why the model is not learning well. After all, I'm trying to predict the future values in the time-series data; therefore, 1 would be a plausible output dimension. Note 2: To train the model in mini-batches, I relied on the class DataLoader. When iterating over the X and Y batches in both train and validation DataLoaders, the dimensions of x_batches were 2, while the model expected 3. So, I used PyTorch's unsqueeze function to match the expected dimension as in x_batch.unsqueeze(dim=0) . I'm not sure if this is how I should have gone about it, which could also be the issue. AI: The issue was resolved once I used Tensor View to reshape the mini-batches for the features in the training and in the validation set. As a side note, view() enable fast and memory-efficient reshaping, slicing, and element-wise operations, by avoiding an explicit data copy. 
It turned out that in the earlier implementation torch.unsqueeze() did not reshape the batches into tensors with the dimensions (batch size, timesteps, number of features). Instead, the function unsqueeze(dim=0) returns a new tensor with a singleton dimension inserted at the Oth index. So, the mini batches for the feature sets is shaped as follows x_batch = x_batch.view([batch_size, -1, n_features]).to(device) Then, the new training loop becomes: for epoch in range(n_epochs): batch_losses = [] for x_batch, y_batch in train_loader: x_batch = x_batch.view([batch_size, -1, n_features]).to(device) # <--- y_batch = y_batch.to(device) loss = train_step(x_batch, y_batch) batch_losses.append(loss) training_loss = np.mean(batch_losses) train_losses.append(training_loss) with torch.no_grad(): batch_val_losses = [] for x_val, y_val in val_loader: x_val = x_val.view([batch_size, -1, n_features]).to(device) # <--- y_val = y_val.to(device) model.eval() yhat = model(x_val) val_loss = criterion(y_val, yhat).item() batch_val_losses.append(val_loss) validation_loss = np.mean(batch_val_losses) val_losses.append(validation_loss) print(f"[{epoch+1}] Training loss: {training_loss:.4f}\t Validation loss: {validation_loss:.4f}") Here's the output: [1] Training loss: 0.0235 Validation loss: 0.0173 [2] Training loss: 0.0149 Validation loss: 0.0086 [3] Training loss: 0.0083 Validation loss: 0.0074 [4] Training loss: 0.0079 Validation loss: 0.0069 [5] Training loss: 0.0076 Validation loss: 0.0069 ... [96] Training loss: 0.0025 Validation loss: 0.0028 [97] Training loss: 0.0024 Validation loss: 0.0027 [98] Training loss: 0.0027 Validation loss: 0.0033 [99] Training loss: 0.0027 Validation loss: 0.0030 [100] Training loss: 0.0023 Validation loss: 0.0028
H: How to replace values in a numpy array? I'm learning how to implement and evaluate a Logistic Regression Model, for this I need to change the values of my array from strings to 0 & 1. I have the following numpy ndarray as a result of a DataFrame.values call ['PAIDOFF', 'COLLECTION', 'COLLECTION', 'PAIDOFF', 'PAIDOFF', 'PAIDOFF', ...] I would like to know how can I change the values like: 'PAIDOFF' to 0 and 'COLLECTION' to 1 Any help will be much appreciated AI: import numpy as np a = np.array(['PAIDOFF', 'COLLECTION', 'COLLECTION', 'PAIDOFF']) f = lambda x: 1 if x == "COLLECTION" else 0 np.fromiter(map(f,a),dtype=np.int) Alternative: np.where(a == "COLLECTION",1,0)
H: Why does BERT embedding increase the number of tokens? I am new to DataScience and trying to implement BERT embedding for one of my problems. But I am having one doubt here. I am trying to embed the following sentence with BERT - "Twinkle twinkle little star". BERT tokenizer generates the following tokens - ['twin', '##kle', 'twin', '##kle', 'little', 'star'] But the final embedded tensor is having a dimension of [1,8,1024] Why is the number of tokens 8 instead of 6? For any text, I am observing that number of tokens in the final embedding is getting increased by 2. Can anyone please help me to understand this? I am giving the code snippet here - from transformers import BertTokenizer, BertForSequenceClassification, BertModel PRE_TRAINED_MODEL_PATH = 'BERT\wwm_cased_L-24_H-1024_A-16' tokenizer = BertTokenizer.from_pretrained(PRE_TRAINED_MODEL_PATH) model = BertModel.from_pretrained(PRE_TRAINED_MODEL_PATH) encoded_input = tokenizer(texts, return_tensors='pt', padding=True) emb = model(**encoded_input) AI: A [CLS] token is added to the beginning of the sentence, and a [SEP] token is added to the end. These two special tokens have specific functions: [CLS] is needed because it was used in the training loss of BERT to keep the first output position for a different purpose than the rest: it was used for the "next sentence prediction" loss, which trained the model to tell if the 2 text segments passed as input were consecutive in the original text or not. When finetuning BERT, the output of this position is normally used for sentence classification tasks. When BERT is used for feature extraction, the output vector at that position is used as sentence embedding. [SEP] is used to mark the ending of the sentence.
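You can see the two extra positions by comparing tokenize(), which does not add special tokens, with the ids produced by calling the tokenizer directly. A minimal sketch ("bert-base-cased" stands in for your local whole-word-masking checkpoint):
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
text = "Twinkle twinkle little star"
print(tokenizer.tokenize(text))                          # subword tokens only, no special tokens
ids = tokenizer(text)["input_ids"]
print(tokenizer.convert_ids_to_tokens(ids))              # ['[CLS]', ..., '[SEP]'] -> two extra positions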
H: What are the differences between MLP and DNN? So I have been reading about the topic for a while, but I did not find a clear answer as to why MLP and DNN are used interchangeably even though there are some differences between them. So far I have gathered some information: "The field of artificial neural networks is often just called neural networks or multi-layer perceptrons after perhaps the most useful type of neural network. A perceptron is a single-neuron (input, output, weights, activation) model that was a precursor to larger neural networks. MLP is a subset of DNN. While DNN can have loops, MLP are always feed-forward (a type of neural network architecture where the connections are "fed forward", i.e. do not form cycles, like in recurrent nets). A multilayer perceptron is a finite acyclic graph, unlike RNN and its subsets, which are cyclic in nature. MLP uses back propagation for training the network." So what makes an MLP different from a DNN? AI: You explained it already quite well. An MLP is a type of neural network, in the same way that CNNs, RNNs, and other types exist. DNN is an umbrella term for all types of neural networks. The reason some literature may use these terms interchangeably is likely due to the fact that MLPs are some of the oldest forms of neural networks and therefore, at the time the literature was written, MLPs and DNNs were ubiquitous.
H: Variance of product of a matrix and vector using Python I am trying to calculate the variance of the product of matrix A and vector b. As it was mathematically discussed on another post https://math.stackexchange.com/questions/2365166/what-is-the-variance-of-a-constant-matrix-times-a-random-vector, I have tried to do the calculation in python. To recap, we have \begin{equation} Y = \bf{Ab} \end{equation} where \begin{equation} \bf{b} \sim N(\mu_b, \bf{I}\sigma_b^2) \end{equation} Therefore \begin{equation} Var(Y) = \bf{A}\sigma_b^2\bf{A}^T \end{equation} I have tried this code with import numpy as np import statistics A = np.arange(12).reshape(4, 3) # some 4 by 3 array b = np.array([2,4,5]) # some 3 by 1 vector Y = np.dot(A,b) statistics.variance(Y) >> 1815 Vb = np.dot(np.identity(b.shape[0]), statistics.variance(b)) np.dot(A.dot(Vb), np.transpose(A)) >> array([[ 10., 28., 46., 64.], [ 28., 100., 172., 244.], [ 46., 172., 298., 424.], [ 64., 244., 424., 604.]]) However, the statistic.variance(Y) outputs a different result compared to np.dot(A.dot(Vb), np.transpose(A)). I wonder how should I do it properly in python? Your help is really appreciated! AI: To test formulas like these numerically, you need to compute sample statistics, like this: import numpy as np A = np.arange(12).reshape(4, 3) # Note: A is a fixed matrix b_avg = np.array([2, 4, 5]) # b is random, with a mean vector b_var = 1e0 * np.eye(b_avg.size) # and a (co)variance matrix # generate a random sample of b vectors n_sample = 10**4 b_sample = np.random.multivariate_normal( mean=b_avg, cov=b_var, size=n_sample).T # compute sample moments (each column is a sample) b_avg_smp = b_sample.mean(axis=1) b_var_smp = np.cov(b_sample) # compare sample moments to population moments print('b vector: statistics') print('Averages (population vs. sample):', b_avg.tolist(), b_avg_smp.round(1).tolist(), sep='\n') print('Variances (population vs. sample):', b_var.tolist(), b_var_smp.round(1).tolist(), sep='\n') This will give something like this: b vector: statistics Averages (population vs. sample): [2, 4, 5] [2.0, 4.0, 5.0] Variances (population vs. sample): [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]] [[1.0, -0.0, 0.0], [-0.0, 1.0, 0.0], [0.0, 0.0, 1.0]] i.e. the sample statistics are consistent with the specified $b$ distribution. Then we can transform the $b$ sample into a $y$ sample, and do the analogous comparison, using your analytically computed $y$ moments, i.e. # transform b moments into y moments (@ is matrix multiply) y_avg = A @ b_avg y_var = A @ b_var @ A.T # transform b sample into y sample, and compute sample moments y_sample = A @ b_sample y_avg_smp = y_sample.mean(axis=1) y_var_smp = np.cov(y_sample) # compare population vs. sample moments print('y vector: statistics') print('Averages (population vs. sample):', y_avg.tolist(), y_avg_smp.round(1).tolist(), sep='\n') print('Variances (population vs. sample):', y_var.tolist(), y_var_smp.round(1).tolist(), sep='\n') This will give something like this: y vector: statistics Averages (population vs. sample): [14, 47, 80, 113] [13.9, 46.8, 79.7, 112.7] Variances (population vs. sample): [[5.0, 14.0, 23.0, 32.0], [14.0, 50.0, 86.0, 122.0], [23.0, 86.0, 149.0, 212.0], [32.0, 122.0, 212.0, 302.0]] [[5.0, 14.0, 23.1, 32.1], [14.0, 50.1, 86.1, 122.1], [23.1, 86.1, 149.0, 212.0], [32.1, 122.1, 212.0, 302.0]] i.e. the sample $y$ statistics are consistent with the analytically computed $y$ distribution. 
(Alternatively, the uncertainties package will automatically do linearized error propagation for Gaussian random variables.)
H: GridSearchCV with custom tune grid What is the best way to perform custom parameter search CV with the Scikit-learn API? I really like GridSearchCV. However for my case the param_grid parameter is inflexible because it will search over the entire span of parameter combinations. Ideally, I would like to provide my own parameter space in a dataframe, one column for each parameter. Thus, is there an appropriate class within Scikit-learn to help me achieve this result? AI: GridSearchCV does allow param_grid to be a list of grid-dicts, which sometimes is sufficient. In this case, separate grids are generated and their union is searched. There isn't quite a convenient implementation by which you provide your own list of hyperparameter points. But looking at the source code for GridSearchCV, you'll notice that it's amazingly sparse. Most everything is handled by the inherited class BaseSearchCV. The grid-specific stuff is also factored out: def _run_search(self, evaluate_candidates): """Search all candidates in param_grid""" evaluate_candidates(ParameterGrid(self.param_grid)) The ParameterGrid class is what's responsible for turning the (possibly list of) dict(s) into a grid(s). So you could probably easily write a custom class, inheriting from BaseSearchCV, where you pass the set of hyperparameter points you want and define _run_search to just evaluate each element from your set. In fact, already evaluate_candidates is defined assuming that the hyperparameter points live in an iterable candidate_params; the ParameterGrid does all the hard work there, but in your case you can just feed your list directly.
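If you'd rather stay on public API instead of subclassing BaseSearchCV, you can also turn each row of your dataframe into its own one-point grid and pass the resulting list to GridSearchCV. A minimal sketch (the estimator and the parameter names/values are illustrative):
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# each dataframe row is one hyperparameter point to evaluate
points = pd.DataFrame({"n_estimators": [100, 200, 300],
                       "max_depth": [3, 5, 10]})
# wrap every value in a one-element list so each row becomes its own one-point "grid"
param_grid = [{name: [value] for name, value in row.items()}
              for row in points.to_dict(orient="records")]
search = GridSearchCV(RandomForestClassifier(), param_grid=param_grid, cv=5)
# search.fit(X, y)   # X, y are your training data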
H: What is the name of this visualization with a circle and internal arcs? I came across this visualization where there is a circle with data points on the circle and internal lines (arcs or edges) between the points on the circle. What is the name of this type of visualization? How would I generate it from a 2-D table like the one below? I want the points A, B, C, D, and E on the circle and the value from point i to j to be an arc between them. Would something like d3.js be able to generate it? AI: This is a chord diagram. It is usually generated from a square matrix like the one you have. This can be done with specific libraries, at least in R (circlize package) and Python (plotly). It's also possible to do it with d3.js, but apparently not so easily. General advice: I find the "X Graph Gallery" websites quite convenient for exploring different types of graphs: R graph gallery, Python graph gallery and D3 graph gallery.
H: How to apply Normalisation using the MinMaxScaler() to all Columns, but Exclude the Categorical? Below, I have the following datatset: sample_df.head(2) ID S_LENGTH S_WIDTH P_LENGTH P_WIDTH SPECIES ------------------------------------------------------------------- 1 3.5 2.5 5.6 1.7 VIRGINICA 2 4.5 5.6 3.4 8.7 SETOSA Therefore, how to I apply normalisation to this dataset using the following code below to all my columns, excluding the ID and SPECIES columns? I basically want to use the preprocessing.MinMaxScaler() to apply normalisation, so that all the features are in a range of 0 and 1. This is the code I am using... min_max = preprocessing.MinMaxScaler() min_max.fit_transform(sample_df) ...but when I execute it, I get this error: ValueError: could not convert string to float: 'SETOSA' Alternatively, if I do this... min_max = preprocessing.MinMaxScaler() min_max.fit_transform(sample_df[['S_LENGTH', 'S_WIDTH']]) sample_df.head(2) ...I get this error: AttributeError: 'numpy.ndarray' object has no attribute 'sample' Any help on how to accomplish what I want to do is much appreciated! AI: If you want to apply the result of fit_transform, you must assign to your columns. columns = ['S_LENGTH', 'S_WIDTH', 'P_LENGTH', 'P_WIDTH'] min_max = preprocessing.MinMaxScaler() df[columns] = min_max.fit_transform(df[columns]) df.head() Output ID S_LENGTH S_WIDTH P_LENGTH P_WIDTH SPECIES 0 1 0.0 0.0 1.0 0.0 VIRGINICA 1 2 1.0 1.0 0.0 1.0 SETOSA
H: How to add a column for descending row numbers into a dataset in R I am new to R and would like to insert a new column that numbers the rows of a large dataset. I have no idea how to use 'mutate()' to insert this. Would appreciate any help. Thanks. AI: Use mutate in combination with row_number as follows:
df %>% mutate(row = row_number())
If you need the numbers in descending order instead, you can reverse them, e.g. df %>% mutate(row = n() - row_number() + 1).
H: Is the output size of the last layer of a standard fully connected neural network the same as the input size? Let's say I have a neural net with Dense layers. The input layer has 3 neurons, the single hidden layer has 5 neurons, and the final output layer has 2 neurons. For layer 1, 3 inputs go in and 5 outputs come out. For layer 2, the 5 outputs from layer 1 go in and 2 outputs come out. So for layer 3, would 2 inputs go in and 2 outputs come out? AI: In a Dense NN, the input to a layer depends on the output of the previous layer and on the layer's own neuron count. The "previous layer" of the first layer is the set of input data features, and the output of a layer is equal to the layer's neuron count. For your case:
- The first layer will have M (input features) * 3 inputs going in, i.e. M to each neuron, and 3 coming out.
- The second layer will have 3 (previous layer) * 5 inputs going in and 5 coming out.
- The last layer will have 5 (previous layer) * 2 inputs going in and 2 coming out.
A neural network is a complex tensor operation; the arrows and circles are logical representations. Each arrow carries a weight and each circle carries the activation function and the bias term, so you may count this way too. Validate this with the Keras Model.summary() result, which shows the parameter count.
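As a quick check of the counting above, here is a minimal Keras sketch of the exact network from the question (3 input features, a 5-neuron hidden layer, a 2-neuron output layer); the parameter counts shown by summary() should match weights plus biases per layer:
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(5, activation="relu", input_shape=(3,)),   # 3*5 weights + 5 biases = 20 params
    keras.layers.Dense(2, activation="softmax"),                  # 5*2 weights + 2 biases = 12 params
])
model.summary()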
H: Tuning the model parameters vs the parameters of the optimizer for deep neural networks? I understand that there are rarely general recipes in the field of machine learning, that many results can be achieved only by trial and error, and that they are task-specific as well. My question is: if the model doesn't give the desired quality, but one may expect, based on some prior assumptions or knowledge, that a model of this or a similar architecture can achieve a reasonable result, which is, in general, the better strategy, more successful in the majority of cases:
Tuning the parameters of the optimizer (playing with the learning rate, random seeds, weight decay, momentum, etc.), hoping that it converges to a better optimum.
Changing some architectural properties of the model (number of hidden layers, dropout rate, width, number of filters in a CNN).
Here I assume that the current network is a sensible choice for solving some complicated task - not a narrow 1-hidden-layer MLP. Minimizing the loss function doesn't always improve the quality of the model on the test data (the well-known problem of overfitting), but here let us assume that one has enough data, and that the training and test sets are organized in such a way that a good score on the training examples would also lead to good accuracy on the test data. AI: The short answer is that you want the fastest way to reach the performance you expect/desire. This would mean first playing with some hyperparameters like the learning rate, the initialisation strategy, trying different optimizers, etc. Having said that, if you managed to write your training process such that it is trivial to swap out the model completely for another one, you can also give that a go. If you have a model that is known to work well for a given type of task, but it is really under-performing (e.g. not converging, or a large difference to results in the literature), then I would first check for some common issues in the training process:
data quality: are the inputs scaled correctly?
data volume: should you add some data? If you don't have many samples, can you get more or use augmentations?
loss function: are the outputs of the model being correctly consumed and the losses correctly computed? Are you even using the right loss formulation?
This is essentially debugging your training setup. I would convince myself as far as possible that none of these issues exist before building a different architecture or taking a radically different approach. The main reason for this is that I assume you did some research into the problem and the type of models that are known to perform well - so if this doesn't work, what makes you think another architecture will? :)
H: How to convert input numpy data to tensorflow tf.data to train model in tensorfow? I am working on an image classification problem using TensorFlow. I have converted my input image dataset and label into NumPy data but it takes more time and more ram to load all the data into memory because I have 90K images. I would like to use TensorFlow data API using tf.keras.preprocessing.image_dataset_from_director Here's my current code to train NumPy data. I want to convert this code into tf.data(tf.keras.preprocessing.image_dataset_from_director) to train my huge dataset. INIT_LR = 1e-4 EPOCHS = 20 BS = 32 # grab the list of images in our dataset directory, then initialize # the list of data (i.e., images) and class images print("[INFO] loading images...") imagePaths = list(paths.list_images(args["dataset"])) data = [] labels = [] # loop over the image paths for imagePath in imagePaths: # extract the class label from the filename label = imagePath.split(os.path.sep)[-2] # load the input image (224x224) and preprocess it image = load_img(imagePath, target_size=(224, 224)) image = img_to_array(image) image = preprocess_input(image) # update the data and labels lists, respectively data.append(image) labels.append(label) # convert the data and labels to NumPy arrays data = np.array(data, dtype="float32") labels = np.array(labels) # perform one-hot encoding on the labels lb = LabelBinarizer() labels = lb.fit_transform(labels) labels = to_categorical(labels) # partition the data into training and testing splits using 75% of # the data for training and the remaining 25% for testing (trainX, testX, trainY, testY) = train_test_split(data, labels, test_size=0.20, stratify=labels, random_state=42) # construct the training image generator for data augmentation aug = ImageDataGenerator( rotation_range=20, zoom_range=0.15, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.15, horizontal_flip=True, fill_mode="nearest") # load the MobileNetV2 network, ensuring the head FC layer sets are # left off baseModel = MobileNetV2(weights="imagenet", include_top=False, input_tensor=Input(shape=(224, 224, 3))) # construct the head of the model that will be placed on top of the # the base model headModel = baseModel.output headModel = AveragePooling2D(pool_size=(7, 7))(headModel) headModel = Flatten(name="flatten")(headModel) headModel = Dense(128, activation="relu")(headModel) headModel = Dropout(0.5)(headModel) headModel = Dense(2, activation="softmax")(headModel) # place the head FC model on top of the base model (this will become # the actual model we will train) model = Model(inputs=baseModel.input, outputs=headModel) # loop over all layers in the base model and freeze them so they will # *not* be updated during the first training process for layer in baseModel.layers: layer.trainable = False # compile our model print("[INFO] compiling model...") opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS) model.compile(loss="binary_crossentropy", optimizer=opt, metrics=["accuracy"]) # train the head of the network print("[INFO] training head...") H = model.fit( aug.flow(trainX, trainY, batch_size=BS), steps_per_epoch=len(trainX) // BS, validation_data=(testX, testY), validation_steps=len(testX) // BS, epochs=EPOCHS) # make predictions on the testing set print("[INFO] evaluating network...") predIdxs = model.predict(testX, batch_size=BS) # for each image in the testing set we need to find the index of the # label with corresponding largest predicted probability predIdxs = np.argmax(predIdxs, axis=1) # show a nicely 
formatted classification report print(classification_report(testY.argmax(axis=1), predIdxs, target_names=lb.classes_)) # serialize the model to disk print("[INFO] saving mask detector model...") model.save(args["model"], save_format="h5") AI: You could try using the flow_from_directory() method of your ImageDataGenerator, which keeps the augmentation. Note that flow_from_directory() reads images straight from a directory containing one sub-folder per class, so it does not take the NumPy arrays trainX/trainY; it replaces the manual loading loop entirely, along the lines of:
H = model.fit(
    aug.flow_from_directory(args["dataset"], target_size=(224, 224), batch_size=BS, class_mode="categorical"),
    ...)
(Since the manual preprocess_input call disappears, you may also want to pass preprocessing_function=preprocess_input when constructing the ImageDataGenerator.) If you start using a tf.data.Dataset directly, you will get more control over how the data is read from disk (caching, number of threads, etc.), but you will lose the easy image augmentation you get with the ImageDataGenerator.
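Since you mentioned tf.keras.preprocessing.image_dataset_from_directory, here is a minimal sketch of that route (the directory path is a placeholder for your dataset folder with one sub-folder per class; it assumes TF 2.3+ and reuses constants in the style of your script). You lose ImageDataGenerator's built-in augmentation, but you gain the tf.data pipeline with prefetching:
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

IMAGE_SIZE = (224, 224)
BS = 32

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "path/to/dataset", validation_split=0.2, subset="training", seed=42,
    image_size=IMAGE_SIZE, batch_size=BS, label_mode="categorical")
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "path/to/dataset", validation_split=0.2, subset="validation", seed=42,
    image_size=IMAGE_SIZE, batch_size=BS, label_mode="categorical")

AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.map(lambda x, y: (preprocess_input(x), y)).prefetch(AUTOTUNE)
val_ds = val_ds.map(lambda x, y: (preprocess_input(x), y)).prefetch(AUTOTUNE)

# then: model.fit(train_ds, validation_data=val_ds, epochs=EPOCHS)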
H: How to detect misclasification data after multiclass classification? I have trained a neural network multi-class classification model with around 150 classes having around 85% accuracy. Once the model is trained and deployed, it's predicting on new data and I am saving the logs. Now I have to detect those data-points which is wrongly predicted by the model. For example, The model predicted on 10 data-points, out of which there might be 3 data-points which are wrongly classified by the model. Is there any way to get those data-points? I have the following data. The deployed model. The data-points. Corresponding predicted classes by the model. Confidence scores on the prediction. I know that using confidence score I might get some idea on wrong prediction, but I am thinking is there any other way to get that? AI: Sadly, this isn't possible. Simply put, if you could know for sure that your model made a mistake, then, your model will never make mistakes. This is the painful reality of deploying models in practice. You will need manual validations to confirm whether your model performed correctly. In some systems, you may be able to outsource this to the user itself (i.e. give them an option to say the model was wrong), but you'll likely want to run independent validation. Now, there are indeed ways to make this part less painful. First, as you said, you should take a look at the confidence scores. It should indeed be less necessary to verify predictions where the model is highly confident. You should also use your confusion matrix to figure out which predictions are more crucial to verify. Indeed, some mistakes may be less harmful than others. What you could also do is experiment with building auxiliary binary models that take in the input, the model output, and predicts whether the model made a mistake. This isn't guaranteed to work, but it may give you better confidence scores for your model.
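One practical way to act on the confidence scores mentioned above is to route low-confidence predictions to manual review. A minimal sketch, assuming model and the incoming batch X_new are defined as in your deployment code and the threshold is an arbitrary value you would tune on labelled validation data:
import numpy as np

probs = model.predict(X_new)          # shape (n_samples, n_classes), softmax outputs
pred_class = probs.argmax(axis=1)
confidence = probs.max(axis=1)
threshold = 0.6                       # hypothetical cut-off, tune it on a labelled validation set
needs_review = np.where(confidence < threshold)[0]   # indices to send for manual checking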
H: Converting paragraphs into sentences I'm looking for ways to extract sentences from paragraphs of text containing different types of punctuations and all. I used SpaCy's Sentencizer to begin with. Sample input python list abstracts: ["A total of 2337 articles were found, and, according to the inclusion and exclusion criteria used, 22 articles were included in the study. Inhibitory activity against 96% (200/208) and 95% (312/328) of the pathogenic fungi tested was described for Eb and [(PhSe)2], respectively. Including in these 536 fungal isolates tested, organoselenium activity was highlighted against Candida spp., Cryptococcus ssp., Trichosporon spp., Aspergillus spp., Fusarium spp., Pythium spp., and Sporothrix spp., with MIC values lower than 64 mug/mL. In conclusion, Eb and [(PhSe)2] have a broad spectrum of in vitro inhibitory antifungal activity."] Code: from spacy.lang.en import English nlp = English() sentencizer = nlp.create_pipe("sentencizer") nlp.add_pipe(sentencizer) # read the sentences into a list for doc in abstracts[:5]: do = nlp(doc) for sent in list(do.sents): print(sent) Output: A total of 2337 articles were found, and, according to the inclusion and exclusion criteria used, 22 articles were included in the study. Inhibitory activity against 96% (200/208) and 95% (312/328) of the pathogenic fungi tested was described for Eb and [(PhSe)2], respectively. Including in these 536 fungal isolates tested, organoselenium activity was highlighted against Candida spp., Cryptococcus ssp., Trichosporon spp., Aspergillus spp., Fusarium spp., Pythium spp., and Sporothrix spp., with MIC values lower than 64 mug/mL. In conclusion, Eb and [(PhSe)2] have a broad spectrum of in vitro inhibitory antifungal activity. It works fine for normal text but fails when there are dots (.) present in the sentence elsewhere other than at the end, which breaks the whole sentence as shown in the above output. How can we address this? Are there any other proven methods or libraries to perform this task? AI: Spacy's Sentencizer is very simple. However, Spacy 3.0 includes Sentencerecognizer which basically is a trainable sentence tagger and should behave better. Here is the issue with the details of its inception. You can train it if you have segmented sentence data. Another option is using NLTK's sent_tokenize, which should give better results than Spacy's Sentencizer. I have tested it with your example and it works well. from nltk.tokenize import sent_tokenize sent_tokenize("A total....") Finally, if for some abbreviations sent_tokenize does not work well and you have a list of abbreviations to be supported (like "spp." in your examples), you could use NLTK's PunktSentenceTokenizer: from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktParameters punkt_param = PunktParameters() abbreviation = ['spp.'] punkt_param.abbrev_types = set(abbreviation) tokenizer = PunktSentenceTokenizer(punkt_param) tokenizer.tokenize("A total ....")
H: Problem reading python code Can someone explain the following Python code? value_geojson["features"][0]["properties"]["title"] value_geojson is a GeoJSON variable. I like to think that I'm not a total Python newbie, but these are too many [] for me. I am a GeoJSON newbie though. Would appreciate help on that one, even though I know it's probably too simple for SE. AI: See sections 3.2 and 3.3 of the GeoJSON standard. It looks like value_geojson is a FeatureCollection object, which has a member "features" that is a list of Feature objects. So value_geojson["features"][0] is just the first Feature object. Those have a few members, including "properties", which is an arbitrary JSON object. At that point we're outside of the GeoJSON standard, so whatever "title" means is more context-specific.
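To make the nesting concrete, here is a tiny hand-built example of that structure (the coordinates and the "title" value are made up); the chained square brackets just walk down dict keys and one list index, a step at a time:
value_geojson = {
    "type": "FeatureCollection",
    "features": [                                        # a list of Feature objects
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [13.4, 52.5]},
            "properties": {"title": "Berlin"},           # arbitrary JSON object
        },
    ],
}
print(value_geojson["features"][0]["properties"]["title"])   # -> Berlin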
H: Error when trying .transform for OrdinalEncoder from Scikit Learn I'm having a lot of issues using scikit learn recently and was hoping someone could help me with my problem. I can use other methods to ordinal encode but i want to figure this one out. for i in range(len(ordinal_orders)): ord_en = OrdinalEncoder(categories = {0:ordinal_orders[i]}) X_train.loc[:,ordinal_features[i]] = ord_en.fit_transform(X_train.loc[:,ordinal_features[i]].values.reshape(-1,1)) This works fine but when i try and apply this transformation to the test set i get an error. for i in range(len(ordinal_orders)): ord_en = OrdinalEncoder(categories = [ordinal_orders[i]]) X_test.loc[:,ordinal_features[i]] = ord_en.transform(X_test.loc[:,ordinal_features[i]].values.reshape(-1,1)) i get the value error AttributeError Traceback (most recent call last) <ipython-input-45-24cb27da6829> in <module> 1 for i in range(len(ordinal_orders)): 2 ord_en = OrdinalEncoder(categories = [ordinal_orders[i]]) ----> 3 X_test.loc[:,ordinal_features[i]] = ord_en.transform(X_test.loc[:,ordinal_features[i]].values.reshape(-1,1)) 4 /opt/anaconda3/lib/python3.8/site-packages/sklearn/preprocessing/_encoders.py in transform(self, X) 698 Transformed input. 699 """ --> 700 X_int, _ = self._transform(X) 701 return X_int.astype(self.dtype, copy=False) 702 /opt/anaconda3/lib/python3.8/site-packages/sklearn/preprocessing/_encoders.py in _transform(self, X, handle_unknown) 105 X_mask = np.ones((n_samples, n_features), dtype=np.bool) 106 --> 107 if n_features != len(self.categories_): 108 raise ValueError( 109 "The number of features in X is different to the number of " AttributeError: 'OrdinalEncoder' object has no attribute 'categories_' The ordinal features and orders are ordinal_features=['LotShape','ExterQual','ExterCond','BsmtQual','BsmtCond', 'BsmtExposure','BsmtFinType1','BsmtFinType2','HeatingQC','KitchenQual', 'FireplaceQu','GarageQual','GarageCond','GarageFinish','Fence','PoolQC'] ordinal_orders=[ #LotShape ['Reg','IR1' ,'IR2','IR3'], #ExterQual ['Fa','TA','Gd','Ex'], #ExterCond ['Po','Fa','TA','Gd','Ex'], #BsmtQual ['None','Fa','TA','Gd','Ex'], #BsmtCond ['None','Po','Fa','TA','Gd','Ex'], #BsmtExposure ['None','No','Mn','Av','Gd'], #BsmtFinType1 ['None','Unf','LwQ', 'Rec','BLQ','ALQ' , 'GLQ' ], #BsmtFinType2 ['None','Unf','LwQ', 'Rec','BLQ','ALQ' , 'GLQ' ], #HeatingQC ['Po','Fa','TA','Gd','Ex'], #KitchenQual ['Fa','TA','Gd','Ex'], #FireplaceQu ['None','Po','Fa','TA','Gd','Ex'], #GarageQual ['None','Po','Fa','TA','Gd','Ex'], #GarageCond ['None','Po','Fa','TA','Gd','Ex'], #GarageFinish ['None','Unf','RFn','Fin'], #Fence ['None','MnWw','GdWo','MnPrv','GdPrv'], #PoolQC ['None','Fa','Gd','Ex'] ] train and test set is just a generic dataset from kaggle, theyre not an issue here. Any sort of clarification would be nice! 
AI: The error itself comes from the second loop: you create a brand-new OrdinalEncoder for the test set and call .transform on it without ever fitting it, so the fitted attribute categories_ does not exist yet. Encoders must be fitted on the training data and then reused on the test data. Assuming you are trying to apply a preprocessing step to a data frame, I would go as follows:

from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OrdinalEncoder

ordinal_features=['LotShape','ExterQual','ExterCond','BsmtQual','BsmtCond',
                  'BsmtExposure','BsmtFinType1','BsmtFinType2','HeatingQC','KitchenQual',
                  'FireplaceQu','GarageQual','GarageCond','GarageFinish','Fence','PoolQC']

ordinal_orders=[
    #LotShape
    ['Reg','IR1','IR2','IR3'],
    #ExterQual
    ['Fa','TA','Gd','Ex'],
    #ExterCond
    ['Po','Fa','TA','Gd','Ex'],
    #BsmtQual
    ['None','Fa','TA','Gd','Ex'],
    #BsmtCond
    ['None','Po','Fa','TA','Gd','Ex'],
    #BsmtExposure
    ['None','No','Mn','Av','Gd'],
    #BsmtFinType1
    ['None','Unf','LwQ','Rec','BLQ','ALQ','GLQ'],
    #BsmtFinType2
    ['None','Unf','LwQ','Rec','BLQ','ALQ','GLQ'],
    #HeatingQC
    ['Po','Fa','TA','Gd','Ex'],
    #KitchenQual
    ['Fa','TA','Gd','Ex'],
    #FireplaceQu
    ['None','Po','Fa','TA','Gd','Ex'],
    #GarageQual
    ['None','Po','Fa','TA','Gd','Ex'],
    #GarageCond
    ['None','Po','Fa','TA','Gd','Ex'],
    #GarageFinish
    ['None','Unf','RFn','Fin'],
    #Fence
    ['None','MnWw','GdWo','MnPrv','GdPrv'],
    #PoolQC
    ['None','Fa','Gd','Ex']
]

transformer = make_column_transformer((OrdinalEncoder(categories=ordinal_orders), ordinal_features),
                                      remainder="passthrough").fit(X_train)

Then you can apply the fitted transformer to both X_train and X_test:
transformer.transform(X_train)
or
transformer.transform(X_test)
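If you prefer to keep the original per-column loop instead, the key is to fit each encoder on the training column once and reuse that same fitted object for the test column. A sketch under that assumption (X_train, X_test and the two lists above are assumed to be in scope):

from sklearn.preprocessing import OrdinalEncoder

fitted_encoders = {}
for feature, order in zip(ordinal_features, ordinal_orders):
    ord_en = OrdinalEncoder(categories=[order])
    X_train.loc[:, feature] = ord_en.fit_transform(X_train[[feature]]).ravel()
    fitted_encoders[feature] = ord_en   # keep the encoder fitted on the training data

# reuse the already-fitted encoders on the test set
for feature in ordinal_features:
    X_test.loc[:, feature] = fitted_encoders[feature].transform(X_test[[feature]]).ravel()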
H: What is the difference between cache() vs prefetch() in tensorflow? I have gone through the TensorFlow documentation. What is the difference between cache() vs prefetch() in TensorFlow? When should I use the cache() function and when should I use the prefetch() function? AI: The tf.data.Dataset.cache transformation can cache a dataset, either in memory or on local storage. This will save some operations (like file opening and data reading) from being executed during each epoch. The next epochs will reuse the data cached by the cache transformation. Prefetch overlaps the preprocessing and model execution of a training step. While the model is executing training step s, the input pipeline is reading the data for step s+1. Doing so reduces the step time to the maximum (as opposed to the sum) of the training and the time it takes to extract the data. Source
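As a small illustration (the parse step here is a made-up stand-in for an expensive decode/preprocess operation), a typical tf.data input pipeline chains the two: cache after the expensive work so it only runs in the first epoch, and prefetch at the very end so the next batch is prepared while the current one is training:

import tensorflow as tf

def parse(x):
    # stand-in for an expensive decode/preprocess step
    return tf.cast(x, tf.float32) / 255.0

dataset = (
    tf.data.Dataset.range(10_000)
    .map(parse, num_parallel_calls=tf.data.AUTOTUNE)
    .cache()                        # parse() results are reused from the 2nd epoch on
    .shuffle(1_000)
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)     # overlap input pipeline with model execution
)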
H: Will image modification get me a better machine learning model? Will modifying my images get me a better machine learning model? I have the following scenario: a fixed camera takes photos of a process, and the process has a few states. I want to train a model that, given a photo, classifies which state the photo belongs to, i.e. what the current state of the process is. For training I was given only one photo for each process state. My question is whether I should train the model with one photo per class, or take that one photo and apply random modifications to it so that I get a few photos per class. I am thinking that it is better for training to have just one real photo rather than many modified ones. Am I right? I use the ML.NET plugin for Visual Studio to train the image classification model. AI: One can never say with 100% certainty that a data preprocessing approach will yield positive results, so if you are trying different things, always test and use the metrics to see what works best. With that said, what you describe is often referred to as data augmentation, namely the generation of more data points, typically from existing data points. It is very common practice, so I encourage you to try it on your problem. Common techniques include flipping, shifting, or rotating the original images (see the sketch below). The problem of having few examples per class is called zero- or few-shot learning. Specific techniques such as siamese networks have been devised to tackle it. Here is a good blog to use as reference.
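The question uses ML.NET, but just to illustrate the augmentation idea, here is a minimal sketch in Python with TensorFlow (the file name and the number of generated variants are made up):

import tensorflow as tf

def random_variant(image):
    # each call returns a slightly different version of the same photo
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.random_contrast(image, lower=0.9, upper=1.1)
    return image

original = tf.image.convert_image_dtype(
    tf.io.decode_jpeg(tf.io.read_file("state_1.jpg")), tf.float32)
augmented = [random_variant(original) for _ in range(20)]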
H: Trying to run a kaggle notebook I found an interesting problem on Kaggle and more or less solved it with my limited knowledge of machine learning. I was curious how other people solved it, so I checked the solution with the highest vote count. The solution includes a line where it implements and trains a model.

params = {'num_leaves': 8,
          'min_data_in_leaf': 42,
          'objective': 'binary',
          'max_depth': 5,
          'learning_rate': 0.01,
          'boosting': 'gbdt',
          'bagging_freq': 5,
          'feature_fraction': 0.8201,
          'bagging_seed': 11,
          'reg_alpha': 1,
          'reg_lambda': 4,
          'random_state': 42,
          'metric': 'auc',
          'verbosity': -1,
          'subsample': 0.81,
          'num_threads': 4}

oof_lgb, scores = train_model(X, y, params=params, folds=folds, model_type='lgb', plot_feature_importance=True)

But it is nowhere mentioned what kind of model he is using, and running the cell causes this error:

NameError: name 'train_model' is not defined

Do any of you know which model he is using, so that I can run his notebook on my computer? AI: The actual code is included in the notebook: if you click on the 'Code' button just above the cell with the code you quoted, you will see the code used to define the train_model function. If you look inside that function, you will see that it can use different models based on the model_type argument, and in the line of code you linked, 'lgb' means that a gradient boosting model from the LightGBM library (usually imported as lgb) is used.
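To give a sense of what such a function typically does, here is a hypothetical, heavily simplified version for the 'lgb' case (this is not the kernel author's actual code — copy the real train_model from the notebook — and it assumes X is a pandas DataFrame and y a Series):

import lightgbm as lgb
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import KFold

def train_model(X, y, params, folds, num_rounds=1000):
    oof = np.zeros(len(X))          # out-of-fold predictions
    scores = []
    for train_idx, valid_idx in folds.split(X, y):
        train_set = lgb.Dataset(X.iloc[train_idx], label=y.iloc[train_idx])
        valid_set = lgb.Dataset(X.iloc[valid_idx], label=y.iloc[valid_idx])
        model = lgb.train(params, train_set, num_boost_round=num_rounds,
                          valid_sets=[valid_set])
        preds = model.predict(X.iloc[valid_idx])
        oof[valid_idx] = preds
        scores.append(roc_auc_score(y.iloc[valid_idx], preds))
    return oof, scores

folds = KFold(n_splits=5, shuffle=True, random_state=42)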
H: Machine Learning for medical researchers My friend is a medical researcher and he wants to use machine learning for prediction. Is there anyone who is not from a computer science background but learnt programming and machine learning in a very short time? And how? AI: He can use no-code ML platforms such as RapidMiner Studio, Google ML Kit, Orange, and BigML. Also, this article is a good introduction to RapidMiner: https://medium.com/analytics-vidhya/machine-learning-for-programmers-and-non-programmers-f8568d357750
H: How to retrieve column names from applying a wrapper method in feature selection? This question probably has a simple answer to it, so I will get to the point... How do I retrieve the names of the columns from applying a wrapper method in feature selection? Code I have used:

from mlxtend.feature_selection import SequentialFeatureSelector as SFS

X = df[['A','B','C','D']].values
y = df[['F']].values

classifier = KNeighborsClassifier(n_neighbors=7)

code = SFS(classifier,
           k_features=5,
           forward=True,
           floating=False,
           verbose=8,
           scoring='accuracy'
           )

code.fit(X, y)
code.k_feature_names_

Output:

Features: 3/3 -- score: 0.78
('1' '2', '3')

Therefore, how do I retrieve the names of the features ('1' '2', '3')? AI: Based on this scikit-learn documentation, with scikit-learn's SequentialFeatureSelector you can get a boolean mask (in the same order as the input features) via the get_support method. With mlxtend's SFS, which you are using here, the fitted selector exposes the selected column positions in k_feature_idx_; since you passed X as a plain NumPy array, k_feature_names_ can only report those positions as strings, which is why you see ('1', '2', '3') instead of real names. Either fit on the DataFrame itself or map the indices back to the column names, as sketched below.
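A minimal sketch of the index-to-name mapping (assuming the fitted selector code and the original df from the question are still in scope):

# mlxtend stores the positions of the selected columns after fitting
feature_cols = ['A', 'B', 'C', 'D']
selected_names = [feature_cols[i] for i in code.k_feature_idx_]
print(selected_names)              # e.g. ['B', 'C', 'D']

# alternatively, fit on the DataFrame directly so the names are kept:
# code.fit(df[feature_cols], df['F'])
# code.k_feature_names_            # now returns the actual column names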
H: Transformer model: Why are word embeddings scaled before adding positional encodings? While going over a Tensorflow tutorial for the Transformer model I realized that their implementation of the Encoder layer (and the Decoder) scales word embeddings by the square root of the embedding dimension before adding positional encodings. Notice that this is different from scaling the dot-product attention. I'm referring to the 3rd line of the call method of the Encoder class here: https://www.tensorflow.org/tutorials/text/transformer#encoder

def call(self, x, training, mask):
    seq_len = tf.shape(x)[1]
    # adding embedding and position encoding.
    x = self.embedding(x)  # (batch_size, input_seq_len, d_model)
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x += self.pos_encoding[:, :seq_len, :]
    x = self.dropout(x, training=training)
    for i in range(self.num_layers):
        x = self.enc_layers[i](x, training, mask)
    return x  # (batch_size, input_seq_len, d_model)

I could not find any mention of this scaling in the papers I've read so far. People always show the input to the encoder as WE + PE, that is, word embedding plus positional encoding. But this implementation seems to use sqrt(d_model) * WE + PE. My questions: Have you ever seen this extra scaling step mentioned in a paper? I didn't find it in "Attention is all you need" (Vaswani et al.). What is this additional scaling trying to achieve? AI: This is specified in the original Transformer paper, at the end of section 3.4: Transcription: 3.4 Embeddings and Softmax Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension d_model. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to [24]. In the embedding layers, we multiply those weights by √d_model. This aspect is not justified by the authors, either in the paper or anywhere else. It was specifically asked as an issue in the original implementation by Google with no response. Other implementations of the Transformer have also wondered if this was actually needed (see this, this and this). Some hypothesized arguments (source) are:

It is related to the weight sharing between the decoder embedding and the decoder pre-softmax linear weights.
It is not actually needed.
It is to make the positional encoding relatively smaller. This means the original meaning in the embedding vector won’t be lost when we add them together.

For reference, there are other StackExchange questions discussing this (see this and this).
H: Is it ok to fill a pandas dataframe with NaN values? Is it correct to fill a pandas dataframe with NaN values? Specifically: if I have a dataframe with a user name and their age, is it ok to fill the age column with a mix of int and NaN values?

Names Age
Lisa  25
Jack  NaN
Tom   32

Later on I want to work with this dataframe; will I get any problems if I have NaN values in it? AI: It depends on what problems you are afraid of:

regarding "technical" issues, it is ok to have NaNs in your dataframe; afterwards you can apply pd.isna(column_name) per attribute to get a boolean mask to find those unknown values per column, more info here
in case your problem is not knowing the actual values, one option is imputation, but not all types of variables are easily imputable (nor does it make sense to impute all of them)
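A short sketch of the masking and imputation mentioned above (the median is just one possible fill value):

import numpy as np
import pandas as pd

df = pd.DataFrame({"Names": ["Lisa", "Jack", "Tom"],
                   "Age": [25, np.nan, 32]})

# note: a numeric column containing NaN is stored as float (or pandas' nullable Int64), not int
mask = df["Age"].isna()          # boolean mask of the unknown ages
print(df[mask])                  # rows with a missing Age

df["Age"] = df["Age"].fillna(df["Age"].median())   # simple imputation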
H: ML/NN as Function Evaluator for further Optimization (maximization) - Practical Example I am working on a production optimization problem; a very similar idea to what is described by Vegard Flovik How to use machine learning for production optimization. The following image, taken from the referred post, summarizes it very well: First step is obvious, and I do have a model in the form of a machine learning or neural network model. How would I go about the second step? How can I use the trained model as the function evaluator for further multi-dimensional nonlinear optimization (e.g. maximization), either via SciPy, Bayesian Optimization, etc.? I cannot seem to find a practical example. Having a closed-form analytical function as the objective of an optimization problem is well-established. The article Optimization with SciPy and application ideas to machine learning by Tirthajyoti Sarkar gives a few examples using SciPy and introduces packages that do optimization with bound constraints and more. Yet the examples are quite simple (a closed-form mathematical function) and he only glosses over the extension of such an idea to using an NN as the objective function. I'm quoting: You are free to choose an analytical function, a deep learning network (perhaps as a regression model), or even a complicated simulation model, and throw them all together into the pit of optimization. Any leads/hints/links are appreciated! [Appendix] In order to have a concrete example, let's imagine we have a dummy data set with a set of features and an imaginary ProductionYield that is a nonlinear combination of the input variables:

import numpy as np
import pandas as pd

df = pd.DataFrame(columns=['Pressure','Temprerature','Speed','ProductionYield'])
df['Pressure'] = np.random.randint(low= 2, high=10, size=2000)
df['Temprerature'] = np.random.randint(10, 30, size=2000)
df['Speed'] = np.random.weibull(2, size=2000)
df['ProductionYield'] = (df['Pressure'])**2 + df['Temprerature'] * df['Speed'] + 10
df['ProductionYield']= df['ProductionYield'].clip(0, 100)

   Pressure  Temprerature     Speed  ProductionYield
0         7            20  1.810557        95.211139
1         2            29  0.674221        33.552409
2         8            17  0.537533        83.138065
3         3            24  1.945914        65.701938
4         6            23  0.514679        57.837610

1. Predictive Algorithm (a simple Neural Network):

## Train/Test Split
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(df[['Pressure','Temprerature','Speed']].values, df['ProductionYield'].values, test_size=0.33, random_state=42)

## Build NN Model
import tensorflow as tf
from tensorflow.keras import layers

def build_model():
    # create model
    model = tf.keras.Sequential()
    model.add(layers.Dense(64, input_dim=3, kernel_initializer='normal', activation='relu'))
    model.add(layers.Dense(128, kernel_initializer='normal', activation='relu'))
    model.add(layers.Dense(1, kernel_initializer='normal'))
    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

model = build_model()
model.fit(x_train, y_train, validation_split=0.2, verbose=0, epochs=1000)

2. Optimization [Core of the Problem]: The problem lies herein: when an ML/NN model is trained, I do not get to see (or export) the mathematical form of the function (here the NN) and its variables (which should be my feature variables) to do the optimization, as we do with closed-form explicit mathematical functions.
[UPDATE 15.01.2021]: Following Valentin's great answer, I've put the pieces together in a practical example showcasing how one can use an ML/NN model as an input function for further optimization (herein via scipy.optimize) using the dummy data set shown in the Appendix. Please see this notebook for more details. AI: This post seems similar to yours and may help. It seems that what you are looking for is a derivative-free optimization method. The Wikipedia page for the concept lists such methods. Intuitively, these techniques will sample the function (the network in your case) with various inputs (pressure, temperature, speed) and will figure out which inputs optimize it. Where they differ is in their sampling strategy, as it may be impractical or expensive to sample. You can use scipy.optimize.minimize to do that. Pass your network as func and use an initial guess, which can be the last values of your variables. SciPy expects a function with the following signature: fun(x, *args) -> float where x is a one-dimensional numpy array. This might mean you will need to wrap your network in something like this:

def wrapper(x, *args) -> float:
    network_input = _numpy_to_valid_network_input(x)
    network_output = network.predict(network_input, *args)
    scipy_output = _network_output_to_float(network_output)
    return scipy_output

And then, you can pass wrapper as your func. Negating the output of scipy_output will turn the minimization problem into a maximization problem. If your input variables are bounded, e.g. to [0, 100], you can do two things:

Use algorithms that allow you to define these bounds explicitly (e.g. L-BFGS-B via the bounds parameter).
Implicitly bound your inputs by using a sigmoid function, for instance. By doing something like the snippet below, you can create a function bounded_value that always returns values within the range of your choice, even if the optimization algorithm might try any float.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def bounded_value(x, min_value=0, max_value=100):
    return min_value + (sigmoid(x) * (max_value - min_value))

If you want to bound your output, that's even easier. If your goal is to minimize, return a very large value if your network outputs any value outside of your range. Obviously, ensuring that your network always returns sensible values could be achieved by tweaking the loss used during training (which is difficult with AutoKeras). If you want to implement your own method, I would recommend coordinate descent as you don't have many input dimensions (3) and it is quite simple to implement from scratch. Obviously, a brute-force approach where you randomly sample the space and choose the inputs that yielded the best function value is even easier to implement.
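Putting the answer together with the dummy model from the Appendix, a minimal end-to-end sketch might look like this (the initial guess and the bounds are made up for illustration, and the trained Keras model from the question is assumed to be in scope):

import numpy as np
from scipy.optimize import minimize

def wrapper(x):
    # x = [pressure, temperature, speed]; the Keras model expects shape (1, 3)
    network_input = x.reshape(1, -1)
    predicted_yield = float(model.predict(network_input)[0][0])
    return -predicted_yield            # negate so that minimizing maximizes the yield

x0 = np.array([5.0, 20.0, 1.0])        # initial guess, e.g. the current operating point
bounds = [(2, 10), (10, 30), (0, 5)]   # pressure, temperature, speed ranges

result = minimize(wrapper, x0, method="L-BFGS-B", bounds=bounds)
print("best inputs:", result.x)
print("predicted maximum yield:", -result.fun)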
H: Flipping the labels in a classification problem Let us say A: we have a binary classifier with label 1 as healthy and 0 as sick. The precision we got is 100% and the recall is 70%. Now let us say B: we flip the labels, so that 0 is healthy and 1 is sick. Do precision and recall get flipped in their values if you flip the labels? So in the new case, is recall 100% and precision 70%? Or did the (70, 100) values in the first case belong to class 1, aka healthy people, and should I calculate the precision and recall for class 0? My understanding was that we were able to recall 70% of sick people and that, of all the people we detected as sick, no one was healthy (precision of 100%). But I am confused now. AI: To begin with, I don't quite get why you want to flip them. In the binary case, flipping the labels swaps Negatives and Positives, so True Negatives become True Positives and FP/FN swap as well. Hence sensitivity/recall and specificity/true negative rate swap roles: the recall of the new positive class equals the specificity of the old one. Overall accuracy stays the same, but precision, recall, and F1 are now computed with respect to the other class, so in general their values change rather than simply swapping.
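A quick numerical check with scikit-learn (the labels below are made up to reproduce the 100%/70% scores from the question) shows what actually happens when the encoding is flipped:

import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0])

# original encoding: 1 is the positive class
print(precision_score(y_true, y_pred), recall_score(y_true, y_pred))      # 1.0  0.7

# flipped encoding: the former 0 class is now the positive class
print(precision_score(1 - y_true, 1 - y_pred), recall_score(1 - y_true, 1 - y_pred))  # ~0.571  1.0

# accuracy is identical in both encodings
print(accuracy_score(y_true, y_pred), accuracy_score(1 - y_true, 1 - y_pred))         # ~0.786  ~0.786

So recall does become 100% (it equals the old specificity), but precision becomes about 57% here, not 70%: the metrics are simply being computed for the other class.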
H: How to Connect Convolutional layer to Fully Connected layer in Pytorch while Implementing SRGAN I was implementing the SRGAN in PyTorch but while implementing the discriminator I was confused about how to add a fully connected layer of 1024 units after the final convolutional layer My input data shape:(1,3,256,256) After passing this data through the conv layers I get a data shape: torch.Size([1, 512, 16, 16]) Code: class Discriminator(nn.Module): def __init__(self): super(Discriminator,self).__init__() self.sm = nn.Sigmoid() self.net = nn.Sequential( nn.Conv2d(3,64,3,padding=1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2), nn.Conv2d(64,64,3,2,padding=1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2), nn.Conv2d(64,128,3,padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2), nn.Conv2d(128,128,3,2,padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2), nn.Conv2d(128,256,3,padding=1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2), nn.Conv2d(256,256,3,2,padding=1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2), nn.Conv2d(256,512,3,padding=1), nn.BatchNorm2d(512), nn.LeakyReLU(0.2), nn.Conv2d(512,512,3,2,padding=1), nn.BatchNorm2d(512), nn.LeakyReLU(0.2), nn.Linear(<ADD AN INPUT SHAPE HERE>,1024), nn.LeakyReLU(0.2), nn.Linear(1024,1) ) def forward(self,x): x = self.sm(self.net(x)) x = (x) return x AI: You simply reshape the tensor to (batch_size, n_nodes) using tensor.view(). In your specific case this would be x.view(x.size()[0], -1).
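Since the Linear layers sit inside the nn.Sequential, one simple option is to flatten right before them with nn.Flatten. A minimal sketch of the discriminator tail, keeping the sizes from the question (512 channels at 16x16 for a 256x256 input, i.e. 512*16*16 = 131072 input features):

import torch
import torch.nn as nn

# tail of the discriminator, assuming the conv stack outputs (batch, 512, 16, 16)
head = nn.Sequential(
    nn.Flatten(),                    # (batch, 512, 16, 16) -> (batch, 512*16*16)
    nn.Linear(512 * 16 * 16, 1024),
    nn.LeakyReLU(0.2),
    nn.Linear(1024, 1),
)

x = torch.randn(1, 512, 16, 16)      # shape after the conv layers in the question
print(head(x).shape)                 # torch.Size([1, 1])

Some PyTorch SRGAN implementations avoid tying the dense head to a fixed input resolution by placing nn.AdaptiveAvgPool2d(1) before the Linear layers instead, so the discriminator accepts images of any size.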
H: Selecting a boundary on a binary classifier for optimal precision and recall I have a logistic regression classifier that shows differing levels of performance for precision and recall at different probability boundaries, as follows: The default threshold for the classifier to decide which class something belongs to is 0.5. However, am I right in understanding that in order to get the best performance trade-off I should set the decision boundary to about 0.82, as shown below? That can be done in Scikit-Learn, but I want to make sure that I am drawing the correct conclusions. Any advice would be appreciated. AI: The intersection of the precision and recall curves is certainly a good choice, but it's not the only one possible. The choice depends primarily on the application: in some applications having very high recall is crucial (e.g. a fire alarm system), whereas in some other applications precision is more important (e.g. deciding if somebody needs a risky medical treatment). Of course if your application needs high recall you'd choose a threshold before 0.6, and if it needs high precision you'd choose a threshold around 0.85-0.9. If none of these cases apply, people usually choose an evaluation metric to optimize: the F1-score would be a common one, sometimes accuracy (but don't use accuracy if there is strong class imbalance). It's likely that the F1-score would be optimal around the point where the two curves intersect, but that's not guaranteed: for example it might be a bit before 0.8, where the recall decreases slowly and the precision increases fast (this is just an example, I'm not sure of course). My point is that even if it's a perfectly reasonable choice in this case, in general there's no particular reason to automatically choose the point where precision and recall are equal.
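In scikit-learn this just means thresholding the probability output yourself instead of calling predict. A minimal sketch (clf, X_test, X_val and y_val are placeholder names for your fitted classifier and data; 0.82 is the threshold from the question):

import numpy as np
from sklearn.metrics import precision_recall_curve

proba = clf.predict_proba(X_test)[:, 1]       # probability of the positive class
y_pred = (proba >= 0.82).astype(int)          # custom decision threshold instead of 0.5

# or pick the threshold that maximizes F1 on a validation set
precision, recall, thresholds = precision_recall_curve(y_val, clf.predict_proba(X_val)[:, 1])
f1 = 2 * precision * recall / (precision + recall + 1e-12)
best_threshold = thresholds[np.argmax(f1[:-1])]   # f1 has one more entry than thresholds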
H: Get data from intermediate layers in a Pytorch model I was trying to implement SRGAN in PyTorch and I have to write a Content loss function that required me to fetch activations from intermediate layers for both the Generated Image & Original Image. I'm using pretrained VGG-19 and according to the paper I need the ReLU activations Can anybody guide me on how can I achieve this? AI: You can save the activations for intermediate layers into a variable within your forward method and then return them in addition to the output value.
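With a pretrained torchvision VGG-19 you do not even need to modify its forward method: its convolutional part is an nn.Sequential, so you can slice it up to the ReLU you need, or attach a forward hook. A sketch under those assumptions (the slice index 36 / hook index 35 target the last ReLU of block 5; the exact layer depends on which activation your SRGAN variant uses, and pretrained=True may emit a deprecation warning on recent torchvision versions):

import torch
import torch.nn as nn
from torchvision.models import vgg19

vgg = vgg19(pretrained=True).eval()
for p in vgg.parameters():
    p.requires_grad = False          # VGG is only used as a fixed feature extractor

# option 1: slice the feature extractor up to the desired ReLU layer
feature_extractor = nn.Sequential(*list(vgg.features.children())[:36])

fake = torch.randn(1, 3, 256, 256)   # stand-in for a generated image
real = torch.randn(1, 3, 256, 256)   # stand-in for the original image
content_loss = nn.functional.mse_loss(feature_extractor(fake), feature_extractor(real))

# option 2: grab an intermediate activation with a forward hook
activations = {}
vgg.features[35].register_forward_hook(lambda module, inp, out: activations.update(feat=out))
_ = vgg.features(real)
# activations["feat"] now holds that ReLU's output for the real image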