H: How to use two different datasets as train and test sets?
Recently I started reading more about NLP and following tutorials in Python in order to learn more about the subject. The problem that I've encountered, now that I'm trying to make my own classification algorithm (does the text send a positive or negative message?), concerns the training and testing datasets. In all the examples that I've found, only one dataset is used, which is later split into training/testing. I have two datasets, and my approach involved putting all the texts from the two datasets together in the same corpus (after preprocessing) and afterwards splitting the corpus into a training set and a test set.
datasetTrain = pd.read_csv('train.tsv', delimiter = '\t', quoting = 3)
datasetTrain['PN'].value_counts()
datasetTest = pd.read_csv('test.tsv', delimiter = '\t', quoting = 3)
datasetTest['PN'].value_counts()
corpus = []
y = []
# some preprocessing
y.append(posNeg)
corpus.append(text)
from sklearn.feature_extraction.text import TfidfVectorizer
transf = TfidfVectorizer(stop_words = stopwords, ngram_range = (1,1), min_df = 5, max_df = 0.65)
X = transf.fit_transform(corpus).toarray()
# Splitting the dataset into the Training set and Test set
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.11, random_state = 0)
The reason I've done this is that I'm working with the Bag of Words model: if I build X_train and X_test (and y_train, y_test respectively) from the start instead of using the splitting function, I get an error when running the classification algorithm:
X_train = transf.fit_transform(corpustrain).toarray()
X_test = transf.fit_transform(corpustest).toarray()
...
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
ValueError: Number of features of the model must match the input. Model n_features is 2770 and input n_features is 585
I'm kind of new at this and I was wondering if anyone could please guide me in the right direction?
AI: You may want to use a pipeline to do this operation. Specifically, you do NOT want to fit the TfidfVectorizer on the entire corpus: doing so gives your model hints about what features may be in the test set that don't exist in the training set, a concept frequently referred to as "leakage" or "data snooping".
The correct pattern is:
transf = transf.fit(X_train)
X_train = transf.transform(X_train)
X_test = transf.transform(X_test)
Using a pipeline, you would fuse the TFIDFVectorizer with your model into a single object that does the transformation and prediction in a single step. It's easier to maintain a solid methodology within that pattern.
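As a rough sketch of such a pipeline (assuming corpustrain/corpustest hold the raw preprocessed texts, y_train/y_test their labels, and a logistic regression stands in for whatever classifier you actually use):
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

pipe = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 1), min_df=5, max_df=0.65)),
    ("clf", LogisticRegression()),
])

pipe.fit(corpustrain, y_train)       # the vectorizer is fitted on the training texts only
y_pred = pipe.predict(corpustest)    # the fitted vocabulary is reused for the test texts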
In your example code, you're both fitting and transforming in the same step (fit_transform) on each corpus separately, which creates a different feature space each time and is the source of your error. |
H: Can we remove features that have zero-correlation with the target/label?
So I draw a pairplot/heatmap from the feature correlations of a dataset and see a set of features that bears Zero-correlations both with:
every other feature and
also with the target/label.
Reference code snippet in Python is below:
corr = df.corr()
sns.heatmap(corr) # Visually see how each feature is correlate with other (incl. the target)
Can I drop these features to improve the accuracy of my classification problem?
Can I drop these features to improve the accuracy of my classification problem, if it is explicitly given that these features are derived features?
AI: Can I drop these features to improve the accuracy of my classification problem?
If you are using a simple linear classifier, such as logistic regression then yes. That is because your plots are giving you a direct visualisation of how the model could make use of the data.
As soon as you start to use a non-linear classifier, that can combine features inside the learning model, then it is not so straightforward. Your plots cannot exclude a complex relationship that such a model might be able to exploit. Generally the only way to proceed is to train and test the model (using some form of cross-validation) with and without the feature.
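As a sketch of that train-and-compare procedure (assuming a pandas DataFrame df with a column named target and a candidate feature named f; both names are placeholders):
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X_full = df.drop(columns=["target"])
X_reduced = X_full.drop(columns=["f"])      # same data with the candidate feature removed
y = df["target"]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
score_with = cross_val_score(clf, X_full, y, cv=5).mean()
score_without = cross_val_score(clf, X_reduced, y, cv=5).mean()
print(score_with, score_without)            # keep the feature only if it actually helps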
A plot might visually show a strong non-linear relationship with zero linear correlation - e.g. a complete bell curve of feature versus target would have close to zero linear correlation, but suggest that something interesting is going on that would be useful in a predictive model. If you see plots like this, you can either try to turn them into linear relationships with some feature engineering, or you can treat it as evidence that you should use a non-linear model.
In general, this advice applies whether or not the features are derived features. For a linear model, a derived feature which is completely uncorrelated with the target is still not useful. A derived feature may or may not be easier for a non-linear model to learn from, you cannot easily tell from a plot designed to help you find linear relationships. |
H: How can I calculate AUC from the ROC curve for the classification?
Based on TPR and FPR, I have generated a ROC curve for my binary classification model. I do not know how to calculate the AUC value. It would be very helpful if you could show me how to calculate it.
AI: Welcome to the community!
As you know, AUC is just the area under ROC curve. So the question is more about numerical methods as you have a set of points and you would like to calculate the area under it.
Riemann Sum
Trivial solution. Simply make rectangles from points you have. The area of each rectangle is simply the product of edges. Then sum them up! You probably don't like it do you?!
Trapezoidal Method
After the Riemann sum, this is the simplest and most naive algorithm for the job. You simply have a set of points, and you calculate the trapezoidal area between each pair and sum them up, like what you see in the figure below. It still has a noticeable computation error because it approximates the curve with straight segments.
Simpson (1/3) Method
Much better when we are talking about curves! Let's keep it simple and to the point. You can model your function in every interval using a quadratic ($y=ax^2+bx+c$) and having 3 data points. Using your three data points, you can calculate $a$, $b$ and $c$. Then the area under curve is not that difficult, but we have a better solution! Trust me or not, the value of this integration is simply
$$\frac{b-a}{6} (f(a)+4\times f(m)+f(b))$$
where $(a,f(a))$ and $(b,f(b))$ are endpoints of interval and $(m,f(m))$ is the midpoint. See the image below from here to compare these methods.
Romberg Methods
Simpson's and/or the trapezoidal method can be applied recursively to achieve a more accurate calculation. This is called the Romberg method. The accuracy of these methods depends on the length of the interval: smaller intervals give more accurate integration. Romberg uses this fact to iteratively get closer to a more accurate answer.
And of course there are tons more algorithms for doing this.
PS: There are certainly libraries and functions in different languages to calculate it for you; SciPy offers them for Python, for instance.
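For example, if you already have arrays of FPR and TPR points, the trapezoidal rule above is a one-liner with NumPy, and scikit-learn wraps the same computation (the fpr/tpr values below are just illustrative):
import numpy as np
from sklearn.metrics import auc

fpr = np.array([0.0, 0.1, 0.3, 0.6, 1.0])   # ROC points sorted by FPR
tpr = np.array([0.0, 0.5, 0.7, 0.9, 1.0])

print(np.trapz(tpr, fpr))   # trapezoidal rule, as described above
print(auc(fpr, tpr))        # scikit-learn's helper, also trapezoidal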
Hope it helps! Good Luck! |
H: Standardizing Vegas odds for a randomForest
I'm sorry I don't have reproducible code, but I have a pretty specific question that I can't find an answer to.
I'm using randomForest to project NBA statistics. Vegas odds are incredibly useful because they provide the wisdom of the crowd. Intuitively I feel like they need to be standardized for analysis, but maybe randomForest is good enough.
The reason why I feel it needs to be standardized is because it's disjoint. If a team has a moneyline of -125, that means that you must pay \$125 to win \$100 (payout of \$225). If a team has a betting line of +110, that means you need to bet \$100 to win \$110 (payout of \$210). Therefore, it's disjoint in that there would never be scores in (-100, 100) since +100 or -100 are both even odds.
With that said, would you recommend reshaping the vector in some way so that the random forest can learn "better"? E.g. -125 is a 125/(100 + 125) = 55.6% chance of winning and +110 is a 100/(110 + 100) = 47.6% chance of winning. Would changing the moneylines to percentages help performance? I know the only surefire way to check would be to run models, but I really don't have time for it at the moment, and this question will help me determine in general if/when standardizing is necessary.
AI: This is directly related to the idea of calibrating probability values produced by a random forest. Aside from the link, there is substantial literature on how to do this. The simplest approach basically amounts to fitting a logistic regression on the outputs of the random forest to change the response surface into a logistic form.
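As a minimal sketch of that simplest approach (Platt scaling, i.e. a logistic sigmoid fitted on the classifier's outputs) with scikit-learn, assuming X and y are placeholders for your features and win/loss labels:
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV

rf = RandomForestClassifier(n_estimators=300, random_state=0)
calibrated = CalibratedClassifierCV(rf, method="sigmoid", cv=5)  # "sigmoid" = logistic calibration
calibrated.fit(X, y)
win_prob = calibrated.predict_proba(X)[:, 1]   # calibrated win-probability estimates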
Once the predictions have this form, you've demonstrated that you have the knowledge to turn the probability estimate into Vegas odds, so that should be a straightforward process for you. |
H: Meaning of stratify parameter
I'm training a Neural Network and I'm trying to divide my data into training and testing sets. I have a lot of output classes and for some of them I have as little as 2 examples, so in that case I would like to have 1 example in training and 1 example in testing. From what I've read, this is done using the stratify parameter, but what does stratify mean?
I'm dividing my data into training and testing:
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.1, random_state=42, stratify=y)
So, from my understanding, this divides the data into two sets: training (90% of the data) and testing (10% of the data), but making sure that there is at least one example of each class in each set?
AI: The stratify parameter will preserve the proportion of the target as in the original dataset, in both the train and test datasets.
So suppose your original dataset df has the target/label values [0,1,2] in the ratio, say, 40:30:30. That is, for every 100 samples, you can find 40, 30 and 30 observations of target 0, 1 and 2 respectively.
Now when you split this original dataset using train_test_split(x, y, test_size=0.1, stratify=y), the method returns train and test datasets in the ratio 90:10. In each of these datasets, the target/label proportion is preserved as 40:30:30 for the classes [0,1,2].
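You can verify this yourself after the split (a small sketch, assuming y_train and y_test come from the call above):
import pandas as pd

print(pd.Series(y_train).value_counts(normalize=True))  # roughly 40:30:30
print(pd.Series(y_test).value_counts(normalize=True))   # roughly 40:30:30 as well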
Often, we want to preserve the dataset proportions for better prediction and reproducibility of results |
H: Extracting Useful features from large convolutional layers
I have been training a convolutional neural network on emotion detection. Now, I would like to extract features from my data to train an LSTM layer. In my case, the top convolutional layers in the network have the following dimensions: [None, 4, 4, 512] and [None, 4, 4, 1024]. Flattening these would therefore give 8192- and 16384-dimensional vectors, which is too large to train an LSTM layer on. Therefore, I would like to know the best possible way to reduce the dimensionality of this vector. In other words, should I apply global average pooling to the conv layer after obtaining the activation, or some other dimensionality reduction technique? In that case, my features would be a vector of 512 or 1024 dimensions, which makes sense.
Any help is much appreciated!!
AI: Applying a pooling layer after a convolution is a standard way to reduce the size of the input matrix and obtain invariant features. You might also want to consider adding a dense layer with a smaller number of output neurons.
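A minimal Keras sketch of that idea, assuming the [None, 4, 4, 512] activation is the one you want to compress (the layer sizes are just illustrative):
import tensorflow as tf

inp = tf.keras.Input(shape=(4, 4, 512))                          # top conv activation
pooled = tf.keras.layers.GlobalAveragePooling2D()(inp)           # -> (None, 512)
reduced = tf.keras.layers.Dense(256, activation="relu")(pooled)  # optional further reduction
model = tf.keras.Model(inp, reduced)
model.summary()
The resulting 512- (or 256-) dimensional vectors per frame are a much more manageable input for an LSTM. |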
H: Why my model can't recognise my own hand written digit?
Currently I am working on a digit recognizer [0-9]. My model's train accuracy is 100% and its test accuracy is 90%. But when I try to feed it my own handwritten digits, it always gives me wrong predictions.
I know the test and train images should come from the same source. But how can I feed data from a different source?
AI: You have to remember that a machine learning model does not understand concepts the way we humans do. It cannot generalize to something it hasn't seen. And yours hasn't seen black digits on a white background, so it has no way of predicting such a digit correctly.
The only two things you can do are:
Train your model from scratch with both kinds of data: black digits on a white background and white digits on a black background. These data have to be balanced (as much as possible). Maybe the model needs to be more complex, maybe you need more data, and maybe your accuracy will decrease. You can then predict on both types of data.
Use a test dataset that only contains data similar to the training set, i.e. white digits on a black background.
You can also do a data augmentation with your previous train dataset :
as the digits are white and the background black, you can invert the image colors, giving you black digits on a white background. That way, your training set doubles in size without you writing any more digits yourself!
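A small sketch of that inversion with NumPy, assuming X_train holds 8-bit grayscale images with pixel values in 0-255 and y_train the corresponding labels:
import numpy as np

X_inverted = 255 - X_train                                  # white-on-black becomes black-on-white
X_augmented = np.concatenate([X_train, X_inverted], axis=0)
y_augmented = np.concatenate([y_train, y_train], axis=0)    # labels are unchanged by the inversion
After this augmentation the model sees both polarities during training. |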
H: Sentiment Analysis on Twitter Data
What are the best ways to perform sentiment analysis on Twitter data for which I don't have labels?
AI: You should look at literature on unsupervised sentiment analysis. The paper by Peter Turney could be a good starting point.
Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews (Turney, 2002)
You can also check this if you use R https://datascienceplus.com/unsupervised-learning-and-text-mining-of-emotion-terms-using-r/ |
H: How does combining neurons create non-linear boundaries?
I have been working with NNs for a while, but haven't dug too deep into this unfortunately.
By looking at the three neurons below, in each of their boxes we can see that they are really just making linear separations in the x1, x2 plane (of course not taking into account the third and upward Y dimension that make the sigmoid) and combining these into the decision boundary we see at the right.
I understand we need non-linear activation functions, but why?
How can combining non-linear perceptrons make this three-walled decision boundary?
Mathematically, can it be illustrated how we are allowed to go from a simple linear function to a more complex non-linear function using multiple neurons?
I have looked around and articles just say that they can, but not how.
AI: I think what you are referring to is how neural nets work as universal function approximators. Check out this link for an intuitive explanation.
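A tiny numerical illustration of the idea behind that argument, assuming sigmoid activations: two shifted, steep sigmoids subtracted from one another produce a localized "bump", and sums of such bumps (in 2D, intersections of such "walls") can build up non-linear decision boundaries.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(-3, 3, 7)
# Each hidden unit draws one linear boundary; with a large weight the sigmoid
# behaves almost like a step function.
step_up = sigmoid(20 * (x + 1))    # "turns on" at x = -1
step_down = sigmoid(20 * (x - 1))  # "turns on" at x = +1
bump = step_up - step_down         # noticeably non-zero only between -1 and +1
print(np.round(bump, 3))
With a linear activation instead, any combination of units would collapse back into a single linear function, which is why the non-linearity is essential. |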
H: What Base Should Be Used For Negative Log Likelihood?
When calculating the negative log likelihood loss, what base of log are we supposed to use?
AI: Typically it is implemented as the natural logarithm, base e. Other bases can be used to the same effect: since $\log_b(x) = \ln(x) / \ln(b)$, changing the base only rescales the loss by a constant factor, which does not change where the minimum is. |
H: neural network training algorithms
When I first read about neural networks, I learned that backpropagation (BP) is the algorithm used to train the neural network. I am interested in whether there are other (or better?) alternatives to BP.
What other training algorithms are used for NNs? And is BP the best one, and is that why almost everyone uses it for training NN models?
AI: Yes, some alternatives are feedback-alignment (FA), Direct Feedback Alignment (DFA) and Indirect Feedback Alignment (IFA). |
H: Transformation of categorical variables (binary vs numerical)
When using categorical encoding, I see some authors use an arbitrary numerical transformation while others use a binary transformation. For example, if I have a feature vector with values A, B and C, the first method will transform A, B and C to the numeric values 1, 2 and 3 respectively, while other researchers use (1,0,0), (0,1,0) and (0,0,1).
What is the difference between the first method and the second one?
The only difference I can think of is that if you use binary values, the size of the training/testing data will increase linearly with the number of values, which may slow down performance, while the first method keeps the size unchanged.
Will either of these methods affect the accuracy of your machine learning model (or classifier)?
AI: While using one-hot (binary) encoding certainly takes more space, it also avoids implying any ordering among the categories. On the other hand, using integers such as 1, 2 and 3 implies some kind of relationship (an order, and relative distances) between them.
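A small illustration of the two encodings with pandas (column and category names are just for the example):
import pandas as pd

df = pd.DataFrame({"colour": ["A", "B", "C", "A"]})

# integer (ordinal) encoding: implies an order A < B < C
df["colour_int"] = df["colour"].map({"A": 1, "B": 2, "C": 3})

# one-hot (binary) encoding: no order implied, one column per category
one_hot = pd.get_dummies(df["colour"], prefix="colour")
print(pd.concat([df, one_hot], axis=1))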
The problem that you mention of linear increase in size with one-hot encoding is common and can be treated by using something such as an embedding. An embedding also helps define a sense of distance among different datapoints.
https://en.wikipedia.org/wiki/Word_embedding |
H: What is the advantage of using log softmax instead of softmax?
Are there any advantages to using log softmax over softmax? What are the reasons to choose one over the other?
AI: There are a number of advantages of using log softmax over softmax including practical reasons like improved numerical performance and gradient optimization. These advantages can be extremely important for implementation especially when training a model can be computationally challenging and expensive. At the heart of using log-softmax over softmax is the use of log probabilities over probabilities, which has nice information theoretic interpretations.
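To make the numerical-stability point concrete, the log-softmax can be written as
$$\log \operatorname{softmax}(x_i) = x_i - \log \sum_j e^{x_j},$$
and implementations typically subtract $\max_k x_k$ from every $x_j$ inside the log-sum term (the log-sum-exp trick), which leaves the result unchanged but avoids the overflow that computing a plain softmax followed by a separate log can run into.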
When used for classifiers the log-softmax has the effect of heavily penalizing the model when it fails to predict a correct class. Whether or not that penalization works well for solving your problem is open to your testing, so both log-softmax and softmax are worth using. |
H: Replacing column values in pandas with specific column with multiple database operation?
I need to derive a new, categorized column from an existing column by applying multiple conditions at once, as shown in the image below. I have tried each condition individually, but I can't work out which method lets me build one column from multiple conditions.
I am using Boolean comparisons to check the results, but how can I write one line of Python that maps the refined/categorized values into a specific column?
import pandas as pd
db = pd.read_csv('Database Path')
db.head(2)
stoppage time in minutes | Activity
120                      | Stopped
240                      | Stopped
#Need to refine below code for specific requirement as mention above:
db['New Column(Time Category)'] =
db['Stoppage time in minutes'] < 120 OR
db['Stoppage time in minutes'] > 120 & db['Stoppage time in minutes'] < 240 OR
db['Stoppage time in minutes'] > 240 & db['Stoppage time in minutes'] < 360 OR
# So Need Result like as below:
stoppage time in minutes | Activity | New Column(Time Category)
120                      | Stopped  | <2 HRS
240                      | Stopped  | 2 - 4 HRS
Require this solution due to a pool of high numbers of databases.
AI: You can use Python conditional (ternary) expressions inside a list comprehension. Note that the range checks have to be written as chained comparisons (e.g. 120 < x <= 240), and since each condition is only reached when the previous ones are false, the lower bounds can be dropped:
db['Buckets(HRS)'] = ['<2hrs' if x <= 120 else '2-4' if x <= 240 else '4-6' if x <= 360 else '>6'
                      for x in db['Stoppage time in minutes']]
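Alternatively, pandas has a built-in helper for exactly this kind of binning, which may be easier to maintain; the bin edges and labels below follow the categories in the question:
import numpy as np
import pandas as pd

db['New Column(Time Category)'] = pd.cut(
    db['Stoppage time in minutes'],
    bins=[0, 120, 240, 360, np.inf],
    labels=['<2 HRS', '2 - 4 HRS', '4 - 6 HRS', '>6 HRS'],
)
pd.cut assigns each value to the interval it falls into (intervals are closed on the right by default, so 120 lands in '<2 HRS' and 240 in '2 - 4 HRS', matching the desired output). |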
H: Twitter Retweet Network Visualization
I am trying to visualize retweet network in order to find out which users are most likely to have most influence on other users. Here is my code:
import networkx as nx
G_retweet = nx.from_pandas_edgelist(translated_iranian_tweets,
source = "userid",
target = "retweet_userid",
create_using = nx.DiGraph())
print('There are {} Nodes inside Retweet Network'.format(len(G_retweet.nodes())))
print('There are {} Edges inside Retweet Network'.format(len(G_retweet.edges())))
import matplotlib.pyplot as plt
#Size varies by the number of edges the node has (its degree)
sizes = [x[1] for x in G_retweet.degree()]
nx.draw_networkx(G_retweet,
pos = nx.circular_layout(G_retweet),
with_labels = False,
node_size = sizes,
width = 0.1,
alpha = 0.7,
arrowsize = 2,
linewidths = 0)
plt.axis('off')
plt.show()
There are 18631 nodes and 35008 edges inside this network. Visualization is horrible, you cannot see anything. Does anyone have any suggestions what should I do about it? Should I try to extract specific type of users with specific tweets in order to reduce size of my dataset and then try to visualize the network, or something else?
AI: For answering questions from a graph, you should not rely on visualizing it. Visualizing a graph is for getting an overview of what it looks like in general. There are graph visualization techniques that show the graph for the sake of gaining some initial insight (e.g. whether there are visually obvious communities or not).
Your question has an analytic answer. After constructing your graph object, you can get the adjacency matrix as a Python array or matrix. This matrix will be asymmetric as outgoing and incoming degrees are different of course. In Networkx you can get in-degree and out-degree information as seen below. Then the main story starts!
The most influential member of a social network can be seen as the most central node. Centrality measures capture this for you. The simplest one is degree centrality: it simply says that the node with the highest degree has the most influence in the graph. In your case, be careful how you model "the user who gets retweeted", as that is who has the most influence. If being retweeted corresponds to in-degree in your modeling, then this code gets them for you:
import networkx as nx
import operator
g=nx.digraph.DiGraph()
g.add_edges_from([(0,1),(1,2),(2,3),(4,2),(4,3)])
dict_deg = {ii:jj for (ii,jj) in g.in_degree}
print('In-Degree Dictionary\n',dict_deg)
m = max(list(dict_deg.values()))
print('The node(s) number',[i for i, j in enumerate(list(dict_deg.values())) if j == m],'have the most influence!')
Output:
In-Degree Dictionary
{0: 0, 1: 1, 2: 2, 3: 2, 4: 0}
The node(s) number [2, 3] have the most influence!
And the graph itself:
pos = nx.spring_layout(g)
_=nx.draw(g,label=True,pos=pos)
_=nx.draw_networkx_labels(g,pos=pos)
I hope it answered your question. If not please drop a comment so I can update. |
H: Why can distributed deep learning provide higher accuracy (lower error) than non-distributed one with the following cases?
Based on some papers which I read, distributed deep learning can provide faster training time. In addition, it also provides better accuracy or lower prediction error. What are the reasons?
Question edited:
I am using Tensorflow to run distributed deep learning (DL) and compare the performance with non-distributed DL. I use the number of dataset 1000 samples and step size 10000. The distributed DL uses 2 workers and 1 parameter server. Then, the following cases are considered when running the code:
Each worker and non-distributed DL use 1000 samples for training sets, same mini-batch size 200
Each worker uses 500 samples for training sets (first 500 samples for worker 1 and the rest 500 samples for worker 2), non-distributed DL use 1000 samples for training sets, same mini-batch size 200
Each worker uses 500 samples for training sets (first 500 samples for worker 1 and the rest 500 samples for worker 2) with mini-batch size 100, non-distributed DL use 1000 samples for training sets with mini-batch size 200
Based on the simulation, for all cases, distributed DL has lower RMSE than non-distributed DL. In this case, the RMSEs of distributed DL are as follows: Distributed DL in Case 2 < Distributed DL in Case 1 < Distributed DL in Case 3 < Non-distributed.
In addition, I also increased the training time for the non-distributed DL (i.e., the number of steps is 2 x 10000); the results are still not as good as for distributed DL.
One reason could be the mini-batch size; however, I wonder what other reasons explain why distributed DL performs better in the aforementioned cases?
AI: About the accuracy: going with the strongest reason, memory constraints are reduced by distributing the computation. That allows you to increase your training batch size, which reduces the gradient noise caused by small mini-batch sizes. The gradient steps will then move more steadily towards the minima, with less noise.
You can refer to this video for deeper understanding: https://www.youtube.com/watch?v=-_4Zi8fCZO4&list=PLkDaE6sCZn6Hn0vK8co82zjQtt3T2Nkqc&index=16
About the speed: this one is more obvious, I think. You distribute your gradient descent computations across multiple machines or CPUs/GPUs/TPUs, so you get faster training as a result. |
H: Train a deep reinforcement learning model using two computers
I would like to know if there is a way to train a deep RL model using two different computers. The first one would execute the game and send requests to the second computer, which would store and train the model itself.
Obs: The computers aren't in the same LAN.
Thanks!!
AI: Distributed RL is very much a thing. Google have created a distributed setup called IMPALA for this, and there are multiple instances of A3C, PPO etc available if you search. I don't know much about IMPALA, but the basic idea of the scalable policy gradient methods is to run multiple environments, collecting gradients on each server, then collating them together to create improved policy and value networks every few steps.
There are a couple of variations in strategy based on which stage of the data is shared - observations or gradients. Gradient calculation is CPU intensive, so above a certain scale it is worth having that occur on the distributed devices, depending on how intensive it is to collect experience in the first place.
Obs: The computers aren't in the same LAN.
This may prevent you implementing anything with low-level observation or gradient sharing, unless the bandwidth between the machines is high.
The simplest way to use two computers in this case is to perform basic hyper-parameter searches by running different tests on each computer and tracking which computer has done which experiments.
The first one would execute the game and send requisitions to the second computer which would store and train the model itself
This could work with an off-policy value-based method, such as DQN. You still need a reasonable bandwidth between the two machines, especially if the observation space is large. DQN is a reasonable choice since you don't need the environment-running machine to follow the current optimal policy - although you will still want to update the policy on the first computer some of the time.
The basic algorithms of DQN do not require much changing to support this kind of distribution. Just comment out or make conditional a few parts:
On the first machine:
Comment out or logically block sampling and learning from experience table
Maintain a "behaviour-driving" Q-value network instead of the learning network, in order to run $\epsilon$-greedy policy
Send experience to second machine instead of storing it in local experience-replay table (this is the bandwidth-intensive part)
Asynchronously receive updates to behaviour-driving Q-value network
On the second machine:
Comment out or logically block interactions with the environment
Asynchronously receive experience from first machine and add to experience-replay table
Every so many mini-batches, send the updated current network back to the first machine
You will need to have some experience handling multi-threading or multi-process code in order to cover the asynchronous nature of the updates. If that seems too hard, then you could have the two machines attempt to update each other synchronously, or via a queueing mechanism. The disadvantage of that is one or other machine could end up idle waiting for its partner to complete its part of the work. |
H: Non-prediction Applications of Machine learning
Prediction seems to be the dominant theme of machine learning. Most algorithms have fit and predict functions so that a model can be created which predicts outcomes or other parameters of interest from a new set of features.
What are non-prediction applications of machine learning?
AI: The most obvious applications are indeed the supervised learning approaches (surrogate models, prediction). But there is much more than that! Other usual applications include:
Clustering: This can be seen as a different kind of prediction, but not in the classic supervised learning fashion. For instance, I have been using a clustering algorithm on a 3D geometry (CAD file) to make almost adjacent elements become actually aligned.
Anomaly detection
Study of extreme values (how to learn the behaviour of extreme values without actually observing such values)
Feature importance (determining which sets of features impact the result the most) or, more generally, data mining
Inference (use a dataset to improve the prior model prediction)
Of course, most of these applications are more or less close to prediction, but not trivial prediction. |
H: not quite sure about the difference between RNN and feed forward neural net
I'm a bit confused after reading this paper: https://arxiv.org/abs/1705.09851
on page 22, the author writes
response:
\begin{equation}
Y = softmax(Z^{L-1})
\end{equation}
and hidden state
\begin{equation}
Z^\ell = max(W^\ell *Z^{\ell-1} + b^\ell, 0)
\end{equation}
which is a ReLU activation.
But, to me, this looks like a regular feed forward neural net- you multiply your input by a matrix, add a bias unit, then activate. Alternatively, your hidden layer is equal to the activation of the sum of a bias and the previous hidden layer times a weight matrix.
What am I missing?
AI: The authors state that this is the formulation for a feed-forward deep learner, so you're exactly right. The two equations at the bottom of the page are where they formulate their recurrent neural net.
The response is
$\hat{Y} = \text{softmax}(W^2Z_t+b^2) $
and the hidden state is
$Z_{t-j} = \text{tanh}(W^1[Z_{t-j-1},X_{t-j}] + b^1), j \in\{k,...,0\}$
The authors punt to this paper for implementation details, but the recurrent nature here comes from directly articulating your hidden state off of your most recent hidden state $Z_{t-j-1}$ (which involves the input $X_{t-j-1}$ from the last time step).
Hope this helps! |
H: What could be a dataset in which the presence of an outlier dramatically affects the performance of Ordinary Least Squares (OLS) regression?
I am tasked with giving an example of a dataset in which the presence of an outlier dramatically affects the performance of
Ordinary Least Squares (OLS) regression. I've searched and searched the web and I understand that OLS has a hard time dealing with outliers, but I'm having a hard time figuring out why, and finding a dataset to prove this.
AI: If you are looking for a real-world dataset, here is one on Harvard's Dataverse that examines state social politics research for outliers, for exactly the purpose you describe. For more illustrative purposes, one dataset worth knowing is Anscombe's quartet, which demonstrates how misleading some descriptive statistics can be. For your own investigations, many datasets with and without outliers can be found on Google's beta dataset search and are worth exploring if you are curious!
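If a tiny synthetic example is enough, the effect is easy to reproduce yourself: fit OLS to a clean linear relationship, add one extreme point, and compare the fitted slopes (the numbers below are purely illustrative):
import numpy as np

x = np.arange(10, dtype=float)
y = 2 * x + 1                            # clean linear relationship, true slope = 2
slope_clean, _ = np.polyfit(x, y, 1)     # ordinary least squares fit

x_out = np.append(x, 10.0)
y_out = np.append(y, 100.0)              # one extreme outlier
slope_out, _ = np.polyfit(x_out, y_out, 1)

print(slope_clean, slope_out)            # the single point visibly drags the OLS slope upwards
Because OLS minimizes squared residuals, a single point with a huge residual dominates the objective and pulls the fitted line towards it. |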
H: How do I use a custom stopwords filter in the Java Weka API?
I am using the Java Weka API to build a classification model. I can use the builtin stopwords filter. However, I need to use a custom filter for my problem. I do not know how to use a custom stopwords filter in the Java Weka API.
AI: You can try the following code.
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.unsupervised.attribute.StringToWordVector;
import weka.filters.Filter;
import weka.core.Instances;
import java.io.File;
Instances data = DataSource.read(".../document.txt"); //Your document.
StringToWordVector filter = new StringToWordVector();
filter.setStopwords(new File(".../stopwords.txt")); //your custom stop words file (newer Weka versions use setStopwordsHandler instead)
filter.setInputFormat(data); //set the input format only after the filter is configured
Instances filteredData = Filter.useFilter(data, filter);
You can also read the following document for better understanding of the Weka API for Java.
http://weka.sourceforge.net/doc.stable/ |
H: How can I use Machine learning for inter-relationship between Features?
Machine learning is used mostly for prediction and there are numerous algorithms and packages for this.
How can I use machine learning for studying inter-relationships between features? What are the major packages and functions for this? Are there any packages for graphics in this area? Can artificial neural networks also be used for this purpose? If so, is any particular type specifically suited for it?
I do not want to limit to any particular language like Python or R.
AI: I use pair plots to study the inter-relationships between features. A pair plot gives first-level information about the features. The Seaborn library has a pairplot function, and matplotlib can produce the equivalent.
Another thing you can use is a heatmap, which shows the correlation between features; with it you can see which features are highly correlated and possibly eliminate one of them. As a word of caution, you must have a good reason or domain knowledge before dropping a feature.
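A minimal sketch with seaborn, assuming your data (including the target column) is in a DataFrame df:
import seaborn as sns
import matplotlib.pyplot as plt

sns.pairplot(df)                     # pairwise scatter plots of all features
plt.show()

sns.heatmap(df.corr(), annot=True)   # linear correlations between features
plt.show()
Keep in mind that both views mainly surface pairwise, largely linear relationships; they will not reveal more complex interactions between features. |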
H: How to use SMOTE in Java Weka API?
I am trying to build a classification model using the Java Weka API. My training dataset has a class imbalance problem. For this reason, I want to use SMOTE to reduce the class imbalance. But I do not know how to use it in the Java Weka API.
AI: Welcome to the community.
You can use the following code:
import weka.filters.supervised.instance.SMOTE;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
Instances data = DataSource.read(".../file.arff"); //Dataset
SMOTE smote=new SMOTE(); //create object of SMOTE
smote.setInputFormat(data);
Instances data_smote = Filter.useFilter(data, smote); //Apply SMOTE on Dataset |
H: How to calculate TPR and FPR for different threshold values for classification model?
I have built a classification model to predict a binary class. I can calculate precision, recall, and F1-score. Now I want to generate a ROC curve to better understand the classification performance of my model, but I do not know how to calculate TPR and FPR for different threshold values.
AI: To calculate TPR and FPR for different threshold values, you can follow these steps:
First, calculate the prediction probability for each class instead of the class prediction.
Sort the test cases based on the probability values of the positive class (assume the binary classes are the positive and negative class).
Then set different cutoff/threshold values on the probability scores and calculate $TPR = {TP \over (TP \ + \ FN)}$ and $FPR = {FP \over (FP \ + \ TN)}$ for each threshold value.
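If you are using scikit-learn, roc_curve performs exactly this sweep over thresholds for you (a minimal sketch, assuming y_test holds the true labels and y_score the positive-class probabilities from predict_proba):
from sklearn.metrics import roc_curve

fpr, tpr, thresholds = roc_curve(y_test, y_score)
for t, f, r in zip(thresholds, fpr, tpr):
    print(f"threshold={t:.3f}  FPR={f:.3f}  TPR={r:.3f}")
Plotting tpr against fpr then gives you the ROC curve. |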
H: does xgb multi-class require one-hot encoding?
I was trying an xgboost from python with a multiclass single-label problem and assumed the label can be an integer indicating my class (as opposed to eg one-hot) .
params = {'eta': 0.1,
# 'objective': 'binary:logistic',
'objective': 'multi:softmax',
'scale_pos_weight':9,
'eval_metric': 'auc',
'nthread':25,
'num_class':6}
dtrain = xgb.DMatrix(df_train_x,label= df_train_y)
dvalid = xgb.DMatrix(df_val_x,label= df_val_y)
watchlist = [(dtrain, 'train'), (dvalid, 'valid')]
model = xgb.train(params, dtrain, 500,watchlist, maximize=True, verbose_eval=50,early_stopping_rounds=20)
However I hit an error
(1353150 vs. 225525) label size predict size not match
and I note that my sample size is 225525, the number of classes is 6, and 6 * 225525 is 1353150, so it appears that xgb is looking for one-hot labels. However, when I use one-hot labels I get an error hinting that one-hot can't be used:
dtrain = xgb.DMatrix(df_train_x,label= df_train_y)
ValueError: DataFrame for label cannot have multiple columns
!!!
AI: As Majid stated in the comment, using AUC is causing this error as normally ROC curves are calculated for binary classification. Try removing the eval_metric line and your code will run properly. That or removing the watchlist and early stopping options.
The eval metrics you need to use for multiclass are either merror or mlogloss which are the only ones specific for multiclass in the xgboost documentation. |
H: Wilcoxon W value different from python
I use the data from https://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test, where the W value is 9.
But for the following code the W value is 18.
What is the reason?
# Wilcoxon signed-rank test
from numpy.random import seed
from numpy.random import randn
from scipy.stats import wilcoxon
import numpy as np
# seed the random number generator
seed(1)
# the two paired samples (from the Wikipedia example)
a=np.array([125,115,130,140,140,115,140,125,140,135])
b=np.array([110,122,125,120,140,124,123,137,135,145])
# compare samples
stat, p = wilcoxon(a,b)
print('Statistics=%.3f, p=%.3f' % (stat, p))
# interpret
alpha = 0.05
if p > alpha:
print('Same distribution (fail to reject H0)')
else:
print('Different distribution (reject H0)')
AI: If you look at the wikipedia page of Wilcoxon signed-rank test, under the section of the original test, it's mentioned that
The original Wilcoxon's proposal used a different statistic. Denoted by Siegel as the $T$ statistic, it is the smaller of the two sums of ranks of given sign; in the example given below, therefore, $T$ would equal $3+4+5+6=18$ |
H: Should the minimum value of a cost (loss) function be equal to zero?
We know optimization techniques search in the space of all
the possible parameters for a parameter set that minimizes the cost function of the model. The most well-known loss functions, like MSE or Categorical Cross Entropy, have a global minimum value equal to zero in the ideal case.
For example, the Gradient Descent, $\theta_j \leftarrow \theta_j - \alpha \frac{\partial}{\partial \theta_j}J(\theta)$, updates parameters based on the derivation of the calculated cost function value, $J(\theta)$.
I was wondering what will happen if we design a cost function that has a non-zero global minimum in its ideal case. Does it make a difference, e.g. in the convergence rate or other aspects of the optimization process, or not?
AI: Saying that the well-known loss functions, like MSE or Categorical Cross Entropy, have a global minimum value equal to zero is flawed. The idea behind a loss function is to measure how near the model predictions are to the actuals (in the case of regression). Ideally, you would want your model to predict exactly the actuals; only in that case do we get a loss equal to zero. Otherwise, the loss is non-zero almost all the time. If you remember the loss function $J(\theta)$ for a linear regression setting,
we need to minimise $J(\theta)$ so that the predictions can be as close to the actuals as possible. For that, the derivative of $J(\theta)$ with respect to $\theta$ should be zero; it doesn't matter whether the minimum value of $J(\theta)$ itself is zero or non-zero. Adding a constant to the cost only shifts it vertically and leaves the gradient, and hence the parameter updates, unchanged. Graphically, for a convex cost function, you simply want to reach the point where the derivative is zero. |
H: sales price prediction
I have to make a classifier for price prediction of an item. The question I have is which columns I should choose for the price prediction.
Also, which machine learning classifier would be good for this? At present I chose random forest.
Do I need to use time series concepts here? I think not.
AI: So firstly, what do you mean by "classifier for price prediction"? You can predict the price as a number, which would likely be different for different cars, but if you want to predict a class of price (like high, low and medium, for instance), you would need a column for that (and you can ignore the column for price, as you are not predicting the price, you're predicting the price class).
Stage 1. Pre-processing the data
Assuming you have the column in the dataset which you want to predict for, you first want to do feature selection. That is, not all features in the data would be important or relevant for predicting the price. For example, in your dataset, the first column/feature ("index") is irrelevant for the price of the car. But how do we prove that? Or, how do we computationally select them (using some measure), especially when they're not as trivial as "index"?
We generally check the statistical properties of the features for that. I copied the data you provided in the question, and here's some things for you to start with:
import pandas as pd
data = pd.read_csv('ex.csv')
data
data.describe() # to check the statistical properties of the features, like mean, std dev, etc
Then, you could do a simple percentage count of the unique observations in each feature, and maybe you could get some insight about the features that way:
for column in data.select_dtypes(include=['object']).columns:
display(pd.crosstab(index=data[column], columns='% observations', normalize='columns'))
Then you could do a histogram analysis of the features, and hopefully that gives you some more insight. For example, assuming you have sufficient data, you'd normally expect the histogram of a feature to follow a normal (Gaussian) distribution. If it doesn't, you can drill down further into those features to understand why, and that might lead you to keep or discard them from the model you're going to build.
hist = data.hist(figsize=(10, 10))
Then we can do correlation analysis of the features:
data.corr().style.background_gradient()
Or, if you want a more fancy visualization:
import seaborn as sns
sns.heatmap(data.corr(), annot=True)
After doing all these, hopefully you have figured out which features to discard and which to keep for your model. These are of course "manual" methods of feature selection; there are other more complex methods for feature selection like SHAPLEY values, etc, which you can explore.
Stage 2 - Building a model and training it
Firstly, you need to pick a technique/method with which you want to do the prediction. Since you have only one target variable (i.e., only one feature you're predicting, which is the price or the price class), the simplest one would be linear regression, and the most complicated ones would be deep learning models built with CNNs or RNNs. So, instead of showing you how to make predictions with the simplest one, i.e., linear regression, let me show you a middle-of-the-road algorithm in terms of complexity which is quite popular and widely used in many machine learning tasks: extreme gradient boosting, or xgboost.
We need to import some libraries for this:
from sklearn.model_selection import train_test_split
import xgboost
import numpy as np
X = data.drop(['price'], axis=1) # take all the features except the target variable
y = data['price'] # the target variable
Then, we create a train/test split with 80-20 split randomly. That is, we randomly take 80% data for training and 20% for testing:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
You can of course do a 70-30 split if you want, and definitely try out different splits at both ends of the spectrum to see what happens - that way you'll learn more about why a 70-30 or 80-20 split is good and, say, a 50-50 split is not that good.
Then, if there are missing values in your data, fill them with a high negative value so that it doesn't have any impact in the model. You can also choose to fill them with something else, depending on your goal.
X_train.fillna((-999), inplace=True)
X_test.fillna((-999), inplace=True)
Some more preprocessing steps:
# Some of values are float or integer and some object. This is why we need to cast them:
from sklearn import preprocessing
for f in X_train.columns:
if X_train[f].dtype=='object':
lbl = preprocessing.LabelEncoder()
lbl.fit(list(X_train[f].values))
X_train[f] = lbl.transform(list(X_train[f].values))
for f in X_test.columns:
if X_test[f].dtype=='object':
lbl = preprocessing.LabelEncoder()
lbl.fit(list(X_test[f].values))
X_test[f] = lbl.transform(list(X_test[f].values))
X_train=np.array(X_train)
X_test=np.array(X_test)
X_train = X_train.astype(float)
X_test = X_test.astype(float)
d_train = xgboost.DMatrix(X_train, label=y_train, feature_names=list(X))
d_test = xgboost.DMatrix(X_test, label=y_test, feature_names=list(X))
Finally, we can make our model and train it:
params = {
"eta": 0.01, # something called the learning rate - read up about optimization and gradient descent to understand more about this
"subsample": 0.5,
"base_score": np.mean(y_train)
}
# these params are optional - if you don't feed the train function below with the params, it will take the default values
model = xgboost.train(params, d_train, 5000, evals = [(d_test, "test")], verbose_eval=100, early_stopping_rounds=50)
You can check the root mean square error (RMSE) that this function returns at the end to see how good or bad the training has been (low RMSE is good, high RMSE is bad - but there's no max RMSE value, it can be arbitrarily high). There are other methods to check the error, and you can explore them (like MAE, etc), but this is probably the simplest one. Anyway, the above code will return something like this:
[16:56:02] C:\Users\Administrator\Desktop\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 0 pruned nodes, max_depth=0
[0] test-rmse:2275
Will train until test-rmse hasn't improved in 50 rounds.
... (many similar "tree pruning end" log lines omitted) ...
Stopping. Best iteration:
[0] test-rmse:1571.88
It ran the algorithm for up to 5000 boosting rounds (that's the number in the train call), printing out the evaluation result every 100 rounds and stopping early once the test RMSE stopped improving. To see what each of the parameters means, you can read here.
You can also use linear regression, if you want, with xgboost, like so:
xg_reg = xgboost.XGBRegressor(objective ='reg:linear', colsample_bytree = 0.3, learning_rate = 0.1,
max_depth = 5, alpha = 10, n_estimators = 10)
xg_reg.fit(X_train,y_train)
preds = xg_reg.predict(X_test)
print(preds) # these are the predicted prices for the test data
>>> array([2293.7073, 2891.9692, 3822.3757], dtype=float32)
And we can check the RMSE like so:
from sklearn.metrics import mean_squared_error
rmse = np.sqrt(mean_squared_error(y_test, preds))
print("RMSE: %f" % (rmse))
>>> RMSE: 1542.541395
Note that RMSE in the 2 methods is quite close (1571.88 vs 1542.54). This is like a sanity check for us that no matter which method we use, if we use it correctly, we should get similar results.
Stage 3 - testing and evaluation of the model - k-fold Cross Validation
Finally its time to see how our model performs on test data:
params = {"objective":"reg:linear",'colsample_bytree': 0.3,'learning_rate': 0.1,
'max_depth': 5, 'alpha': 10}
cv_results = xgboost.cv(dtrain=d_train, params=params, nfold=3,
num_boost_round=50,early_stopping_rounds=10,metrics="rmse", as_pandas=True)
This will again give you quite a few lines of output like when training:
[17:38:33] C:\Users\Administrator\Desktop\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 0 pruned nodes, max_depth=0
... (many similar "tree pruning end" log lines omitted) ...
This is how the train and test RMSE look for each round of the boosting:
print(cv_results)
So, that's it. We have the predicted values.
P.S. Stage 2.5 - Visualizing the model (Optional)
Did you know that we can also visualize the model?
import matplotlib.pyplot as plt
xgboost.plot_tree(xg_reg,num_trees=0)
plt.show()
It shows the tree structure following which the model you trained made its decisions.
You can also see the importance of each feature in the dataset with respect to the model:
xgboost.plot_importance(xg_reg)
plt.show()
These visualizations are of course not required for making the predictions, but they may sometimes give you useful insights about your predictions. |
H: Exceptionally high accuracy with Random Forest, is it possible?
I need your help to find a flaw in my model, since its accuracy (95%) is not realistic.
I'm working on a classification problem using Random Forest, with around 2500 positive cases, 15000 negative ones, and 75 independent variables. Here's the core of my code:
# Splitting the dataset into the Training set and Test set
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
# Fitting Random Forest Classification to the Training set
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators = 900, criterion = 'gini', random_state = 0)
classifier.fit(X_train, y_train)
# Predicting Test set results
y_pred = classifier.predict(X_test)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
I've optimized the hyperparameters through grid search and performed a k-fold cross validation, which reports a mean accuracy of 0.9444.
Confusion matrix:
[[3390, 85],
[ 101, 516]]
showing 97.6% accuracy.
Did I miss something?
NOTE: the database is composed of 2500 Italian mafia firms' financial reports, and 15000 lawful firms randomly sampled from the same regions as negative cases.
Thank you guys!
EDIT: I uploaded the metrics and confusion matrix. The model is actually performing well; looking at the metrics and the confusion matrix, it shows more realistic values for log loss and recall, so I assume it is fine.
AI: I summarise below several ways that would help you train and validate your model with as little bias as possible:
Usually a good way to assess the classification performance is to compare with some very basic models. If your validation metrics are worse than (or close to) those, it is obvious that the model needs to improve. E.g. in your case you could compare with:
random model (each observation is randomly classified to each class with probability 1/2)
model that always predicts negative class
Another way to ensure that the high validation numbers you get aren't biased by the way the training set and test set are separated is to use cross-validation. In cross-validation, the data is split into training and test sets multiple times through an iterative process, and the final validation metrics are calculated as an average over the iterations. Here is an example of how you can perform cross-validation in python using scikit-learn.
In addition to accuracy I would also try to calculate and compare other validation metrics in order to get a more complete picture of the model's performance (e.g. precision, recall, or more concise ones such as the F-score). Accuracy is not a recommended metric when most of the observations belong to one class. You can read more about performance metrics here and here. Scikit-learn can calculate some of them automatically (see here), but you can calculate any of them from the confusion matrix.
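Putting the first suggestions together, here is a minimal sketch of comparing against a trivial baseline under cross-validation (it assumes your features and labels are already in X and y):
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Trivial baseline that always predicts the majority (negative) class
baseline = DummyClassifier(strategy="most_frequent")
model = RandomForestClassifier(n_estimators=900, random_state=0)

# Compare F1 (more informative than accuracy here) over 5 folds
for name, clf in [("baseline", baseline), ("random forest", model)]:
    scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
    print(name, scores.mean())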
SMOTE is a resampling technique (available in Python through the imbalanced-learn library) popularly used with unbalanced datasets like yours - it creates synthetic minority-class samples to produce a more balanced dataset. You can read more here.
H: What is the classification accuracy of a random classifier?
I have built a classification model using a machine learning technique (SVM). I want to compare the classification accuracy of my model with that of a random classifier. My data set contains only two classes (1 or 0). The ratio of 1 and 0 instances is 35% and 65%; that means 35% of instances belong to class 1 and 65% belong to class 0. In that case, what will be the classification accuracy of a random classifier (random guess)?
AI: The expected classification accuracy of a random classifier that guesses each class uniformly at random is as follows:
Accuracy = 1/k (here k is the number of classes).
In your case, the value of k is 2.
So, the classification accuracy of the random classifier in your case is 1/2 = 50%.
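To see why the class proportions do not affect this number for a uniform random guess, note that
$$P(\text{correct}) = \sum_{c=1}^{k} P(\text{actual}=c)\,P(\text{predict}=c) = \sum_{c=1}^{k} p_c \cdot \frac{1}{k} = \frac{1}{k}$$
which with your class proportions is $0.35 \cdot 0.5 + 0.65 \cdot 0.5 = 0.5$.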
H: TD Learning formula
This is something I cannot get my head around; initially I thought it was a typo, but it is not.
Essentially, in TD learning we are trying to learn the value function. A value function tells me how favourable a state/observation is. Assuming a discount/decay factor of 1, if V(s) is 10 and I make a move (action a) and V(s') becomes 5, then I expect the reward to be -5:
R(a) = V(s') - V(s)
Hence, in the TD learning formula, when it converges (ignoring lambda, and regardless of alpha, the learning rate) I expect α(r + V(s') - V(s)) to be 0. But if V(s') - V(s) is equal to the reward, then I end up with r + r => 2r!!
So I expect to see -r in the formula and not r.
So where am I wrong in my thinking?
Thanks in advance
AI: R(a) = V(s') - V(s)
This is not correct. The value of a state is based on all its future rewards. Your formula would be correct for a value function $V$ that accumulated past rewards, but that is not directly useful for action selection in reinforcement learning. The agent needs to choose an action that makes it the most reward in future. It cannot make any action to change what happened in the past, so value functions are forward-looking only. For instance, the value of a state where an agent has completed its task successfully (or failed) is always $0$, because the agent can no longer act, and has no chance of any future reward.
Without discounting, then:
$$V(S_t) = R_{t+1} + V(S_{t+1})$$
Therefore:
$$R_{t+1} = V(S_t) - V(S_{t+1})$$
i.e. the opposite sign than you thought, but compatible with the TD update rule. |
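Plugging in your numbers as a quick check: with $V(s) = 10$ and $V(s') = 5$, the expected reward on that transition is $r = V(s) - V(s') = +5$ (not $-5$), so at convergence the TD error is indeed zero:
$$\alpha\big(r + V(s') - V(s)\big) = \alpha\,(5 + 5 - 10) = 0$$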
H: Is the ultimate challenge in ML simply computational power?
I am stuck on a theoretical roadblock in learning about machine learning, because I have not seen this explicitly addressed anywhere. In my studies, it seems as if Cross-validation (or some variant thereof, like LOOCV, or potentially another, but similar, validation scheme like bootstrapping) is the be-all-end-all of model selection. Choosing models and their parameters via exhaustive CV to maximize fit but also balance overfitting seems the optimal way to create models, and computational power is only getting cheaper. So what is there left to do for the human analyst?
I apologize in advance for this amateurish question, but could anyone fill in this gap for me, and potentially suggest some sources on model selection?
AI: There is something known as the VC dimension of a hypothesis class. This refers to the maximum number of datapoints with arbitrary binary labels that can be correctly classified by a model from the hypothesis class.
https://en.wikipedia.org/wiki/VC_dimension
If your number of datapoints is larger than the VC dimension for the hypothesis class you have chosen (say set of 2d hyperplanes), then no matter how much you tune your model using cross validation you can never achieve complete accuracy. Hence, the analyst has the important job of choosing the correct hypothesis class while making sure it doesn't overfit. In the case of deep learning this would mean coming up with a specific architecture and is often one of the most difficult tasks. |
H: why nobody uses matlab
I wonder why most people now don't use matlab. I guess the reason is matlab is not free, so companies don't want to use it, then interviewees don't use it, then schools don't encourage it, then nobody ends up using it.
But I like it, compared to C++, Python, for it's convenient to plot figures without needing to import any libraries or header files, let alone those powerful ODE solvers.
I might be wrong on any of the above points, as I'm not most familiar with other languages. Any comments?
AI: I beg to differ. I've seen a multitude of researchers using MatLab in their respective research initiatives. MatLab, as many will soon point out, is closed source. In addition to its code proprietary legalities, it has a hefty price tag. Almost all educational institutions, in the US for example, that are ranked as "Highest Research Activity" and "Higher Research Activity" universities by Carnegie Classification of Institutions of Higher Education will typically have bulk licensing. This means that their professors, masters(thesis), and doctoral students will have MatLab availability(at their request) from their local office of information technology (OIT) in their respective universities(typically the College of Engineering at a university)
At a personal level, you may buy MatLab, but you are likely to migrate to R and Python soon. This unfortunate and monetarily painful process is due to the widespread adoption and development of Python libraries and R packages specifically for data science research and analytics.
The short answer after this lengthy response comes down to MatLab's adoption and its upfront cost. With Python and R, you can be learning and producing in a shorter period of time. Plus, it's free.
H: What's the difference between Sklearn F1 score 'micro' and 'weighted' for a multi class classification problem?
I have a multi-class classification problem with class imbalance. I searched for the best metric to evaluate my model. Scikit-learn has multiple ways of calculating the F1 score. I would like to understand the differences.
What do you recommending when there is a class imbalance?
AI: F1Score is a metric to evaluate predictors performance using the formula
F1 = 2 * (precision * recall) / (precision + recall)
where
recall = TP/(TP+FN)
and
precision = TP/(TP+FP)
and remember:
When you have a multiclass setting, the average parameter in the f1_score function needs to be one of these:
'weighted'
'micro'
'macro'
The first one, 'weighted', calculates the F1 score for each class independently, but when it adds them together it uses a weight that depends on the number of true labels of each class:
$$F1_{class1}*W_1+F1_{class2}*W_2+\cdot\cdot\cdot+F1_{classN}*W_N$$
therefore favouring the majority class.
'micro' uses the global number of TP, FN, FP and calculates the F1 directly:
$$F1_{class1+class2+class3}$$
not favouring any class in particular.
Finally, 'macro' calculates the F1 separately for each class and then takes their plain (unweighted) average:
$$\frac{F1_{class1}+F1_{class2}+\cdot\cdot\cdot+F1_{classN}}{N}$$
which results in a bigger penalisation when your model does not perform well on the minority classes.
The one to use depends on what you want to achieve. If you are worried about class imbalance I would suggest using 'macro'. However, it might also be worthwhile implementing some of the techniques available to tackle imbalance problems such as downsampling the majority class, upsampling the minority, SMOTE, etc.
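As a quick illustration with some made-up labels (a minimal sketch):
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 0, 1, 0, 2, 1]

for average in ("micro", "macro", "weighted"):
    print(average, f1_score(y_true, y_pred, average=average))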
Hope this helps! |
H: Is there any way how to make samples balanced?
I have a dataset which consists of attributes on breakdowns of machines. The target variable is machine status, which is populated with ones and zeros. The distribution of ones and zeros is given below:
0 - 19628
1 - 225
0 - signifies the machine is running good and 1 signifies there was a breakdown.
Now, should I simply split the dataset using scikit-learn's train_test_split method, or introduce artificial rows to mitigate the imbalance between ones and zeros and then split the dataset?
Well, what do I mean by artificial rows?
Populating some random data with the target variable set to 1.
But that would ultimately mislead the system. I don't see any other options or alternatives.
Is there any way how to make samples balanced?
AI: SMOTE is an oversampling technique, implemented in the Python library imbalanced-learn, that is popularly used with unbalanced datasets like yours - it creates synthetic minority-class samples to produce a more balanced dataset. You can find some example implementation here.
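A minimal sketch of how this could look, assuming your features and labels are in X and y (depending on your imbalanced-learn version the method may be called fit_sample instead of fit_resample):
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split

# Split first, then oversample only the training data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
print(Counter(y_train), Counter(y_res))  # the minority class is now balanced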
Maybe you will also find this question and answers relevant, as it contains advice on limiting bias during modeling and validation of unbalanced data classifier. |
H: Why is sklearn.metrics.roc_auc_score() seemingly able to accept scores on any scale?
I had input some prediction scores from a learner into the roc_auc_score() function in sklearn. I wasn't sure if I had applied a sigmoid to turn the predictions into probabilities, so I looked at the AUC score before and after applying the sigmoid function to the output of my learner. Regardless of sigmoid or not, the AUC was exactly the same. I was curious about this so I tried other things like multiplication by arbitrary numbers and applying arbitrary log or exp functions and the score was still the same.
Assuming I haven't made some other error somewhere, why is the sklearn function for ROC AUC able to work on any scale of scores?
AI: The documentation says
Target scores, can either be probability estimates of the positive
class, confidence values, or non-thresholded measure of decisions
(as returned by "decision_function" on some classifiers).
(My emphasis)
This is possible because the implementation only requires that the examples can be ranked by their y_score: only the ordering of the scores matters, not their absolute values.
The false positive rate that is returned is given as follows:
A count of false positives, at index i being the number of negative
samples assigned a score >= thresholds[i]. The total number of
negative samples is equal to fps[-1] (thus true negatives are given by
fps[-1] - fps).
The false positives can be calculated with only taking into account the order of the predicted scores/thresholds.
Multiplying the scores by a positive scalar - or applying any strictly increasing (monotonic) transformation such as a sigmoid, log or exp - has no effect on the order of the values, thus the resulting score remains the same!
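A quick sanity check of this invariance (a minimal sketch with made-up scores):
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1])
scores = np.array([-2.0, 0.5, 0.1, 3.0])     # raw, unscaled model outputs

sigmoid = 1 / (1 + np.exp(-scores))
print(roc_auc_score(y_true, scores))         # 0.75
print(roc_auc_score(y_true, sigmoid))        # 0.75
print(roc_auc_score(y_true, 10 * scores))    # 0.75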
See the sklearn GitHub page, the relevant code is under the definition of _binary_clf_curve. |
H: Does discretization of continuous features also lose information about distance?
During discretization, it "squashes" nearby values into one bin, losing a little bit of information along the way.
But doesn't it also lose information about the distances between values? For example, if we have height as a continuous feature, we can e.g. create bins very small, small, medium, large and very large. Isn't it a problem that, once we have these categories, we lose the information that very small and small are closer than small and very large?
This would be even worse for the predicted feature - I would assume that a regression that tries to predict the height value would be much more successful than a classifier that tries to predict the discretized height category, because the cost function of the regression can account for the distance from the correct height, while the cost function of the classifier can only answer "correct" or "not correct".
Yet I haven't found any mention about this when I've searched about discretization.
Are my assumptions incorrect?
AI: You are absolutely correct. Also, the binning itself can sometimes be arbitrary, and whether that matters depends on how you use the insight.
Using marketing as an example, we often see age grouped into buckets like 18-24, 25-34, etc.
Would a 34 year and 35-year old behave dramatically differently? Probably not. But then, it might still make sense to use such grouping, because many ad platform targeting tools use the same bin definition. It doesn't make sense logically but practically.
Of course, it really depends on the situation and the kind of analyses you need to do. For some distribution, you might want to do a log scale before you bin, etc.
Back to your example, if you feel that height should be a continuous feature and most models can handle it, is there a particular reason why you would want to discretize it? |
H: How to model & predict user activity/presence time in a website
I need to make a prediction model based on some historical data from a website's user login system. Suppose my dataset has features like user login time and logout time for each day for a specific user. There can be multiple logins and logouts in a day for a specific user: if the user logs in to the website 5 times in a day, there will be five rows in the dataset for that user, and logout works the same way. Now, from the login and logout times, I need to find the active time the user was logged in to the website, as well as predict the inactive time in which the user is not available/present on the website. How can I do this? Which algorithm should I use and which prediction model (linear regression / logistic regression / time series) should I choose in this case? It would be very helpful if you could suggest something, especially how to implement it in R. Thanks.
Edit:
Actually I need to find out/predict a time at which the user is active on the website during the day. I have a dataset with 3 columns: "user_id", "login_time" and "logout_time". Now I am trying to make another column "active_time" in which I compute the user's active time on the website by subtracting the login time from the logout time; there can be multiple such rows, as a user can access the website multiple times in a day. Now I need to predict the time at which the user is active on the website, where active time is the target variable and the login and logout times are the predictors. I am also trying to build a linear regression model for this prediction, but I don't know whether my process is correct for this problem. Can anyone please let me know which type of model I need to build for this prediction? Will it be linear regression, logistic regression or time series?
AI: I suggest approaches based on neural networks for time-to-event data. Depending on your data, you may also find the following resources useful:
https://www.slideshare.net/mobile/datasciencelondon/survival-analysis-of-web-users
https://arxiv.org/abs/1807.04098
https://arxiv.org/abs/1801.05512
https://machinelearningmastery.com/multivariate-time-series-forecasting-lstms-keras/ |
H: NEAT XOR Example gives different results at each run
I have just started learning to use the NEAT algorithm. I thought I understood the basics of NEAT when I read the "Evolving Neural Networks through
Augmenting Topologies" paper and the current Python documentation. However, in practice I'm having trouble.
I have run XOR examples from source (https://github.com/CodeReclaimers/neat-python.git). However, when I run any of "evolve-feedforward.py", "evolve-feedforward-parallel.py" or "evolve-minimal" codes more than once, results are changing. For instance,
When I run "evolve-feeedforward.py" it gives : 1
when I run same code for second time, it gives: 2
and it changes with each run. I have also deleted the generated neat-checkpoint files before each run, but the structure still changes. I would like to understand what causes this; I think I'm missing some essential point here.
AI: Provided each network implements the XOR function approximately, but within reasonable error bounds, then this is NEAT behaving as designed.
Neural networks allow for multiple equivalent solutions. Even using a fixed architecture and stochastic gradient descent to learn a simple function, there is a good chance of ending up with very different weights each time. With NEAT, it also explores alternative architectures, and the search includes many calls to random number generators in order to make decisions.
The main deterministic step in NEAT is the selection process - comparing population members with each other to rank them in terms of fitness. However, as multiple designs of neural network can all solve the XOR problem roughly equally well, it is unlikely this will find the same one each time.
If you need to exactly reproduce an experiment in NEAT, you can achieve that by setting the RNG seed. I think this could just be a call to random.seed() for the library you are using, but some libraries may have other RNG instances that need setting up too. |
H: What is the the cost of combining categorical variables?
I have 2 categorical variables, e.g. state and city. Missing values occur only in city. As opposed to throwing out all observations with missing values for city, or throwing out city altogether, I was considering making a variable location that is a concatenation of the two columns, e.g. state: california, city: LA becomes California_LA & state: california, city: Missing becomes california_None.
I am building a NN and I want to know what the cost of doing this is. Will I be required to do more computation because the unique values in the location variable will be much greater than state or city?
AI: It depends on whether you retain the original columns or not. You are not providing any additional information to the NN either way, so it's just a matter of how many features to compute in each batch. That said, for the same reason, you might as well just use a value (e.g. -1 or 0) for the missing cities. Depending on your implementation language, this is probably quicker and easier to implement in terms of data engineering, and will amount to the same thing in terms of results. Obviously the NN will need numerical values, so effectively you could just label encode the city feature and assign missing values to one of the labels.
There are some very minor considerations, such as if you combine state and city, you will end up with a larger label set for the single column, which, if you normalise to an appropriate range (e.g. 0 to 1), which NN's often benefit from and which can help to avoid exploding gradients, you will have a finer graduation than a column each for city and state. Depending on the size and complexity of your data, this could have an impact on the NN's performance (prediction-wise) but it is likely to be a very minor effect compared to all of the other decisions you'll need to make in the process. |
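For example, a minimal sketch of the label-encoding approach with a placeholder for missing cities (the column names and values are just illustrative):
import pandas as pd

df = pd.DataFrame({"state": ["california", "california", "texas"],
                   "city": ["LA", None, "Austin"]})

# Treat "missing" as its own category, then label-encode both columns
df["city"] = df["city"].fillna("missing")
df["city_code"] = df["city"].astype("category").cat.codes
df["state_code"] = df["state"].astype("category").cat.codes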
H: Logistic regression cost function
In Aurelien Geron's book I found this line
This cost function makes sense because –log(t) grows very large when t approaches
0, so the cost will be large if the model estimates a probability close to 0 for a positive instance, and it will also be very large if the model estimates a probability close to 1
for a negative instance. On the other hand, – log(t) is close to 0 when t is close to 1, so
the cost will be close to 0 if the estimated probability is close to 0 for a negative
instance or close to 1 for a positive instance, which is precisely what we want.
What I don't get is: how will the cost be large if the model estimates a probability close to 0 for a positive instance, and why will it also be very large if the model estimates a probability close to 1 for a negative instance?
AI: The cost function of Logistic Regression, derived via Maximum Likelihood Estimation, is
$$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log h_{\theta}(x^{(i)}) + (1-y^{(i)})\log\left(1-h_{\theta}(x^{(i)})\right)\right]$$
which for a single instance reduces to $-\log(h_{\theta}(x))$ when $y=1$ and to $-\log(1-h_{\theta}(x))$ when $y=0$:
If y = 1 (positive): i) cost = 0 if prediction is correct (i.e. h=1), ii) cost $\rightarrow \infty $ if $h_{\theta}(x)\rightarrow 0$.
If y = 0 (negative): i) cost = 0 if prediction is correct (i.e. h=0), ii) cost $\rightarrow \infty$ if $(1-h_{\theta}(x))\rightarrow 0$.
The intuition is that larger mistakes should get larger penalties.
Further readings, 1,2,3,4. |
H: help with Keras sequential model output
I have trained a sequential model with keras for MNIST dataset and this is the code I've used.
# Create the model
model = Sequential()
# Add the first hidden layer
model.add(Dense(50, activation='relu', input_shape = (X.shape[1],)))
# Add the second hidden layer
model.add(Dense(50, activation='relu'))
# Add the output layer
model.add(Dense(10, activation = 'softmax'))
# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics = ['accuracy'])
# Fit the model
model.fit(X, y, validation_split=.3)
Output:
Train on 1750 samples, validate on 750 samples
1750/1750 [==============================] - 0s - loss: 0.1002 - acc: 0.9811 - val_loss: 0.3777 - val_acc: 0.8800
Can you explain what is loss, acc , val_loss, val_acc? How can I know my model performance from these metrics in output. Please explain if possible.
AI: First will explain the terms loss, acc, val_loss, val_acc and then get into evaluating model performance.
Loss - This metric compares the predictions made by the network on the training data with the actual values. Given that you are using categorical crossentropy as your loss mechanism, the loss is telling you how near or far your predictions are from the true classification values.
Acc - Is the accuracy of the network on the training data. After training on 1750 samples, your network predicts the training labels with 0.98 accuracy.
Val_loss, Val_acc - The metrics are the same but are evaluated on validation data, that is the data that is not part of the training dataset, this will help you evaluate how your network performs on data that it is not trained on, sort of mimicking how your network would work when deployed in the real world on the chosen dataset.
The above metrics help you understand how your network is performing every epoch, giving you an idea on how to improve the performance.
How to perform model performance:
Evaluating model performance requires you to create and manage a held-out test dataset that reflects what you can expect in the real world when deploying your model to actually predict.
You would look at test accuracy, precision, recall, f1 scores, confusion matrix and other metrics based on the problem you are solving to truly evaluate your model performance.
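For example, a minimal sketch of evaluating on held-out data (it assumes you have X_test and y_test, with y one-hot encoded as in training):
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

test_loss, test_acc = model.evaluate(X_test, y_test)

y_pred = np.argmax(model.predict(X_test), axis=1)
y_true = np.argmax(y_test, axis=1)
print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred))  # precision, recall, f1 per class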
Hope this answers your question. Have fun with deep learning networks :) |
H: normalization of probabilities in predicting a poly-neuron output in neural nets
When predicting a poly-neuron output in neural nets - say, predicting multiple handwritten digits and giving an output neuron vector (0.1, ..., 0.9, 0.1, ...) - many use something like softmax (or something like the energy-dependent exponential probability formula in statistical mechanics) to normalize the output vector such that all its components sum up to 1, so that the normalized output vector becomes a probability vector. I doubt the necessity of this normalization, since without it I can equally well predict according to the biggest vector component. Is there anything I overlooked?
AI: You are correct: to pick the predicted class you can simply take the output component with the highest value, and applying softmax does not change which component that is - it only rescales the outputs into a probability distribution that sums to 1. For example, if two output neurons give 0.1 for dog and 0.9 for cat, the prediction is "cat" whether or not you normalize. The normalized probabilities are mainly useful during training (they plug directly into a cross-entropy loss) and for interpreting the outputs as confidences.
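A quick numeric check (a minimal sketch):
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

logits = np.array([1.2, -0.3, 0.8])
probs = softmax(logits)
print(np.argmax(logits), np.argmax(probs))  # same index either way
print(probs.sum())                          # 1.0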
H: How to fill missing numeric if any value in a subset is missing, all other columns with the same subset are missing
There is a clear pattern for two separate subsets (sets of columns): if one value is missing in a column, the values of the other columns in the same subset are missing as well for that row.
Here is a visualization of missing data
My attempts so far: I used the ycimpute library to learn from the other values, and applied IterForest.
I noted that the score of the logistic regression is quite weak (0.6) and suspect IterForest might not be able to learn enough, except from the other subset, which might not be sufficient. For example, the subset with 11 columns might learn from the other columns but not from within its own members, and the same goes for the subset with four columns.
This bar plot shows the quantity of missing values more clearly.
So of course, dealing with the missing values is better than dropping rows, because dropping would affect my predictions - the data I need to predict on contains relatively the same quantity of missing values.
Any better way to deal with these ?
[EDIT]
The nullity pattern is confirmed:
AI: You should try all of:
Using a classifier that can handle missing data. Decision trees can handle missing features both in input and in output. Try xgboost, which does great on kaggle competition winners. See this answer.
Off the shelf imputation routines
Writing your own custom imputation routines ( this option will probably get you the best performance)
Given your pattern of missing values, splitting the problem into four parts and learning classifiers for each.
Custom routines for imputation
Let's call your sets of columns A,B,C and D.
Looking at this explanation of MICE, it seems to benefit from random patterns in missing values. In your case, the chained equations will go only one way and repeated iterations as in MICE may not help. But the highly regular nature of your missing values may make implementing your own variant of MICE easier.
Use rows in set A to fill B. You can write this as a matrix problem $XW = Z$, $X$ are the rows filled in $A\cap B$, $Z$ are the rows filled in $A - B$. These two sets don't intersect and since $B \subseteq A$, this covers all the rows. Learn $W$, crossvalidating and use it to impute B.
Use A and B to impute C.
You're double-dipping on A, but I don't think it's a problem overall. Any errors in A will get double the influence on the result.
Use A,B,C to impute D.
Learn with A,B,C,D with imputed values. Unlike MICE, your error will not be equal for all imputed values, so maybe you want to offset errors due to A by using the four data sets with different weights. "Rows with A are all original data, so this gets a higher weight". "Rows with B get a small penalty, because I have less data."
These four weights will be learned by another "stacked" classifier, sort of similar to the next section.
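A minimal sketch of the first imputation step (using the always-present columns A to fill the columns in B); df, A_cols and B_cols are placeholders for your DataFrame and column lists:
from sklearn.linear_model import Ridge

# Rows where subset B is observed are used to learn the mapping A -> B
mask = df[B_cols[0]].notna()
reg = Ridge().fit(df.loc[mask, A_cols], df.loc[mask, B_cols])
df.loc[~mask, B_cols] = reg.predict(df.loc[~mask, A_cols])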
Stacked classifiers
A possible disadvantage for imputing is that imputing may be inaccurate, and in the end you have different errors on different data points. So, skip imputing and just predict.
Instead of sorting columns in the order of most filled to least filled, sort the rows, i.e. data points, in the order: most columns to least columns.
Then you have four sets of data. Train a classifier for each one: one that uses all the data but fewer features, then one that uses more features but less data, and so on, until the last one which uses the most features but the least data. Which is individually the best - more data or more features? That's an empirical question that depends on your dataset.
After getting the four classifiers, combine them with another linear classifier on top (the "stacked" classifier). It may decide to give more weight to the classifier with the most features, or the classifier with the most data, we'll see (I'm betting on most data). But, you ARE using all the features and ALL the data, and in the optimal ways. I think this should be a reasonable baseline, at least.
You could also try chaining them, start from the last classifier using (least data, most features). The next classifier uses more data but fewer features. For some (the common) data, it has a new feature, 0 if the data point is "new" and $y_0$ if it comes from the old set.
There are three kinds of ensemble methods: bagging, which randomly subsamples the data to train each classifier (helps with very noisy data and gives lower variance); methods like Random Forests, which also randomly drop columns; and boosting, which chains learning - you predict the values (with anything, including bagging and Random Forests), then train another model of the above type to predict the residuals, and so on.
You can look up the literature of these but honestly, those four classifiers (max data, min features),..., (min data, max features) can be easily generated with any library. Then use emsemble learning techniques to chain or stack classifiers. |
H: Higher dimension data visualization in Matlab/Octave
I am working on sparse recovery for a classification task. I use the Pine hyperspectral dataset, which is freely available; the image contains 200 dimensions (depth/channels/bands). In addition, this dataset contains labels (ground truth information). I extract the corresponding pixels for each class and now I want to visualize them to see how they really look, but I do not know how to visualize them across all 200 channels. I did this visualization in 3D, but I want to know how I can go beyond 3D.
Here is my code for the 3D case; I would appreciate any help with visualizing the higher-dimensional data.
Note: I simulate the dataset.
clear all
% Number of classes = 5
B = [1 1 2; 2 3 4; 2 1 3; 4 3 4; 5 5 3];
B_vec = reshape(B, 5*3, 1);
% Image with 200 depth
A = rand(5,3, 200)*100;
A_vec = reshape(A, 5*3, 200);
%Indexing the desire classes.
indx = find(or(B==1, B==4, B==3));
class_num = B_vec(indx);
%Extracting the corresponding pixels (base on indx)from 200D image.
for k = 1: length(indx)
img_value(k,:) = A_vec(indx(k),:);
end
G_idx_1 = (find(class_num==1));
for k = 1:length(G_idx_1)
C_1(k,:) = img_value(G_idx_1(k),:);
end
G_idx_3 = (find(class_num==3));
for k = 1:length(G_idx_3)
C_3(k,:) = img_value(G_idx_3(k),:);
end
G_idx_4 = (find(class_num==4));
for k = 1:length(G_idx_4)
C_4(k,:) = img_value(G_idx_4(k),:);
end
X1 = [C_1'];
X2 = [C_3'];
X3 = [C_4'];
plot3(X1(1,:), X1(2,:), X1(3,:),'bo'); axis on; grid on; hold on
plot3(X2(1,:), X2(2,:), X2(3,:),'go');
plot3(X3(1,:), X3(2,:), X3(3,:),'ro'); hold off;
title('class1, class2, class3')
AI: You can use t-SNE to perform dimensionality reduction to create a 2-D or 3-D embedding space for data visualization purposes.
For Matlab you can check the implementation and the example. |
H: How to sort list by parameter in python?
I have a list of employee records. Each tuple of the list represent a person's record, which includes his name, ID, and age.
For example,
emp_records = [('Karim',100, 45), ('Rahim',10, 30),('Salim', 300,60),('Abu',50,35)]
Now, I want to sort emp_records based on ID.
AI: You can use a lambda function as the key argument to specify the field by which you need to sort the records.
For example,
sorted_list = sorted(emp_records, key=lambda emp: emp[1])  # index 1 is the ID
print(sorted_list)
You can also use the operator module. It has the itemgetter function, which takes an index as argument and sorts based on that field.
For example,
from operator import itemgetter
sorted_list = sorted(emp_records, key=itemgetter(1))  # index 1 is the ID
print(sorted_list) |
H: How to sort two dimensional array in Java?
I have a two dimensional double array. I need to sort it based on column 1 in descending order. How can I sort my two dimensional array in descending order?
Sample data and code are the following:
package mypro.com;
public class SortDoubleArrary {
public static void main(String agrs[]) throws Exception{
double[][] testdatset=new double[5][2];
testdatset[0][0]=20.0;
testdatset[0][1]=0.20;
testdatset[1][0]=100.0;
testdatset[1][1]=0.50;
testdatset[2][0]=10.0;
testdatset[2][1]=0.95;
testdatset[3][0]=220.0;
testdatset[3][1]=0.35;
testdatset[4][0]=140.0;
testdatset[4][1]=0.10;
}
}
Expected output looks like the following
10.0 0.95
100.0 0.5
220.0 0.35
20.0 0.2
140.0 0.1
AI: There is an overloaded sort method in java.util.Arrays class which takes two arguments: the array to sort and a java.util.Comparator object.
You can add the following lines of code to your program to get the expected result.
import java.util.Arrays;
import java.util.Comparator;
Arrays.sort(testdatset, new Comparator<double[]>() {
@Override
public int compare(double[] o1, double[] o2) {
return Double.compare(o2[1], o1[1]);
}
}); |
H: For text classification that has innumerable features, how do I choose the number of neurons and layers for MLPClassifier?
In my use case of text classification (identify the author from a subset of 10 authors), I find that post all processing with trigrams, there are a 100 thousand and odd features with nearly 50k observations/samples to be trained.
It leaves me baffled to decide on the number of neurons and layers to configure for my MLPClassifier of scikit-learn. If I try a number like 250 neurons with just a single layer, my laptop hangs in about 5 minutes :(
Your advice based on your experience is highly appreciated.
AI: As you know, you can reduce the number of features by choosing the most discriminative ones.
If you used scikit-learn vectorizers like the TF-IDF one, they have the parameter max_features, which keeps only the top $n$ features for you. But my point is something else:
All BoW models, which I guess you used, are unsupervised. I strongly recommend that you use mutual information between features and targets (you may also search "Supervised TF-IDF" to get more insight) and choose your best features more effectively. In the end you still need to choose $n$ neurons for the input layer, but this way you may be able to get the most out of a smaller number of features.
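A minimal sketch of supervised feature selection with mutual information (it assumes X is your vectorized feature matrix, y the author labels, and the value of k is just illustrative):
from sklearn.feature_selection import SelectKBest, mutual_info_classif

selector = SelectKBest(mutual_info_classif, k=5000)  # keep the 5000 most informative features
X_reduced = selector.fit_transform(X, y)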
If you are familiar with autoencoders, they are also a pretty strong way to reduce the number of features effectively.
Hope it helps! |
H: Free Service for Alpha Zero training
I'm an AI student I need to train a deep neural network using the Alpha Zero (Silver et al) for a simple game using this implementation: http://web.stanford.edu/~surag/posts/alphazero.html. I was wondering if any cloud provider like Google or Amazon offers a free trial which suffice to train the model and supports the implementation mentioned above. The game is an Android app called Soccer Stars which is a fairly simple game with simple strategies.
Thank you very much for your time.
AI: You might be able to fit your training into a Google Colab session. You can search for tutorials using different libraries, here is one for PyTorch. Google Colab is essentially a free online Jupyter notebook, and makes K80 GPU available for acceleration.
Sessions on Colab are limited to ~12 hours maximum, so you would need to start with some simple variants and build up to max allowed training time (this is good practice anyway). There is no guarantee that this will be enough for your game, but you should learn a lot even if the resulting agent is not perfect.
It is not clear whether you have already figured this out, but you most likely will not be able to host the Android App natively to run as the environment, and will need to code some kind of simulation or re-implement the game. I would also assume it is a card, board or turn-based strategy game with perfect information, if you are hoping to use an AlphaZero-based learning agent with it (AlphaZero could probably be adapted to video games, but I would not expect it be as efficient as simpler algorithms without a planning phase, such as A3C in that case). |
H: Classification vs Regression Algorithms - Should exists algorithms only for Classification and/or Regression
Dummy question:
Are there algorithms that should only be used for classification or regression problems?
For example, should Random Forests only be applied to classification problems and Neural Networks to regression problems?
Thank you for your time :)
AI: Well, you can think of a regression as being a classification with a very large number of ordinal classes; therefore algorithms can be used for both.
Indeed, you can build a regression with a random forest, and a neural network can be used either as a regressor or a classifier depending on the activation function of its output layer.
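For instance, a minimal sketch showing that scikit-learn exposes both flavours of each model:
from sklearn.datasets import make_classification, make_regression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.neural_network import MLPClassifier, MLPRegressor

X_c, y_c = make_classification(n_samples=100, random_state=0)  # discrete labels
X_r, y_r = make_regression(n_samples=100, random_state=0)      # continuous targets

RandomForestClassifier().fit(X_c, y_c)  # random forest as a classifier
RandomForestRegressor().fit(X_r, y_r)   # random forest as a regressor
MLPClassifier().fit(X_c, y_c)           # neural network as a classifier
MLPRegressor().fit(X_r, y_r)            # neural network as a regressor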
H: YOLO layers size
According to the original paper, the input size of the YOLO network is 448x448x3, and after the first filter (7x7x64-s-2) is applied the output shape should be 221x221x192, I suppose. Some sources assert that the output shape is 224x224x192, but how is that possible if we don't use a (2x2x64-s-2) kernel?
And I want to implement it using Keras, but my code doesn't give the correct size for the next layer - it gives (None, 221, 221, 64):
model = Sequential()
# The 1st layer
model.add(Conv2D(filters=64, kernel_size=7,
strides=2, input_shape=(448,448,3)))
model.add(LeakyReLU(alpha=0.1))
AI: Usually, when we use a CNN, we apply padding with the convolution; that way, the activation maps keep the same spatial size as the input (or exactly half of it when the stride is 2).
Look at this video to understand how padding works :
Andrew Ng course on Coursera about padding (you need an account to watch the full video )
In Keras, Conv2D layer have an argument called 'padding', here is a link to the documentation :
Convolutional layer documentation of Keras
Be aware that they say setting padding to 'same' while using a stride different from 1 (as in your case) can be inconsistent depending on the backend you use. I'll let you try it and check whether the following layers have the right shape.
That way, your conv layer output should be 224x224 like in the paper (and 112x112 after the maxpool layer)
Note: be careful with the number of filters. You set it to 64, but in the paper the depth shown is 192 (I guess it's 64*3 as there are 3 channels?).
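For reference, a minimal sketch of the first block with 'same' padding (keeping the 64 filters from your code; swap in 192 if you want to match the figure in the paper):
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, LeakyReLU

model = Sequential()
model.add(Conv2D(filters=64, kernel_size=7, strides=2, padding='same',
                 input_shape=(448, 448, 3)))     # -> (None, 224, 224, 64)
model.add(LeakyReLU(alpha=0.1))
model.add(MaxPooling2D(pool_size=2, strides=2))  # -> (None, 112, 112, 64)
model.summary()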
H: YOLO pretraining
I'm implementing YOLO network and have some questions. In the original paper the authors say: "For pretraining we use
the first 20 convolutional layers from Figure 3 followed by a
average-pooling layer and a fully connected layer". And also they report that they use ImageNet 1000 classes dataset and 224x224 input size instead of 448x448
My questions are the following:
1) What is the size of average-pooling layer kernel? 2x2?
2) How do authors reduce the input size to 224x224? Do they omit the 1st layer?
AI: 1) The goal of using an average pooling layer (at least here) is to end up with a vector that can be fed into the fully connected layer.
In YOLO, the layer before this point is 7x7x1024, and what follows is a fully connected vector (4096 in the detection network, and the 1000-class output in the pretraining setup). That means you need an average pooling layer with a 7x7 kernel - i.e. a global average pooling - which collapses the 7x7x1024 feature map into a 1x1x1024 vector.
Maybe look at this explanation of Global Average Pooling by Alexis Cook.
2) I don't really understand your second question, so feel free to comment if I am answering it wrongly:
The 224x224 dimension is for the pretraining of the network. First, they train the network for image classification on ImageNet, like networks such as VGG, Inception or DenseNet. When the pretraining is done, they increase the input resolution to 448x448 and add additional layers on top of the pretrained convolutional layers, then train the network again for detection.
H: Does image resizing lower the prediction accuracy of MLP?
I am implementing a vanilla neural network (MLP) to do image classification in python using tensorflow on images of honey bees to detect their health status. The images in my dataset are of different shapes and sizes, so I decided to do image resizing using cv2. All my images are now of the same size (64 by 64) but some of them have been stretched/shrunk due to resizing. Does this have an effect on the low prediction accuracy I am getting from my MLP?
AI: Generally speaking, it highly depends on your data. If the images are of digits, resizing may not hurt that much, but for images like cats or dogs you may throw a lot of information away by resizing to that size.
To answer your question: yes, it can. The reason is that aggressive resizing can lead to a high Bayes error - it simply means that even you, as an expert, cannot say what the images show, and consequently it is not possible for the network to learn from them. You can look at the resized images yourself and judge whether there is anything left to be learned. For instance, what is the difference between the sky and the sea in that case? Can a $64\times64$ image represent it? Could you as an expert tell them apart without any previous knowledge?
H: Outputs of an LSTM Cell
From each cell of an LSTM, what are the outputs and what do they signify? I understand that there will be three outputs: a long-term memory, a short-term memory and an output. But I am a little confused by Colah's blog, which can be found here. There he shows that there will be three outputs; one is the long-term state and the other two outputs are exactly the same. What is the use of two outputs being the same?
AI: The outputs are the "cell state" (the long-term memory), which is passed only to the next LSTM cell, and the "hidden state" (the short-term memory), which is passed to the next cell and is also emitted as the output of the layer (in case the LSTM should return sequences). That is why two of the arrows in Colah's diagram are identical: the hidden state is simultaneously the cell's output and the short-term state carried to the next time step.
H: Does policy optimization learn policies to make better actions with higher probability?
When I talk about policy optimization, I am referring to the following picture, where it is linked to DFO/Evolution plus Policy Gradients.
I would like to know: is it correct to say that Policy Optimization learns policies so as to make better actions with higher probability?
Also, what is the location of Proximal Policy Optimization in the picture?
AI: The image in your question looks to me like a loose hierarchy explaining how various Reinforcement Learning methods relate to each other. At the top are broad categories of algorithm based on whether they are value-based or policy-based, towards the bottom are more specific methods.
There is more than one way to categorise and split RL algorithms, and it would be messy to try and include all the ways that they relate to each other. Just bear in mind that it is a very rough guide.
The main difference between value-based and policy-based methods is:
Value-based methods learn a value function (by interacting with the environment or a model of it), and consider the optimal policy to be one that takes actions with maximum value.
Policy-based methods learn a policy function directly, and many can do so without considering the estimated value of a state or action (although they may still need to calculate individual values or returns within episodes).
Note Actor-Critic is linked to both headings, as it learns both a policy function (the "actor") and a value function (the "critic").
I would like to know is it correct to say: Policy Optimization learns policies to make better actions with higher probability?
Yes that is broadly correct, although you don't define "better". A policy function typically returns some probability distribution over possible actions for any given state. When learning, it will tend to increase probabilities of actions that resulted in better returns (discounted sums of rewards) and reduce probabilities of actions that did not. This may be a very random, high variance process though, depending on the environment.
There are exceptions. Some policy-based methods learn a deterministic policy $a=\pi(s, \theta)$, and adjust the value based on adding some noise to this function to explore and adjusting the function based on the results of the different action. These don't behave like your statement (because there is no probability to make higher).
Also, what is the location of Proximal Policy Optimization in the picture?
Proximal Policy Optimization is definitely a policy-based method, and it does use an estimate of a value function too for updates (in this case a value called advantage which you also see used in Advantage Actor-Critic).
In the diagram I would probably place it under Actor-Critic Methods in a new row as a specific example of one. However, it does have some significant variations from "vanilla" Actor-Critic, based on how it restricts large changes to the policy function. |
H: How can I augment my image data?
What are the correct and common ways to normalize image for CNN?
I used to work with text and it was pretty straightforward. Removing stop words, clean text from noise, tokenization, stemming etc. There is no problem with length we can easly add padding. I am not sure how to treat images.
Grey scale is much better, less data as input.
Should I rotate each image for random angle? How to pick right size of image in my traing dataset? What about adding additional filters?
(my goal is to classify 2 classes with tensorflow)
AI: You begin by asking about image normalisation, but then refer to other techniques, which I believe all fall under "image augmentation". So I will answer the more general question:
how can I perform image augmentation to improve my model?
I would generally say that the more augmentation you can apply, the better. A caveat to that statement is that the augmentations must make sense for your target case. I will explain at the end what I mean by that.
Let's list some ways to augment an image, along with some common values:
Augmentations
1. Flipping
Starting with perhaps the easiest to implement, you can (easily) flip images:
horizontally: makes an image of an arrow pointing left point in the opposite direction - to the right, and vice-versa
vertically: makes an arrow pointing down point up, and vice-versa.
It is more difficult, due the general rectangular shape of images, but you could flip an image diagonally, but I haven't seen anyone do that really. It is getting close to the idea of rotation, I suppose.
2. Rotation
In this case, we simply rotate the image about its center by a random number of degrees, usually within a range of [-5, +5] degrees. Even tiny small angles, which we can maybe hardly perceive, this is a simple trick that can add a lot of rubustness to a model.
3. Crop
We take "chunks" of an image, either at random or using a pre-defined pattern. This helps the model perhaps focus on certain areas of images and not be overwhelmed by perhaps unimportant features. It also simply creates more data.
You might also incorporate this augmentation method as a means to resize your input images to fit into a pre-trained model, which took input images of a different size to your data.
4. Normalisation
This is more of a numerical optimisation trick that one which can really be interpreted from a visual perspective. Many algorithms will be more stable whilst working with smaller numbers, as values are less likely to explode. Numbers closer together (in the sense of continuous space with a linear scale) are also more liekly to produce a smoother optimisation path.
One way of performing normalisation is:
first computing a mean and standard deviation of the images, then
(from all images) we subtract the mean and divide by the standard deviation.
This will produce images whose pixel values lie over some range, but have a mean value of zero. Have a look here for some other methods and more discussion. You can normalise e.g. over whole images or just over the separate colour channels (RedGreenBlue).
One thing to keep in mind is that the values used for normalisation must come from the training dataset itself, and must not be computed over the entire dataset including the validation/test datasets. This is because that information should not be passed to the model in any way - it is kind of cheating.
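A minimal sketch of that idea (assuming image arrays X_train and X_val of shape (N, H, W, C)):
import numpy as np

# Statistics come from the training set only
mean = X_train.mean(axis=(0, 1, 2))  # per-channel mean
std = X_train.std(axis=(0, 1, 2))    # per-channel standard deviation

X_train_norm = (X_train - mean) / std
X_val_norm = (X_val - mean) / std    # reuse the training statistics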
5. Translations
Simply move the image up, down, left or right by some amount, such as 10 pixels, or again within a range of [-5, +5] % of the image size. This will leave empty pixels along one or two edges, which must be filled with another colour because part of the picture has moved out of the frame. It is common to use black or white for these "empty" parts, but you could also use the average pixel value or even crop the shifted image to remove them.
6. Rescaling
This is a less obvious augmentation step, but can add value because the model will be able to extract different features (more or less general) from larger and smaller images. A small image without much detail would only allow the model to learn higher level, or hazier features and not be able to focus on specific details that are available in high resolution images.
Methods such as progressive resizing, commonly used for training GAN architectures, start with smaller images and slowly work up to larger images. This can be done to keep training as stable as possible (in GANs it can increase robustness against mode collapse), but also coincides with the notion that early layers in a network learn high-level features, and layers deeper into e.g. a convolutional network learn representations that are more detailed. Why do the early layers require high resolution images? Let's give them low resolution images, train quicker and hopefully generalise slightly better!
7. Be creative!!
I once made a model for a self-driving car. I knew that the track where the model would be tested had many trees and quite a few high walls, which both created shadows over the road. These shadows looked a lot like straight lines representing the edges of the road! So I added shadows to my images, at random intensities and random angles with random sizes. The car learnt that a change in brightness over a straight line did not necessarily mean the edge of the road and hence stopped freaking out when it reached one!
Think about the artefacts of your training data and the situations your model will face in the validation data, and try to incorporate them. This is really like feature engineering, incorporated into a model via augmentation... and it can help a lot!
8. Examples
Just to add a nice picture, here is a great display of how interesting variations can be made from a single image, taken from the Keras tutorial, linked below. The original image is top left and the remaing 7 are the output of combinations of a set of random augmentations:
You can have a look at an article like this, which explains augmentation quite well, with examples.
Another great example here, part of a series of articles, goes through augmentation possibilities in some detail.
Implementations
1. Keras: ImageDataGenerator
This is a single class, where you can say which augmentation steps to apply and (if relevant) how much. It allows very easy integration of augmentation into a model.
Check out the official Keras tutorial for some great examples with images.
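A minimal sketch of what that can look like (the exact ranges are just illustrative, and a compiled model plus X_train/y_train are assumed):
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=5,         # random rotations in [-5, +5] degrees
    width_shift_range=0.05,   # horizontal translations of up to 5%
    height_shift_range=0.05,  # vertical translations of up to 5%
    horizontal_flip=True,     # random horizontal flips
    rescale=1. / 255)         # simple rescaling of pixel values to [0, 1]

model.fit_generator(datagen.flow(X_train, y_train, batch_size=32),
                    steps_per_epoch=len(X_train) // 32, epochs=10)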
2. PyTorch: Transforms
You can build a small pipeline of transformations, gathering them together into a Compose object, to have fine-grained control over each augmentation step, its parameters and with which probability it is applied.
Have a look at this tutorial for an example of combining Rescale and RandomCrop.
3. Augmentor: Augmentation library:
This standalone library allows you build up great augmentation pipelines, which then hand off the images into a model. There are a great number of possibilities implemented, such as random warping/distortion, which looks really funky:
It also look as easy to use as the Keras solution shown above!
Going back to the start
Now going back to my opening statement, the reason we want to apply as many augmentations as possible, is because it is synthetically creating more data for our model to learn from. We are trying to teach the model a distribution of possible data inputs and make it learn how to produce the correct output. We usually do this on a training set and then ask for its predictions on a validation set, which contains unseen data. The more data that the model has seen from the distribution/function that generates the data, the less surprised and better it will perform on the "unseen" data. We want our model to be as robust as possible to make kinds of alterations to data, and exposing it to more of that helps.
The caveat, that we only use relevant augmentations, is necessary because as soon as we start using augmentations that skew the input distribution somehow (e.g. showing the model images that make no sense1), we are teaching the model a distribution it will never face at validation time, which can hurt performance.
So use as much augmentation as possible, within the constraints of reality for your problem; as well as the time it takes to train the model ;-)
1 - Imagine training a model to detect cars on a road. If we perform a vertical flip, the images shown to the model will contain cars that are essentially driving upside down, "on the sky". This really doesn't make sense and will never occur in a realistic validation set or in the real world.
H: What does the embedding mean in the FaceNet?
I am reading the paper about FaceNet but I can't get what does the embedding mean in this paper? Is it a hidden layer of the deep CNN?
P.S. English isn't my native language.
AI: Assume you have features which lie in an $R^n$ space, e.g. your input is a picture with $28 \times 28$ pixels, then $n$ would be $28 \times 28 = 784$.
Now you can "embed" your features into another $R^d$ space, where often $d < n$. This way you learn a rich representation of your input. When you compress your $784$ input pixels to, let's say, $64$ values, you have compressed your input by more than a factor of $10$ and can eliminate redundant/useless features.
This embedding is of course learned in such a way that the $64$ compressed values still retain the information needed to (approximately) reconstruct, or at least discriminate between, the original $784$-pixel inputs.
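As a rough illustration (my own minimal sketch, not the actual FaceNet architecture, which uses a deep CNN trained with a triplet loss), an embedding layer can be as simple as a dense mapping from the 784 inputs down to 64 values:

from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inputs = Input((784,))                 # flattened 28x28 image
embedding = Dense(64)(inputs)          # 64-dimensional embedding of the input
encoder = Model(inputs=inputs, outputs=embedding)
# encoder.predict(x) now returns a 64-d vector (the "embedding") for each input image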
H: Do we need to add the sigmoid derivative term in the final layer's error value?
I have been studying professor Andrew Ng's Machine Learning course on Coursera. Currently, I am trying to prove the formulas for backpropagation, which is mentioned in Week 5 (in this document). Clearly, it mentions that
$\delta^{(L)} = a^{(L)} - y$.
However, I have also come across this video on YouTube. At 4:30, it states that $\delta^{(3)} = -(y - \hat{y}) \cdot f'(z^{(3)})$, where $f$ is the sigmoid function. Since $L = 3$ and $\hat{y} = a^{(L)}$, this means that $\delta^{(L)} = (a^{(L)} - y) \cdot f'(z^{(L)})$. I've also derived the equations myself and got exactly the same result as in the video. But this has an additional $f'(z^{(L)})$ term, which is not the same as on Coursera. I have even implemented this equation in the programming assignment for Week 5, but it only works without the term.
So my question is why Coursera's formula doesn't have the last term $f'(z^{(L)})$ and why the formula works without it. It seems quite irrational not to have the term in the formula. Can anyone help me explain this, or correct me if I'm wrong? Any help would be greatly appreciated, thanks.
AI: Both courses are correct.
For the output layer only, and depending on which activation function and which loss function are in use, sometimes the effects of $f'(z^{(L)})$ are perfectly cancelled out by the calculation of $\frac{\partial J}{\partial \hat{y}}$
This is often arranged deliberately to simplify calculations. So:
MSE loss $(\hat{y} - y)^2$ is paired to linear activation function $f(x) = x$
Binary cross entropy loss $-y\text{log}(\hat{y}) - (1-y)\text{log}(1-\hat{y})$ is paired to sigmoid activation function $f(x) = \frac{1}{1+e^{-x}}$
Multi-class cross entropy loss $-\mathbf{y}\text{log}(\hat{\mathbf{y}})$ is paired to softmax activation function $f(x_i) = \frac{e^{x_i}}{\sum_j e^{x_j}}$
In all these cases, the end result is that, for the output layer logits only:
$$\frac{\partial J}{\partial z^{(L)}} = \hat{y} - y$$
Each of these results can be verified with some differentiation. E.g. for the sigmoid/binary-cross-entropy pair, it is possible to show that $\frac{\partial J}{\partial \hat{y}} = \frac{d}{d\hat{y}}\big({-y}\,\text{log}(\hat{y}) - (1-y)\text{log}(1-\hat{y})\big) = \frac{\hat{y}-y}{\hat{y}(1-\hat{y})}$, while the sigmoid's own derivative is $f'(z^{(L)}) = \hat{y}(1-\hat{y})$.
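Written out with the chain rule (using $\hat{y} = f(z^{(L)})$), the cancellation is explicit:
$$\frac{\partial J}{\partial z^{(L)}} = \frac{\partial J}{\partial \hat{y}} \cdot f'(z^{(L)}) = \frac{\hat{y}-y}{\hat{y}(1-\hat{y})} \cdot \hat{y}(1-\hat{y}) = \hat{y} - y$$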
For all the other layers, you absolutely need to include the factor of $f'(z^{(l)})$ for whatever activation function is in use on the layer you just back propagated through. |
H: Why would a fake feature with random numbers get selected in feature importance?
I'm using a sklearn.ensemble.RandomForestClassifier(n_estimators=100) to work on this challenge:
https://kaggle.com/c/two-sigma-financial-news
I've plotted my feature importance:
I created a fake feature called random which is just numbers pulled from np.random.randn(). Unfortunately, it seems to have quite significant feature importance.
How am I supposed to interpret this? I had expected it to be at the bottom.
PS xgboost seems to discard this feature, as it should.
AI: Scikit-learn's random forest feature importance is based on mean decrease in impurity, which is fast to compute and faithful to the original creation of the Random Forest method. In short, the default feature_importances_ gives a numerical justification of the Random Forest's representation of the feature importance using its native metric of construction. Like you saw, this metric has the drawback that it can say noise is an important feature. So you may want to consider other feature importance methods, like permutation feature importance, which will give you a more apples-to-apples comparison with other models you will test. There are pros and cons to many of these methods so be aware of them. |
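If you want to check this yourself, scikit-learn (0.22+) ships a permutation importance utility; here is a minimal sketch on synthetic data with a deliberately added noise column (everything here is made up for illustration):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, n_informative=3, random_state=0)
X = np.hstack([X, np.random.randn(500, 1)])        # add a pure-noise "random" feature
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
result = permutation_importance(clf, X_val, y_val, n_repeats=10, random_state=0)
print(result.importances_mean)                     # the noise column should now score near zero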
H: What kind of "vector" is a feature vector in machine learning?
I'm having trouble understanding the use of Vector in machine learning to represent a group of features.
If one looks up the definition of a Vector, then, according to wikipedia, a Vector is an entity with a magnitude and direction.
This can be understood when applying Vectors to for example physics to represent force, velocity, acceleration, etc...: the components of the Vector represent the components of the physical property along the axes in space. For example, the components of a velocity vector represent the velocity along the x, y and z axes
However, when applying Vectors to machine learning to represent features, then those features can be totally unrelated entities. They can have totally different units: one feature can be the length in meters of a person and another can be the age in years of the person.
But then what is the meaning of the Magnitude of such a Vector, which would then be formed by a summation of meters and years? And the Direction?
I do know about normalization of features to make them have similar ranges, but my question is more fundamental.
AI: I'm having trouble understanding the use of Vector in machine learning
to represent a group of features.
In short, I would say that a "feature vector" is just a convenient way to speak about a set of features.
Indeed, for each label 'y' (to be predicted), you need a set of values 'X'. And a very convenient way of representing this is to put the values in a vector, such that when you consider multiple labels, you end up with a matrix, containing one row per label and one column per feature.
In an abstract way, you can definitely think of those vectors as belonging to a multi-dimensional space, but (usually) not a Euclidean one. Hence all the math applies, only the interpretation differs!
Hope that helps you. |
H: Does fine-tuning of transferred layers perform better than frozen transferred layers?
I recently learned concepts of transfer learning. Is it necessarily true that fine-tuning of transferred layers perform better than frozen transferred layer? why?
AI: Transfer learning means applying the knowledge that a machine learning model holds (represented by its learned parameters) to a new (but in some way related) task, whereas fine-tuning means taking a model that has already learned something before (i.e. been trained on some data) and training it some more, possibly on different data.
From this we can conclude that if we want to reuse what one model has learned and only adapt a specific part of it, we can keep the transferred layers frozen and retrain the rest with only a small amount of data, whereas if we have sufficient data for the new task, fine-tuning the transferred layers as well will come to the rescue. Refer to the previous question and this article.
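A minimal Keras sketch of the two options (VGG16 weights and a 10-class head are just placeholders for illustration):

from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model

base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # option 1: frozen transferred layers

x = GlobalAveragePooling2D()(base.output)
out = Dense(10, activation='softmax')(x)     # new head for the new task
model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy')

# option 2: fine-tuning -- unfreeze (part of) the base and re-compile
# with a much lower learning rate before continuing training:
# base.trainable = True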
H: Branch of data science that covers event based time series?
Let's say I have discrete events in time, e.g. patients getting sick, and I want to predict whether these events are indicators of some other underlying event, e.g. a disease outbreak. Usually, one would transform the event-based time series into a regular time series by, for example, aggregating the counts of patients over each week. However, I feel that a lot of information is lost by this aggregation, and it is also hard to use multiple features of the events, e.g. patients' age, patients' sex, etc. My question is very general: is there a branch of statistics / data science that treats time series as composed of events at arbitrary times rather than aggregated values at evenly distributed intervals?
I tried googling it, but it seemed hard to phrase the question in a way that a search engine understands.
AI: "rather than aggregated values at evenly distributed intervals?" I think that part defines your question very well; i.e that is usually what we generally do in usual time-series problems, merging the events in some discrete time intervals.
What you can do is that, you can give the time distance between the events as features, without giving evenly distributed intervals. For example, the exact time distances between events of every disease outbreak or every patient getting sick and etc. Moreover, you can do some feature engineering; squared time differences or percentage changes of time difference between the differences between the sequential doubles (x, x-1) and (x-1, x-2) where x-n,.....,x-2, x-1, x are the sequential events, not the time.
The relevant concept here seems to be "unevenly spaced time-series", the wiki page Wikipedia: Unevenly spaced time series and this researcher's website http://eckner.com/research.html seems to give enough material to get started with. |
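A quick pandas sketch of that kind of feature engineering (the column names and values are made up for illustration):

import pandas as pd

# hypothetical event log: one row per patient getting sick
events = pd.DataFrame({
    'timestamp': pd.to_datetime(['2019-01-01', '2019-01-03', '2019-01-04', '2019-01-10']),
    'age': [34, 70, 45, 12],
})
events = events.sort_values('timestamp')
events['gap_hours'] = events['timestamp'].diff().dt.total_seconds() / 3600  # time since previous event
events['gap_change'] = events['gap_hours'].pct_change()                     # change between consecutive gaps
print(events)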
H: Understanding Exclusive-OR predictions in Elman network
I have been reading Elman network paper, which can be found Here. in page 185, under Exclusive-OR section it was written as follows.
Notice that, given the temporal structure of this sequence, it is only sometimes
possible to predict the next item correctly. When the network has
received the first bit-1 in the example above-there is a 50% chance that
the next bit will be a 1 (or a 0). When the network receives the second bit (0),
however, it should then be possible to predict that the third will be the XOR,
1. When the fourth bit is presented, the fifth is not predictable. But from
the fifth bit, the sixth can be predicted, and so on.
So, to give a context, author was explaining how we can use networks with memory to form a XOR Gate.
What I don't understand here is this sentence:
"When the network receives the second bit (0), however, it should then
be possible to predict that the third will be the XOR, 1"
How can we be sure that the third element is 1, given that the second element is 0? And again, why can't we predict the 4th element?
AI: I watched this video and then realized that I had missed the way the data is represented. In Elman's XOR example, the sequence is built from pairs of input bits followed by their XOR as the output bit. So the first element of each triple is always unpredictable (a 50% chance), and so is the second, but once we have seen both the 1st and 2nd bits we can predict the third element with certainty, since it is their XOR.
H: How to include class features to linear SVM
I am planning to do a simple classification with a linear SVM. One feature I have is another classification of some sort done previously.
Can I just use this class feature as a 1-hot encoded array? So, e.g. for 3 different classes, I'd have 3 binary features being 0 or 1?
The problem I see is that this feature is not linear but binary. Will this pose a problem? And if yes, how can I somehow transform a binary feature into a linear one?
AI: The quick answer is yes, you can use a liner SVM in presence of an encoded categorical variable.
Short explanation: the linearity of the model has nothing to do with the features themselves; in fact, a "linear feature" doesn't really mean anything. The linearity refers to the model, i.e. the equation that links the target to the features.
The equation
$$y = b*x + q$$
means that y is linear with respect to x, not that the variable x is linear. x is x thats it.
What you should check when dealing with linear models is whether your data is linearly separable, or in other words whether you can separate the different classes with a straight line (a hyperplane in higher dimensions). If that is not the case you are in trouble and should probably think about changing the model.
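A minimal scikit-learn sketch of mixing a one-hot encoded class feature with a numeric feature in a linear SVM (the column names and values are invented for illustration):

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.svm import LinearSVC

df = pd.DataFrame({
    'prev_class': ['a', 'b', 'c', 'a', 'b', 'c'],   # the earlier classification, as a category
    'x1': [0.1, 2.3, 1.1, 0.4, 2.0, 1.5],
    'y':  [0, 1, 1, 0, 1, 1],
})

pre = ColumnTransformer([
    ('onehot', OneHotEncoder(), ['prev_class']),    # 3 binary columns, one per class
    ('scale', StandardScaler(), ['x1']),
])
clf = make_pipeline(pre, LinearSVC())
clf.fit(df[['prev_class', 'x1']], df['y'])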
H: How to use a NN architecture that is too big for GPU?
Initially posted in Stack Overflow.
I would like to implement a model which is actually 2 neural networks stacked together. However, the size of these 2 architectures is too big to fit in the GPU at the same time.
My idea was the following :
Load the first model and run it for 1 batch
Unload the first model from GPU, load the second model
Run the second model from the output of the first model
Unload the second model from GPU
Repeat for every batch
I actually don't need to train the first model, since it's pre-trained. But I need to train second model.
Is it possible to do something like this ? Is my approach correct ? What are the pitfalls I should be aware of ? What about performance ?
Edit
I already tried the idea of computing the output of the first model for the whole dataset up front, and then using it as input for the second model. However, the output of the first model is really big, and I don't have the space available for storing the whole pre-processed dataset. That is why I wanted to do it batch by batch.
Edit 2
After the very nice answer from @Gal Avineri, just one more precision :
I would like to implement my architecture using only one GPU.
AI: I suggest a method similar to what @ignatius offered.
Since you don't need to train the first model and only the second one you could do the following:
Use the first model over the entire dataset and save the extracted results in your memory.
Train the second model using the results from the previous step.
This way you will only have to load one model at a given time.
In addition this will make the training of the second model to be faster.
Edit 1
(the op has specified he does not have enough memory for the inference results)
In this case I can suggest parallelizing the inference of the first model with the training of the second model.
This can be achieved through prefetching.
Let's denote the original dataset as data1 and the corresponding inference results from the first model as data2.
Let's also denote the first model as M1 and the second model as M2.
Thus you can do the following:
load M1 to cpu, and M2 to gpu.
Draw a batch from data1 and use M1 to prepare a batch of data2 for M2.
Use the gpu to train M2 on the batch received from the previous step
while step 3 is being executed, prepare the next batch of data2 by executing step 2.
In this way, while M2 is being trained on a batch, the next batch is being prepared. This is called prefetching the next batch.
This will parallelize the training of M2 and the making of data2.
The method I described above is very simple to implement using the tensorflow tf.data API.
Here is a code example:
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.data import Dataset

# build the pre-trained first model (M1) on the CPU
with tf.device('/cpu:0'):
    m1_in = Input((10,))
    m1_out = Dense(100)(m1_in)
    m1 = Model(inputs=m1_in, outputs=m1_out)

# build the second model (M2), the one that is actually trained, on the GPU
with tf.device('/gpu:0'):
    m2_in = Input((100,))
    m2_out = Dense(5, activation='softmax')(m2_in)
    m2 = Model(inputs=m2_in, outputs=m2_out)

m2.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

features = np.random.rand(2000, 10).astype('float32')   # 2000 samples, 10 features each
labels = np.random.randint(2, size=2000)

dataset = Dataset.from_tensor_slices((features, labels))

def preprocess_sample(features, label):
    # run M1 (on the CPU) to turn a raw sample into the input of M2
    inference = m1(tf.expand_dims(features, axis=0))[0]
    return inference, label

dataset = dataset.shuffle(2000).repeat().map(preprocess_sample).batch(32)
dataset = dataset.prefetch(1)

m2.fit(dataset, epochs=100, steps_per_epoch=2000 // 32)
H: What is meant by Distributed for a gradient boosting library?
I am checking out XGBoost documentation and it's stated that XGBoost is an optimized distributed gradient boosting library.
What is meant by distributed?
Have a nice day
AI: It means that it can be run on a distributed system (i.e. on multiple networked computers).
From XGBoost's documentation:
The same code runs on major distributed environment(Hadoop, SGE, MPI) and can solve problems beyond billions of examples. The most recent version integrates naturally with DataFlow frameworks(e.g. Flink and Spark). |
H: Why isn't my model learning?
import numpy as np
from keras.datasets import cifar10
from keras.layers import Dense, Activation
from keras.optimizers import SGD
from keras.utils import np_utils
from keras import backend as k
from keras.models import Sequential
(Xtr,Ytr),(Xte,Yte)=cifar10.load_data()
Xtr = Xtr.astype('float32')
Xte = Xte.astype('float32')
Xtr = Xtr.reshape(50000, 3072)
Xte = Xte.reshape(10000, 3072)
Ytr = np_utils.to_categorical(Ytr, 10)
Yte = np_utils.to_categorical(Yte, 10)
model=Sequential()
model.add(Dense(100, input_shape=Xtr.shape[1:]))
model.add(Activation('relu'))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer="sgd", metrics=['accuracy'])
model.fit(Xtr, Ytr, batch_size=200, epochs=30, shuffle=True,verbose=1)
scores = model.evaluate(Xte, Yte, verbose=0)
print("Accuracy: %.2f%%" % (scores[1] * 100))
And the result:
I am trying to create a pretty basic 2-layer NN on CIFAR-10. I know that the data is not preprocessed, but that can't be the reason for learning nothing. Where am I making the mistake?
AI: I'll go through an example that will help you get started. It should get approximately 50% accuracy.
So I keep the code the same as yours for loading the data. The only difference is that I normalize the data to lie between 0 and 1. This is usually recommended to bound the weights more tightly.
import numpy as np
import keras
from keras.datasets import cifar10, mnist
from keras.models import Sequential, model_from_json
from keras.layers import Dense, Dropout, Flatten, Activation
from keras.layers import Conv2D, MaxPooling2D
from keras.optimizers import SGD
from keras.utils import np_utils
from keras.callbacks import ModelCheckpoint
from keras import backend as K
(x_train, y_train), (x_test, y_test)=cifar10.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape(50000, 3072)
x_test = x_test.reshape(10000, 3072)
# The known number of output classes.
num_classes = 10
# Channels go last for TensorFlow backend
x_train_reshaped = x_train.reshape(x_train.shape[0], x_train.shape[1],)
x_test_reshaped = x_test.reshape(x_test.shape[0], x_test.shape[1],)
input_shape = (x_train.shape[1],)
# Convert class vectors to binary class matrices. This uses 1 hot encoding.
y_train_binary = keras.utils.to_categorical(y_train, num_classes)
y_test_binary = keras.utils.to_categorical(y_test, num_classes)
Now let's make our model. Here I use two hidden dense layers instead of one.
model = Sequential()
model.add(Dense(32,
activation='relu',
input_shape=input_shape))
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
Now we can train the model
epochs = 4
batch_size = 128
# Fit the model weights.
model.fit(x_train_reshaped, y_train_binary,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test_reshaped, y_test_binary))
50000/50000 [==============================] - 3s 58us/step - loss: 1.6716 - acc: 0.4038 - val_loss: 1.6566 - val_acc: 0.4094 |
H: Multiple formulas for r squared eval metric for regression
I came across different formulas for R squared in different articles.
R Squared = 1 - RSS/TSS
R Squared = ESS/TSS
RSS -Residual sum of squares.
TSS - Total sum of squares.
ESS - Explained sum of squares.
Can anyone explain which one is the correct one?
AI: Both are correct. R squared tells you how much of the variance of the dependent variable is explained by your model, and this can be written in different ways.
It is like saying that the probability of being alive is 1 minus the probability of dying.
ESS=TSS-RSS
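Spelling it out with the usual decomposition $TSS = ESS + RSS$ (which holds for linear regression with an intercept):
$$R^2 = \frac{ESS}{TSS} = \frac{TSS - RSS}{TSS} = 1 - \frac{RSS}{TSS}$$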
https://en.m.wikipedia.org/wiki/Coefficient_of_determination
H: Pandas: How can I merge two dataframes?
I found (How do I merge two data frames in Python Pandas?), but do not get the expected result.
I have these two CSV files:
# f1.csv
num ano
76971 1975
76969 1975
76968 1975
76966 1975
76964 1975
76963 1975
76960 1975
and
# f2.csv
num ano dou url
76971 1975 p1 http://exemplo.com/page1
76968 1975 p2 http://exemplo.com/page10
76966 1975 p2 http://exemplo.com/page100
How do I merge these for to get the result given below?
# Expected result
num ano dou url
76971 1975 p1 http://exemplo.com/page1
76969 1975
76968 1975 p2 http://exemplo.com/page10
76966 1975 p2 http://exemplo.com/page100
76964 1975
76963 1975
76960 1975
AI: f1.merge(f2, left_on='num', right_on='num', how='outer')
see https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html |
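Note that merging only on num leaves you with two ano columns (ano_x and ano_y); merging on both keys, and keeping every row of f1, reproduces the expected table more closely:

f1.merge(f2, on=['num', 'ano'], how='left')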
H: prediction for a linear sum
I am learning about SVMs in particular linear SVMs through many questions here. However, one problem i faced is that there seems to be no indepth explanation on how does linear SVM works in terms of predicting new data.
I understand that the main purpose of SVM is to find linear separating hyperplane $w^Tx+b$ and a linear SVM is actually a set of super long equation.
Let's consider a 2 class problem : A and B. Suppose $(w^*,b^*)$ are the minimizing hyperplane parameters for a fixed choice of $\lambda$.
Then how do we classify a new, unlabeled test point $x_{test}$? A simple way that I thought would be reasonable is to assign $(w^*)^Tx+b^*>0$ to class A and $(w^*)^Tx+b^*<0$ to class B. But how do we assign the label if it's exactly 0, and what if there is an outlier from the other class, for example? Is this a good way of labeling test data?
AI: If it's exactly zero, you might classify it in either class. When using floating point numbers, there's a nearly 0% probability that an observation would ever have a predicted value of exactly zero. There's always some chance that observations will get the wrong label - that's how statistical models work. If you could find a rule that gives you the right result with 100% certainty, you shouldn't be using a statistical model in the first place.
I'm not sure why you want to "label test data". Test data is by definition labeled data that you set apart in order to get a hopefully unbiased evaluation of your model performance. Perhaps you mean making new predictions? |
H: Does this type of classification exist?
Im fairly new to data science and trying to see if a type of classification exists for my needs.
I understand that a classification into 2 categories will look something like this:
You have 2 desired outcomes and you try to build a model that classifies as 0 or 1. If these models are not 100% accurate then you will:
a) Miss some true values (outside edges of circles)
b) Get some of the wrong values in each category (overlaps between circles)
However, I am looking for something more like this:
In this case, I want to predict only 1, and I dont mind if some 0s are included but want to make sure that as many as possible 1s are predicted.
In my mind, this is effectively just widening the orange circle (the classification of 1s) in the picture.
How can i achieve this?
AI: Instead of formulating the problem with Venn diagrams you could also look at a simple two by two table. Usually the problem is formulated graphically in a different way (see picture below from Wikipedia page). If you are interested in just predicting the occurrence of the value 1 you are just focusing on the sensitivity (or true positive rate) of your classification algorithm.
This is rather straightforward within a ROC analysis framework: you could just select a minimum value for the classifier threshold. However, this comes at the cost of very low specificity. You should also consider the cost-benefit ratio of your sensitivity/specificity results.
https://en.m.wikipedia.org/wiki/Sensitivity_and_specificity |
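In scikit-learn terms, this amounts to lowering the decision threshold applied to the predicted probabilities; a minimal sketch with synthetic, imbalanced data:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

for threshold in [0.5, 0.3, 0.1]:
    pred = (proba >= threshold).astype(int)
    # lower threshold -> more 1s predicted -> higher sensitivity, lower specificity
    print(threshold, recall_score(y_te, pred))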
H: The mix of leaky Relu at the first layers of CNN along with conventional Relu for object detection
First of all, I know the usage of leaky RELUs and some other relevant leaky activation functions as well. However I have seen in a lot of papers on object detection tasks (e.g YOLO) to use this type of activators only at the first layers of the CNN and afterwards a simple RELU follows at the end. Regarding this, how we end up with a model which uses a leak at the first layers and then a conventional RELU at the end?
Secondly, as far as I understand, because of the vanishing gradient problem the neurons at the beginning tend to fall to zero more often than those at the top of the network, and it is then very difficult (or even impossible) to activate them again; wouldn't it be correct to allow the negative gradient through the whole pipeline of the neural network?
AI: how we end up with a model which uses a leak at the first layers and then a conventional RELU at the end?
What matters is adding a non-linearity to the outputs of a neuron. Any function that adds non-linearity and has a usable derivative will work, because that derivative is what lets the error term backpropagate. If you use ReLU or leaky ReLU, the update terms will change but your network will still train fine. The reason the leaky version is used in the mentioned papers is to avoid the dying-ReLU problem, which can happen a lot in regression problems.
About your second question: it usually suffices to use leaky ReLU in the first layers; thanks to them, the chance of deeper neurons getting stuck at zero is not very high, as the results of those papers show. You can use the leaky version all over the network, but in practice plain ReLU trains very fast!
H: How to apply Stacking cross validation for time-series data?
Normally stacking algorithm uses K-fold cross validation technique to predict oof validation that used for level 2 prediction.
In case of time-series data (say stock movement prediction), K-fold cross validation can't be used and time-series validation (one suggested on sklearn lib) is suitable to evaluate the model performance. In this case no prediction shall be made on first fold and no training shall be made on last fold. How do we use stacking algorithm cross validation technique for time-series data?
AI: TL;DR
Time-series algorithms assume that data points are ordered.
Traditional K-Fold cannot be used for time series because it doesn't take into account the order in which data points appear.
One approach to validate time series algorithms is with Time Based Splitting.
K-Fold vs Time Based Splitting
The two graphs below show the difference between K-Fold and Time Based Splitting. From them, the following characteristics can be observed.
K-Fold always uses all data points: each point ends up in a test fold exactly once.
Time Based Splitting only ever uses a fraction of the data points in any single split.
K-Fold lets the test set be any data point.
Time Based Splitting only allows the test set to contain higher (later) indexes than the training set.
K-Fold will, in some fold, use the first data point for testing and the last data point for training.
Time Based Splitting will never use the first data point for testing and never use the last data point for training.
Scikit-learn implementation
Scikit-learn has an implementation of this algorithm called TimeSeriesSplit.
Looking at their documentation, you will find the following example:
from sklearn.model_selection import TimeSeriesSplit
X = np.array([[1, 2], [3, 4], [1, 2], [3, 4], [1, 2], [3, 4]])
y = np.array([1, 2, 3, 4, 5, 6])
tscv = TimeSeriesSplit(n_splits=5)
for train_index, test_index in tscv.split(X):
print("TRAIN:", train_index, "TEST:", test_index)
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
>> TRAIN: [0] TEST: [1]
>> TRAIN: [0 1] TEST: [2]
>> TRAIN: [0 1 2] TEST: [3]
>> TRAIN: [0 1 2 3] TEST: [4]
>> TRAIN: [0 1 2 3 4] TEST: [5] |
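One way to adapt the stacking recipe to this splitter (my own sketch, with random placeholder data and arbitrary base models): fit the base models on each training window, collect their predictions on the corresponding "future" window, and train the level-2 model only on those out-of-fold predictions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import TimeSeriesSplit

X = np.random.rand(200, 5)            # placeholder ordered features
y = np.random.randint(2, size=200)    # placeholder labels

base_models = [RandomForestClassifier(), GaussianNB(), KNeighborsClassifier()]
tscv = TimeSeriesSplit(n_splits=5)

meta_X, meta_y = [], []
for train_index, test_index in tscv.split(X):
    fold_preds = []
    for model in base_models:
        model.fit(X[train_index], y[train_index])
        fold_preds.append(model.predict_proba(X[test_index])[:, 1])
    meta_X.append(np.column_stack(fold_preds))
    meta_y.append(y[test_index])

meta_X = np.vstack(meta_X)
meta_y = np.concatenate(meta_y)

# the level-2 model only ever sees predictions made on "future" data
meta_model = LogisticRegression().fit(meta_X, meta_y)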
H: Building Stacking machine learning model using three base classifiers
I did a stacking using three base classifiers (RF, NB, KNN) and a meta-model (random forest or SVM) using the sklearn library.
But, strangely, each time I change the meta-model I get the same results. Is that normal?
AI: No. Generally speaking, even minor changes should affect your performance, and changing your meta-model should normally have a visible impact on your model's performance.
Two things you can try:
Check for any problems in your code.
Maybe your test set size is really small. For example, if you have 5 test samples, it isn't difficult for all models to get 4/5 right (i.e. 80% accuracy). As your test set grows, differences between the models' performance should start to show up.
H: How to find the count of consecutive same string values in a pandas dataframe?
Assume that we have the following pandas dataframe:
df = pd.DataFrame({'col1':['A>G','C>T','C>T','G>T','C>T', 'A>G','A>G','A>G'],'col2':['TCT','ACA','TCA','TCA','GCT', 'ACT','CTG','ATG'], 'start':[1000,2000,3000,4000,5000,6000,10000,20000]})
input:
col1 col2 start
0 A>G TCT 1000
1 C>T ACA 2000
2 C>T TCA 3000
3 G>T TCA 4000
4 C>T GCT 5000
5 A>G ACT 6000
6 A>G CTG 10000
7 A>G ATG 20000
8 C>A TCT 10000
9 C>T ACA 2000
10 C>T TCA 3000
11 C>T TCA 4000
What I want to get is the number of consecutive values in col1 and length of these consecutive values and the difference between the last element's start and first element's start:
output:
type length diff
0 C>T 2 1000
1 A>G 3 14000
2 C>T 3 2000
AI: Break col1 into sub-groups of consecutive strings. Extract first and last entry per sub-group.
Something like this:
df = pd.DataFrame({'col1':['A>G','C>T','C>T','G>T','C>T', 'A>G','A>G','A>G'],'col2':['TCT','ACA','TCA','TCA','GCT', 'ACT','CTG','ATG'], 'start':[1000,2000,3000,4000,5000,6000,10000,20000]})
df['subgroup'] = (df['col1'] != df['col1'].shift(1)).cumsum()
col1 col2 start subgroup
0 A>G TCT 1000 1
1 C>T ACA 2000 2
2 C>T TCA 3000 2
3 G>T TCA 4000 3
4 C>T GCT 5000 4
5 A>G ACT 6000 5
6 A>G CTG 10000 5
7 A>G ATG 20000 5
df.groupby('subgroup',as_index=False).apply(lambda x: (x['col1'].head(1),
x.shape[0],
x['start'].iloc[-1] - x['start'].iloc[0]))
0 ([A>G], 1, 0)
1 ([C>T], 2, 1000)
2 ([G>T], 1, 0)
3 ([C>T], 1, 0)
4 ([A>G], 3, 14000)
Tweak as needed.
UPDATE: for pandas 1.1+ replace the last part with:
def func(x):
result = {"type":x['col1'].head(1).values[0], "length": x.shape[0], "diff": x['start'].iloc[-1] - x['start'].iloc[0]}
return pd.Series(result, name="index")
df.groupby('subgroup',as_index=False).apply(func) |
H: Which regularization in convolution layers (conv2D)
I am using Keras for a project. I would like to know if it makes any sense to add any kind of regularization components such as kernel, bias or activity regularization in convolutional layers i.e Conv2D in Keras.
If yes, then which regularization is most useful for conv2d layers
Kernel
Bias
Activity
As explained here the regularization techniques are useful for the fully connected(dense) layers. Any such intuition/logic for conv2D?
AI: Nowadays, people don't tend to add much regularisation like $L_2$ to convolutional networks. Usually, regularisation is achieved through the use of techniques like dropout and batch normalisation (although exactly how this regularisation actually happens is not well understood).
Having said that, it may be different for your particular problem. I recommend that after experimenting with dropout and batch norm, you should experiment with $L_2$ and see how the predictive performance is affected on a validation set. |
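For completeness, if you do want to experiment, all three kinds of regularizer can be attached to a Conv2D layer in Keras (the strengths below are just placeholders to tune on a validation set):

from tensorflow.keras import regularizers
from tensorflow.keras.layers import Conv2D

layer = Conv2D(
    32, (3, 3), activation='relu',
    kernel_regularizer=regularizers.l2(1e-4),     # penalises the convolution weights
    bias_regularizer=regularizers.l2(1e-4),       # penalises the biases (rarely useful)
    activity_regularizer=regularizers.l1(1e-5),   # penalises the layer's outputs
)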
H: How max_features parameter works in DecisionTreeClassifier?
What is the parameter max_features in DecisionTreeClassifier responsible for?
I thought it defines the number of features the tree uses to generate its nodes. But in spite of the different values of this parameter (n = 1 and 2), my tree employs both of the features that I have. So what does it change?
max_features = 2
max_features = 1
You can see x1 and x2 are used in both cases
AI: max_features is the number of features to consider each time a split decision is made. Say the dimension of your data is 50 and max_features is 10: each time you need to find a split, you randomly select 10 features and use them to decide which one of those 10 is the best feature to split on.
When you go to the next node you will select randomly another 10 and so on.
This mechanism is used to control overfitting. In fact, it is similar to the technique used in random forest, except in random forest we start with sampling also from the data and we generate multiple trees.
So even if you set the number to 10, if the tree grows deep you may still end up using all the features, but at each individual split the candidate set is limited to 10.
If you compare the definition of the max feature in the decision tree and random forest, you will see that they are the same.
https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html
https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html |
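You can see this per-split behaviour directly; a small sketch with synthetic data (tree_.feature lists the feature index used at each internal node, with -2 marking leaves):

from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
clf = DecisionTreeClassifier(max_features=1, random_state=0).fit(X, y)
print(clf.tree_.feature)   # both features show up across the nodes, even with max_features=1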
H: Why xgboost can not deal with this simple sentence case?
There is only 1 feature dimension, but the result is unreasonable. The code and data are below. The purpose of the code is to judge whether the two sentences are the same.
In fact, the final input to the model is: feature is [1] with label 1, and feature is [0] with label 0.
The data is quite simple:
sent1 sent2 label
我想听 我想听 1
我想听 我想说 0
我想说 我想说 1
我想说 我想听 0
我想听 我想听 1
我想听 我想说 0
我想说 我想说 1
我想说 我想听 0
我想听 我想听 1
我想听 我想说 0
我想说 我想说 1
我想说 我想听 0
我想听 我想听 1
我想听 我想说 0
我想说 我想说 1
我想说 我想听 0
我想听 我想听 1
我想听 我想说 0
我想说 我想说 1
我想说 我想听 0
import pandas as pd
import xgboost as xgb
d = pd.read_csv("data_small.tsv",sep=" ")
def my_test(sent1,sent2):
result = [0]
if "我想说" in sent1 and "我想说" in sent2:
result[0] = 1
if "我想听" in sent1 and "我想听" in sent2:
result[0] = 1
return result
fea_ = d.apply(lambda row: my_test(row['sent1'], row['sent2']), axis=1).tolist()
labels = d["label"].tolist()
fea = pd.DataFrame(fea_)
for i in range(len(fea_)):
print(fea_[i],labels[i])
labels = pd.DataFrame(labels)
from sklearn.model_selection import train_test_split
# train_x_pd_split, valid_x_pd, train_y_pd_split, valid_y_pd = train_test_split(fea, labels, test_size=0.2,
# random_state=1234)
train_x_pd_split = fea[0:16]
valid_x_pd = fea[16:20]
train_y_pd_split = labels[0:16]
valid_y_pd = labels[16:20]
train_xgb_split = xgb.DMatrix(train_x_pd_split, label=train_y_pd_split)
valid_xgb = xgb.DMatrix(valid_x_pd, label=valid_y_pd)
watch_list = [(train_xgb_split, 'train'), (valid_xgb, 'valid')]
params3 = {
'seed': 1337,
'colsample_bytree': 0.48,
'silent': 1,
'subsample': 1,
'eta': 0.05,
'objective': 'binary:logistic',
'eval_metric': 'logloss',
'max_depth': 8,
'min_child_weight': 20,
'nthread': 8,
'tree_method': 'hist',
}
xgb_trained_model = xgb.train(params3, train_xgb_split, 1000, watch_list, early_stopping_rounds=50,
verbose_eval=10)
# xgb_trained_model.save_model("predict/model/xgb_model_all")
print("feature importance 0:")
importance = xgb_trained_model.get_fscore()
temp1 = []
temp2 = []
for k in importance:
temp1.append(k)
temp2.append(importance[k])
print("-----")
feature_importance_df = pd.DataFrame({
'column': temp1,
'importance': temp2,
}).sort_values(by='importance')
# print(feature_importance_df)
feature_sort_list = feature_importance_df["column"].tolist()
feature_importance_list = feature_importance_df["importance"].tolist()
print()
for i,item in enumerate(feature_sort_list):
print(item,feature_importance_list[i])
train_x_xgb = xgb.DMatrix(train_x_pd_split)
train_predict = xgb_trained_model.predict(train_x_xgb)
print(train_predict)
train_predict_binary = (train_predict >= 0.5) * 1
print("TRAIN DATA SELF")
from sklearn import metrics
print('LogLoss: %.4f' % metrics.log_loss(train_y_pd_split, train_predict))
print('AUC: %.4f' % metrics.roc_auc_score(train_y_pd_split, train_predict))
print('ACC: %.4f' % metrics.accuracy_score(train_y_pd_split, train_predict_binary))
print('Recall: %.4f' % metrics.recall_score(train_y_pd_split, train_predict_binary))
print('F1-score: %.4f' % metrics.f1_score(train_y_pd_split, train_predict_binary))
print('Precesion: %.4f' % metrics.precision_score(train_y_pd_split, train_predict_binary))
print()
valid_xgb = xgb.DMatrix(valid_x_pd)
valid_predict = xgb_trained_model.predict(valid_xgb)
print(valid_predict)
valid_predict_binary = (valid_predict >= 0.5) * 1
print("TEST DATA PERFORMANCE")
from sklearn import metrics
print('LogLoss: %.4f' % metrics.log_loss(valid_y_pd, valid_predict))
print('AUC: %.4f' % metrics.roc_auc_score(valid_y_pd, valid_predict))
print('ACC: %.4f' % metrics.accuracy_score(valid_y_pd, valid_predict_binary))
print('Recall: %.4f' % metrics.recall_score(valid_y_pd, valid_predict_binary))
print('F1-score: %.4f' % metrics.f1_score(valid_y_pd, valid_predict_binary))
print('Precesion: %.4f' % metrics.precision_score(valid_y_pd, valid_predict_binary))
But result shows that xgboost do not fit the data:
TRAIN DATA SELF
LogLoss: 0.6931
AUC: 0.5000
ACC: 0.5000
Recall: 1.0000
F1-score: 0.6667
Precesion: 0.5000
TEST DATA PERFORMANCE
LogLoss: 0.6931
AUC: 0.5000
ACC: 0.5000
Recall: 1.0000
F1-score: 0.6667
Precesion: 0.5000
AI: I obtained 100% convergence. Here are the differences between the configurations:
I set min_child_weight to 0. It's unreasonable to set it to 20 on a dataset this small and expect XGBoost to find a split.
I removed colsample_bytree; you only have 1 feature, so I don't think column sampling is a good choice.
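A parameter set along those lines (the exact values here are my own guess, not a quote from the answer):

params = {
    'objective': 'binary:logistic',
    'eval_metric': 'logloss',
    'eta': 0.05,
    'max_depth': 2,            # a single binary feature only needs a shallow tree
    'min_child_weight': 0,     # allow splits even with very little weight per leaf
    'subsample': 1,
    'seed': 1337,
}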
H: When using Data augmentation is it ok to validate only with the original images?
I'm working on a multi-classification deep learning algorithm and I was getting big over-fitting:
My model is supposed to classify sunglasses into 17 different brands, but I only had around 400 images per brand, so I created a folder with the data augmented x3 times, generating images with these parameters:
datagen = ImageDataGenerator(
rotation_range=30,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
After doing so I got these results:
I don't know if it's correct to do the validation using only the original images or if I also have to use the augmented images for validation; it's also strange to me to get higher accuracy on the validation set than on the training set.
AI: You should validate only on the original images. The augmentation is there so that it can help your model generalize better, but to evaluate your model you need actual images, not transformed ones.
To do this in keras you need to define two instances of the ImageDataGenerator, one for training and one for validating. To train the model you need to set both generators to the fit_generator function.
train_gen = ImageDataGenerator(**aug_params).flow_from_directory(train_dir)
valid_gen = ImageDataGenerator().flow_from_directory(valid_dir)
model.fit_generator(train_gen, validation_data=valid_gen)
It is possible to achieve a higher validation accuracy than a train accuracy if you heavily augment the training data. |
H: Tagging documents for doc2vec
I am working on a resume parsing script. I am trying to tag document sentences with the TaggedDocument function provided by gensim.
What I have managed for now is to divide every text into sentences, put them into one flat array and give every sentence an i (its order, basically) tag.
tagged_data = [TaggedDocument(words=word_tokenize(_d.lower()), tags=[str(i)]) for i, _d in enumerate(texts_flat)]
As a possible improvement, I want to tag every sentence not only with its order but also with the name/order of the text it is from. For that, I have made a list of lists, where every text is a list containing its sentences, i.e.
texts = [text1 = [sent1, sent2, ...], text2, text3 ...]
How to iterate over this kind of document?
I came up with smth like
tagged_data = [TaggedDocument(words=word_tokenize(_d.lower()), tags=[str(i) + '.'+ str(j)]) for i, j, _d in enumerate(texts)]
but I get a ValueError:
ValueError: not enough values to unpack (expected 3, got 2)
Is it even going to do anything?
AI: I've managed to write a function that iterates over it for me:
from gensim.models.doc2vec import TaggedDocument

class TaggedDocumentIterator(object):
    def __init__(self, doc_list, labels_list):
        self.labels_list = labels_list
        self.doc_list = doc_list
    def __iter__(self):
        for idx, doc in enumerate(self.doc_list):
            for idy, sentence in enumerate(doc):
                # tag every sentence with "<document label>.<sentence position>"
                yield TaggedDocument(words=sentence.split(),
                                     tags=[str(self.labels_list[idx]) + '.' + str(idy)])
docLabels = list(range(len(texts)))
data = list(texts)
sentences = TaggedDocumentIterator(data, docLabels) |
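A hedged usage sketch of feeding this iterator into gensim's Doc2Vec (argument names vary slightly across gensim versions, e.g. size vs vector_size):

from gensim.models.doc2vec import Doc2Vec

model = Doc2Vec(vector_size=100, min_count=2, epochs=20)   # `size=` in older gensim versions
model.build_vocab(sentences)
model.train(sentences, total_examples=model.corpus_count, epochs=model.epochs)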
H: How to rename columns that have the same name?
I would like to rename the column names, but the Data Frame contains similar column names. How do I rename them?
df.columns
Output:
Index([ 'Goods',
'Durable goods','Services','Exports', 'Goods', 'Services', 'Imports', 'Goods', 'Services']
Here, there are three Goods columns that have the same name. How can I rename a specific one of them?
AI: You can use this:
df.columns = ['Goods_1', 'Durable goods','Services','Exports', 'Goods_2', 'Services', 'Imports', 'Goods_3', 'Services']
or if you have too many columns:
cols = []
count = 1
for column in df.columns:
if column == 'Goods':
cols.append(f'Goods_{count}')
count+=1
continue
cols.append(column)
df.columns = cols |
H: Confused about false positive and false negative in confusion matrix?
I am working on binary classification for classifying cancer=1 and no-cancer=0, I use confusion matrix from sklearn, this is my confusion matrix on test set:
# confusion matrix
[[18 0]
[ 7 15]]
# in my reading the order is:
TN=18
FP=0
FN=7
TP=15
but in some tutorials, I see a different ordering for FP and FN: some say the same as my reading (see here), but others say FP and FN are reversed (see here).
My question is: which one is true in my case? Please give me a reference so I can be sure about the answer.
AI: Think about the order of your test and prediction sets when constructing the confusion matrix. Here is a piece from one of my codes.
cm = confusion_matrix(y_test, y_pred)
print(cm)
Output:
[[TN FP]
[ FN TP]]
However, if I used:
cm = confusion_matrix(y_pred, y_test)
print(cm)
Output:
[[TN FN]
[ FP TP]]
That happens because the predictions of the model are now presented in the rows instead of the columns. Also note that confusion matrices can be NxN, and the classes may not be labeled 0 and 1. You can even swap the places of TN and TP; think about what should happen if your classes were named 9 and 10. In other words, which class counts as Negative/Positive is our own decision; we just have to state it (hopefully in a reasonable way).
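For the binary case, an unambiguous way to unpack scikit-learn's matrix (following the first convention above, where rows are the true classes and columns the predictions) is:

from sklearn.metrics import confusion_matrix
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()

For your matrix this gives TN=18, FP=0, FN=7, TP=15.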
Hope I could help, please do not hesitate to ask more.
Regards. |
H: How to calculate $\phi_{i,j}$ in VGG19 network?
In the paper Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network by Christian Ledig et al., the distance between images (used in the loss function) is calculated from feature maps $\phi_{i,j}$ extracted from the VGG19 network,
where $\phi_{i,j}$ is defined as "feature map obtained by the j-th convolution (after activation) before the i-th maxpooling layer".
Can you elaborate on how to calculate this feature map, may be for VGG54 mentioned in the paper?
$\phi_{5,4}$ means the 4th convolutional layer before the 5th max-pooling layer, right? But that layer has 512 filters, so we would have 512 feature spaces. Which one do we choose from these? Also, what does "after activation" mean?
I found this answer related to the same issue, but the answer didn't explain much.
AI: In section 2.2.1 of the paper, they state that they use euclidean distance. I'm going to take your word that there are 512 filter activations in that layer; if I'm reading this right, there aren't 512 feature spaces, there is a 512-dimensional feature space that they are calculating euclidean distance in. So your distance function between two images $p$ and $q$ is just the standard Euclidean distance formula:
$$ d(\mathbf{p},\mathbf{q}) = \sqrt{\sum_{i=1}^{512}(p_i - q_i)^2}$$
where $\mathbf{p}$ and $\mathbf{q}$ are vectors holding the corresponding filter activations of $p$ and $q$.
Edit: Above the horizontal rule is my original answer which is wrong (or incomplete). What I think is happening is that the authors are taking the euclidean distance as above for each position in the feature maps at the $i,j$ layer, and averaging those distances to generate a scalar loss value. So for a 7x7 feature map, they'd be taking 49 512-dimensional euclidean distances and averaging them to get the VGG19 5,4 loss. This is how I read equation (5) in section 2.2.1 in their paper. I think the missing piece is that the authors don't bother with the square root in the euclidean distance formula. As discussed below, I think the notation is unclear. |
H: Subsequent convolution layers
Note: I've read How do subsequent convolution layers work? a few times, but it's still difficult to understand because of the parameters $k_1$, $k_2$, and many proposals (1, 2.1, 2.2) in the question. This seems to be complex for other people too, I think I'm not the only one (see a few comments like "I have just struggled with this same question for a few hours"). So here it is formulated with a particular specific example with no parameters, to grasp the idea more easily.
Let's say we have a CNN with:
input: 28x28x1 grayscale images (28x28 pixels, 1 channel)
1st convolutional layer with kernel size 3x3, and 32 features
2nd convolutional layer with kernel size 3x3, and 64 features
Keras implementation:
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(Conv2D(64, (3, 3), activation='relu'))
Question: how does the 2nd layer work?
More precisely:
for the 1st layer:
input size: (1, 28, 28, 1)
weights size: (3, 3, 1, 32) (good to know: the number of weights doesn't depend on input pixel size)
output size: (1, 26, 26, 32)
for the 2nd layer:
input size: (1, 26, 26, 32)
weights size: (3, 3, 1, 32, 64)
output size: (1, 24, 24, 64)
How is the latter possible? It seemed to me that, in the 2nd layer, every input 26x26 image would be convolved with the 3x3 kernels of each of the 64 feature maps, but this would be done for each of the 32 channels!
Thus I had the feeling the output of the 2nd layer should be (1, 24, 24, 32*64).
How does it work here?
AI: I think I found the reason: the correct description of the 2nd layer is:
for the 2nd layer:
input size: (1, 26, 26, 32)
weights size: (3, 3, 32, 64)
output size: (1, 24, 24, 64)
So for each one of the 64 features, the (26,26,32) input is convoled with a (3,3,32)-sized kernel, producing a (24,24) output.
Since this is for each one of the 64 features, the output will finally be (1, 24, 24, 64).
Code to display the shape of the weights:
for l in model.layers:
if len(l.get_weights()) > 0:
print(l.get_weights()[0].shape) # ...[1].shape would be for the biases |
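As a quick sanity check on these shapes (my own arithmetic, not part of the original post): the second layer therefore has $3 \cdot 3 \cdot 32 \cdot 64 = 18432$ weights plus $64$ biases, i.e. $18496$ parameters, which is exactly what model.summary() reports for that layer.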
H: Loss is bad, but accuracy increases?
I have a multicategorial classification problem for images. There are 5 (imbalanced) classes for which i use different class weights. In general there are only a few training images per class: ~56-238
To classify them, I use a neural network with much data augmentation. I have a validation set with the same distribution as the train set (but it only has about 30% of the images per class).
The resulting loss / accuracy graphs look a bit weird (edit: the second graph contains the term "Test Loss", but it is the "Validation Loss"):
I'm not sure how I can interpret these two images: the validation accuracy clearly increases, but the validation loss doesn't change much. Can anyone help me with the interpretation of these graphs?
Thank you very much
AI: If you have an imbalanced dataset, you may want to take steps to resample. When evaluating your results of your algorithm, it may be a good idea to choose a metric that is better suited for class imbalance problems.
Accuracy is not a good metric for evaluating the performance of class imbalance problems because accuracy rewards predicting the most commonly occurring label. Accuracy is the ratio of correctly predicted labels to the total number of observations. For example, if 99% of the data had label 'a' and 1% had label 'b', and you labeled the entire dataset as 'a', your accuracy would be 99%, while your ability to label 'b' correctly would be 0%. Log loss is also not particularly good for class imbalance problems; you may want to consider weighted log loss instead.
Here are some other metrics that you can investigate.
Precision and Recall
Precision is the ratio of correctly predicted positive labels to the total predicted positive observations. Recall is the ratio of correctly predicted positive observations to all observations actually in the positive class. You can read more about these metrics here. For a binary classification problem, you can view the predicted class vs the actual class in a confusion matrix.
F1 Score
F1 score is the harmonic mean of precision and recall. It takes false negatives and false positives into account, so it is more informative than accuracy when evaluating class imbalance problems.
F1 score is available as sklearn.metrics.f1_score in Python and as a function in R.
Area Under Precision Recall Curve
You can also investigate Area under precision recall curve (PR), which the measurement under the area of precision recall curve plot. It can be used to evaluate large class imbalance problems. PR Curve exists as a python package in sklearn and an R package. |
H: Capture pattern in python
I would like to capture the following pattern using python
anyprefix-emp-<employee id>_id-<designation id>_sc-<scale id>
Example data
strings = ["humanresourc-emp-001_id-01_sc-01","itoperation-emp-002_id-02_sc-12","Generalsection-emp-003_id-03_sc-10"]
Expected Output:
[('emp-001', 'id-01', 'sc-01'), ('emp-002', 'id-02', 'sc-12'), ('emp-003', 'id-03', 'sc-10')]
How can I do this using Python?
AI: You can also solve this problem by the following ways;
import re
regex = re.compile("(emp-.+)_(id-.+)_(sc-.+)")
strings = ["humanresourc-emp-001_id-01_sc-01","itoperation-emp-002_id-02_sc-12","Generalsection-emp-003_id-03_sc-10"]
print([regex.findall(s)[0] for s in strings]) |
H: Do anomalous input features to autoencoder result in high errors on the corresponding output features?
An autoencoder is trained by replicating each training instance to both input and output. However, when predicting for anomaly detection, will the output error be local to the same output feature(s) that were anomalous inputs e.g. if I have 10 features, and I predict with anomalous input on feature #4, will the high output error be limited to output feature #4 as well, or will high error appear on other outputs? The whole detection becomes more useful if localised to specific features.
AI: The output error will be propagated through the subsequent layers, so if your auto-encoder learns a function over all the features, the error caused by an anomalous input will generally be spread across all the output features rather than staying confined to the anomalous one. One trick is to map specific features (or sets of features) to different auto-encoders, and train them as an ensemble. Whether you then want to integrate the feature-specific anomalies into a global outlier score, or maintain separate outlier scores per subset of features, is a design choice. For a good use case, see this recent paper.
H: Is there an oriented clustering algorithm?
I'm looking for a clustering algorithm that will make cluster depending on a orientation. The DBSCAN algorithm cluster points based on a constant radius :
https://upload.wikimedia.org/wikipedia/commons/thumb/a/af/DBSCAN-Illustration.svg/800px-DBSCAN-Illustration.svg.png
Is there a implementation of DBSCAN that is based on "ellipse instead of circle" ?
EDIT: MY SOLUTION
Ok so my solution was to work on my data set.
I had a set of 2D points and I wanted to favor the definition of clusters along a given orientation.
My solution was to center the set of points on the origin of the coordinate system, rotate them by the desired orientation, and apply this transformation to the set of points: $X(x, y) = (x - x \cdot a,\ y)$, where $a$ is the factor that determines how much the orientation should matter ($a \in [0, 1]$).
Then apply DBSCAN on this modified dataset.
I hope I was clear enough, don't hesitate to ask me if it's not the case.
AI: If I remember correctly, non-negative matrix factorization (NMF) can be used as a clustering approach that can recover clusters that are along vectors, for example. It may work for your dataset. It factors a data matrix $D \in \mathbb{R}^{m * n}$ into two matrices $W \in \mathbb{R}^{m*k}$ and $H \in \mathbb{R}^{k * n}$. Effectively, $W$ contains the weights that are applied to each vector in $H$ to reconstruct the original data; one way of using this method is to interpret the $n$-dimensional vectors in $H$ as clusters (these vectors would be the 'directions' that your data is along) and the $k$-dimensional vectors in $W$ as the data-example-wise affinities for the different clusters. One method to cluster with this process is to simply place each of the $m$ data examples into the cluster with the index of the highest value in the $W$ vector.
There are implementations in several standard libraries, including sklearn, so it should be relatively easy to try it out. Good luck, and welcome to the site! |
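A minimal scikit-learn sketch of that clustering-by-NMF idea (random non-negative data, purely illustrative):

import numpy as np
from sklearn.decomposition import NMF

X = np.random.rand(100, 2)                   # NMF requires non-negative data
model = NMF(n_components=2, init='random', random_state=0)
W = model.fit_transform(X)                   # per-point affinities for each component
H = model.components_                        # the "direction" vectors
labels = W.argmax(axis=1)                    # assign each point to its strongest component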
H: Training and Test set
I was asked by my supervisor to replicate a result from a former graduate student. My supervisor believes the result of that paper is not accurate and he asked me to find out why! The paper was about using a random forest classifier to classify some sort of disease. Reading through the simulation process, I realized that the training set and the test set were generated separately (with the same set of parameters used in both simulations). In other words, the training set was simulated first, and then another simulation was carried out to generate the test set. My understanding of training and test sets is that you simulate one dataset and then split that same data into training and test sets.
My question: is it correct to simulate the training and test sets separately? Does it affect the accuracy of the classifier? Any reference would be truly helpful.
AI: In general, generating independently training and test sets is a legitimate option. The crucial aspect is that both the generating processes are equal. You can check this looking at this example from the author of the caret R package and the Applied Predictive Modeling book.
However, it is something that can be easily proven with simulations. In what follows, both generating independently training and testing data or splitting the same data in training and testing subsets give the same results. The glm has a median accuracy of 92%.
# simulations with training and test data generating at the same time
n <- 100
accuracy <- vector("numeric")
for (i in 1:1000){
#create data
x <- rnorm(n) # generate X
z <- 1 + 4*x + rnorm(n) # linear combination with error
pr <- 1/(1+exp(-z)) # inv-logit function
y <- pr > 0.5 # 1 (True) if probability > 0.5
df <- data.frame(y = y, x = x)
train <- sample(x = 1:n, size=n%/%2, replace = F) # sampling training data units
glm.fit <- glm(y ~ x, data = df[train,]) # fit on the training data
predicted <- predict.glm(glm.fit, newdata = df[-train,]) # predict on the other data units
accuracy=c(accuracy, sum(diag(table(predicted>0.5, df[-train,]$y)))/(n%/%2)) # collect accuracy
}
quantile(accuracy, probs = c(0.025, 0.5, 0.975)) # glm accuracy
# simulations with training and test data generating independently
n <- 100 # dataset size
accuracy <- vector("numeric")
for (i in 1:1000){
#create data
x <- rnorm(n%/%2) # generate X
z <- 1 + 4*x + rnorm(n%/%2) # linear combination with error
pr <- 1/(1+exp(-z)) # inv-logit function
y <- pr > 0.5 # 1 (True) if probability > 0.5
df.train <- data.frame(y = y, x = x)
glm.fit <- glm(y ~ x, data = df.train) # fit on the training data
# generating independent test data
x <- rnorm(n%/%2)
z <- 1 + 4*x + rnorm(n%/%2) # linear combination with error
pr <- 1/(1+exp(-z)) # inv-logit function
y <- pr > 0.5 # 1 (True) if probability > 0.5
df.test <- data.frame(y = y, x = x)
predicted <- predict.glm(glm.fit, newdata = df.test) # predict on the test data
accuracy=c(accuracy, sum(diag(table(predicted>0.5, df.test$y)))/(n%/%2)) # collect accuracy
}
quantile(accuracy, probs = c(0.025, 0.5, 0.975)) # glm accuracy |
H: Difference between sklearn’s “log_loss” and “LogisticRegression”?
I am a newbie currently learning data science from scratch, and I have a rather stupid question to ask. I'm currently learning about binary classification, and I understand that the logistic function is a useful tool for this. I looked up the documentation and noticed that there are two logistic-related functions I can import, i.e. sklearn.metrics.log_loss and sklearn.linear_model.LogisticRegression. When and where should I use each of them, and what's the difference?
On a broader note, what’s the difference between a metric and a model, and why is the log loss function a metric? Apologies if this question sounds completely nonsensical, but this is a genuine source of confusion for me!
AI: Scikit-learn's metrics module implements functions for assessing prediction error for specific purposes (regression, classification, etc.). A model, on the other hand, is the algorithm that actually performs the classification/regression/clustering (as per your need) for you.
Log loss measures the quality of a classifier's predictions. It is used when the model outputs a probability for each class, rather than just the most likely class.
EDIT: Thanks to @mapto for suggesting documentation reference:
sklearn.metrics.log_loss,
and
sklearn.linear_model.LogisticRegression |
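To make the distinction concrete, here is a small sketch (the synthetic data is just a placeholder): the model learns the decision rule, and the metric scores the probabilities it outputs.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression()                 # the model: fits the classifier
clf.fit(X_train, y_train)

proba = clf.predict_proba(X_test)          # predicted probabilities per class
print(log_loss(y_test, proba))             # the metric: evaluates those probabilities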
H: How can I create a pixel labelled image for Semantic Segmentation?
I am following the Semantic Segmentation Examples tutorial by MathWorks.
I understand that I can load pixel labeled images
pxDir = fullfile(dataDir,'buildingPixelLabels');
Define the class names.
classNames = ["sky" "grass" "building" "sidewalk"];
Define the label ID for each class name.
pixelLabelID = [1 2 3 4];
and create a pixelLabelDatastore.
pxds = pixelLabelDatastore(pxDir,classNames,pixelLabelID);
But, how do I create a custom pixel labelled image where every pixel value represents the categorical label of that pixel?
I would then proceed by writing:
pxDir = fullfile(dataDir,'myCustomPixelLabels');
If I understood correctly, imageDatastore holds the actual image and not the pixel labels for that image.
EDIT:
On my system pxDir points to 'C:\Program Files\MATLAB\R2017a\toolbox\vision\visiondata\buildingPixelLabels'. Since I am on MATLAB R2017a, this example is not included by default, and I cannot compare or view the files to get a better understanding of what I need to do to reproduce it.
AI: The answer can be found here.
Go to the APPS tab, and search for Image Labeler, or Training Image Labeler
Click on Add Images to add your training images.
Click on Add ROI Labels to add class names for the regions of interest.
Proceed to select the regions of interest manually from the uploaded images. |
H: Convolutional neural networks for non-image applications?
If I remember correctly, convolutional neural networks (CNNs) were first developed for image classification purposes (see the work by LeCun et al.).
The convolution operation that "slides" over the input image is indeed helpful for detecting features.
Question: do you have examples of successful application of CNN for non-image data?
AI: They can be employed wherever there are meaningful patterns among adjacent elements of the input. A good example is their use in NLP, where a convolution slides over a sequence of word embeddings rather than over pixels.
You can also read the well-known paper Convolutional Neural Networks for Sentence Classification which is about sentence classification. |
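To give a rough idea of how that looks in code, here is a minimal Keras sketch of a 1-D CNN for sentence classification in the spirit of that paper; the vocabulary size, layer sizes and output are arbitrary placeholders, not the paper's exact architecture.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, GlobalMaxPooling1D, Dense

model = Sequential([
    Embedding(input_dim=10000, output_dim=128),             # word indices -> embeddings
    Conv1D(filters=64, kernel_size=5, activation='relu'),   # slides over word positions
    GlobalMaxPooling1D(),
    Dense(1, activation='sigmoid')                           # binary sentence label
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])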
H: How does KNN work if there are duplicates?
I am currently debating with my friend about how KNN handles duplicates. Suppose K = 2, and we have a 1-dimensional set of data points to illustrate my dilemma
I = {1, 2, 2, 2, 2, 2, 6}
Thus, is it correct to say that the K=2 nearest neighbours of data point 1 are simply {2, 2}? By the same reasoning, would the 2 nearest neighbours of data point 2 also be {2, 2}, not including the point itself?
AI: Your reasoning is correct - you should consider duplicate points as separate. You can see that this must be the case in several ways:
Introduction of small random noise to the data should not affect the classifier on average. This would not be the case if you removed duplicates.
Suppose that your input space has only two possible values, 1 and 2, and all points "1" belong to the positive class while all points "2" belong to the negative class. If you removed duplicates in the KNN(2) algorithm, you would always end up with both possible input values as the nearest neighbors of any point, and would have to predict a 50% probability for either class, which is certainly not a consistent classification strategy.
The extra question to think about is how to deal with the situation when you have different Y labels assigned to several points with the same X coordinate.
You could mix all classes together and say that the label of each point in the set of duplicates is represented by the distribution of labels in the whole set of points with that coordinate. Alternatively, you could simply sample K random points from the set.
Both strategies should result in a consistent classifier, however in the second case your predictions may not be deterministic. Most practical implementations (including, for example, sklearn.neighbors.KNeighborsClassifier), however, use this simpler, nondeterministic strategy, as it is perhaps slightly more straightforward. |
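As a quick sanity check, here is a small sketch with scikit-learn on the 1-D data from the question; it asks for one extra neighbour so the query point's match with itself can be discarded.
import numpy as np
from sklearn.neighbors import NearestNeighbors

I = np.array([1, 2, 2, 2, 2, 2, 6]).reshape(-1, 1)
query = np.array([[1]])

nn = NearestNeighbors(n_neighbors=3).fit(I)     # 3 = K + 1, since the query point is in the data
dist, idx = nn.kneighbors(query)

mask = dist[0] > 0                              # drop the self-match at distance 0
print(I[idx[0][mask]].ravel())                  # -> [2 2]: two separate duplicates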
H: What is the difference between SVM and logistic regression?
While reading the book by Aurelien Geron, I noticed that both logistic regression and SVM predict classes in exactly the same way, so I suspect there must be something that I am missing. In the Logistic regression chapter we can read:
$σ(t) < 0.5$ when $t < 0$, and $σ(t) ≥ 0.5$ when $t ≥ 0$, so a Logistic Regression model predicts $1$ if $θ^T · x$ is positive, and $0$ if it is negative.
Similarly, in the SVM chapter:
The linear SVM classifier model predicts the class of a new instance x by simply computing the decision function $w^T · x + b = w_1 x_1 + ⋯ + w_n x_n + b$: if the result is positive, the predicted class $ŷ$ is the positive class ($1$), or else it is the negative class ($0$).
I know that one way they could be different is because of the loss function they use: while log loss is used in logistic regression, SVM uses hinge loss to optimize the cost function. However, I would like to get this thing completely clear. How are the two models actually different?
AI: Both logistic regression and SVM are linear models under the hood, and both implement a linear classification rule:
$$f_{\mathbf{w},b}(\mathbf{x}) = \mathrm{sign}(\mathbf{w}^T \mathbf{x} + b)$$
Note that I am regarding the "primal", linear form of the SVM here.
In both cases the parameters $\mathbf{w}$ and $b$ are estimated by minimizing a certain function, and, as you correctly noted, the core difference between the models boils down to the use of different optimization objectives. For logistic regression:
$$(\mathbf{w}, b) = \mathrm{argmin}_{\mathbf{w},b} \sum_i \log(1+e^{-z_i}),$$
where $z_i = y_if_{\mathbf{w},b}(\mathbf{x}_i)$.
For SVM:
$$(\mathbf{w}, b) = \mathrm{argmin}_{\mathbf{w},b} \sum_i (1-z_i)_+ + \frac{1}{2C}\Vert \mathbf{w} \Vert^2$$
Note that the regularization term $\Vert \mathbf{w} \Vert^2$ may just as well be added to the logistic regression objective - this will result in regularized logistic regression.
You do not have to limit yourself to $\ell_2$-norm as the regularization term. Replace it with $\Vert \mathbf{w} \Vert_1$ in the SVM objective, and you will get $\ell_1$-SVM. Add both $\ell_1$ and $\ell_2$ regularizers to get the "elastic net regularization". In fact, feel free to pick your favourite loss, add your favourite regularizer, and voila - help yourself to a freshly baked machine learning algorithm.
This is not a coincidence. Any machine learning modeling problem can be phrased as the task of finding a probabilistic model $M$ which describes a given dataset $D$ sufficiently well. One general method for solving such a task is the technique of maximum a-posteriori (MAP) estimation, which suggests you should always choose the most probable model given the data:
$$M^* = \mathrm{argmax}_M P(M|D).$$
Using the Bayes rule and remembering that $P(D)$ is constant when the data is fixed:
\begin{align*}
\mathrm{argmax}_M P(M|D) &= \mathrm{argmax}_M \frac{P(D|M)P(M)}{P(D)} \\
&= \mathrm{argmax}_M P(D|M)P(M) \\
&= \mathrm{argmax}_M \log P(D|M)P(M) \\
&= \mathrm{argmax}_M \log P(D|M) + \log P(M) \\
&= \mathrm{argmin}_M (-\log P(D|M)) + (-\log P(M))
\end{align*}
Observe how the loss turns out to be just another name for the (minus) log-likelihood of the data (under the chosen model) and the regularization penalty is the log-prior of the model. For example, the familiar $\ell_2$-penalty is just the minus logarithm of the Gaussian prior on the parameters:
$$ -\log\left((2\pi)^{-m/2}e^{-\frac{1}{2\sigma^2}\Vert \mathbf{w} \Vert^2}\right) = \mathrm{const} + \frac{1}{2\sigma^2}\Vert \mathbf{w} \Vert^2$$
Hence, another way to describe the difference between SVM and logistic regression (or any other model) is that these two postulate different probabilistic models for the data. In logistic regression the data likelihood is given via the Bernoulli distribution (with $p$ = sigmoid), while the model prior is uniform (or simply ignored). In SVM the data likelihood is modeled via some $\mathrm{exp}(-\mathrm{hinge})$ distribution (not sure it even has a name, but I hope you get the idea that undoing the minus-logarithm would always bring you back to $P(D|M)$, up to a constant), and the model prior is Gaussian.
In practice, the two models have different properties, of course. For example, SVM has sparse dual representations, which makes it possible to kernelize it efficiently. Logistic regression, on the other hand, is usually well-calibrated (which is not the case with SVM). Hence, you choose the model based on your needs (or, if you are unsure, on whatever cross-validation tells you). |
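A quick way to see the two models side by side is to fit both on the same data; the sketch below (with synthetic placeholder data) shows that each learns a linear rule $\mathbf{w}^T\mathbf{x}+b$, just with different weights because of the different objectives, and that only logistic regression gives probabilities out of the box.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)

logreg = LogisticRegression().fit(X, y)
svm = LinearSVC(C=1.0).fit(X, y)

print("logistic regression:", logreg.coef_, logreg.intercept_)
print("linear SVM:         ", svm.coef_, svm.intercept_)
print(logreg.predict_proba(X[:3]))   # probabilities, not available from LinearSVC directly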
H: Pandas DataFrame: Aggregating multi-level groups by matching keys
I have some data that looks like this;
data.head()
stock date binNum volume
0 stock0 d120 2 249500.0
1 stock0 d120 3 81500.0
2 stock0 d120 4 79000.0
3 stock0 d120 5 244000.0
4 stock0 d120 6 175000.0
I can get the average volume for each bin across all days for a particular stock with the following code;
stock0 = data[(data['stock'] == 'stock0')]
binGroups = stock0[['binNum', 'volume']].groupby('binNum', sort=False)
stock0vol = binGroups.aggregate({'volume': np.mean}).reset_index()
stock0vol.head()
binNum volume
0 2 174095.238095
1 3 100428.571429
2 4 79880.952381
3 5 73642.857143
4 6 69761.904762
I would like to apply this to all stocks. The result will be a table with a stock column but no date column (since it is an aggregation across all days). Something like this;
stock binNum volume
0 stock0 2 174095.238095
1 stock0 3 100428.571429
2 stock0 4 79880.952381
3 stock0 5 73642.857143
4 stock0 6 69761.904762
I can do this by putting the above code in a loop and bolting on the rows one by one, but I am sure there is a more elegant way to do it via grouping and aggregation. Can anyone shed some light please?
AI: Sorry, I realised all I needed to do was to add another level of grouping, but the aggregation is the same;
stockBinGroups = data.groupby(['stock', 'binNum'], sort=False)
binAveVol = stockBinGroups.aggregate({'volume': np.mean}).reset_index()
binAveVol.head()
stock binNum volume
0 stock0 2 174095.238095
1 stock0 3 100428.571429
2 stock0 4 79880.952381
3 stock0 5 73642.857143
4 stock0 6 69761.904762 |
H: why do we have to calculate the entropy of parent node in Information Gain?
Why do we need the entropy of the parent node in the information gain formula?
Information Gain = entropy(parent) - w * entropy(children)
We can compare the entropy of the children without the need for the parent entropy.
AI: It's essential: you are computing the gain from the parent to the same data after it has been split into the children, not comparing the children with each other. A good split takes a high-entropy dataset (many different labels mixed together) and turns it into lower-entropy datasets (mostly one group of labels in one child, and the remaining labels in the other).
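A tiny worked example with made-up counts makes this concrete: the parent entropy is the baseline that the reduction is measured against.
import numpy as np

def entropy(counts):
    p = np.array(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

parent = [10, 10]                       # 10 positives, 10 negatives before the split
left, right = [8, 2], [2, 8]            # a candidate split of those same 20 samples

h_parent = entropy(parent)              # 1.0 bit
h_children = 0.5 * entropy(left) + 0.5 * entropy(right)   # weighted by child sizes

print(h_parent - h_children)            # information gain, roughly 0.278 bits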
H: Training images with multiple channels
So I have a set of images with 16 layers each.
Is there a good reason to split the channels in a format like
example : [images_length,16,image_width , image_height]
instead of creating something more simple like
example 2 : [images_length,1,image_width * 16 , image_height]
In the second example I use a single channel and lay each channel next to the other in order to create an image that has 16 times the length of the original image, but same height etc.
So here are the actual questions:
Are there any negative trade-offs to using the second method?
Which method is more memory efficient (on the graphics card side of things)?
I tagged both keras and tensorflow as I use tf as backend for keras.
Thank you for your time!
AI: I don't know what kind of image you have, 16 channels?! oh boy :)
Anyway, if they are images, the first option is better. The reason is that the second approach effectively unrolls the input signal. By doing so, you discard locality information, that is, the relationships between nearby inputs, and these local patterns are exactly the kinds of features convolutional neural nets try to find. As an example, consider the MNIST dataset: you can learn it with either a CNN or an MLP, but the former is preferred because CNNs look for patterns that are replicated in different parts of the input.
If they are not images but you know that adjacent pixels or inputs are related, you should again exploit CNNs. Keep in mind that the convolutional layers in a CNN are there to extract appropriate features; the classification task itself is done by the dense layers.
About efficiency, consider two points. Graphics cards are SIMD machines: single instruction, multiple data. Matrix operations are carried out very efficiently on GPUs, as the name implies; consequently, dense layers are much faster on GPUs than on CPUs. The other point is parallelism: each filter in a convolutional layer is independent, so the filters can be applied with parallelized instructions, and GPUs are very good at this too. I know I said two things, but there is actually a third, and it trumps the others: memory and the bus. If you don't have a graphics card with more than 6 GB of memory, I would prefer a recent generation of CPUs over the GPU, because otherwise you have to fight the memory limitations.
In your case, note that if you use dense layers directly after the input, the number of parameters will be astronomical; you should use convolutional layers instead, as discussed above.
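For completeness, here is a minimal sketch of the first option in Keras with the TensorFlow backend, which by default expects a channels-last layout, i.e. (height, width, 16); the image size and number of classes are placeholders. If you prefer the (16, height, width) order from your example, you can pass data_format='channels_first' to the convolutional layers instead.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

height, width, channels = 64, 64, 16       # placeholder image size, 16 channels kept separate

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(height, width, channels)),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(10, activation='softmax')         # hypothetical 10-class output
])
model.compile(optimizer='adam', loss='categorical_crossentropy')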
H: Handle 50,000 classes in OneVsRestClassifier
I'm new to data science and NLP. I'm trying to solve a problem with 1 million rows and some 50,000 distinct classes. The dataset has a text column as the predictor, and another column holds the multi-label responses. I have been using TF-IDF to represent the text fields and MultiLabelBinarizer to transform the labels, but MultiLabelBinarizer is giving a MemoryError.
I also cannot pass the legacy multi-label representation as a sequence of sequences, since that no longer seems to be supported in the sklearn package. So, what should my approach be?
Any help is appreciated. Thanks in advance.
AI: As you already know, the main culprit here is the large number of classes (50k). For every data sample, you have a label vector of size 50k. Even if a sample belongs to only a single class, the label will still be of size 50k.
For example, sample1 has a label of A and B while sample2 has a label of A. Sample3 on the other hand has C for its label and sample4 has A. We can visualize it like this:
sample1 - [1, 1, 0, 0, 0, 0, ..., 0]
sample2 - [1, 0, 0, 0, 0, 0, ..., 0]
sample3 - [0, 0, 1, 0, 0, 0, ..., 0]
sample4 - [1, 0, 0, 0, 0, 0, ..., 0]
One solution here is to use a Siamese neural network. This takes in two inputs at a time. Instead of training directly to predict the classes as the output, we train the network to learn the similarity between two samples. We first select random pairs and use the label 1 if the two samples belong to the same class, and 0 otherwise.
sample1 & sample2 - 1
sample1 & sample3 - 0
sample2 & sample4 - 1
sample3 & sample4 - 0
This way, even if there are a lot of classes, you are only training with 2 labels, 1 or 0, thus avoiding the memory error.
The problem now is that you are doing a multi-label classification task. For instance, sample1 and sample2 share class A, but sample1 also belongs to class B. My suggestion, though I'm not totally sure here, is to add multiple instances of that pair.
sample1 & sample2 - 1
sample1 & sample2 - 0
sample1 & sample3 - 0
sample2 & sample4 - 1
sample3 & sample4 - 0
As for the large number of training samples (1 million), this can be handled by using small batches during training. |
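A rough sketch of such a Siamese setup in Keras is below; the shared encoder, input dimensionality and layer sizes are placeholders, and you would feed it pairs of TF-IDF vectors together with the 1/0 pair labels described above.
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model, Sequential
import tensorflow.keras.backend as K

n_features = 2000                          # e.g. the TF-IDF vocabulary size (placeholder)

encoder = Sequential([                     # shared encoder applied to both inputs
    Dense(256, activation='relu', input_shape=(n_features,)),
    Dense(64, activation='relu'),
])

left, right = Input(shape=(n_features,)), Input(shape=(n_features,))
diff = Lambda(lambda t: K.abs(t[0] - t[1]))([encoder(left), encoder(right)])
out = Dense(1, activation='sigmoid')(diff)  # probability that the pair shares a class

siamese = Model(inputs=[left, right], outputs=out)
siamese.compile(optimizer='adam', loss='binary_crossentropy')
# siamese.fit([X_left, X_right], pair_labels, batch_size=128, ...)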
H: LSTM - divide gradients by number of timesteps IMMEDIATELY or in the end?
From this answer I know that
the gradient of an average of many functions, is equal to the average of the gradients of those functions taken separately.
The error gradient that you want to calculate for gradient descent is $\nabla_{\theta} C(X, \theta)$, which you can therefore write as:
$$\nabla_{\theta} C(X, \theta) = \nabla_{\theta}(\frac{1}{|X|}\sum_{x \in X} L(x, \theta))$$
The derivative of the sum of any two functions is the sum of the derivatives, i.e.
$$\frac{d}{dx}(y+z) = \frac{dy}{dx} + \frac{dz}{dx}$$
In addition, any fixed multiplier that doesn't depend on the parameters you are taking the gradient with (in this case, the size of the dataset) can just be treated as an external factor:
$$\nabla_{\theta} C(X, \theta) = \frac{1}{|X|}\sum_{x \in X} \nabla_{\theta} L(x, \theta)$$
Question:
When working with an LSTM specifically, am I indeed allowed to apply this $\frac{1}{|X|}$ at the end of the backprop, once I've summed up the weight gradients across all timesteps?
Or should I always apply $\frac{1}{|X|}$ straight away, to any gradient flowing into the top layers of my network?
This bothers me because LSTM is not merely multiplying things like a basic RNN would. Instead, LSTM has summation, which then gets multiplied with other things as we descend to earlier timesteps.
For example, this happens when we have to add the gradients flowing from all 4 gates of the LSTM into grad_wrt_resultAtPreviousTimestep (it will be required to compute the gradient for the previous timestep once we get there).
Using an overly-simplified algebra example:
I feel that multiplying immediately by $\frac{1}{X}$ would represent the left side of this expression: $(\frac{1}{X}a + 40)*100 \neq \frac{1}{X}(a+40)*100$. On the other hand, multiplying by $\frac{1}{X}$ at the end of backprop would represent the right side.
Edit:
Just as a reminder, here is what LSTM looks like at each timestep, taken from this blog:
AI: The answer to your question is that it doesn't matter. The gradient is just a product of Jacobians (because of the chain rule), so whether you multiply the final result or an intermediate factor, you get the same thing. Better yet, simply scale the function you're differentiating, since $\tfrac{d(\alpha f(x))}{dx} = \alpha f'(x)$. So instead of manipulating the gradients, you should just average your loss over whatever you want it to be averaged over.
Note that I didn't assume anything about the function above. It's because these are fairly general basic rules, and they apply to all kinds of functions, RNNs and LSTMs included.
That said, you might not want to actually average your loss (remember, it's the same as averaging the gradients) over the timesteps. Averaging over the inputs x – yeah, sure. Over the timesteps? Not necessarily a good idea.
If you do average over timesteps, then you essentially decrease the importance of longer sequences, and thus your model will care less about making mistakes in them, whereas if you do not average over timesteps (or if all sequences have the same length), then every timestep's loss has the same weight, and the model cares about all of them equally.
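If you want to convince yourself numerically, a tiny check with automatic differentiation shows that scaling the loss by $\frac{1}{|X|}$ is exactly the same as scaling the summed gradient at the end; the toy loss below is a placeholder, not an LSTM.
import tensorflow as tf

w = tf.Variable([1.0, -2.0, 0.5])
x = tf.constant([[0.3, 1.2, -0.7],
                 [1.1, 0.4, 0.9]])
N = tf.cast(tf.shape(x)[0], tf.float32)

with tf.GradientTape(persistent=True) as tape:
    summed = tf.reduce_sum(tf.square(tf.linalg.matvec(x, w)))   # loss summed over examples
    averaged = summed / N                                       # loss averaged over examples

g_sum = tape.gradient(summed, w)
g_avg = tape.gradient(averaged, w)
print(g_avg.numpy(), (g_sum / N).numpy())                       # identical values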
H: Choice of ML algorithm for problem
Working on a school project where we have to match some users based on common interests. Assuming I have a list of inputs like this:
Name Interest1 Interest2 Interest3 Interest4 Interest5
Name Interest4 Interest6 Interest7 Interest8 Interest9
Name Interest1 Interest2 Interest4 Interest3 Interest5
Name Interest7 Interest8 Interest9 Interest11 Interest12
And another user comes
Name Interest4 Interest6 Interest13 Interest12 Interest7
The closest match for him would be user 2. If I want to take an ML approach to solving this, what algorithm would fit this kind of problem?
AI: This is an example of a recommendation-style problem.
You can build a popularity-based recommendation system in this scenario. This type of algorithm works with trends: it basically recommends the items that are currently trending.
Alternatively, you can go for a collaborative-filtering or content-based recommendation system.
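If you only need the closest match on shared interests, a simple way to start with the collaborative idea above is to one-hot encode each user's interests and take the nearest neighbour under the Jaccard metric; the user and interest names below just mirror the example in the question.
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.neighbors import NearestNeighbors

users = {
    "user1": {"Interest1", "Interest2", "Interest3", "Interest4", "Interest5"},
    "user2": {"Interest4", "Interest6", "Interest7", "Interest8", "Interest9"},
    "user3": {"Interest1", "Interest2", "Interest4", "Interest3", "Interest5"},
    "user4": {"Interest7", "Interest8", "Interest9", "Interest11", "Interest12"},
}
new_user = {"Interest4", "Interest6", "Interest13", "Interest12", "Interest7"}

mlb = MultiLabelBinarizer()
X = mlb.fit_transform(users.values()).astype(bool)      # one row of interest flags per user
q = mlb.transform([new_user]).astype(bool)              # unseen interests are simply ignored

nn = NearestNeighbors(n_neighbors=1, metric="jaccard").fit(X)
_, idx = nn.kneighbors(q)
print(list(users)[idx[0][0]])                           # -> "user2", the expected match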