H: Association between features
Given the anonymized dataset of features below, where:
"code" is a categorical variable.
"x1" and "x2" are continuous variables.
"x3" and "x4" are extracted features. They are the mean values of "x1" and "x2" respectively for each individual code.
code x1 x2 x3 x4
0 100 1 2 2 4
1 100 2 4 2 4
2 100 3 6 2 4
3 200 4 8 5 10
4 200 5 10 5 10
5 200 6 12 5 10
6 300 7 14 8 16
7 300 8 16 8 16
8 300 9 18 8 16
Looking at the columns, x3 and x4 take a single value for each code - when x3 is 2, 5 or 8, the code is 100, 200 or 300 respectively, and when x4 is 4, 10 or 16, the code is 100, 200 or 300 respectively.
Intuitively, leaving these columns as they are without dropping any would lead to redundant features while training a model. My question is: how true is this hypothesis? I'm not so confident about it. Does it really matter when training a model? Does it depend on the model type (tree-based or otherwise)?
AI: 1) Those extracted features are a "combination" of the original features. Yes, it's additional information. And yes, it could be redundant, but I'd check that after creating those features by doing standard feature selection operations. Here's a great guide from Kaggle. At the same time, it might be a very good feature engineering decision - it completely depends on the dataset.
2) Yes, it matters - redundant information can negatively influence model performance, but you need to check whether it is actually redundant or not (see the sketch after this list). The mildest downside is longer training time (more features means more computational resources). The more serious one is overfitting.
3) Yes, and it depends on how your model handles categorical features. If it cannot do anything specific with them, you should remove the original categorical features. XGBoost and CatBoost (both tree-based) can process categorical features out of the box. In my experience, tree-based models generally work well with a lot of features, while linear models require careful feature selection.
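Not part of the original answer - a minimal pandas sketch of one standard redundancy check (a correlation matrix plus an arbitrary threshold), using the toy data from the question:
import pandas as pd
df = pd.DataFrame({
    "code": [100, 100, 100, 200, 200, 200, 300, 300, 300],
    "x1":   [1, 2, 3, 4, 5, 6, 7, 8, 9],
    "x2":   [2, 4, 6, 8, 10, 12, 14, 16, 18],
    "x3":   [2, 2, 2, 5, 5, 5, 8, 8, 8],
    "x4":   [4, 4, 4, 10, 10, 10, 16, 16, 16],
})
# Absolute pairwise correlations between the numeric features; values close to 1
# flag candidate redundant pairs such as (x3, x4). In this toy data almost every
# pair is highly correlated, which is exactly what "redundant" means here.
corr = df[["x1", "x2", "x3", "x4"]].corr().abs()
threshold = 0.95   # arbitrary cut-off, tune for your data
redundant = [(a, b)
             for i, a in enumerate(corr.columns)
             for b in corr.columns[i + 1:]
             if corr.loc[a, b] > threshold]
print(redundant)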
Also, here's a link to a great example of target feature encoding; it might also be useful, but you must be very careful about train/test splits and overfitting when using it. |
H: Can I apply survival analysis to predict if a user will revisit the website?
I have one business problem in hand which is to predict if a user will revisit the website or not within 6 months. I need to majorly understand what are the factors which make the user return and also need to give business recommendations on what can be done to make a new user return to the website.
My initial idea was to do logistic regression. Lately, I read about survival analysis. I want to know if I can use survival analysis for this problem.
Also, my dataset has 20k users; each user having multiple transactions; the target variable was not given
I aggregated the dataset to one record per user and did some feature engineering to come up with a target variable.
If I want to use survival analysis in this problem, shall I consider only the last transaction of each user or shall I use the aggregated dataset?
AI: If you want to use survival analysis (which can be more flexible and insightful), I'd recommend this package and this great tutorial. In short, as a result you'll get a "probability of being alive" for each customer.
If you want to use logistic regression, I think it's trickier. Here is why: like any other churn problem, it's hard to define the target properly. The definition depends on your task and on where the model outcome will be used. Let's say churn is a particular amount of inactivity, e.g. 30 days. You can do an initial analysis to find this number: just pick a particular date (you can do it multiple times) and check the % of people who made a next transaction. The important thing is that your time period should be the same for all users on both sides:
if it's a new user, they haven't had the chance to be inactive for a long period yet, right?
if it's the last date in the dataset (e.g. yesterday), not all users will have had the chance to perform a transaction within 1 day, and you'll get inflated churn rates. So be careful with dates.
So you need to understand from your data which amount of inactivity is "normal" for the average user and define it as N. After that you can give users a binary label: "if inactivity > N then churned (1) else not churned (0)". You can then use this label with any classification model. |
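Not part of the original answer - a minimal pandas sketch of the labeling rule above, assuming a transaction-level dataframe with hypothetical columns user_id and transaction_date, a snapshot date, and an arbitrary threshold N of 30 days:
import pandas as pd
tx = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3],
    "transaction_date": pd.to_datetime(
        ["2019-01-01", "2019-03-01", "2019-01-15", "2019-01-20", "2019-02-10"]),
})
snapshot_date = pd.Timestamp("2019-04-01")   # date at which inactivity is measured
N = 30                                       # "normal" inactivity threshold in days (assumption)
# Days since each user's last transaction.
last_tx = tx.groupby("user_id")["transaction_date"].max()
inactivity = (snapshot_date - last_tx).dt.days
# Binary churn label: 1 if inactivity exceeds N, else 0.
labels = (inactivity > N).astype(int).rename("churned").reset_index()
print(labels)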
H: One hot encoding for multiple label(trainy) in .fit() method?
I have a mobile price classification dataset in which I have 20 features and one target variable called price_range. I need to classify mobile prices as low, medium, high, very high.
I have applied a one-hot encoding to my target variable. After that, I split the data into trainX, testX, trainy, testy. So my shape for trainX and trainy is (1600,20) and (1600,4) respectively.
Now when I try to fit trainX and trainy with logistic regression, i.e. lr.fit(trainX, trainy), I get an
error that says: bad input (1600,4)
So I understood that I have to give trainy in shape (1600,1), but with one-hot encoding I have an array of 4 columns, one per individual price_range value, as per the concept of one-hot encoding.
So now I am totally confused about how people use one-hot encoding for the target variable in practice. Please help me out.
AI: So now I am totally confused how people use one hot encoding for
target variable in practice? please help me out.
This mostly comes down to the tool you are using. Sklearn, which I assume you are using, does not use one-hot encoded target variables. So your y should be a 1-D array of shape (1600,) (a single column) where the classes are 0, 1, 2 and 3. Instead of applying one-hot encoding you can use a LabelEncoder to get it into the correct format.
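Not part of the original answer - a minimal sketch of the LabelEncoder approach, with made-up class names:
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import LabelEncoder
trainy_raw = ["low", "medium", "high", "very high", "low", "medium"]   # hypothetical labels
le = LabelEncoder()
trainy = le.fit_transform(trainy_raw)   # array([1, 2, 0, 3, 1, 2]) - one integer per sample
# If trainy is already one-hot encoded with shape (n, 4), you can instead recover
# integer labels with trainy = trainy_onehot.argmax(axis=1).
lr = LogisticRegression()
# lr.fit(trainX, trainy)   # trainX is the (n, 20) feature matrix from the question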
I suspect the reason for your confusion is having seen deep learning frameworks such as Tensorflow and Keras. With those, you usually one-hot encode your target variable.
Short answer:
Using sklearn: Label encode
Using deep learning: one hot encode |
H: How do I know how to construct the layers of my CNN
I've done a CNN project with Keras and OpenCV, and I got roughly 65% accuracy. Now I have to present this work at my university, but I'm afraid the teachers will ask me how I knew how to construct the right layers for my CNN.
In fact, during development I looked at other projects to see how other people do it, and I know that the number of input neurons has to match the size of your image. But I assume there is some mathematics behind this.
So what could I tell them if they ask me about it?
AI: There isn't a formula behind it - layer sizes are selected with a lot of trial and error (often automated by cross-validation). But saying "I've used an architecture from a network from a similar problem, or at least from the same domain" is a legitimate answer. |
H: What is difference between "cv2.filter2D" vs Keras "Conv2D" function
When I have to sharpen an image using opencv, I use:
import cv2
import numpy as np
# Create our sharpening kernel
kernel_sharpening = np.array([[ 0, -1,  0],
                              [-1,  5, -1],
                              [ 0, -1,  0]])
# Apply the sharpening kernel to the input image (loaded earlier) and display it.
sharpened = cv2.filter2D(image, -1, kernel_sharpening)
In the above code, sharpened is the resulting image. As you can see, I used the OpenCV function filter2D to perform the convolution of the input image with the kernel, and as a result I got a sharpened image.
Recently I went through this link regarding image Super-Resolution (link)
And found out that Keras has something similar to filter2D, which Keras calls Conv2D.
Its syntax is as follows:
dis2 = Conv2D(filters=64, kernel_size=3, strides=2, padding='same')(dis1)
My question is what is the difference between opencv filter2D, and Keras Conv2D ?
(I assume both perform the same role of convolving an image with a kernel; I may be wrong, please correct me.)
AI: From an architectural viewpoint you are right: both are 2D convolutional kernels of size (3, 3).
But there are some major differences. While cv2.filter2D(image, -1, kernel_sharpening) directly convolves the image, dis2 = Conv2D(filters=64, kernel_size=3, strides=2, padding='same')(dis1) only constructs a Conv2D layer which is part of the graph (the neural network). So the Keras Conv2D call is not an operation that directly convolves an image.
Also, the weights are different. In the cv2 version, [[0,-1,0], [-1,5,-1], [0,-1,0]] are your weights, whereas
dis2 = Conv2D(filters=64, kernel_size=3, strides=2, padding='same')(dis1)
uses the standard weight initializer, which is Glorot uniform, so the weights would not even match.
Additionally, the weights of a Keras Conv2D layer are learned during the training stage of the neural network.
However, if your neural network had only this convolution layer and ended up with the same weights as the cv2 kernel, the result should be essentially the same (up to border handling). |
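Not part of the original answer - a small illustrative sketch (assuming TensorFlow 2.x Keras) of that last point: a single Conv2D layer whose weights are manually set to the cv2 kernel computes essentially the same convolution, up to border handling:
import numpy as np
import tensorflow as tf
kernel = np.array([[0, -1, 0],
                   [-1, 5, -1],
                   [0, -1, 0]], dtype=np.float32).reshape(3, 3, 1, 1)   # (h, w, in_ch, filters)
gray = np.random.rand(64, 64).astype(np.float32)      # placeholder grayscale image
x = gray.reshape(1, 64, 64, 1)                         # add batch and channel dimensions
layer = tf.keras.layers.Conv2D(filters=1, kernel_size=3, padding="same", use_bias=False)
_ = layer(x)                  # first call builds the layer (with random Glorot weights)
layer.set_weights([kernel])   # overwrite them with the fixed sharpening kernel
sharpened_keras = layer(x).numpy().reshape(64, 64)
# Border pixels can still differ: 'same' padding uses zeros, while cv2.filter2D
# reflects the border by default.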
H: How to select 500 most pertinents tags among 10000?
Say we have 100,000 documents tagged with 10,000 different tags (max 5 tags per document). We wish to limit the allowed tags to a list of 500 tags.
How do we select 500 tags in order to cover the largest set of documents?
Firstly, I chose the 500 most frequent tags. If we keep only these 500 tags, 70% of the documents keep at least 1 tag.
I'm looking for a better method to pick the 500 most pertinent tags.
example :
my best set of 5 tags to cover all documents is [italian, german, french, chinese, japanese].
it is not [italian, spanish, indian, finnish, english] although these are the most frequent tags.
I tried a Singular Value Decomposition on the matrix [documentID, set of tags], but what then? Is it a good idea? How do I get 500 tags from the SVD results?
AI: That is a set cover problem. The documents (universe) are being covered by a set of tags (elements).
In the case of maximizing the number of documents with a limited number of tags, a greedy approach can be taken. Find the single tag that covers the most documents. Then find the next tag that covers the most documents not already covered. Continue the process until the number of tags has been reached. |
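Not part of the original answer - a minimal sketch of the greedy approach in plain Python; the tag_to_docs mapping (tag -> set of document ids) is a made-up stand-in for the structure you would build from your 100,000 documents:
tag_to_docs = {
    "italian": {1, 2, 3},
    "german": {4, 5},
    "french": {6},
    "spanish": {1, 2},
    "english": {2, 3, 4},
}
def greedy_tag_selection(tag_to_docs, n_tags):
    covered, selected = set(), []
    candidates = dict(tag_to_docs)
    for _ in range(n_tags):
        # Pick the tag covering the most documents not yet covered.
        best_tag = max(candidates, key=lambda t: len(candidates[t] - covered))
        if not candidates[best_tag] - covered:   # nothing new can be covered
            break
        covered |= candidates.pop(best_tag)
        selected.append(best_tag)
    return selected, covered
tags, covered_docs = greedy_tag_selection(tag_to_docs, n_tags=3)
print(tags, len(covered_docs))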
H: Dataset from sequence of messages
I have a sorted dataset by datestamp which looks like this:
user message
A Hi.
B Hello.
B How are you?
A I am stuck.
B How can I help you?
What I want is to create a pandas df that would look like this:
user message reply
A Hi. Hello.
A Hi. How are you?
B Hello. I am stuck.
B How are you? I am stuck.
A I am stuck. How can I help you?
For each message, I want to find all of the replies. That means that I want the messages after current one but from the other user. How can I do this with pandas? Let's only consider a binary case of 2 users A and B.
AI: First, find out when the user switch and give a separate id to each message group:
df['group_id'] = ((df['user'] != df['user'].shift()).cumsum())
user message group_id
A Hi. 1
B Hello. 2
B How are you? 2
A I am stuck. 3
B How can I help you? 4
Then groupby each group_id and aggregate a list of the messages for each id. By shifting these messages by -1 we receive the replies for each group_id:
df_reply = df.groupby('group_id')['message'].agg(list)
df_reply = df_reply.shift(-1).reset_index().rename(columns={'message': 'reply'})
group_id reply
1 [Hello., How are you?]
2 [I am stuck.]
3 [How can I help you?]
4 NaN
The replies can then be merged back into the original dataframe. The reply lists are exploded to ensure a single reply per row:
df.merge(df_reply, on='group_id').explode('reply').drop('group_id', axis=1).dropna()
The final result:
user message reply
A Hi. Hello.
A Hi. How are you?
B Hello. I am stuck.
B How are you? I am stuck.
A I am stuck. How can I help you? |
H: Calculating the average gradient in gradient descent
I am currently studying the backpropagation process and the gradient descent algorithm from the book Neural Networks and Deep Learning by Michael Nielsen and from the 3Blue1Brown channel on YouTube.
My question is about calculating the gradient in the gradient descent algorithm (with the whole dataset as input).
I have drawn a picture that shows my understanding of how the algorithm works:
For example, we have 1 million handwritten digit images and in the first iteration we feed the network these 1 million images. Then the gradient is calculated for each image, summed and averaged before updating the weights.
If my understanding is correct, this is the same thing I saw on the 3Blue1Brown channel. In this process the average gradient is calculated with respect to the cost of each image and not the average cost of the whole dataset (1 million) in one iteration, so the formula for the total cost has no direct effect here; rather, its derivative is used to calculate the gradient for each image, and we do not take the average of the costs.
First, I want to know if this is a correct picture of how one iteration of gradient descent works. Second, why don't we take the derivative of the average cost with respect to the weights and biases instead of taking the average of all the per-image gradients?
And the last question: how are the number of iterations and epochs defined here? Can we say the number of epochs is always equal to the number of iterations, since the whole dataset is used for each iteration?
AI: Starting from the last part: as the entire dataset is used, the number of epochs (runs over the entire dataset) equals the number of iterations. Alternatively, one can do the calculation in "mini-batches" (of 32 samples, for example); then the pass over each batch of 32 samples is called an iteration.
As for the rest of the question, you can choose a batch equal to the entire dataset - this is called "batch gradient descent"; or update after every single sample (a batch size of 1), which is "stochastic gradient descent". Any other choice is called "mini-batch gradient descent".
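Not part of the original answer - a minimal NumPy sketch of these variants for a linear model with squared error, just to make the batching and the averaging of per-sample gradients explicit (the data and learning rate are made up):
import numpy as np
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=1000)
w = np.zeros(3)
lr = 0.1
def gradient(w, Xb, yb):
    # Gradient of (1/2) * mean squared error over the batch:
    # the average of the per-sample gradients.
    return Xb.T @ (Xb @ w - yb) / len(yb)
batch_size = 32   # 1 -> stochastic GD, len(X) -> batch GD, anything else -> mini-batch GD
for epoch in range(20):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):   # each pass through this loop is one iteration
        batch = idx[start:start + batch_size]
        w -= lr * gradient(w, X[batch], y[batch])
print(w)   # approaches [1.0, -2.0, 0.5]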
The Deep Learning course on Coursera offers a relatively better explanation of these matters than Nielsen's book or the 3B1B videos. You can watch the videos for free. In particular, here is the video on Gradient Descent. |
H: Different results every time I train a reinforcement learning agent
I am training an RL agent for a control problem using the PPO algorithm. I am using the stable-baselines library for it.
The objective of the agent is to maintain a temperature of 24 deg in a zone, and it takes actions every 15 mins. The length of an episode is 9 hrs. I have trained the model for 1 million steps and the rewards have converged, so I assume that the agent is trained enough. I have done some experiments and have a few questions regarding the training:
I test the agent by letting it take actions from a fixed initial state, and monitor the actions taken and the resulting states over an episode. When I test the agent multiple times, the actions taken and the resulting states are different every time. Why is this happening if the agent is trained enough?
I train an agent for 1 million steps. I train another agent for 1 million steps on the same environment with the same set of hyperparameters and everything else identical. Both agents converge. Now when I test these agents, the actions they take are not identical/similar. Why is this so?
Can someone help me with these questions?
Thank you
AI: A part of the agent consists of taking random actions. So there is a % chance that the agent will take a random action instead of an action based on the training. This is called "exploration". This page describes this as "The amount of randomness in action selection depends on both initial conditions and the training procedure. Over the course of training, the policy typically becomes progressively less random, as the update rule encourages it to exploit rewards that it has already found. "
This is normal. The agent's network is initialized with random weights, and part of the actions it takes during the training are also random (see above). So different training runs will produce different results. If you want to circumvent this issue, you could use a fixed seed for the random-number generator. |
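Not part of the original answer - a rough sketch of both remedies in stable-baselines-style code; the exact argument names (seed, deterministic) can vary between stable-baselines versions, so treat these as assumptions to check against the documentation:
import random
import numpy as np
# 1) Fix the sources of randomness before training, so two runs start identically.
random.seed(0)
np.random.seed(0)
# env.seed(0)                               # gym-style environment seeding
# model = PPO2('MlpPolicy', env, seed=0)    # if your version exposes a seed argument
# 2) At test time, ask the trained policy for its deterministic (greedy) action
#    instead of sampling from the action distribution:
# obs = env.reset()
# action, _states = model.predict(obs, deterministic=True)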
H: problem submitting classification problem
I am trying to make a submission, so I have a test set without labels and I am trying to test my classification model on it. In particular, I also have to submit this prediction as a csv.
I have the following test set without labels, which is the output of pd.read_json(), so it is the output from the test dataset:
and the point of the classification problem is to predict, from the instructions, the type of compiler. The classification model is already developed; I just need to submit it.
So I have to predict on these instructions from the test set, but if I try to do:
test = pd.read_json('test_dataset_blind.jsonl',lines = True)
test
X_new = test['instructions']
new_pred_class = clf.predict(X_new)
where clf is my model, in this case I am using random forests.
I get the following error message:
ValueError: setting an array element with a sequence.
Can anyone please help me? Thanks in advance.
[EDIT] The full error trace is the following:
ValueError Traceback (most recent call last)
<ipython-input-21-ba881bb9e0fe> in <module>
----> 1 new_pred_class = clf.predict(X_new)
~\Anaconda3\lib\site-packages\sklearn\ensemble\forest.py in predict(self, X)
543 The predicted classes.
544 """
--> 545 proba = self.predict_proba(X)
546
547 if self.n_outputs_ == 1:
~\Anaconda3\lib\site-packages\sklearn\ensemble\forest.py in predict_proba(self,
X)
586 check_is_fitted(self, 'estimators_')
587 # Check data
--> 588 X = self._validate_X_predict(X)
589
590 # Assign chunk of trees to jobs
~\Anaconda3\lib\site-packages\sklearn\ensemble\forest.py in
_validate_X_predict(self, X)
357 "call `fit` before exploiting the
model.")
358
--> 359 return self.estimators_[0]._validate_X_predict(X,
check_input=True)
360
361 @property
~\Anaconda3\lib\site-packages\sklearn\tree\tree.py in _validate_X_predict(self,
X, check_input)
389 """Validate X whenever one tries to predict, apply,
predict_proba"""
390 if check_input:
--> 391 X = check_array(X, dtype=DTYPE, accept_sparse="csr")
392 if issparse(X) and (X.indices.dtype != np.intc or
393 X.indptr.dtype != np.intc):
~\Anaconda3\lib\site-packages\sklearn\utils\validation.py in check_array(array,
accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite,
ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype,
estimator)
494 try:
495 warnings.simplefilter('error', ComplexWarning)
--> 496 array = np.asarray(array, dtype=dtype, order=order)
497 except ComplexWarning:
498 raise ValueError("Complex data not supported\n"
~\Anaconda3\lib\site-packages\numpy\core\numeric.py in asarray(a, dtype, order)
536
537 """
--> 538 return array(a, dtype, copy=False, order=order)
539
540
~\Anaconda3\lib\site-packages\pandas\core\series.py in __array__(self, dtype)
946 warnings.warn(msg, FutureWarning, stacklevel=3)
947 dtype = "M8[ns]"
--> 948 return np.asarray(self.array, dtype)
949
950 # ------------------------------------------------------------------
~\Anaconda3\lib\site-packages\numpy\core\numeric.py in asarray(a, dtype, order)
536
537 """
--> 538 return array(a, dtype, copy=False, order=order)
539
540
~\Anaconda3\lib\site-packages\pandas\core\arrays\numpy_.py in __array__(self,
dtype)
164
165 def __array__(self, dtype=None):
--> 166 return np.asarray(self._ndarray, dtype=dtype)
167
168 _HANDLED_TYPES = (np.ndarray, numbers.Number)
~\Anaconda3\lib\site-packages\numpy\core\numeric.py in asarray(a, dtype, order)
536
537 """
--> 538 return array(a, dtype, copy=False, order=order)
539
540
ValueError: setting an array element with a sequence.
[EDIT 2] The dataset with labels is the following:
and what I did is the following:
I considered just the operators push,mov,.. and then I created the following dataset with pandas:
after doing this I considered only the values and I used a tf-idf vectorizer. Then I split the data as:
X_train, X_test, y_train, y_test = train_test_split(X_all, y_all,
test_size=0.2, random_state=15)
and I used support vector machine as model.
[EDIT 3] I have managed to eliminate the error, and as output I have:
array(['icc', 'gcc', 'gcc', ..., 'clang', 'clang', 'clang'], dtype=object)
now I do the following:
pd.DataFrame({'instructions': test['instructions'],'compiler':new_pred_class})
and I get the error message:
ValueError Traceback (most recent call last)
<ipython-input-41-da853bce8ce2> in <module>
----> 1 pd.DataFrame({'instructions':
test['instructions'],'compiler':new_pred_class})
~\Anaconda3\lib\site-packages\pandas\core\frame.py in __init__(self, data,
index, columns, dtype, copy)
409 )
410 elif isinstance(data, dict):
--> 411 mgr = init_dict(data, index, columns, dtype=dtype)
412 elif isinstance(data, ma.MaskedArray):
413 import numpy.ma.mrecords as mrecords
~\Anaconda3\lib\site-packages\pandas\core\internals\construction.py in
init_dict(data, index, columns, dtype)
255 arr if not is_datetime64tz_dtype(arr) else arr.copy() for
arr in arrays
256 ]
--> 257 return arrays_to_mgr(arrays, data_names, index, columns,
dtype=dtype)
258
259
~\Anaconda3\lib\site-packages\pandas\core\internals\construction.py in
arrays_to_mgr(arrays, arr_names, index, columns, dtype)
75 # figure out the index, if necessary
76 if index is None:
---> 77 index = extract_index(arrays)
78 else:
79 index = ensure_index(index)
~\Anaconda3\lib\site-packages\pandas\core\internals\construction.py in
extract_index(data)
379 "length {idx_len}".format(length=lengths[0],
idx_len=len(index))
380 )
--> 381 raise ValueError(msg)
382 else:
383 index = ibase.default_index(lengths[0])
ValueError: array length 30000 does not match index length 3000
Apparently new_pred_class has 30000 elements, while the test dataset has 3000 rows. What should I do in this case? Thanks in advance.
AI: The data that you're loading into the model is malformed.
Usually this error arises when not all of the elements in an array share the same size. You need to make sure that each row/array contains the same number of elements (otherwise you can't form a 2D array out of the data). Maybe there's an issue with reading the data? |
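Not part of the original answer - in this particular setup the error typically means the raw instructions column (a column of lists) was passed straight to predict. A hedged sketch, assuming the fitted tf-idf vectorizer from training is still available under the hypothetical name vectorizer:
import pandas as pd
test = pd.read_json('test_dataset_blind.jsonl', lines=True)
# Join each row's list of instructions into one string, then apply the SAME fitted
# vectorizer used on the training data (transform, not fit_transform).
X_new_text = test['instructions'].apply(lambda ops: ' '.join(ops))
X_new = vectorizer.transform(X_new_text)
new_pred_class = clf.predict(X_new)
print(len(new_pred_class), len(test))   # should both be 3000 - one prediction per row
If new_pred_class ends up ten times longer than the test set, the features were probably built per instruction rather than per row; joining the instructions per row as above keeps one prediction per row.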
H: How to install Polynote on Windows?
I've been searching around the Internet for a while but I have not been able to find detailed instructions on how to install Polynote (the polyglot notebook
with first-class Scala support) for Windows with mixing multiple languages, Python and Scala.
Github Link for Polynote.
Official Website.
According to the official website:
Polynote is currently only tested on Linux and MacOS, using the Chrome browser as a client. We hope to be testing other platforms and browsers soon. Feel free to try it on your platform, and be sure to let us know about any issues you encounter by filing a bug report
I would really appreciate it if anyone here can share his/her method if he/she
had successfully installed Polynote on Windows, either from Virtual Machines
(VMware/Virtualbox) or directly.
AI: Slightly different from VMware or Virtualbox, but Polynote in Docker is supported by the developers and they have created a number of ready-made Docker images - more info here, although the instructions are targeted at a Linux host. Below are the instructions modified for a Windows host:
Once you have Docker installed, make a folder somewhere on your host system, e.g. C:\polynote:
cd \
mkdir polynote
Then create a text file config.yml in that folder, with the contents:
listen:
host: 0.0.0.0
Then, you can get the container running with a single command - although make sure you customise the paths to your config file and working folder (the trailing ^ characters are cmd-style line continuations; in PowerShell use a backtick instead):
docker run --rm -it -e PYSPARK_ALLOW_INSECURE_GATEWAY=1 ^
  -p 127.0.0.1:8192:8192 ^
  -p 127.0.0.1:4040-4050:4040-4050 ^
  -v C:/polynote:/opt/config ^
  -v C:/path/to/your/working/folder:/opt/polynote/notebooks ^
  polynote/polynote:latest ^
  --config /opt/config/config.yml
This exposes some ports to the host machine, mounts the folder with your config file into the container, mounts your working folder in the container at /opt/polynote/notebooks, pulls the latest Polynote Docker image and tells Docker to use your custom config file.
Once the container is spun up, you can use your web browser to access Polynote at http://localhost:8192/. |
H: Do repeated sentences impact Word2Vec?
I'm working with domain-oriented documents in order to obtain synonyms using Word2Vec.
These documents are usually templates, so sentences are repeated a lot.
1k of the unique sentences represent 83% of the text corpus; while 41k of the unique sentences represent the remaining 17% of the corpus.
Can this unbalance in sentence frequency impact my results? Should I sub-sample the most frequent sentences?
AI: Are the sentences exactly the same, word for word? If that is the case I would suggest removing the repeated sentences, because they might bias the word2vec model: repeating the same sentence overweights those examples, since its words end up with a much higher frequency in the model. But it might be the case that this works in your favor for finding synonyms. Subsample all the unique sentences, not just the most frequent ones, to have a balanced model.
I would also suggest looking at the FastText model, which builds on word2vec by using n-grams at the character level. It is easy to train using gensim and has some prebuilt functions, like model.most_similar(positive=[word], topn=number_of_matching_words), to find the nearest neighbors based on the word embeddings; you can also use model.similarity(word1, word2) to easily get a cosine similarity score between two words. These functions might be helpful for finding synonyms. |
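Not part of the original answer - a small sketch of deduplicating sentences and training a FastText model with gensim; the keyword names (vector_size, epochs) follow gensim 4, older versions use size and iter, so adjust to your version:
from gensim.models import FastText
sentences = [   # hypothetical tokenized corpus with an exact duplicate
    ["the", "contract", "shall", "terminate", "on", "the", "end", "date"],
    ["the", "agreement", "shall", "end", "on", "the", "termination", "date"],
    ["the", "contract", "shall", "terminate", "on", "the", "end", "date"],
]
unique_sentences = [list(s) for s in {tuple(s) for s in sentences}]
model = FastText(sentences=unique_sentences, vector_size=50, window=3, min_count=1, epochs=50)
print(model.wv.most_similar(positive=["terminate"], topn=3))   # candidate synonyms
print(model.wv.similarity("terminate", "end"))                 # cosine similarity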
H: Does the predict function in machine learning understand categorical data
I understand that before feature engineering one has to split the dataset into train and test data, so as to avoid bias in the analysis. I also understand that a machine learning model does not understand anything other than numerical data, thus encoding is required, which is a part of feature engineering. My question is: do I encode the test data separately, or does the prediction function understand categorical data?
AI: This depends somewhat on the model and language (implementation).
First, please understand that categorical data is not the same as non-numerical data! Many models can handle categorical data just fine (for instance in regression settings), and some can even handle non-numerical data.
Finally, and most importantly for you: the same feature engineering has to be applied consistently to the whole data set, i.e. to both the train and the test parts. All models can only predict on data that has the exact same input format as the data they were trained on!
So yes, if you one-hot encoded some column for training, it also needs to be one-hot encoded (with the same categories) for prediction. |
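Not part of the original answer - a minimal sklearn sketch of encoding consistently: the encoder is fitted on the training data only and then reused on the test data (column names are made up):
import numpy as np
import pandas as pd
from sklearn.preprocessing import OneHotEncoder
train = pd.DataFrame({"color": ["red", "blue", "red"], "size": [1.0, 2.0, 3.0]})
test = pd.DataFrame({"color": ["blue", "green"], "size": [1.5, 2.5]})
# handle_unknown='ignore' turns an unseen test category (here 'green') into an
# all-zero row instead of raising an error.
enc = OneHotEncoder(handle_unknown="ignore")
enc.fit(train[["color"]])
X_train = np.hstack([train[["size"]].values, enc.transform(train[["color"]]).toarray()])
X_test = np.hstack([test[["size"]].values, enc.transform(test[["color"]]).toarray()])
# X_train and X_test now have columns with the same number and meaning,
# so a model fitted on X_train can call predict(X_test).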
H: Can I download Twitter data via web scraping for research?
I want to do a sentiment analysis using twitter data. Was thinking about hardcoding a cURL script to download data, from a Google Cloud service (I'll run the data on a neural network on the server, to label each tweet), but I have this question:
Am I allowed to do this? I know twitter sells the data, so I am not sure if I can get in trouble for downloading it directly (I have to disclose the data gathering methodology on the paper).
AI: Last time I checked it was not allowed to store the contents of the tweets, instead one is supposed to store the tweet id and retrieve the content of the tweet dynamically.
Afaik this is because users are allowed to delete their tweets at any time, and keeping a tweet that they chose to delete would be against Twitter terms of use (and possibly illegal in some jurisdictions). Using the tweet id solves the problem since the content will simply not be available anymore if the tweet was deleted.
Since you plan to write a paper I assume that you're in academia? If yes in case of doubt it's always safer to ask the data protection and/or legal office in your institution. In this case you're using a secondary source (i.e. you're collecting data which already exists) so it should be straightforward (I think). |
H: error when submitting machine learning project
I am trying to make a submission of a machine learning classification problem. I have a test dataset where to try my model. To submit I have to build a csv file. The prblem is that when I go building this csv file by doing:
sub = pd.DataFrame({'instructions': test['instructions'],'compiler':new_pred_class}).to_csv('sub.csv')
sub.head()
I get the following error message:
AttributeError Traceback (most recent call last)
<ipython-input-37-f223adaf5068> in <module>
1 sub = pd.DataFrame({'instructions': test['instructions'],'compiler':new_pred_class}).to_csv('sub.csv')
----> 2 sub.head()
AttributeError: 'NoneType' object has no attribute 'head'
Can somebody please help me? Thank's in advance.
AI: The return type of pandas.DataFrame.to_csv is None.
So your first line assigns sub = None:
sub = pd.DataFrame({'instructions': test['instructions'],'compiler':new_pred_class}).to_csv('sub.csv')
This is probably what you wanted:
sub = pd.DataFrame({'instructions': test['instructions'],'compiler':new_pred_class})
sub.to_csv('sub.csv')
sub.head() |
H: How can different classification algorithms expressed as neural networks?
I have heard that each of the different classification algorithms can be expressed as neural network architecture. How can the different algorithms like Logistic Regression, SVM(Support Vector Machine), ELM(Extreme Learning Machine) be expressed as a neural network? Is there a way to convert the vector equation of a classification algorithm into neural network architecture?
AI: I have heard that each of the different classification algorithms can be expressed as a neural network architecture.
The Universal Approximation Theorem
guarantees that a neural network can approximate any continuous function on a compact subset of $\mathbb{R}^n$ arbitrarily well. So we are theoretically guaranteed that any such function specified by a logistic regression model, SVM, ELM, or any other model can also be approximated by a neural network.
Is there a way to convert the vector equation of a classification algorithm into neural network architecture?
This part may be hard depending on the model. Although the universal approximation theorem guarantees that an equivalent network exists, it offers no information about the network's weights or number of hidden units. Nonetheless, let's give it a shot for the three models you mentioned:
ELMs
This one is easy. ELMs are already neural networks, so there's nothing left to be done.
Logistic Regression
This one is also pretty straightforward. For simplicity, let's consider a binary classification problem. A logistic regression model can be written as
$$
p(Y=1) = \frac{1}{1 + e^{-(w_0 + w_1x_1 + \dots + w_nx_n)}}
$$
Where the expression $p(Y=1)$ means "the probability that the class is positive". The input features are $(x_1, \dots, x_n)$ and they have corresponding weights $(w_1, \dots, w_n)$.
We can represent this function with just a single neuron! Suppose we have a neuron with $n$ inputs, weights $(w_1, \dots, w_n)$, and bias term $w_0$. The output of the neuron (with no activation function) is given by
$$
z = w_0 + \sum_{i=1}^{n} w_ix_i
$$
Applying a sigmoid activation function to $z$ results in the exact same expression as we had above for the logistic regression model:
$$
S(z) = \frac{1}{1+e^{-z}} = \frac{1}{1 + e^{-(w_0 + w_1x_1 + \dots + w_nx_n)}} = p(Y=1)
$$
Here's a diagram showing the neuron:
The situation would be more complex for multinomial logistic regression, but you get the idea.
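Not part of the original answer - a minimal Keras sketch (assuming TensorFlow 2.x) of this single-neuron network; trained with binary cross-entropy it is effectively a logistic regression on the inputs (the data here is made up):
import numpy as np
import tensorflow as tf
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X @ np.array([2.0, -1.0, 0.5]) + 0.3 > 0).astype("float32")
# One Dense unit with sigmoid activation = sigmoid(w0 + w1*x1 + ... + wn*xn).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(3,))
])
model.compile(optimizer="sgd", loss="binary_crossentropy")
model.fit(X, y, epochs=50, verbose=0)
weights, bias = model.layers[0].get_weights()
print(weights.ravel(), bias)   # play the roles of (w_1 ... w_n) and w_0 in the formula above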
SVMs
This one is probably the trickiest of the three, since an SVM's kernel function can be non-linear. Again, let's consider a binary classification problem.
A linear SVM is specified by the maximal-margin hyperplane between the two classes, which can be written as $\vec{w} \cdot \vec{x} - w_0 = 0$, where $\vec{w} = \langle w_1, \dots, w_n \rangle$ and $\vec{x} = \langle x_1, \dots, x_n \rangle$.
Of course, the actual classification is determined by which side of the hyperplane a point lies, so the classifier can be written $f(\vec{x}) = sign(\vec{w} \cdot \vec{x} - w_0)$, with 1 denoting the positive class and -1 denoting the negative class.
As with logistic regression, it's pretty easy to represent the decision boundary with a single neuron. A neuron with inputs $\vec{x} = \langle x_1, \dots, x_n \rangle$, weights $\vec{w} = \langle w_1, \dots, w_n \rangle$, and bias term $\hat{w_0} = -w_0$ describes the decision boundary of a linear SVM. This is the same neuron as the one associated with logistic regression, except there is no activation function and the bias term has its sign flipped:
However, it's exceedingly common for SVMs to use the kernel trick to find non-linear decision boundaries. When the kernel is non-linear, the translation of an SVM to a neural network is not so straightforward. I imagine you could find an equivalent network by using the kernel function as the activation function for each neuron, but off-hand I can't describe exactly how it would work.
Hopefully that helps! |
H: High / low resources language : what does it mean?
In NLP, languages are often referred as low resource or high resource.
What do these terms mean?
AI: High resource languages are languages for which many data resources exist, making possible the development of machine-learning based systems for these languages. English is by far the most well resourced language. West-Europe languages are quite well covered, as well as Japanese and Chinese. Naturally low-resource languages are the opposite, that is languages with none or very few resources available. This is the case for some extinct or near-extinct languages and many local dialects. There are actually many languages which are mostly oral, for which very few written resources exist (let alone resources in electronic format); for some there are written documents but not even something as basic as a dictionary.
There are many different types of resources which are needed in order to train good language-based systems:
a high amount of raw text from various genres (type of documents), e.g. books, scientific papers, emails, social media content, etc.
lexical, syntactic and semantic resources such as dictionaries, dependency tree corpora, semantic databases (e.g. WordNet), etc.
task-specific resources such as parallel corpora for machine translation, various kinds of annotated text (e.g. with part-of-speech tags, named entities, etc.)
Many types of language resources are costly to produce, this is why the economic inequalities between countries/languages are reflected in the amount (or absence) of language resources. The Universal Dependencies project is an interesting effort to fill this gap. |
H: Why activation functions used in neural networks generally have limited range?
Why do we generally use activation functions with only a limited range in neural networks? For example:
$sigmoid$ activation function has range $[0, 1]$
$tanh$ activation function has range $[-1, 1]$
Q1) Suppose I use some other non-linear activation function like $f(x)=x^2$ that doesn't have such a limited range. What would be the potential problems in training such a neural network?
Q2) Suppose I use $f(x)=x^2$ as the activation function in a neural network and I normalize the layers (to prevent values from growing ever larger). Would such a neural network work? (This again refers to the question in the title: "Why do we generally use activation functions with only a limited range in neural networks?")
AI: The main goal of an activation function is to add non-linearity, but at the same time:
It must not blow up for large inputs, otherwise the time to reach the minimum becomes very long
It should have a smooth gradient, which helps training keep moving towards the minimum without getting stuck
With your example function -
We will achieve non-linearity, but the gradient will be very large for big input values, which makes training slow and unstable
I believe you guessed this, which is why you asked the next question: what if I normalize the data -
Normalization helps bring two different features to the same scale, but within the same feature the larger values will still be larger than the smaller ones
Bounded functions such as sigmoid and tanh, on the other hand, stop learning at both extremes due to their flattening gradient, and they also require more computation
We have ReLU, which has no upper limit. It's a good trade-off on all the points mentioned |
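Not part of the original answer - a tiny NumPy illustration of the gradient argument: the derivative of $x^2$ grows without bound with the input, while the sigmoid's derivative never exceeds 0.25 and ReLU's is 0 or 1:
import numpy as np
x = np.array([0.5, 2.0, 10.0, 100.0])
grad_square = 2 * x                         # derivative of x**2, unbounded
sigmoid = 1 / (1 + np.exp(-x))
grad_sigmoid = sigmoid * (1 - sigmoid)      # derivative of sigmoid, at most 0.25
grad_relu = (x > 0).astype(float)           # derivative of ReLU, either 0 or 1
print(grad_square)    # [  1.   4.  20. 200.]
print(grad_sigmoid)   # roughly [0.235, 0.105, 4.5e-05, ~0]
print(grad_relu)      # [1. 1. 1. 1.]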
H: How to interpret ANOVA results?
I am trying to identify what attributes are not relevant in my dataset to remove them before fitting a classifier.
The target is a categorical variable with three different values.
I also have a lot of numerical attributes.
For ANOVA, I used the following code:
grouped_test2=df[['room_type', 'price']].groupby(['room_type'])
f_val, p_val = stats.f_oneway(grouped_test2.get_group('Entire home/apt')['price'], grouped_test2.get_group('Private room')['price'], grouped_test2.get_group('Shared room')['price'])
The independent variable is room_type, and the explanatory variable is price.
In this case, the f_val is equal to 1061.64 and p_val is equal to 0.
I read that 0 or values near 0 imply a relationship between the two variables, but I am not sure about that.
How close to 0 does the value need to be before we can say that the two variables are related?
AI: f_val is the F Statistic value. Mathematically it is
$ F = \frac{MS_{Between}}{MS_{Within}}$
The null hypothesis for your ANOVA is
$H_0: \mu_{\text{Entire home/apt}} = \mu_{\text{Private room}} = \mu_{\text{Shared room}}$, which means all group means (the $\mu_i$'s) are equal and there is no need for grouping by the explanatory variable
vs
$H_A:$ At least one $\mu_i$ is different. There is a need for grouping
The p-value for this test was very very low, hence python returned 0. Anything less than 0.05 is considered low enough to reject the null hypothesis.
Independent variables are also called explanatory variables. I believe price is a dependent variable. |
H: K fold cross validation reduces accuracy
I am working on a machine learning classifier, and when I arrive at the moment of dividing my data into a training set and a test set, I want to compare two different approaches. In one approach I just split the dataset into a training set and a test set, while in the other approach I use k-fold cross validation.
The strange thing is that with cross validation the accuracy decreases: where I get 0.87 with the first approach, with cross validation I get 0.86.
Shouldn't cross validation increase my accuracy? Thanks in advance.
AI: Chance plays a big role when the data is split. For example maybe the training set contains a particular combination of features, maybe it doesn't; maybe the test set contains a large proportion of regular "easy" instances, maybe it doesn't. As a consequence the performance varies depending on the split.
Let's imagine that the performance of your classifier would vary between 0.80 and 0.90:
In one approach I just split the dataset into a training set and a test set
With this approach you throw the dice only once: maybe you're lucky and the performance will be close to 0.9, or you're not and it will be close to 0.8.
while in the other approach I use k-fold cross validation.
With this approach you throw the dice k times, and the performance is the average across these $k$ runs. It's more accurate than the previous one, because by averaging over several runs the performance is more likely to be close to the mean, i.e. the most common case.
Conclusion: k-fold cross-validation isn't meant to increase performance, it's meant to provide a more accurate measure of the performance. |
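Not part of the original answer - a small sklearn sketch contrasting the two estimates, using a built-in dataset as a placeholder for yours:
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
X, y = load_breast_cancer(return_X_y=True)
# Single split: one roll of the dice.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
single_score = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).score(X_te, y_te)
# 5-fold cross validation: five rolls, averaged.
cv_scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(single_score)                        # can land above or below the CV mean by chance
print(cv_scores.mean(), cv_scores.std())   # more reliable estimate, plus its spread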
H: Is there any way to create column based on some previous column in PANDAS dataframe?
Given: I have Pandas Dataframe as shown below
| Employee_ID | Manager_ID |
|:-----------:|:----------:|
| E068 | E067 |
| E071 | E067 |
| E229 | E069 |
| E248 | E144 |
| E226 | E223 |
| E236 | E241 |
| E066 | E001 |
| E067 | E001 |
| E144 | E001 |
| E223 | E001 |
| E069 | E066 |
Problem Statement:
This problem is to identify the Head of Manager by using Employee and their Manager data.
About:
We have an Employee ID and their Manager ID. Please note that Manager IDs are themselves Employee IDs, since each manager has one manager above their level.
STEPS:
First, we'll take all UNIQUE ID in Manager ID column.
Then for each ID from Manager ID column, we will look for their respective Manager ID(Manager)
Then we will create a new column say Level 1 we will put manager for each Manager ID on their respective cell.
Similarly, we will repeat the above 3 processes again till there is no Manager ID for that particular ID.
This way we can identify the Head of Manager.
I am able to solve the problem in EXCEL.
By using =IFERROR(VLOOKUP(C2,$A:$B,2,FALSE),"")
But this approach leads me to create a new column in Excel for each hierarchy level, putting the formula in the first cell of that column and then dragging the result down for each manager.
But in the case of big companies there could be any number of hierarchy levels, so creating a new column in Excel for each level would be a time-consuming task. Hence, I am looking for a more scalable solution.
Expected Output:
| Employee ID | Manager ID | Level 1 | Level 2 | Head of Manager |
|:-----------:|:----------:|---------|---------|-----------------|
| E068 | E067 | E001 | | E001 |
| E071 | E067 | E001 | | E001 |
| E229 | E069 | E066 | E001 | E001 |
| E248 | E144 | E001 | | E001 |
| E226 | E223 | E001 | | E001 |
| E236 | E241 | | | E241 |
| E066 | E001 | | | E001 |
| E067 | E001 | | | E001 |
| E144 | E001 | | | E001 |
| E223 | E001 | | | E001 |
The Employee ID column contain UNIQUE ID while Manager ID contain DUPLICATES ID.
Thank you for your time and consideration.
AI: Let's call the dataframe "manager" with column names "Emp" and "Man". First you get the people who appear only as managers (never as employees) - the head managers:
head_man = set(manager["Man"]) - set(manager["Emp"])
This returns
{'E001', 'E241'}
and then use the following function to find the head managers of the rest:
def manager_of(emp):
if emp in head_man:
return emp
else:
return manager_of(manager[manager["Emp"] == emp]["Man"].values[0])
Complete code (same idea, using the original column names, with the dataframe called hrms_data):
import pandas as pd
head_man = set(hrms_data["Manager ID"]) - set(hrms_data["Employee ID"])
def manager_of(emp):
if emp in head_man:
return emp
else:
return manager_of(hrms_data[hrms_data["Employee ID"] == emp]["Manager ID"].values[0])
for index, row in hrms_data.iterrows():
hrms_data.loc[index,"Head of Manager"] = manager_of(row["Employee ID"])
Edit: hrms_data["Head of Manager"] = hrms_data["Employee ID"].map(manager_of) instead of the for loop is better. |
H: Given a list of thresholds in descending order will the corresponding FPRs and TPRs lists always be in ascending order?
I've made a series of predictions with a machine learning model. I have a list y of labels and a list p of predicted probabilities such that p[i] is the predicted probability of the entry associated with y[i].
With Python's sklearn library I ran
fprs, tprs, thresholds = metrics.roc_curve(y, p)
thresholds is a list of decision thresholds in descending order, while fprs and tprs are the lists of associated false positive rates and true positive rates, where fprs[i] and tprs[i] are the respective rates given the decision threshold at thresholds[i].
My question is, given that thresholds is in descending order, will fprs and tprs be guaranteed to be in ascending order? I believe that as the decision threshold decreases, more and more probabilities will be marked positive, thus both rates will increase monotonically.
But I've been wrong before about things that seem obvious to me, and I'm writing a tool that must assume these lists will be sorted in ascending order, so I'd like to be sure
AI: From sklearn Documentation: roc_curve returns fpr and tpr, which are increasing; and thresholds which is decreasing. Also check the examples below the definitions. |
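Not part of the original answer - a quick way to verify the ordering on your own arrays (note the rates are non-decreasing, i.e. ties are possible):
import numpy as np
from sklearn import metrics
y = [0, 0, 1, 1, 0, 1, 1, 0]                     # made-up labels
p = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.55]   # made-up predicted probabilities
fprs, tprs, thresholds = metrics.roc_curve(y, p)
print(np.all(np.diff(thresholds) <= 0))   # True: thresholds are decreasing
print(np.all(np.diff(fprs) >= 0))         # True: the false positive rate never decreases
print(np.all(np.diff(tprs) >= 0))         # True: the true positive rate never decreases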
H: How can you make use of Json format data?
I want to obtain data from https://petition.parliament.uk/petitions/250967 but the data format is Json. I am new to data mining and I would like to know if there is a way to convert this data into an excel format. Thanks for your time.
AI: Here is a tutorial by Corey Schafer on working with JSON data. He explains in detail how to convert JSON data into python objects, but I don't think he explains how to convert them to excel here. But if you are familiar with pandas, you should be able to create dataframes and then convert them to excel files as needed. |
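Not part of the original answer - a rough pandas sketch of the JSON-to-Excel step. The exact JSON structure of the petitions endpoint is an assumption here, so inspect data first and adjust the keys; writing .xlsx also requires the openpyxl package:
import pandas as pd
import requests
# The question says the data at this URL is JSON; appending .json is how this site
# appears to expose it - verify in your browser first.
resp = requests.get("https://petition.parliament.uk/petitions/250967.json")
data = resp.json()
print(data.keys())            # inspect the structure before flattening
df = pd.json_normalize(data)  # flatten the nested JSON into a table
df.to_excel("petition_250967.xlsx", index=False)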
H: creating a csv file from two csv files
I have two csv files:
sub_compiler.to_csv('sub_compiler.csv')
sub_compiler.head()
and
sub_opt = pd.read_csv('sub_opt.csv')
sub_opt.head()
and what I would like to do is to create a csv file where I have something of the form
compiler, opt
How could I do this? I need to do this to make a submission.
Thanks in advance.
[EDIT] Thank you for the answers. Now I have obtained:
I don't understand why I have the columns called Unnamed.
My objective is to get a csv file that contains only the columns compiler and opt. How could I do this? Thanks again.
[EDIT 2] I have solved my problem, but if I open the csv file manually (just clicking on the file), I see the following:
It contains only 11 lines, but it contains 3000 rows in Jupyter. Is there something wrong?
[EDIT 3] I tried to do the following:
import pandas as pd
test = pd.read_csv('1495927.csv')
test
and I have:
where 1495927.csv is the csv file I have created and that is in the image where I have 11 elements.
AI: sub_compiler.merge(sub_opt, how='inner', on='instruction')
This is how to do a table join on your tables.
Another option if it is already aligned and you just want to concatenate you can do
pd.concat([sub_compiler,sub_opt[['opt']]], axis=1)
This is a naive column-wise concatenation.
Finally, just proceed with to_csv. |
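Not part of the original answer - a short sketch addressing the edits above: the Unnamed columns come from writing the index with to_csv and reading it back, and you can keep only the two wanted columns and skip the index when saving (the column names compiler and opt follow the question):
import pandas as pd
sub_compiler = pd.read_csv('sub_compiler.csv')
sub_opt = pd.read_csv('sub_opt.csv')
# Drop leftover index columns created by earlier to_csv calls without index=False.
sub_compiler = sub_compiler.loc[:, ~sub_compiler.columns.str.startswith('Unnamed')]
sub_opt = sub_opt.loc[:, ~sub_opt.columns.str.startswith('Unnamed')]
# Assuming the rows are already aligned, keep just the two submission columns.
sub = pd.concat([sub_compiler[['compiler']], sub_opt[['opt']]], axis=1)
sub.to_csv('sub.csv', index=False)   # index=False prevents a new Unnamed column next time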
H: Classifying points exactly on decision boundary
For calculating the loss arising from classification we do this:
If $y (w \cdot x + b) > 0$: $\text{no loss}$
If $y (w \cdot x + b) < 0$: $\text{loss} = −y (w \cdot x + b)$
So what about the points exactly on the decision boundary? How do we classify them and compute their loss (since it would become 0)?
AI: The choice doesn't really matter, for two reasons. First, on the decision boundary both equations (the one for correct classification, and the one for incorrect classification) give a loss of 0, so it doesn't matter which one you use.
The second reason is that computationally, you will never be exactly on the decision boundary, because the computer uses numbers with finite precision, so there is always a small error. |
H: Doubt in Derivation of Backpropagation
I was going through the derivation of backpropagation algorithm provided in this document (adding just for reference). I have doubt at one specific point in this derivation. The derivation goes as follows:
Notation:
The subscript $k$ denotes the output layer
The subscript $j$ denotes the hidden layer
The subscript $i$ denotes the input layer
$w_{kj}$ denotes a weight from the hidden to the output layer
$w_{ji}$ denotes a weight from the input to the hidden layer
$a$ denotes an activation value
$t$ denotes a target value
$net$ denotes the net input
The total error in a network is given by the following equation
$$E=\frac12 \sum_{k}(t_k-a_k)^2$$
We want to adjust the network’s weights to reduce this overall error:
$$\Delta W \propto -\frac{\partial E}{\partial W}$$
We will begin at the output layer with a particular weight.
$$\Delta w_{kj} \propto -\frac{\partial E}{\partial w_{kj}}$$
However error is not directly a function of a weight. We expand this as follows.
$$\Delta w_{kj} = -\epsilon \frac{\partial E}{\partial a_k} \frac{\partial a_k}{\partial net_k} \frac{\partial net_k}{\partial w_{kj}}$$
Now the document individually calculates above three partial derivatives to get following final equation for weight change rule for a hidden to output weight [Section 4.4 in the attached document]:
$$\Delta w_{kj} = \epsilon (t_k-a_k)a_k(1-a_k)a_j$$
or, $$\Delta w_{kj} = \epsilon \delta_k a_j$$
I followed till above equation.
Now, the document continues the derivation as follows (I am adding exact wordings from the document) [Section 4.5 in the document attached]
Weight change rule for an input to hidden weight
Now we have to determine the appropriate weight change for an input to hidden weight. This is more complicated because it depends on the error at all of the nodes this weighted connection can lead to.
$$\Delta w_{ji} \propto -[\sum_{k}\frac{\partial E}{\partial a_k} \frac{\partial a_k}{\partial net_k} \frac{\partial net_k}{\partial a_j}]\frac{\partial a_j}{\partial net_j}\frac{\partial net_j}{\partial w_{ji}}$$
I couldn't follow why there is this extra $\sum_{k}$ in the above equation, as we have already considered the sum over all $k$ when we wrote the error $E=\frac12 \sum_{k}(t_k-a_k)^2$.
Can anyone shed some light on this?
AI: For a while I thought you were right, but the equation is correct.
If you think about it conceptually, then I think you will see that it has to be correct. When updating $w_{ji}$, you have to make the change that improves the total error $E$, and not just the error contributed by output node $k$, right? So you have to look for the weights that give you the lowest total $E$, which means differentiating the whole of $E$ with respect to that weight.
If you write out the sum over $k$ in the equation for $E$, you get one term for $a_1$, one for $a_2$, and so on. The sum over $k$ in the equation that confuses you just makes sure to first take the $a_1$ term from $E$, then the $a_2$ term, and so on. It first differentiates $E$ with respect to $a_1$, which gives $-(t_1 - a_1)$, and gets the correction coming from that term. Then it does the same for $a_2$, up to the last output, and finally the sum over $k$ adds up the corrections from all terms. |
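Not part of the original answer - a small NumPy sketch of a toy sigmoid network, just to make the extra sum over $k$ concrete (sizes and data are made up):
import numpy as np
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))
rng = np.random.default_rng(0)
x = rng.normal(size=3)          # one input sample, 3 input units (index i)
t = np.array([1.0, 0.0])        # targets for 2 output units (index k)
W_ji = rng.normal(size=(4, 3))  # input -> hidden weights (4 hidden units, index j)
W_kj = rng.normal(size=(2, 4))  # hidden -> output weights
# Forward pass.
a_j = sigmoid(W_ji @ x)
a_k = sigmoid(W_kj @ a_j)
# Output deltas, one per output unit k (Section 4.4): delta_k = (t_k - a_k) a_k (1 - a_k).
delta_k = (t - a_k) * a_k * (1 - a_k)
# Hidden deltas (Section 4.5): the sum over k shows up here, because every output
# node k receives a contribution from hidden node j through w_kj.
delta_j = (W_kj.T @ delta_k) * a_j * (1 - a_j)   # W_kj.T @ delta_k == sum_k delta_k * w_kj
# Weight updates, with epsilon the learning rate.
eps = 0.1
dW_kj = eps * np.outer(delta_k, a_j)   # Delta w_kj = eps * delta_k * a_j
dW_ji = eps * np.outer(delta_j, x)     # Delta w_ji = eps * delta_j * x_i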
H: If my model is overfitting the training dataset, does adding noise to the training dataset help regularize the machine learning model?
I would like to know if this is a best practice or not. Can we add noise to the training data to make the model fit the training data less closely, hoping that as a result it generalizes better on new, unseen data?
AI: Yes, adding noise can help to regularize a model.
It is well known that the addition of noise to the input data of a neural network during training can, in some circumstances, lead to significant improvements in generalization performance
from Training with Noise is Equivalent to Tikhonov Regularization
In particular, adding structured noise that is consistent with natural perturbations of the data could help regularize a model. This is a form of data augmentation. |
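Not part of the original answer - a minimal Keras sketch (assuming TensorFlow 2.x) of injecting Gaussian noise into the inputs during training; the layer is only active at training time, and the stddev value is an arbitrary choice to tune:
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.GaussianNoise(0.1, input_shape=(20,)),   # perturbs inputs during training only
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),                            # another common regularizer
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, validation_split=0.2, epochs=20)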
H: How to optimize input parameters given target and scoring parameters
I'm new to machine learning/optimization, so I apologize in advance if this has been answered before. I don't know which search terms to use.
I have a large dataset where I have a number of input parameters $I_1$, ... $I_n$ ($n$ up to 10), a target parameter $T$ and a scoring parameter $S$. There are roughly a million rows of real-life data, where each row has the input parameters, the target parameter and the scoring parameter. The data is operational data from a piece of machinery where $T$ is output power, $S$ is fuel consumption and $I_1$, ... $I_n$ are tuning parameters for the machinery. The relationships of the tuning parameters to $S$ and $T$ are unknown but likely nonlinear. So I basically want to understand how to tune the machinery to produce the desired output power as fuel-efficiently as possible given the parameters I can tune.
What I need is some way of getting to a function where I input $T$ and get out the optimal combination of $I_1$, ... $I_n$ that minimizes $S$ for that value of $T$. I work in Python, and I assume that it's some combination of sklearn and scipy, but I haven't been able to figure out the steps to take for this type of problem. Thanks in advance.
AI: If your data have a simple linear relationship (or you can transform them so that this is the case), you can fit regressions for $T$ and $S$; after that you have the (approximate) relationship between the input parameters and the target parameters. You can then use scipy's minimize, or any other library, to perform constrained optimization. Following my example, one idea would be to optimize a predefined trade-off between $T$ and $S$, for example $\max\, 0.8T + 0.2S$. This should be easy to code with scipy. |
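Not part of the original answer - a rough sketch of that pipeline with sklearn and scipy, on made-up data; here $S$ is minimized subject to the predicted $T$ matching a requested output power, which is an alternative to the weighted trade-off mentioned above:
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize
rng = np.random.default_rng(0)
I = rng.uniform(0, 1, size=(5000, 5))                                         # 5 tuning parameters
T = I @ np.array([3.0, 1.0, 2.0, 0.5, 1.5]) + 0.05 * rng.normal(size=5000)    # output power
S = I @ np.array([1.0, 2.5, 0.5, 2.0, 1.0]) + 0.05 * rng.normal(size=5000)    # fuel consumption
t_model = LinearRegression().fit(I, T)   # surrogate models; swap in a nonlinear
s_model = LinearRegression().fit(I, S)   # regressor if the relationships are nonlinear
def optimal_settings(target_T):
    res = minimize(
        fun=lambda x: s_model.predict(x.reshape(1, -1))[0],   # minimize predicted fuel use
        x0=I.mean(axis=0),
        method="SLSQP",
        bounds=[(0, 1)] * 5,                                  # feasible tuning range
        constraints=[{"type": "eq",
                      "fun": lambda x: t_model.predict(x.reshape(1, -1))[0] - target_T}],
    )
    return res.x
print(optimal_settings(target_T=4.0))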
H: Classification - Divide the interval (0 - 1] to lets say 100 classes and use each class to make a calculation
class-1 represents 0.01, class-i represents 0.01*i, class-100 represents 1.00.
Thus, when the classifier predicts class-y but it should have predicted class-(y+1), the error is small, so we can accept class-y.
Is there a way to express this behaviour in a neural network? Maybe with a distribution or something?
PS: Not interested in regression.
AI: Correct me if I am wrong, but if I understand your question correctly, what you want is a classifier for which confusing classes close to each other (say class 2 and class 3) is preferable to confusing classes far apart (class 2 and class 99). If this is the case, this problem is called "ordinal categorical classification".
I was working on a similar problem a while ago, and I found this loss function during my research. I ended up not using it, so I don't really know how well it works, but anyway, I hope that helps. |
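Not part of the original answer, and not the loss function linked there - just one common way to express distance-aware (ordinal) classification as a custom Keras loss, assuming TensorFlow 2.x, one-hot targets, and a softmax over the 100 classes: the loss is the expected absolute distance between the predicted class distribution and the true class index, so nearby mistakes cost little and distant ones cost a lot.
import tensorflow as tf
NUM_CLASSES = 100
def expected_class_distance(y_true, y_pred):
    # y_true: one-hot (batch, 100); y_pred: softmax probabilities (batch, 100).
    class_idx = tf.range(NUM_CLASSES, dtype=tf.float32)                  # 0, 1, ..., 99
    true_idx = tf.cast(tf.argmax(y_true, axis=-1), tf.float32)           # (batch,)
    dist = tf.abs(class_idx[tf.newaxis, :] - true_idx[:, tf.newaxis])    # |k - true| per class
    return tf.reduce_sum(y_pred * dist, axis=-1)   # per-sample expected distance; Keras averages
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),     # input size is made up
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss=expected_class_distance)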
H: Deep Learning for Video Classification
Which Deep Learning architecture is best for classifying short videos of variable length? I would like to classify videos that last from 1 up to 3 seconds.
AI: My suggestion is to use Convolutional and Recurrent layers in the same Neural Network.
You'd have to capture a given number of frames from a video (let's say one every 0.5 seconds), and feed the arrays of screenshots into the model. Its structure would be:
Conv (and MaxPool) layers to process pixel data - they will extract and process relevant information from each screenshot.
LSTM layers - that will process their sequence, extracting meaning from their flow.
Dense layers at the end to perform classification, with softmax activation at the output layer.
That's how I would do. It's going to be computationally expensive, if you don't have a GPU it won't be easy. |
H: Feature Importance based on a Logistic Regression Model
I was training a Logistic Regression model over a fairly large dataset with ~1000 columns.
I did apply scaling of features using MinMaxScaler.
I was wondering how to interpret the coefficients generated by the model and get something like the feature importance of a tree-based model.
Should I re-scale the coefficients back to the original scale to interpret the model properly?
It would be great if someone could shed some light on how to interpret logistic regression coefficients correctly.
AI: No, you do not need to re-scale the coefficients. To the contrary - if they are scaled, you can use them as a way to compare feature importance.
Let's assume that our logistic regression model has coefficients {$ a_i$}, relating to the different (scaled) variables {$x_i$}.
A change of $\Delta x_i $ in the variable $ x_i $ will result in an increase (or decrease, if $a_i$ is negative) of $ a_i \Delta x_i $ in $ log({\hat p_i \over {1-\hat p_i}}) $, i.e. the logit function of $ \hat p_i $, where $ \hat p_i $ is the predicted probability that the i-th example is in the positive class.
So, if the variables are scaled, you can say that if $ a_i$ is larger, then $x_i$ is more important in the model. |
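Not part of the original answer - a minimal sklearn sketch of ranking scaled features by the magnitude of their coefficients, using a built-in dataset as a stand-in for your ~1000-column data (for multi-class problems coef_ has one row per class):
import numpy as np
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler
data = load_breast_cancer()
X = MinMaxScaler().fit_transform(data.data)   # same scaling as in the question
y = data.target
clf = LogisticRegression(max_iter=5000).fit(X, y)
importance = pd.Series(np.abs(clf.coef_[0]), index=data.feature_names).sort_values(ascending=False)
print(importance.head(10))   # larger |coefficient| -> larger effect on the log-odds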
H: How to handle weekdays in a NN?
I want to test whether using additional weekday information would improve my NN. Therefore, I just converted the weekdays to numbers, such as
Monday -> 0
Tuesday -> 1
...
Sunday -> 6
but my NN fails totally with that feature alongside the other 16 variables. Without it, it's OK. Now I wonder whether I maybe have too little data or whether the conversion does not make sense at all, but I guess I do not need something like a one-hot encoder, or do I?
AI: What you have is a cyclical feature, i.e. a feature which takes values which repeat cyclically in time. With that trivial encoding, for any algorithm, Sunday will be one day from Saturday, but 6 from Monday. It should be one day, also.
The proper way to handle cyclic features is to encode them with a sine and a cosine variable. So instead of a feature weekday, you should have weekday_sin and weekday_cos, which are calculated as:
data['weekday_sin'] = np.sin(2 * np.pi * data['weekday']/7)
data['weekday_cos'] = np.cos(2 * np.pi * data['weekday']/7)
Although I can't assure you that this will work in your case, it is definitely a conceptual error you're dealing with.
For more in-depth explanation with some plots which make it a lot easier to understand, you can read this: https://www.kaggle.com/avanwyk/encoding-cyclical-features-for-deep-learning
Hope this helps! |
H: Maybe wrong values for precision and recall
I'm trying to do some data mining with RapidMiner studio.
I've applied the K-nearest neighbor algorithm with different values of K.
As I expected, accuracy increases and after K=5 it decreases. But I cannot understand why the recall for Basic increases (as I expected) while the recall for Premium decreases. The same goes for the precision values.
Below my results:
Basic and Premium are the values of my class label
With K=5
True Basic
Class recall: 91.83%
Class precision: 81.18%
True Premium
Class recall: 32.87%
Class precision: 56.07%
With K=2
True Basic
Class recall: 81.99%
Class precision: 82.94%
True Premium
Class recall: 32.87%
Class precision: 45.20%
AI: If you are calling a higher proportion of cases as Basic with K=5, then this will probably lower your Basic precision, increase your Basic recall, increase your Premium precision, and lower your Premium recall. This is because there is always a tradeoff between precision and recall, and Basic and Premium are opposites in your binary classifier. |
H: Classifying objects in video without machine learning
Recently, Nick Bourdakos posted a series of videos demonstrating bottle detection in a video stream using Tensorflow.js. Specifically, he is using SSD-mobilenet.
The problem could be summarised as follows:
Three different drinks bottles appear together or individually in a video stream. Let's assume Coke, Mountain Dew and Pepsi as in the original video
Each bottle must be classified and labelled with its name
A rectangular bounding box should be placed around each bottle
I am interested to know if there are any established techniques that can achieve similar results without machine learning?
So far I have tried:
Identification based on colour thresholding. This works surprisingly well but is not very robust. For example the red lettering inside the Mountain Dew label can be falsely detected as "Coke". Also, bounding boxes can only be drawn around coloured regions rather than the entire bottle
Template based matching. This didn't work at all. I assume this is intended for static images where an exact match is required.
KCF Tracking. This seems to work well, but I have to manually define a region of interest to track first. That is, objects are tracked but not classified
I also tried using edge detection to classify "bottle" or "not bottle" based on the aspect ratio of edged objects, however because the bottles are "hand held" separating the bottle edge from the user's hand is problematic
Is there some computer vision technique or steps that I'm missing that would come closer to solving the problem?
AI: You could try SIFT (Scale-invariant feature transform) to recognize the labels on the bottles. Although you need some preprocessing, just comparing the labels on the bottles to distinguish the different brands should work.
It could also get somewhat complex (if you show the bottles from behind, the brand logo is not visible), so you may have to extract features from many perspectives of the bottle.
Here is one example: https://towardsdatascience.com/bibirra-beer-label-recognition-8546c233d6f4 |
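If you want to experiment with this in OpenCV (SIFT is included in the main package from version 4.4 onwards), a rough sketch of matching a reference label image against a video frame could look like this; the file paths and the ratio-test threshold are assumptions:
import cv2

sift = cv2.SIFT_create()
bf = cv2.BFMatcher()

# reference image of a label and a frame grabbed from the video (placeholder paths)
label = cv2.imread('coke_label.jpg', cv2.IMREAD_GRAYSCALE)
frame = cv2.imread('frame.jpg', cv2.IMREAD_GRAYSCALE)

kp1, des1 = sift.detectAndCompute(label, None)
kp2, des2 = sift.detectAndCompute(frame, None)

# Lowe's ratio test to keep only distinctive matches
matches = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

print(len(good), "good matches")  # more matches -> more likely this label is present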
H: Time Series Classification for 1 hour blocks
I am doing some analysis on time series.
The time series would consist of 3 channels and contain 5 minute interval data.
What I want is to be able to give it a 1-hour block of 5-minute interval data and have it categorise the block based on the entire hour, picking up patterns in how the time series looks for each of the categories as per the training data.
I have many 1 hour series of 5 minute interval data which is classified to a particular category, and I want to be able to have a deep learning model which can detect the pattern between these samples and be able to determine for new samples which categories they belong to.
****Could you please recommend a type of deep learning model which is capable of this?****
Maybe I don't understand LSTMs, but to my understanding they provide a prediction for each point in a time series based on the points that occur before it, and would therefore give a series of predictions, whereas I want one prediction for each hour.
I appreciate any help that you can provide to help me understand this better.
Thank you.
AI: Correct me if I am wrong, but the problem you describe here sounds like a classification problem, not time series forecasting. You just want to know which class each 1-hour block of data belongs to. If this is the case, you can try using a CNN with 1-dimensional convolutions and 3 channels.
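A minimal Keras sketch of such a classifier, assuming 12 time steps (one hour of 5-minute samples), 3 channels and an assumed number of categories; note that an LSTM would also give a single prediction per block if you keep its default return_sequences=False:
from keras.models import Sequential
from keras.layers import Conv1D, GlobalAveragePooling1D, Dense

num_classes = 4  # assumption: replace with your number of categories

model = Sequential([
    Conv1D(32, kernel_size=3, activation='relu', input_shape=(12, 3)),
    Conv1D(32, kernel_size=3, activation='relu'),
    GlobalAveragePooling1D(),                  # collapses the time axis
    Dense(num_classes, activation='softmax')   # one prediction per 1-hour block
])
model.compile(optimizer='adam', loss='categorical_crossentropy',  # targets one-hot encoded
              metrics=['accuracy'])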
H: What is the advantage of positional encoding over one hot encoding in a transformer model?
I'm trying to read and understand the paper Attention is all you need and in it, they used positional encoding with sin for even indices and cos for odd indices.
In the paper (Section 3.5), they mentioned
Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks.
My question is: if there is no recurrence, why not use one-hot encoding? What is the advantage of using a sinusoidal positional encoding?
AI: You are mixing two different concepts in the same question:
One hot encoding: approach to encode $n$ discrete tokens by having $n$-dimensional vectors with all 0's except one 1. This can be used to encode the tokens themselves in networks with discrete inputs, but only if $n$ is not very large, as the amount of memory needed is very large. Transformers (and most other NLP neural models) use embeddings, not one-hot encoding. With embeddings, you have a table with $n$ entries, each of them being a vector of dimensionality $e$. In order to represent token $k$, you select the $k$th entry in the embedding table. The embeddings are trained with the rest of the network in the task.
Positional encoding: in recurrent networks like LSTMs and GRUs, the network processes the input sequentially, token after token. The hidden state at position $t+1$ depends on the hidden state from position $t$. This way, the network has a means to identify the relative positions of each token by accumulating information. However, in the Transformer, there is no built-in notion of the sequence of tokens. Positional encodings are the way to solve this issue: you keep a separate embedding table with vectors. Instead of using the token to index the table, you use the position of the token. This way, the positional embedding table is much smaller than the token embedding table, normally containing a few hundred entries. For each token in the sequence, the input to the first attention layer is computed by adding up the token embedding entry and the positional embedding entry. Positional embeddings can either be trained with the rest of the network (just like token embeddings) or pre-computed by the sinusoidal formula from (Vaswani et al., 2017); having pre-computed positional embeddings leads to less trainable parameters with no loss in the resulting quality.
Therefore, there is no advantage of one over the other, as they are used for orthogonal purposes.
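For reference, here is a small NumPy sketch of the pre-computed sinusoidal variant from the paper (positions index the sequence, the second axis indexes the embedding dimension):
import numpy as np

def positional_encoding(max_len, d_model):
    pos = np.arange(max_len)[:, None]      # (max_len, 1)
    i = np.arange(d_model)[None, :]        # (1, d_model)
    angle = pos / np.power(10000, (2 * (i // 2)) / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angle[:, 0::2])   # even dimensions -> sine
    pe[:, 1::2] = np.cos(angle[:, 1::2])   # odd dimensions -> cosine
    return pe                              # added to the token embeddings

print(positional_encoding(50, 512).shape)  # (50, 512)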
H: Activation Functions in Neural network
I have a set of questions related to the usage of various activation functions used in neural networks. I would highly appreciate if someone could give explanatory answers.
Why is ReLU is used only on hidden layers specifically?
Why is Sigmoid not used in multi-class classification?
Why do we not use any activation function in regression problems having all negative values?
Why do we use average='micro' while calculating the performance metric in multi_class classification?
f1-score(y_pred,y_test,average='micro')
AI: I'll go through your questions one by one.
1.Why ReLU is used only on hidden layers specifically?
It's not necessarily used on hidden layers only. ReLUs work much better than "older" activation functions (such as Sigmoid and Tanh) because they backpropagate the error much better than their counterparts. All the most powerful activation functions are ReLU or some variant of ReLU.
The typical ML tasks (classification and regression) do not require ReLU activation at the output layer, because of the nature of the task itself. However, sometimes regression does in fact require a ReLU output. Let me make an example: I once trained an RNN to predict pollution levels, based on data from the last 24 hours. Since pollution levels cannot, by definition, go below zero, I used ReLU as the output activation for my regressor. In this way, you force your model to never go below zero with its prediction. House price prediction is another example in which you can use ReLUs at the output layer.
2.Why Sigmoid is a not used in Multi-class classification?
You need Softmax, because it returns a vector in which the activations of the final nodes sum up to one. In this way, you can interpret each coefficient as "the probability of belonging to class X". A typical Softmax output is something like:
[ 0.3 , 0.5 , 0.2 ]
If you use Sigmoid instead, all nodes would be independent from each other. You could get results like:
[ 0.9 , 0.9 , 0.9 ], or: [ 0.1 , 0.1 , 0.1 ]
which doesn't make a lot of sense for a classifier, and it cannot be interpreted as a vector of probabilities.
(You can use Sigmoid in case of binary classification with a single output node, that's the only case in which it would work.)
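A tiny NumPy illustration of that difference (the logit values are arbitrary):
import numpy as np

logits = np.array([2.0, 3.0, 1.0])

softmax = np.exp(logits) / np.exp(logits).sum()
sigmoid = 1 / (1 + np.exp(-logits))

print(softmax, softmax.sum())   # roughly [0.245 0.665 0.090], sums to 1.0
print(sigmoid, sigmoid.sum())   # each value independent, sum is not 1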
3.Why we do not use any activation function in regression problems having all negative values?
Because, I think, there's no need: a linear (identity) output can already produce negative values, so adding an activation there would not bring any practical benefit.
4.Why we use "average='micro'" while calculating performance metric in multi_class classification? Ex:- f1-score(y_pred,y_test,average='micro')
Unfortunately there is no universal rule of thumb here. With average='micro' the metric is computed globally, by counting the total true positives, false negatives and false positives over all classes, whereas 'macro' averages the per-class scores. You should try the different averaging options and pick the one that fits your task the best.
H: Collaborating on Jupyter Notebooks
I have prepared a Jupyter Notebook with some findings and I shared it with other team members through GitHub to get their feedback in written form. This used to work when collaborating on a piece of code, but it does not work well for Jupyter Notebooks. In GitHub that would mean commenting at the HTML or JSON level (the internal markup of .ipynb files), not at the document level. An alternative would be for team members to clone the repo and put inline comments in the document. That's additional effort for the other team members that I would like to avoid.
What is the way you collaborate, peer review and provide feedback when working on Jupyter Notebooks?
AI: There are several collaboration platforms with hosted notebooks that can be shared like:
Google Colab
Kaggle Kernels
Deepnote
Binder
Curvenote
Noteable
Etc.
However, collaborating on and sharing notebooks is actually a core function of Jupyter itself. As you might have noticed, it is a server-hosted application which by default starts a local server for you to work on.
By simply hosting that server (e.g. on AWS, your internal servers, etc.) you can collaborate on the notebooks directly and interactively. |
H: Can somebody explain me this method? CNN Keras - starter
def ReadImages(Path):
LabelList = list()
ImageCV = list()
classes = ["nonPdr", "pdr"]
FolderList = [f for f in os.listdir(Path) if not f.startswith('.')]
for File in FolderList:
for index, Image in enumerate(os.listdir(os.path.join(Path, File))):
ImageCV.append(cv2.resize(cv2.imread(os.path.join(Path, File) + os.path.sep + Image), (224,224)))
LabelList.append(classes.index(os.path.splitext(File)[0]))
print(FolderList)
return ImageCV, LabelList
I got this method, and I'm not understanding properly what it does line by line. Can someone help me?
AI: def ReadImages(Path): #path to dir with all images
LabelList = list() #Initialized empty labels list
ImageCV = list() #Initialized empty ImageCV list initialized
classes = ["nonPdr", "pdr"] #classes labels all image files startwith
#return list of image folders in main dir path/
FolderList = [f for f in os.listdir(Path) if not f.startswith('.')]
#loop over dir of image folders
for File in FolderList:
#inner loop returns the index of each image in an image folder
#and the image file name
for index, Image in enumerate(os.listdir(os.path.join(Path, File))):
#Read the image file, resize it to 224x224 and append it to the ImageCV list
ImageCV.append(cv2.resize(cv2.imread(os.path.join(Path, File) + os.path.sep + Image), (224,224)))
#take the folder name (with any extension stripped) and look up its index in classes
#!!!fragile line: it assumes every folder is named exactly nonPdr or pdr
#append that index value (either 0 or 1) to LabelList
LabelList.append(classes.index(os.path.splitext(File)[0]))
print(FolderList) # print the folder just parsed
return ImageCV, LabelList # return the two list created at the start
#now containing images and labels based on names of image files |
H: XGBClassifier: make the output of predict_proba ascending regarding a specific feature
I'm trying to build a classifier using XGBoost on some high-dimensional data. The problem I'm having is that I have prior knowledge that the output probabilities should be ascending with regard to a feature (say x), but I don't know how I can make the model understand this!
For example for a data point with features Feat I want to have:
predict_proba(Feat[x=1]) <= predict_proba(Feat[X=2])
where the rest of the features are the same.
AI: You can enforce such monotonicity with the monotone_constraints (hyper)parameter:
https://xgboost.readthedocs.io/en/latest/tutorials/monotonic.html
https://stackoverflow.com/questions/43076451/how-to-enforce-monotonic-constraints-in-xgboost-with-scikitlearn |
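A hedged sketch of its use with the scikit-learn wrapper, assuming x is the first of three features in column order (+1 enforces a non-decreasing effect, -1 non-increasing, 0 leaves the feature unconstrained; X_train and y_train are placeholders for your data):
from xgboost import XGBClassifier

# one entry per feature, in the same order as the columns of X_train
clf = XGBClassifier(monotone_constraints="(1,0,0)")
clf.fit(X_train, y_train)

# predict_proba is now non-decreasing in x, all other features held fixed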
H: Free API for historical weather US data?
I am trying to retrieve a free R/Python API that provides historical weather data in the US.
In fact, the Wunderground API is no longer available.
Any suggestion?
AI: Maybe you can use NCEI Data
And use the "httr" package to access the API in R.
H: What method is recommended after outliers removal?
I have data of mice reaction times. In every session, there are some trials in which the mouse "decides to take a break" and responds after a long time in these specific trials.
I was thinking of applying outlier removal to my data, and the data does look better (I used a Matlab function which removed all data more than 3 IQRs above or below the median).
After doing that I got a histogram which is more similar to a normal distribution (below an example picture of one of my sessions).
My question is:
After applying my outlier removal, how should I analyze the remaining data?
Should I consider the Median (together with IQR as standard error mean)?
Or should I consider the mean (together with $ \frac{\sigma}{\sqrt n}$ as standard error mean)?
Remark: I have very little knowledge of statistics, so if there are mistakes above (for example, my definition of the standard error of the mean as IQR or $ \frac{\sigma}{\sqrt n}$ is not correct), I would be grateful if you let me know.
Thanks!
Edit: The purpose of my analysis is to show that under certain conditions, the response times of the mice will be faster then under other conditions.
fig 1: Data before outlier removal
fig 2: Data after outlier removal
AI: Based on this information I would recommend using
Wilcoxon signed-rank test
or
paired Student's t test
This depends on your sample size and distribution. To test if your data set is normally distributed you can use the Jarque-Bera test.
I haven't worked with Matlab yet, but I guess all these tests should be available in Matlab.
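In case Python is also an option for you, all of these tests are available in scipy.stats; a minimal sketch, where condition_a and condition_b are placeholder arrays of paired reaction times:
from scipy import stats

stats.jarque_bera(condition_a)            # normality check on one sample
stats.ttest_rel(condition_a, condition_b) # paired Student's t test
stats.wilcoxon(condition_a, condition_b)  # Wilcoxon signed-rank test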
From that point you could evaluate the impact of the condition on response time.
Visualize your data (X on Y) - Scatter plot / Heatmap (if multivariate data).
From here you could start building a model
starting with linear regression
going into more advanced models/approaches -- random forests, bootstrapping etc.
Added
If you only have binary variables (condition / no condition) you need to set up dummy variables (1 / 0) to fit a regression model. The data could then look like:
obs response-time any_condition condition_1 condition_2 condition_3
1 12.54 1 1 0 0
2 19.34 0 0 0 0
3 13.32 1 1 1 0
4 14.7 0 0 0 0
If you have multiple different "state" conditions, I would recommend using a control variable ("any_condition") to see if conditions (not specifying which) have an impact on the response time.
I hope that helps. |
H: Why are my Decision Tree Leafs not pure?
I'm using DecisionTreeClassifier from sklearn (v0.21.3) with its default settings, in Python. I do not want to regularize it in any way; I want it to overfit as much as possible.
When drawing the tree out I saw that some of the leafs were not pure. Is this normal? Was the tree not able to separate the samples?
...
model = DecisionTreeClassifier(criterion="entropy")
model = model.fit(X, y)
...
AI: With default settings the DecisionTreeClassifier does not have any restrictions in terms of complexity as described in the Scikit Documentation.
Therefore, it will stop further branching the tree if either a given node is pure (all examples have the same classification) or there are no further attributes to branch on.
So if your final tree contains leaves which are not pure (for the data set it has been trained on), the algorithm did not have any attributes left to further split on - this typically happens when the training data contains examples with identical feature values but different labels.
In case you applied any kind of randomization on your data like a random split of training and test data this might turn out differently when splitting again and getting a different training set. |
H: sklearn.feature_selection vs xgboost feature_importances?
sklearn.feature_selection vs xgboost feature_importances
Can somebody explain in-detailed differences between sklearn.feature_selection and xgboost feature_importances? And how the algorithms work under the hood?
Which module gives the best results?
AI: They are very different things. sklearn.feature_selection is a general library to perform feature selection. xgboost's feature_importances_ is simply a description of how important each feature was in the model-fitting procedure (for details you should refer to the xgboost documentation); it is just an attribute, and it is up to you how you use this importance.
You can, however, apply feature selection on top of an xgboost model, or any model that follows the sklearn API to some extent, for example using sklearn.feature_selection.RFE.
H: Is it possible to know the output vectors of MLP Classifier of scikit learn?
I'm a beginner with the scikit-learn library.
I have an ANN with 3 input, 2 hidden layers and 3 output.
mlp = MLPClassifier(hidden_layer_sizes= hidden_layers,max_iter=iterations, activation=activation_fun)
I read on the documentation that the classifier uses softmax for the output activation function and cross-entropy loss function.
I have a multi-class problem where the three outputs will predict the classes 0,1,2.
My question is: how can I retrieve the vectors that encode the classes 0, 1, 2?
example:
[1,0,0] -> 0
[0,1,0] -> 1
[0,0,1] -> 2
AI: How did you create the labels in the first place? You can know which corresponds to which by using scikit-learn's Label Encoder. This handles the labeling and at the end you can use inverse transformation to get the label names.
For one-hot-encoding the labels, you can use Label Binarizer, which again has an inverse defined in the link. |
H: How to draw neural network diagrams with this particular style?
I would like to draw a neural network architecture with the follow style. Do you know which tool can be used to do this? The paper is Operation-aware Neural Networks for User Response Prediction.
AI: I asked myself something similar, as I thought that a lot of papers use rather high-quality images, but it seems that all the authors generate them individually: https://graphicdesign.stackexchange.com/questions/114886/software-to-draw-schemes-quickly
For NNs, there is already an answered question: How to draw Deep learning network architecture diagrams?
Have a look at https://www.quora.com/What-tools-are-good-for-drawing-neural-network-architecture-diagrams and https://softwarerecs.stackexchange.com/questions/47841/drawing-neural-networks |
H: GridSearchCV vs RandomSearchCV and How it works?
GridSearchCV vs RandomSearchCV
Can somebody explain in-detailed differences between GridSearchCV and RandomSearchCV?
And how the algorithms work under the hood?
As per my understanding from the documentation:
RandomSearchCV
This uses a random set of hyperparameters. Useful when there are many hyperparameters, so the search space is large. It can be used if you have a prior belief on what the hyperparameters should be.
GridSearchCV
Creates a grid over the search space and evaluates the model for all of the possible hyperparameters in the space. Good in the sense that it is simple and exhaustive. On the minus side, it may be prohibitively expensive in computation time if the search space is large (e.g. very many hyper parameters).
AI: Imagine the following scenario:
params = {
    'epochs': [20, 30, 40, 50],  # those numbers are only an example
    'dense_layer_size': [20, 30],
    'second_dense_layer': [30, 40]
}
In GridSearch you try all the combinations of your parameters, in this case:
(4 * 2 * 2) = 16 #Total number of combinations
In RandomSearch as in documentation:
Not all parameter values are tried out, but rather a fixed number of parameter settings is sampled from the specified distributions. The number of parameter settings that are tried is given by n_iter.
In this case, it selects n_iter combinations of parameters and tries them. It is good for optimizing fewer parameters, but in cases where you are not sure about multiple parameters, it may leave aside some combinations that would be better.
Another good approach is using a genetic algorithm to optimize your network.
A genetic algorithm generates a few combinations of "individuals" (combinations of parameters) and at each step selects the best individuals (called parents) and crosses them over, generating more individuals with characteristics from both parents. This way, you can optimize your network and add random elements when you are not sure which parameters would increase your result.
If you want an easy to integrate genetic algorithm you can check this project. |
H: Dealing with categorical variables
I have a panel data set. My dependent variable is total costs, and almost all of my independent variables are categorical variables. For instance, age is "old","new". Now i have some questions.
Should I use dummies for all of them? For example, the type variable alone has 33 values - or can I use clustering to reduce them? (Or any other way, if you know one.)
Is there a difference in terms of behaviour between categorical variables which have a rank and those which do not? For example, type is "A","B",...,"S", so there is no rank between A and B, but quality is "A1","A2","A3", where A1 means the highest quality.
I don't know why, but I can't find enough information about variable selection and preparing the data. So now I have lots of variables and I think I should choose between them and also reduce the number of dummies.
AI: You should convert the categorical variables to dummies. For each individual variable, in general you want to have an equal number of elements of each class, or at least the numbers should be close. If not, you can cluster smaller classes to form a larger one. For example, let's assume you have a categorical variable with 5 different categories. You want each class to be approximately 20% of the data. If it is not, you can define a new class which combines smaller classes to make each class approximately equal.
For the second part, if you can actually quantify how much A1 is better than A2, or able to assign a relative value to them based on some heuristics; you can convert them to numerical variables.
You can find an example of this in this notebook (section titled "Aggregating categorical variables"). It is from the course "Principles of Machine Learning: R Edition" on edX. You can watch the videos on audit mode for free; and the notebooks are on github. |
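A small pandas sketch of both steps; the column names and category values are assumptions, not taken from your data:
import pandas as pd

# unordered categoricals -> dummy columns
df = pd.get_dummies(df, columns=['type', 'age'], drop_first=True)

# ordered categorical -> numeric score based on your own heuristic
quality_map = {'A1': 3, 'A2': 2, 'A3': 1}
df['quality_score'] = df['quality'].map(quality_map)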
H: How to combine GridSearchCV with Early Stopping?
I'm a beginner in machine learning and want to train a CNN (for image recognition) with optimized hyperparameters like the dropout rate, learning rate and number of epochs.
I try to find the optimal hyperparameters via GridSearchCV from scikit-learn.
I have often read that GridSearchCV can be used in combination with early stopping, but I can not find a sample code in which this is demonstrated.
With EarlyStopping I would try to find the optimal number of epochs, but I don't know how I can combine EarlyStopping with GridSearchCV or at least with cross validation.
Can anyone give me a hint on how to do that, it would be a great help?
My current code looks like this:
def create_model(dropout_rate_1=0.0, dropout_rate_2=0.0, learn_rate=0.001):
    model = Sequential()
    model.add(Conv2D(32, kernel_size=(3,3), input_shape=(28,28,1), activation='relu', padding='same'))
    model.add(Conv2D(32, kernel_size=(3,3), activation='relu', padding='same'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Dropout(dropout_rate_1))
    model.add(Flatten())  # flatten the feature maps before the dense layers
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(dropout_rate_2))
    model.add(Dense(10, activation='softmax'))
    optimizer = Adam(lr=learn_rate)
    model.compile(loss='categorical_crossentropy', optimizer=optimizer,
                  metrics=['accuracy'])
    return model
model = KerasClassifier(build_fn=create_model, epochs=50, batch_size=10, verbose=0)
epochs = [30, 40, 50, 60]
dropout_rate_1 = [0.0, 0.2, 0.4, 0.6]
dropout_rate_2 = [0.0, 0.2, 0.4, 0.6]
learn_rate = [0.0001, 0.001, 0.01]
param_grid = dict(dropout_rate_1=dropout_rate_1, dropout_rate_2=dropout_rate_2,
learn_rate=learn_rate, epochs=epochs)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=5)
grid_result = grid.fit(X, y)
AI: Just to add to the other answers here. I guess you simply need to include an early stopping callback in your fit().
Something like:
from keras.callbacks import EarlyStopping
# Define early stopping
early_stopping = EarlyStopping(monitor='val_loss', patience=epochs_to_wait_for_improve)
# Add ES into fit
history = model.fit(..., callbacks=[early_stopping]) |
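To combine this with your GridSearchCV setup: as far as I know, extra fit arguments passed to GridSearchCV.fit are forwarded to the wrapped KerasClassifier, so a sketch (reusing model, param_grid, X and y from your code; the patience and validation_split values are assumptions) could look like this. Note that early stopping needs a validation signal, hence the validation_split:
early_stopping = EarlyStopping(monitor='val_loss', patience=5)

grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=5)
grid_result = grid.fit(X, y,
                       callbacks=[early_stopping],
                       validation_split=0.1)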
H: What are the ways to identify a good attribute test while constructing a decision tree?
I'm working through a decision tree by hand to learn it. From my research, I have found the following three ways of determining which variables to split on:
Minimum remaining values - The variable with the fewest legal values is chosen
Degree heuristic - The variable with the most constraints on remaining values
Least Constraining value - The variable that rules out the fewest remaining values in the remaining variables
Do I have these right? What are some other ways of determining the splits?
AI: You might have mixed up CSP (Constrained Satisfaction Problem) search trees and decision trees:
CSP search trees
'Minimum remaining values', 'Degree Heuristic' and 'Least Constraining Value' are used to solve CSPs and not in decision trees (i.e. we are talking symbolic AI here and not a sub-symbolic AI method like decision trees).
However, as far as I know the 'least constraining value' approach is not used to select a variable but after having selected one to choose the order to inspect its values.
So it could go rather like this:
1. Choose a variable applying 'minimum remaining values'
2. If step 1 fails select a variable applying the 'degree heuristic'
3. For the selected variable select its values ordered according to 'least constraining value'
Besides the ones you mentioned you could go the really simple route: just pick 'the next' variable or pick randomly.
Also see section 6.3.1 in 'Artificial Intelligence: A Modern Approach' by Russel and Norvig.
Decision trees
In contrast decision trees are a machine learning method. Here you usually choose the best splits for the tree using a greedy algorithm providing the biggest ad-hoc gain (see below list for how to define 'gain'). Typical criteria for the greedy split are:
Classification
Information gain (based on entropy)
Gini score
Classification error (usually not used when building the tree)
Regression
Squared error
Also see sections 9.2.2 and 9.2.3 in 'The Elements of statistical learning' by Hastie, Tibshirani and Friedman. |
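To make the notion of "gain" concrete, here is a small sketch of the two common classification impurity measures; the split gain is the parent node's impurity minus the weighted impurity of its children:
import numpy as np

def gini(p):
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # avoid log(0)
    return -np.sum(p * np.log2(p))

print(gini([0.5, 0.5]), entropy([0.5, 0.5]))  # 0.5 and 1.0: maximally impure node
print(gini([1.0, 0.0]), entropy([1.0, 0.0]))  # both ~0 for a pure node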
H: How much of a disadvantage is a small sample size?
I am examining a petition involving all UK constituencies. In this dataset 2 of the 632 constituencies have not participated in the petition - in terms of data quality how does this affect my examination?
I am examining which parts of the UK tend to vote for left/right ideals.
AI: When I read this, I guess you are up to a causal model?
When you use methods like linear regression or logistic regression, $n=630$ is a healthy sample size. So no problem with that. What could be a problem is that you have censored data (you don't know about the remaining two constituencies).
IMO the best way to deal with this problem is to clearly state that you have missing data for these constituencies, so that you can only make claims for the remaining ones. It is a common problem that there is missing data. If it is really only about two constituencies and if you look at the "left/right" tendency, the missing data is not a big problem as long as the missing observations are not highly important (e.g. super large) and there is no reason to believe that the missing observations come from an entirely different data generating process (e.g. extremely different to all other observations).
H: What is the output polytree after aplying the Ramex algorithm to this graph?
I've been trying to understand the way this algorithm works, but I can't get a consistent result.
It has two phases: the first one converts a table of events into a graph, and in the second the graph is transformed into a polytree.
The question is about the second phase, using a back-and-forward heuristic:
This is the heuristic algorithm:
Algorithm 3. Back-and-Forward Heuristic
Input: Network G;
Output: Poly-tree S;
Initialize S;
For each vertex in G
For each edge in G
x = arg_max(weighted forward-vertex not visited in G and connected with S; weighted back-vertex not visited in G and connected with S)
End-for;
Update solution S with x;
End-for;
The example shown in the paper is this:
Fig. 2. (a) Original cyclic network, (b) Forward Heuristic provides a tree solution, (c) Back-and-forward Heuristic provides a poly-tree solution
Right now I'm having trouble applying the back-and-forward algorithm to the vertex b. I'll describe my thought process:
Initialize S -> Take 'a' and put it in the tree (S).
Look for the edges that connect 'a' (the tree S) to any unexplored node and select the one with higher weight -> 'c'.
Add 'c' to the tree. a--674-->c
'a' is now explored/visited.
Move to next vertex - 'b'.
Look for the edges that connect the tree ('a' or 'c') to any unexplored node and select the one with higher weight -> 'd'.
Add 'd' to the tree. a--674-->c--684-->d
'b' is now explored/visited.
Move to next vertex - 'c'.
Look for the edges that connect the tree ('a', 'c' or 'd') to any unexplored node and select the one with higher weight -> 'e'.
Add 'e' to the tree. a--674-->c--684-->d--1080-->e
'c' is now explored/visited.
Move to next vertex - 'd'.
Look for the edges that connect the tree ('a', 'c', 'd' or 'e') to any unexplored node and select the one with higher weight -> 'h'.
Add 'h' to the tree. a--674-->c--684-->d--1080-->e--930-->h
'd' is now explored/visited.
And now 'b' is left hanging because 'b' and 'd' are visited and the algorithm says the connection must be to a node not visited in G and the polytree shows b-->d.
Is my interpretation wrong?
AI: Here's my interpretation of the algorithm, not sure it is completely right but that could help you:
Initialize S: null (a is added after that)
Enter in first loop on vertices (assume we start with a)
Select the edge with highest weight connected to a: a -> c, and add it
Same with updated S (a or c) c -> d, and add it
Same with updated S: d -> e
Same with updated S: e -> h
Same with updated S: e -> g
Same with updated S: this is where forward and forward & backward algorithms diverge. Forward algorithm only looks at weights for edges starting from S (this is a -> b), while forward and backward algorithm also screens backward links, so the selected link is b -> d which has a higher weight (396) than a -> b (335)
Last update of S with e -> f
This is what seems right to explain the figure, and it works to build a polytree. I am a bit confused because it does not exactly correspond to what is written in Algorithm 3, but it is not so rare that a pseudo-code algorithm makes no actual sense and would yield wrong results if written directly in any programming language.
H: How would I model hysteresis?
I have the task of modeling the current to torque mapping for a given motor. I have an experimental set up where I can retrieve current, torque pairs.
Now my initial approach was to model the relationship with a regression curve, but I realized that the motor certainly shows some kind of hysteresis.
How would I be able to compute a current-to-torque mapping that also includes hysteresis from the data I obtained?
AI: If your goal is solely predicting torque for a given engine, you can ignore hysteresis. A regression model will implicitly model hysteresis in the learned parameters of the model.
If your goal is building a model hysteresis for a given motor, then you should systematically measure performance with different recent histories. Then apply a time series model to model the effect of recent history. |
H: Clustering categorical variable values based on continuous target values
Let's say I have $n$ data points with just one categorical feature $x$ and a continuous target variable $y$. I want to divide the possible values of $x$ into subsets such that the value of $y$ doesn't vary much within a subset.
As an example, suppose there are only $5$ possible values of $x$: $x_1,\ldots,x_5$. We observe that the values of $y$ don't vary much across $x_1$ and $x_2$. $y$ doesn't vary much across $x_3$ and $x_5$, but is different from the typical $y$ value for $x_1$ and $x_2$. For $x_4$, $y$ values are quite different from the above groups. So we can say that $(x_1,x_2), (x_3,x_5)$ and $(x_4)$ can be considered as "clusters".
Now what's a concrete way of saying that $y$ doesn't "vary much across $x_1$ and $x_2$"? One natural way to define that is for $y$ to have the same distribution for $x_1$ and $x_2$, i.e. the same mean and standard deviation of $y$ values for both $x_1$ and $x_2$.
Is there a better way to characterize the fact that $y$ doesn't "vary much across $x_1$ and $x_2$? Maybe the way I defined it above is too idealistic and I need a criterion more suited to a real-life dataset?
Are there any popular existing methods or library functions to solve such kind of a problem (i.e. clustering categorical feature values on the basis of continuous target values)?
AI: Since you are looking for a degree of similarity regarding $y$ and the values of $x_1,...,x_5$ do not matter you can view this as a clustering problem regarding $y$:
Let $y_1,...y_5$ be the target values with $f(x_i)=y_i$ for $i\in\{1,...5\}$, then you need to define a distance measure $d(y_i,y_j)$ which, since your variables $y$ are continuous, could be the squared Euclidean distance: $d(y_i,y_j) = (y_i-y_j)^2$. But you could also choose the absolute difference. (Note that I am assuming your $y_i$ to be one-dimensional here, i.e. $y_i\in \mathbb R$.) That gives you a formula to measure "does not vary that much".
It also provides an answer to your second question: you can choose from a range of unsupervised ML algorithms to do clustering. The most popular one probably being $K$-means. The idea here is very straight forward:
1. select a number of clusters K
2. initialize the position of the K clusters randomly
3. Assign each y_i to the closest cluster
4. For each cluster k calculate the new cluster position as the mean of all y_i belonging to that cluster
5. repeat 3 and 4 until the assignments of y_i does not change anymore
Mathematically this gives you a mapping $c(y_i)=k$.
And eventually when you are done with your clustering you just pick a cluster $k$ and for each $y_i$ in that cluster you look up the corresponding $x_i$. Which will return what you asked for. |
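A minimal scikit-learn sketch of this, assuming a DataFrame df with columns 'x' (the categorical feature) and 'y' (the continuous target); here the per-category mean and standard deviation of y are used as the clustering input, and the number of clusters is an assumption:
import pandas as pd
from sklearn.cluster import KMeans

# one row per category of x, summarising the distribution of y
summary = df.groupby('x')['y'].agg(['mean', 'std']).fillna(0)

kmeans = KMeans(n_clusters=3, random_state=0).fit(summary.values)
clusters = pd.Series(kmeans.labels_, index=summary.index)
print(clusters)  # maps each category of x to a cluster id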
H: Random Forest VS LightGBM
Random Forest VS LightGBM
Can somebody explain in-detailed differences between Random Forest and LightGBM? And how the algorithms work under the hood?
As per my understanding from the documentation:
LightGBM and RF differ in the way the trees are built: the order and the way the results are combined. It has been shown that GBM performs better than RF if parameters tuned carefully.
Random Forest: RFs train each tree independently, using a random sample of the data. This randomness helps to make the model more robust than a single decision tree, and less likely to overfit on the training data
My questions are
When would one use Random Forests over Gradient Boosted Machines?
What are the advantages/disadvantages of using Gradient Boosting over Random Forests?
AI: RandomForest's advantage compared to newer GBM models is that it is easy to tune and robust to parameter changes. It is robust for most use cases, although the peak performance might not be as good as a properly-tuned GBM. Another advantage is that you do not need to care a lot about parameters. You can compare the number of parameters for a random forest model and LightGBM from their documentation. In the sklearn documentation the number of parameters might seem a lot, but actually the only parameters you need to care about (ordered by importance) are max_depth, n_estimators, and class_weight, and the other parameters are better left as is. So for me, I would most likely use random forest to make a baseline model.
GBM is often shown to perform better, especially when comparing it with random forest.
Especially when comparing it with LightGBM. A properly-tuned LightGBM will most likely win in terms of performance and speed compared with random forest.
GBM advantages :
More developed. A lot of new features are developed for modern GBM models (xgboost, lightgbm, catboost) which affect their performance, speed, and scalability.
GBM disadvantages :
Number of parameters to tune
Tendency to overfit easily
Please bear in mind that increasing the number of estimators for random forest and GBM implies different behaviour. A high value of n_estimators for a random forest mainly affects its robustness, whereas for a GBM model it will improve the fit to your training data (which, if too high, will cause your model to overfit).
H: Cosine similarity vs The Levenshtein distance
I wanted to know what is the difference between them and in what situations they work best?
As per my understanding:
Cosine similarity is a measure of similarity between two non-zero vectors of an inner product space that measures the cosine of the angle between them. The cosine of 0° is 1, and it is less than 1 for any angle in the interval (0,π] radians.
The Levenshtein distance is a string metric for measuring the difference between two sequences. Informally, the Levenshtein distance between two words is the minimum number of single-character edits
When would one use Cosine similarity over The Levenshtein distance?
AI: As mentioned in other answers, traditionally cosine is used to measure similarity between vectors whereas Levenshtein is used as a string similarity measure, i.e. measuring the distance between sequences of characters.
Nevertheless they both can be used in non-traditional settings and are indeed comparable:
the vectors compared with cosine can for instance contain frequencies of characters or characters n-grams, hence making it a string similarity measure
one can replace the sequence of characters with a sequence of strings or a sequence of n-grams, thus making Levenshtein a more general distance measure.
The main conceptual difference between Cosine and Levenshtein is that the former assumes a "bag-of-words" vector representation, i.e. compares unordered sets, whereas the latter takes into account the order of the elements in the sequences.
In the context of comparing sequences of words many combinations are possible. In case that's what you're looking for you might be interested in this paper: https://www.aclweb.org/anthology/C08-1075/ (full disclosure: I'm one of the authors). |
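To make the comparison concrete, here is a small sketch computing both measures at the character level (the choice of character bigrams for the cosine representation is an assumption); the bag-of-bigrams cosine ignores order, while the Levenshtein distance does not:
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def levenshtein(a, b):
    # classic dynamic-programming edit distance
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return d[len(a), len(b)]

s1, s2 = "listen", "silent"
vec = CountVectorizer(analyzer='char', ngram_range=(2, 2)).fit([s1, s2])
X = vec.transform([s1, s2])
print(cosine_similarity(X)[0, 1])  # character-bigram cosine similarity, about 0.2
print(levenshtein(s1, s2))         # edit distance: 4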
H: to include first single word in bigram or not?
in a text such as
"The deal with Canada's Barrick Gold finalised in Toronto over the
weekend"
When I try to break it into bigram model, I get this
"The deal"
"deal with"
"with Canada's"
"Canada's Barrick"
"Barrick Gold"
"Gold finalised"
"finalised in"
"in Toronto"
"Toronto over"
"over the"
"the weekend"
my question
shall I include the first word and the last word as single words
"* The"
"Weekend *"
AI: What you describe is called padding and is indeed used frequently in language modeling. For instance if one represents the sequence "A B C" with trigrams:
# # A
# A B
A B C
B C #
C # #
The advantages of padding:
it makes every word/symbol appear the same number of times whether it appears in the middle of the sequence or not.
it marks the beginning and end of a sentence/text, so that the model can represent the probability to start/end with a particular word. |
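For illustration, a tiny helper that produces padded n-grams, using * as the padding symbol as in your example:
def padded_ngrams(tokens, n=2, pad='*'):
    # add n-1 padding symbols on each side, then slide a window of size n
    padded = [pad] * (n - 1) + tokens + [pad] * (n - 1)
    return [tuple(padded[i:i + n]) for i in range(len(padded) - n + 1)]

print(padded_ngrams(["The", "deal", "with"], n=2))
# [('*', 'The'), ('The', 'deal'), ('deal', 'with'), ('with', '*')]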
H: Value error in an embedding layer
I am new to deep learning and I am trying to build a book recommender system using embedding layers. I use one layer for the book and one for the user.
I am having trouble with fitting the model. More specifically, when I try to feed the first layer with the books' ISBN list I get back the " ValueError : could not convert string to float: '087584877X' ".
Note that a book's ISBN ( as '087584877X' for example ) is either a sequence of numbers or a sequence of letters and numbers. The second category seems to be the problematic one.
I don't get why it should convert the string to float. I thought that what the network does is assign each string to a new integer, regardless of the structure of the string.
Here is my code for the books embedding layer :
book_input = Input(shape=(1,))
book_embedding = Embedding(input_dim = n_books+1, output_dim = 4)(book_input)
AI: It looks like you're using Keras, so this answer is going to refer to the Keras embedding layer.
The error message is pretty clear - Keras embedding layers don't accept strings as inputs. The embedding layer requires an array of positive integers and can only be used as the first layer of a model. The embedding layer requires that you encode each word as an integer so you might have to use the Keras tokenizer.
Source: Embedding Layers |
H: AlphaGo Zero loss function
As far as I understood from the AlphaGo Zero system:
During the self-play part, the MCTS algorithm stores a tuple ($s$, $\pi$, $z$) where $s$ is the state, $\pi$ is the distribution probability over the actions in the state and $z$ is an integer representing the winner of the game that state is in.
The network will receive $s$ as input (a stack of matrices describing the state $s$) and will output two values: $p$ and $v$. $p$ is a distribution probability over the actions and $v$ is a value in $[-1,1]$ representing which player is likely to win the game.
For the training it will use the following loss function:
$$l = (z - v)^2 - \pi^T log(p) + c ||\theta||^2$$
Lastly, it evaluates the new network and it starts the self-play section again.
My questions
If the network receives only the state $s$ (represented as matrices) as input, how can it then calculate the loss function if the values $\pi$ and $z$ are needed?
If these values are indeed passed as input for the network, are they passed through the convolutional (and other) layers of the network? Because if this is true, there is no mention in the article (unless I missed it) of this information.
AI: The best way to understand that part is by looking at figure 1 in the AlphaGo Zero paper.
The neural network (NN) minimizes the differences between its own policy $p_t$ and the MCTS policy $\pi_t$. The value of $\pi_t$ is produced by the MCTS self-play which in return uses the NN from the previous iteration.
The same goes for $v_t$ and $z$. In each iteration the weights of the NN are adjusted to minimize the distance between $v_t$ (output of the NN) and $z$ (output of the MCTS) as defined by the loss function. $z$ does not have a time index here as the full self-play produces just a single value for $z$ each time it is conducted.
TLDR for your first question: Both, $\pi$ and $v$, are being produced by the MCTS as input to the NN.
(The indexing in the paper is a bit confusing in my opinion so it is probably easiest to just look at it as stated above)
Now, with "input" I do not mean input on the input layer of the NN. As described in the appendix under "Neural network architecture", the input is a "19 x 19 x 17 image stack", which contains the following information:
The positions of player 1 for the latest 8 rounds (8 feature planes)
The positions of player 2 for the latest 8 rounds (8 feature planes)
A color feature indicating whose turn it is (1 feature planes)
And those 17 feature planes ($8+8+1$) combined with the $19\cdot19$ sized board form the $19\cdot19\cdot17$ input the NN receives through its input layer. $\pi$ and $z$ are passed to the NN via the loss function only (i.e. they are the target values in this supervised learning problem!).
TLDR for your second question: $\pi$ and $z$ are not fed to the NN through the input layer but only via the loss function, as target values.
H: Help making a custom categorical loss function in Keras
I am a bit new to machine learning, and I'm trying to get the basics working towards a bigger project using a very simple encoder-decoder model. It looks like this:
embedding_dim = 300
lstm_layer_size_1 = 300
lstm_layer_size_2 = 300
model = Sequential()
model.add(Embedding(self.max_input_vocab, embedding_dim,
input_length=self.max_input_length, mask_zero=True))
model.add(LSTM(lstm_layer_size_1)) # encoder
model.add(RepeatVector(self.max_output_length))
model.add(LSTM(lstm_layer_size_2, return_sequences=True)) # decoder
model.add(TimeDistributed(Dense(self.max_output_vocab, activation='softmax')))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
It takes in a sequence of words encoded as integers, with 0 padding up to max_input_length. And outputs a one-hot-encoded version of the output for words up to max_output_length.
For example, with a max output length of 115, and an expected output of length 20, the network should predict 20 integers in the range max_output_vocab, followed by 95 predicted 0's.
My problem:
I've been running into the issue that the network trains way too much off of the zero tokens in the output, as many of the target sequences have output lengths far below the max output length. The network ends up learning it can get the most accuracy by just predicting almost all 0's for most of the output.
I want to try to make a custom loss function that won't train on any output that comes after the first 0 token, but I'm not sure how I would go about doing this properly.
I know it will look similar to the keras.backend categorical_crossentropy, but would it be as simple as continuing to use a version of that function, but only feeding it the portion of the output sequence I want (everything before the first 0 token in the expected output)?
AI: The issue is very easy to solve, assuming you still want to use crossentropy as your loss.
Tell your model to use temporal sample weights. You can do it like this by model.compile(sample_weight_mode='temporal', **other_params)
Generate your sample weights, I think you are smart enough to write your own implementation. The idea is as you said apply weights of 1 if it is supposed to be counted and apply 0 if it is not supposed to be counted. For example you have a sequence [3,5,1,-3,2,0,0,0,0] then your sample weights will be [1,1,1,1,1,0,0,0,0].
Supply the sample weights during fitting. Simply use model.fit(X, y, sample_weight=sample_weights, **other_fit_params).
Done. Now loss is only counted on non-zero entries of your output. |
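As a sketch of step 2, assuming y_int holds your integer-encoded target sequences with shape (num_samples, max_output_length) and 0 is the padding token:
import numpy as np

# 1.0 for real tokens, 0.0 for padding
# e.g. [3, 5, 1, 7, 2, 0, 0, 0, 0] -> [1, 1, 1, 1, 1, 0, 0, 0, 0]
sample_weights = (y_int != 0).astype(np.float32)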
H: Ordered and unordered categorical features – terminology
In the famous book "The Elements of Statistical Learning" by Hastie et al., the authors denoted unordered categorical variables as qualitative variables / nominal variables / factors.
I wonder, do other statisticians strictly follow this or some authors can use these terms (qualitative variables / nominal variables / factors) not only for unordered categorical variables but also for ordered categorical variables?
AI: The statistical programming language R uses the term "ordered factor," so factor isn't completely safe, though I can't find an example of an ordinal variable being called "factor" without the adjective.
I think ordinal variables are fairly often considered either qualitative or quantitative: they have the quantitative feel from the ordering, but lack mathematical operations. See the end of this post for a few examples.
But "nominal" seems relatively safe as meaning only unordered. (I found one contradiction to this, the Bonus Note #2 at https://www.mymarketresearchmethods.com/types-of-data-nominal-ordinal-interval-ratio/, but that's immediately contradicted by having "ordinal" in the next section?)
See also https://en.wikipedia.org/wiki/Level_of_measurement, especially the "Debate" section that lists a few other proposals. (Chrisman's proposal is nice for including "cyclic" features that are sometimes important in ML but don't fit into most standard libraries without some [unfaithful] encoding.)
A few links to show that the lines get blurred:
https://stats.stackexchange.com/q/159902/232706
https://www.mymarketresearchmethods.com/data-types-in-statistics/
https://stats.stackexchange.com/a/158226/232706
https://web.ma.utexas.edu/users/mks/statmistakes/ordinal.html |
H: Improving the performace of the Naive Bayes classifier by decorrelating the data
I was wondering if it is possible to improve the performance of the Naïve Bayes classifier by decorrelating the data. The Naïve Bayes assumes conditional independence of the features given some class $P(a_1, a_2 | c_1) = P(a_1 | c_1)P(a_2 | c_1)$ which is not necessarily true.
What if we applied a transformation into the space using decorrelation which will result in $Cov(a_1, a_2) = 0$ for all features. We have that if $a_1$, $a_2$ independent, then $Cov(a_1, a_2) = 0$. Although the converse does not strictly follow (e.g. consider the two dependent random variables $X \sim Norm(0, 1)$, and $X^2$ whose covariance is 0), the new features will be closer to being independent.
AI: As you pointed out, a null covariance does not guarantee that variables are independent. You can have strongly dependent variables which show a covariance equal to 0. See a famous plot taken from Wikipedia page on Pearson correlation coefficient
A standard way to uncorrelate variables is to perform principal component analysis on the dataset, and use all components which are, by definition, uncorrelated. But please note that the Naive Bayes assumption is conditional independence, so independence for each class. In the general case, PCA projection would not be the same depending on the selected class.
Regarding the performance of Naive Bayes, I see no reason that a simple transformation should improve the model. This is supported by some reasonably well-received answers on similar questions, for instance this one on Stack Overflow, or this other one on Cross Validated.
However, I have read several reports of people getting increased accuracy when performing PCA prior to Naive Bayes. See here for instance, for a few exchanges on the subject, within a community of which trustworthiness is unknown to me; or search "PCA naive bayes" through your web search engine.
So in the end, I have no mathematical evidence to support this, but my feeling would be that the model performance variation before/after transformation could depend on the problem, especially the directions of principal components for each class.
Perhaps, if you perform some tests, you could share the result here. Otherwise, I think you could get answers with more mathematical background on Cross Validated. |
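For such a test, a quick scikit-learn sketch could be as follows (X and y stand for your features and labels; the scaler and the use of Gaussian Naive Bayes are assumptions about your setup):
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

plain = GaussianNB()
decorrelated = make_pipeline(StandardScaler(), PCA(), GaussianNB())

print(cross_val_score(plain, X, y, cv=5).mean())         # Naive Bayes on raw features
print(cross_val_score(decorrelated, X, y, cv=5).mean())  # Naive Bayes on PCA components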
H: Is machine learning the right tool for this job?
I would like to create a troubleshooting wizard.
The user will go through the wizard and choose different options, what options they choose will determine what is displayed next in the wizard.
Eventually the user will solve their problem (or not); at the end they will choose 'yes' or 'no' depending on whether their problem was solved.
The wizard would learn what most common resolutions to a problem was based on past input.
Before I go exploring machine learning further, does this sound like the right tool for the job?
AI: There is a lack of details here, but the problem is very interesting; it sounds a bit like a QA chat-bot (you may read more about these). My thoughts for this solution to be viable:
there should be a sufficient number of users of your wizard to collect the necessary amount of data if you want to use deep learning here. If you only have several hundred cases where the wizard is used, forget it; use some simple statistical analysis (like linear regression, naive Bayes etc.).
For the collected data to be variable enough to cover different cases, there should be some randomization in collecting the data - sometimes the wizard should be asking questions in a different order and maybe even asking irrelevant questions.
As for architecture, you may want to classify sequences of questions/answers, then the most obvious choice is a variant of an RNN. Or maybe you want to classify how good the next question is - maybe you want to use a decision tree/random forest here to classify it. Or maybe you need reinforcement learning? That is to say, you should think very carefully about what you want to classify and then test it, maybe in several iterations.
So, if the wizard is complex enough and amount of data you have or may collect is big, then answer is most probably yes. |
H: Optimization of pandas row iteration and summation
I'm wondering if anyone can provide some input on improving the speed of the calculations behind a pandas result.
What I am trying to obtain is a summation of IDs in one table (player table) based on each row of a second table (UUID). Functionally, each row needs to sum the total of the player table rows that are contained in its Active entry and assign the UUID as the index to that row.
My initial thought was to loop row by row and calculate my results, but that has produced quite a slow result, and I suspect it is not the optimal way this could be accomplished. In the version below my estimated total time for the full dataset would be around 66 minutes. Running on a subsample of 10,000 takes around 20 seconds.
Would anyone have a better solution to calculating these results?
Thanks in advance!
UUID Table
This is a subset of the whole table
shape = (2060590, 2)
Player ID Table
This is a subset of the whole table
shape = (39,8)
Final Table
Code
# executes in ~20 seconds
df = None
for ix, i in enumerate(uuid_df[["UUID", "Active"]].sample(10000).itertuples(index=False)):
# Get UUID for row
_uuid = i[0]
# Get list of "Active items" (these are the ones that will be summed)
_active = i[1]
# Create new frame by selecting everything from points table where the ID is in the Active List.
# Sum selected values, convert to a dataframe with UUID as index and tranpose
_dff = points_table_df.loc[points_table_df.index.isin(_active)].sum().to_frame(_uuid).T
# Check if first dataframe, if not concat to existing one
if df is None:
df = _dff
else:
df = pd.concat([df, _dff])
AI: This could actually be done quickly and intuitively using linear algebra.
So consider your player column as a label-binarized array (this can be done with MultiLabelBinarizer), so you would expect an array of size (2060590, 39) containing 0s and 1s. Rearrange the columns to match how you order your player table (or the other way around, whichever is easier), basically such that the first column of your new matrix corresponds to the first player in the player table. Finally just apply matrix multiplication, and done.
This is an example using generated sample, but hopefully you get the idea of doing this.
import numpy as np
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
sample_active = pd.Series([[100,50,150,200],
[100,50,150],
[100,50],
[100]])
sample_df = pd.DataFrame()
sample_df['id'] = ['fadfsadsa', 'dsafsadf', 'dfsafsda', 'dasfasdfsaf']
sample_df['active'] = sample_active
## sample_df should look close to your original df
classes = [50,100,150,200]
player_df = pd.DataFrame({cl : np.random.uniform(0,1,size=5) for cl in classes}).T
player_df.columns = ['A','B','C','D','E']
sample_transformed = mlb.fit_transform(sample_active.values) ##apply multilabel binarizer
output = sample_transformed.dot(player_df.loc[mlb.classes_]) ##matrix multiply and get your required answer, use loc so the order will be similar as your binarized matrix.
new_df = pd.concat([sample_df['id'], pd.DataFrame(output)], axis = 1)
new_df.columns = ['id'] + list(player_df.columns)
For your case I think this should work :
mlb = MultiLabelBinarizer()
active_transformed = mlb.fit_transform(uuid_df['Active'])
output = active_transformed.dot(points_table_df.loc[mlb.classes_])
df = pd.concat([uuid_df[['UUID']], pd.DataFrame(output)], axis = 1)
df.columns = ['UUID'] + list(points_table_df.columns)
Try it! |
H: Keras - error when adding layers to loaded model
I want to use ResNet50 as a feature extractor. For this purpose, I have loaded the pre-trained model, deleted a few layers and added my layers to the model. For adding my layers, I have used the Sequential API. The code is the following:
resnet_model = ResNet50(weights='imagenet')
# Delete layers
for i in range(12):
resnet_model.layers.pop()
# Fix weights
for layer in resnet_model.layers:
layer.trainable = False
base_model = Model(inputs=resnet_model.inputs, outputs=resnet_model.layers[-1].output)
model = Sequential()
model.add(base_model)
model.add(Conv2D(2048, kernel_size=(3, 3), input_shape=(7, 7, 2048)))
model.add(BatchNormalization(momentum=bn_momentum))
model.add(Activation('relu'))
model.add(Conv2D(2048, kernel_size=(3, 3)))
model.add(BatchNormalization(momentum=bn_momentum))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(1000, activation='relu'))
model.add(Dense(200, activation='softmax'))
This code works. However, when I run print(model.summary()) I obtain the following:
Model: "sequential_26"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
model_16 (Model) (None, 7, 7, 2048) 19115904
_________________________________________________________________
conv2d_31 (Conv2D) (None, 5, 5, 2048) 37750784
_________________________________________________________________
batch_normalization_23 (Batc (None, 5, 5, 2048) 8192
_________________________________________________________________
activation_917 (Activation) (None, 5, 5, 2048) 0
_________________________________________________________________
conv2d_32 (Conv2D) (None, 3, 3, 2048) 37750784
_________________________________________________________________
batch_normalization_24 (Batc (None, 3, 3, 2048) 8192
_________________________________________________________________
activation_918 (Activation) (None, 3, 3, 2048) 0
_________________________________________________________________
flatten_10 (Flatten) (None, 18432) 0
_________________________________________________________________
dense_17 (Dense) (None, 1000) 18433000
_________________________________________________________________
dense_18 (Dense) (None, 200) 200200
=================================================================
Total params: 113,267,056
Trainable params: 94,142,960
Non-trainable params: 19,124,096
_________________________________________________________________
None
As you can see, the conv layer adds 37750784 new parameters to the network!!! Obviously, this isn't right, but I can't see why this happens...
AI: Actually, the number of parameters for the second conv layer is correct. The number of parameters in a convolutional layer is calculated as follows:
n_parameters = n_filters_out * (n_filters_in * kernel_height * kernel_width + 1)
             = 2048 * (2048 * (3 * 3) + 1) = 37750784
where n_filters_in is the number of filters (channels) produced by the previous layer, and the +1 accounts for each filter's bias.
If you want to decrease the number of parameters, you therefore have multiple options:
Decrease the number of filters in the layer before it
Decrease the number of filters in the layer itself
Use a smaller kernel height or width
H: Build a model to classify given string/text input
I need to build an ML/NN model to classify/predict a given string pattern. Sample training data looks as shown in the image. The input will be the string in the column "Id Number", and I need to tell which class in the column "Id Type" it belongs to.
How do I move forward in building a model for text classification?
How do I convert a string to digits so I can use an embedding in Keras?
AI: From the sample you have here, it is not obvious that you need to apply any fancy ML techniques. A simple rule-based approach might also give you great results.
For instance:
if the string contains only numbers, then return B
If the string starts with 5 letters then return A
If the string starts with 2 letters and then numbers, return D
Etc.
That being said, to transform your strings into numbers, a simple approach - looking at your data - could consist in assigning each character a value in the set {0, 1, ..., 9, 10, ..., 35, 36}: the digit 0 gets the value 0, the digit 9 gets the value 9, A gets 10, Z gets 35, and NULL gets 36. (As your strings don't all have the same length, it comes in handy to introduce a placeholder for the blank positions so that all your final vectors end up the same size.)
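A minimal sketch of that mapping (the fixed vector length of 15 and the use of 36 as the padding value are assumptions for illustration):
# Minimal sketch of the character-to-integer encoding described above.
# The fixed vector length (15) and the padding value (36) are assumptions.
def encode_id(s, max_len=15):
    values = []
    for ch in s.upper():
        if ch.isdigit():
            values.append(int(ch))                   # '0'-'9' -> 0-9
        elif ch.isalpha():
            values.append(ord(ch) - ord('A') + 10)   # 'A'-'Z' -> 10-35
    # pad/truncate so every vector has the same length
    values = values[:max_len] + [36] * max(0, max_len - len(values))
    return values

print(encode_id("AB12C"))   # [10, 11, 1, 2, 12, 36, 36, ...]
These integer vectors can then be fed to a Keras Embedding layer.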
Hope this helps! |
H: Not able to connect to GPU on Google Colab
I'm trying to use tensorflow with a GPU on Google Colab.
I followed the steps listed at https://www.tensorflow.org/install/gpu
I confirmed that gpu is visible and CUDA is installed with the commands -
!nvcc --version
!nvidia-smi
This works as expected giving -
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
Wed Nov 20 10:58:14 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.50 Driver Version: 418.67 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
| N/A 53C P8 10W / 70W | 0MiB / 15079MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
So far so good. I next try to see if it is visible to tensorflow -
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 16436294862263048894, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 18399082617569983288
physical_device_desc: "device: XLA_CPU device", name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 1461835910630192838
physical_device_desc: "device: XLA_GPU device"]
However when I try to run even a simple operation on the GPU with tensorflow it throws an error. When I checked if the GPU is visible to tensorflow it returns False -
tf.test.is_gpu_available()
False
What am I doing wrong and how do I fix this ?
AI: In Google Colab you just need to specify the use of GPUs in the menu above. Click:
Edit > Notebook settings >
and then set Hardware accelerator to GPU.
At that point, if you type in a cell:
import tensorflow as tf
tf.test.is_gpu_available()
It should return True. |
H: How to handle multi-label feature for binary classification problem?
I have dataset like :
profile category target
0 1 [5, 10] 1
1 2 [1] 0
2 3 [23, 5000] 1
3 4 [700, 4500] 0
How should I handle the category feature? This table may have other additional features too. One-hot encoding would consume too much space, because the number of rows is around 10 million. Any suggestion would be helpful.
AI: I recommend using scikit-learn's MultiLabelBinarizer to encode your category feature. This is the multi-label analog of one-hot encoding, but it has a sparse_output mode, which should solve your problem of limited memory. |
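A minimal sketch (the column name category comes from your example; sparse_output=True keeps the result as a SciPy sparse matrix, which avoids the memory blow-up of a dense one-hot matrix):
from sklearn.preprocessing import MultiLabelBinarizer

mlb = MultiLabelBinarizer(sparse_output=True)
# df['category'] holds lists such as [5, 10]; the result is a sparse matrix
# with one column per distinct category value.
encoded = mlb.fit_transform(df['category'])
print(encoded.shape, type(encoded))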
H: How to handle addresses of the restaurants to feed the data-set in the ML model?
I have data from different restaurants which also includes the address of each restaurant. I want to predict the food delivery time based on the given data, and the restaurant address is one of the crucial pieces of information I need for that prediction. My problem is that the address is in string format, so how do I handle the address when feeding the data into the ML model? I have a total of 35 unique addresses; if I do one-hot encoding my dataset will be very large and it will take too much time to train. Is there any way other than one-hot encoding to handle the addresses of the restaurants?
Data Sample-
AI: You can convert each address to a pair of geographical coordinates (latitude, longitude). In this way, you'll have a rough measure of the distance between different locations. This can be done with Google Maps, for example.
I suggest you compute the Manhattan distance between locations, in order to properly estimate travel times within a city.
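Once you have coordinates for each address (how you geocode them is up to you), a rough sketch of a Manhattan-style distance feature could look like this (the coordinates below are made up):
import numpy as np

# Hypothetical geocoded points: (latitude, longitude)
restaurant = (12.9716, 77.5946)
customer   = (12.9352, 77.6245)

# Convert degree differences to kilometres (~111 km per degree of latitude;
# longitude is scaled by cos(latitude)). This is only an approximation.
lat_km = abs(restaurant[0] - customer[0]) * 111.0
lon_km = abs(restaurant[1] - customer[1]) * 111.0 * np.cos(np.radians(restaurant[0]))

manhattan_km = lat_km + lon_km
print(round(manhattan_km, 2))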
H: How could I improve my FB Prophet forecast?
I've got 1325 days of revenue data and when plotting the components it makes 100% sense from a domain expert point of view, so the model is capturing the variations quite well (or it seems it does...). I've added the country holidays using m.add_country_holidays(country_name='GB')
However, when it comes to accuracy I'm getting the following averages:
MAPE: 0.3
MAE: 721,415
721,415 is not an acceptable error. Around 100K would be.
These are the MAE and MAPE plots:
Time-series plot:
Performance metrics (first 20):
What else can I do to improve the accuracy of this model? Thank you
AI: The data here is a bit noisy and has a lot of fluctuations. As a few of the comments suggest, apply some transformation to it. I would say get your data into a smaller range and then apply an LSTM to predict it. I made a time series work with an LSTM by removing noise (eliminating outliers), and it produced good predictions further out.
RNNs tend to work better with time-series data, especially bidirectional LSTMs, due to their ability to learn the sequence in both directions.
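As a minimal sketch of the kind of transformation meant here (column names follow Prophet's ds/y convention; whether you then fit Prophet or an LSTM on the transformed series is up to you):
import numpy as np

# df has Prophet-style columns: 'ds' (date) and 'y' (daily revenue).
df['y'] = np.log1p(df['y'])   # compress the large fluctuations before fitting

# ... fit the model (Prophet, LSTM, ...) on the transformed series ...

# After forecasting, invert the transform to get back to the revenue scale:
# forecast['yhat'] = np.expm1(forecast['yhat'])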
H: Question mark on Correlation Matrix with RapidMiner
I'm using RapidMiner to evaluate the correlation between attributes in my dataset. The problem is that a lot of values appear as '?'.
Can someone help me?
This is a sample of the data
AI: All your data is non-numeric, so there is no straightforward method to compute a correlation value. It appears that the software does compute a correlation value between 2-valued fields (breast and irradiat).
You will have to work on your data if you want to compute the full correlation matrix:
Make fields with numeric values numeric. For instance, the 6th field seems to be numeric data encoded as strings (make sure those are not numbered categories that cannot be ordered, otherwise the correlation will mean nothing)
Give numeric values to ordered categories. For instance, for the age field, you could certainly use a centered numeric value for each category (45 instead of 40-49, 55 instead of 50-59, etc.), or just a rank.
Ignore rows with no data. For instance, in the 5th field, you have binary results (yes/no) and appear to have a few question marks. Removing the lines with question marks will make the field 2-valued, something that appears to allow the software to compute the correlation.
For the rest of the fields, there is no magic trick, unfortunately. You may want to one-hot-encode them, but interpreting the results will be more complicated. There are also ways to test categorical fields independence, but that's a different measure: see this post. |
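If you decide to pre-process the data outside RapidMiner, a minimal pandas sketch of point 2 could look like this (the column name and the age bands are taken from the example above and may need adjusting to your file):
import pandas as pd

# Map the ordered age bands to their midpoints so a correlation can be computed.
age_midpoints = {'30-39': 35, '40-49': 45, '50-59': 55, '60-69': 65, '70-79': 75}
df['age_numeric'] = df['age'].map(age_midpoints)

# Rows containing '?' can simply be dropped before computing the correlation matrix.
df = df.replace('?', pd.NA).dropna()
print(df.corr())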
H: Am I overfitting my random forest model (more information in description)?
First off, sorry if this a novice question! Relatively new to all this stuff. Posted this in Stack Overflow and someone sent me here! Hope it's the right place.
Anyway, I'm working with 22 datasets that each have 180 observations of "Oddball" data and 720 observations of "Standard" data. I'm trying to use random forests for classification (i.e., oddball=1, standard=0). I understand there should be approximately equal trials/observations for both factors, but if I use 75% of the oddball data, then I'm barely using over 18% of the standard data. These data are pretty variable, and I think this could be problematic.
If I make four models, still using the same training data for each, am I overfitting my model? There's a lot more I've written, but this is basically what I'm trying to do:
jj = sample(1:180,(180*.75),replace = F) #Take 75% of all oddball data
kk = sample(181:900,(720*.75),replace = F) #Take 75% of all standard data
jj = sample(jj); kk = sample(kk) #Mix them up
kk = matrix(kk,4) #Divide the standard data so there are 4 sets of equal numbers for jj
samp1 = c(jj,kk[1,])
samp2 = c(jj,kk[2,])
samp3 = c(jj,kk[3,])
samp4 = c(jj,kk[4,])
I would then make four models (all while not touching the out-of-sample data) using each of these sample sets and average all their predictions to give me a "master" probability (i.e., an average of .8 would be deemed an oddball).
Is this overfitting the data? Is it even possible to overfit the data when using random forests? Is there something wrong with this intuition?
Thank you for anyone that helps! Your time and expertise is much appreciated.
AI: I don't think it's necessary to combine the 4 models into one by averaging probabilities (how do you know they should have equal weights? let the learner handle those weights), since you are using the same features and a learner which is already an ensemble (it natively combines weak learners into one). Therefore it's better to think about the labeling and class balance of those 4 models.
Is this overfitting the data? Is it even possible to overfit the data when using random forests? Is there something wrong with this intuition?
In order to say whether you are overfitting or not, training and test errors should be available. Simply put, if the training error is very low compared to the test error, it implies overfitting.
Secondly, ensemble models are likely to overfit due to their greediness.
And finally, you mentioned some class imbalance. So you should check metrics such as ROC AUC, recall and precision, depending on your case, rather than accuracy.
Therefore my suggestion is:
Apply hyperparameter tuning for all 4 models.
Report mean training & test errors with a relevant performance metric (for imbalanced data, consider using ROC AUC).
If there are huge differences between test and training errors, try to change your model/feature set etc., since you are overfitting (e.g. 0.99 for training, 0.70 for test). Otherwise pick the best estimator among the 4 models and use it as the final estimator.
Final comment: as complexity increases, models are more likely to overfit (logistic regression is less likely to overfit compared to ensemble trees such as RF, GBM, XGBoost). Thus always tune your parameters when using ensembles.
Hope it helps! |
H: To calculate my confusion matrix with recall and precision, my test set need to be equal(balanced)?
In my CNN, I have 200 'negative' images and 50 'positive' images in my test set, and I want to make a confusion matrix. My doubt is whether I have to equalize the samples in the dataset, because if I keep this 200-50 split my precision falls, since I have a lot of 'false positives'.
So, do I have to balance the negatives down to 50, or keep the 200/50 split?
These are my results without balancing the samples:
predicted positive predicted negative
actual pos. 41 9
actual neg. 31 169
recall = 41 / 50 = 82%
precision = 41 / 72 = 57%
AI: I don't think there is any reason to modify the matrix so keep it as it is. Even if you scale it what purpose does it serve? At the end of the day your model does not change even if you modify your confusion matrix.
In my opinion you can use other metrics, e.g. F1-score (or F-beta score), AUC, etc., to judge your model. The confusion matrix only provides a visualization of where your model got "confused", and I would say it is less useful for binary classification (as you only have false positives or false negatives). The metrics above serve as a better judge for evaluating your model.
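For example, with the counts from your matrix you can compute these with scikit-learn (a small sketch; the 0/1 label vectors are reconstructed from the four cells):
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

# Rebuild the label vectors from the confusion-matrix counts:
# TP=41, FN=9, FP=31, TN=169
y_true = np.array([1]*41 + [1]*9 + [0]*31 + [0]*169)
y_pred = np.array([1]*41 + [0]*9 + [1]*31 + [0]*169)

print(recall_score(y_true, y_pred))     # ~0.82
print(precision_score(y_true, y_pred))  # ~0.57
print(f1_score(y_true, y_pred))         # ~0.67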
This is a related question which you can probably check. |
H: Adjust predicted probability after smote
I have an imbalanced data set and I used SMOTE to oversample the minority class and undersample the majority class. Now, I want to check the test AUC using predict_proba of the model.
I have two questions: 1. Do I have to correct the probability if I am comparing AUCs? 2. How can I correct it (a combination of undersampling and oversampling!)
AI: No, any adjustment to the probabilities will presumably be monotonic, so the rank-ordering of the predictions will be the same, so the AUC will be the same.
See, e.g., https://datascience.stackexchange.com/a/58899/55122
See also the more complex "probability calibration" techniques.
Also, if you see better results after smote+undersampling, and can share your data and work, I'd be very interested. I haven't yet seen an example where training on the original dataset doesn't do just as well (with appropriate thresholding). |
H: Should features be correlated or uncorrelated for features-selection with the help of multiple regression analysis?
I have seen researchers using Pearson correlation coefficient to find out the relevant features - to keep the features that have a high correlation value with the target. The implication is that the correlated features contribute more information in finding out the target in classification problems. Whereas, we remove the features which are redundant and have negligible correlation value.
Q1) Should features that are highly correlated with the target variable be included or removed in classification problems? Is there a better/more elegant explanation of this step?
Q2) How do we know that the dataset is linear when there are multiple variables involved? What does it mean for a dataset to be linear?
Q3) How do we check feature importance in the non-linear case?
AI: Q1) Should highly correlated features with the target variable be included or removed from classification and regression problems? Is there a better/elegant explanation to this step?
Actually there's no strong reason either to keep or remove features which have a low correlation with the target response, other than reducing the number of features if necessary:
It is correct that correlation is often used for feature selection. Feature selection is used for dimensionality reduction purposes, i.e. mostly to avoid overfitting due to having too many features / not enough instances (it's a bit more complex than this but that's the main idea). My point is that there's little to no reason to remove features if the number of features is not a problem, but if it is a problem then it makes sense to keep only the most informative features, and high correlation is an indicator of "informativeness" (information gain is another common measure to select features).
In general feature selection methods based on measuring the contribution of individual features are used because they are very simple and don't require complex computations. However they are rarely optimal because they don't take into account the complementarity of groups of features together, something that most supervised algorithms can use very well. There are more advanced methods available which can take this into account: the most simple one is a brute-force method which consists in repeatedly measuring the performance (usually with cross-validation) with any possible subset of features... But that can take a lot of time for a large set of features.
However features which are highly correlated together (i.e. between features, not with the target response), should usually be removed because they are redundant and some algorithms don't deal very well with those. It's rarely done systematically though, because again this involves a lot of calculations.
Q2) How do we know that the dataset is linear when there are multiple variables involved? What does it mean for a dataset to be linear?
It's true that correlation measures are based on linearity assumptions, but that's rarely the main issue: as mentioned above it's used as an easy indicator of "amount of information" and it's known to be imperfect anyway, so the linearity assumption is not so crucial here.
A dataset would be linear if the response variable can be expressed as a linear equation of the features (i.e. in theory one would obtain near-perfect performance with a linear regression).
Q3) How to do feature importance for nonlinear case?
Information gain, KL divergence, and probably a few other measures. But using these to select features individually is also imperfect. |
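For Q3, a minimal sketch of a non-linearity-friendly measure using mutual information in scikit-learn (shown here on the iris data just for illustration):
from sklearn.datasets import load_iris
from sklearn.feature_selection import mutual_info_classif

X, y = load_iris(return_X_y=True)

# Mutual information captures non-linear dependence between each feature
# and the target; higher values indicate more informative features.
mi = mutual_info_classif(X, y, random_state=0)
print(mi)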
H: Is it possible to generate syllogisms using an NLP algorithm?
I want to build a tool that generates sensible syllogisms. An example of a syllogism is: all A are B; all C are A; therefore all C are B. I want the triplet (A, B, C) to be semantically related in the way described by the syllogism. That is, I would like my tool to generate 'a=humans; b=mortal; c=greeks' and not 'a=chickens; b=burgers; c=frogs'.
There's a syllogism generator online (http://krypton.mnsu.edu/~jp5985fj/courses/609/Logic/Silly%20Syllogisms.htm) but it doesn't generate syllogisms that are semantically related, it generates random terms for A, B, and C which may or may not be related.
My question is, in NLP, are there any research papers for generating semantically valid syllogisms? What topics would I need to research to build this tool?
AI: That would probably be related to textual entailment and also relation extraction.
I'm not aware of any specific work but I would check in the biomedical domain, because there are resources such as SemRep and I wouldn't be surprised if people tried to use it for similar purposes. |
H: Calculating possible number of configuration
I am wondering how they got the $19200$ possible configurations. I expected something like $5^6 = 15625$, where $6$ is the number of hyper-parameters:
AI: The total number is:
$$5 \times 5 \times 6 \times 4 \times 4 \times 8$$
which is equal to $19200$. Here, we simply multiply together the number of possible values of each hyper-parameter; they are not all equal to $5$, which is why $5^6$ does not give the right count.
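A quick way to convince yourself of the count (the value lists below are placeholders; only their lengths, taken from the table, matter):
from itertools import product

# Placeholder value lists with the same lengths as in the table: 5, 5, 6, 4, 4, 8
grids = [range(5), range(5), range(6), range(4), range(4), range(8)]
print(len(list(product(*grids))))   # 19200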
H: Create a new column in Pandas Dataframe based on the 'NaN' values in another column
What is the most efficient way to create a new column based on the NaN values in a separate column (considering the dataframe is very large)?
1 2 3
4 5 NaN
7 8 9
3 2 NaN
5 6 NaN
Should give
1 2 3 0
4 5 NaN 1
7 8 9 0
3 2 NaN 1
5 6 NaN 1
EDIT:
What if it were based on 2 columns? Like:
1 2 3 0
4 NaN 1 1
7 8 9 0
3 2 NaN 1
5 NaN 2 1
AI: In [8]: %timeit df['B'] = df['A'].isnull()*1
517 µs ± 145 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [7]: %timeit df['B'] = df['A'].isnull().astype(int)
283 µs ± 14.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [12]: %timeit df['C'] = np.where(np.isnan(df['A'].values), 1, 0)
105 µs ± 5.97 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [13]: %timeit df['C'] = np.where(df['A'].isnull(), 1, 0)
227 µs ± 12.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Seems like the full numpy solution (In [12]) is fastest.
EDIT:
For across multiple columns
df['C'] = np.where(np.isnan(df[['A', 'B']].values).any(axis=1), 1, 0)
H: Face dataset organized by folder
I'm looking for a fairly small/medium dataset (from 50MB to 500MB) that contains photos of famous people organized by folder.
The tree structure has to be something like this:
├── bfegan
└── ...
├── chris
└── ...
├── dhawley
└── ...
├── graeme
└──...
├── heather
└──...
Do you know some dataset that have this tree structure?
I want to understand how my very basic neural network is able to detect faces in a 'real-world' situation.
https://github.com/alessiosavi/PyRecognizer
AI: I've found a dataset that meets the specification here:
https://www.kaggle.com/atulanandjha/lfwpeople/data#lfw-funneled.tgz
Some people in this dataset have only very few images.
We can create a new dataset using the following bash commands, in order to extract only the people that have more than 5 images:
# Extract only the people that have more than 5 photos (-gt 5)
for i in $(ls); do a=$(ls $i |wc -l); if [ "$a" -gt 5 ]; then echo $i ; fi ; done > people_ok
# Create a directory for store the images
mkdir -p /tmp/faces
# Copy the filtered directory in the new one
for i in $(cat people_ok | xargs echo -n) ; do cp -r $i /tmp/faces/ ; done |
H: ROC curve interpretation
I trained a CNN model and a combined CNN-SVM model for classification. I wanted to compare their performance using ROC curves, but I was confused about which model is better. How should I interpret the given ROC curves?
AI: If you look at the area under the curve (AUC), you can see that the first classifier is better, as the AUC of the first curve is larger than that of the second classifier. To learn more about AUC, you may find this post useful.
H: Why GA convergence curves continue as two parallel lines?
I'm working on an optimization problem and using a GA (in MATLAB, the ga function).
As you know, MATLAB plots the GA result with two curves, one for the best values and the other for the mean values; when these two curves touch each other it means the algorithm has converged.
In my case these two curves never meet and continue as two parallel lines until the maximum number of generations is reached, ending in a premature convergence. What makes this happen?
I have tested other parameter values but got the same result. For pre-processing I used missing-value handling, normalization and smoothing, and the tested dataset belongs to UCI.
GA Parameters:
MutationFcn : @mutationadaptfeasible: 0.03
CrossoverFcn : @crossoverheuristic : 1.2
maxGenerations : 200;
populationSize : 180;
Chromosome length: 33
Test Instances : 71
AI: As you know, MATLAB plots the GA result with two curves, one for the best values and the other for the mean values; when these two curves touch each other it means the algorithm has converged.
More accurately, when the two curves touch each other it means that all the individuals in the population perform exactly the same, because that's the only way for the best and the mean to be equal (and usually this happens because all the individuals are identical). That can happen only by chance and/or if the mutation probability is very small, because as soon as there is an individual with a mutation it's unlikely that this individual's performance will be equal to the best.
To me in a plot like yours successful convergence was reached around 60-70 iterations. The residual difference is due to the random mutations. |
H: how to use standardization / standardscaler() for train and test?
At the moment I perform the following:
estimators = []
estimators.append(('standardize', StandardScaler()))
prepare_data = Pipeline(estimators)
n_splits = 5
tscv = TimeSeriesSplit(n_splits = n_splits)
for train_index, val_index in tscv.split(df_train):
X_train, X_val = prepare_data.fit_transform(df_train[train_index]), prepare_data.fit_transform(df_train[val_index])
X_test = prepare_data.fit_transform(df_test)
Now I would like to know if this is correct. My concern is that X_train and X_test are transformed separately. While at first I thought this is how it should be, I'm starting to change my mind, as I think I should use the mean and std of the train set within the test set?
AI: The recommended way (see 'Elements of Statistical Learning', chapter 'The Wrong and Right Way to Do Cross-validation') is to calculate the mean and the standard deviation of the values in the training set and then apply them for standardizing both the training and testing sets.
The idea behind this is to prevent data leakage from the testing to the training set because the aim of model validation is to subject the testing data to the same conditions as the data used for the model training. |
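A minimal sketch of how that looks with the snippet from the question (fit the scaler on the training fold only, then reuse the fitted mean/std for the validation data and the test set; .iloc is used here assuming df_train is a DataFrame):
for train_index, val_index in tscv.split(df_train):
    # Fit the scaler on the training fold only ...
    prepare_data.fit(df_train.iloc[train_index])
    # ... and reuse its mean/std to transform both folds and the test set.
    X_train = prepare_data.transform(df_train.iloc[train_index])
    X_val = prepare_data.transform(df_train.iloc[val_index])

X_test = prepare_data.transform(df_test)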
H: Fine Tuning the Neural Nets
I have recently read about fine tuning, and what I want to know is: when we are fine-tuning our model, is it necessary to freeze the model, train only the top part, and then unfreeze some layers and train again, or can one directly begin by unfreezing some layers? So far, I have read that one does not unfreeze the layers directly because then we risk losing the important features captured by the earlier layers.
AI: It can work either way. If you want to keep the exact feature extractors, then you should freeze everything except the "top" of the model. You can also unfreeze the whole model; the "top" of the model will be trained from scratch, and the feature extractors near the "bottom" of the model will be tweaked to work better with your dataset. The potential drawback of unfreezing the whole model is a higher potential for overfitting (and a longer, more expensive training time)
[I]s it necessary to Freeze the model and train only the top part of the model and then unfreeze some layers and again train the model or one can directly begin by unfreezing some layers?
I'm not aware of any training routine that involves freezing and unfreezing different parts of the model at different times during training. People may have done this, but I'm not sure what the benefits would be. |
H: K-Means initialization
K-Means initializes the centroids randomly, but there are other methods to initialize. In this paper, http://ilpubs.stanford.edu:8090/778/1/2006-13.pdf, they propose randomly choosing a data point initially then choose the other centroids based on the distance from the initial centroid.
My question is: how does this give you the right result? Say my data clusters naturally into three clusters, a noisy cluster around the (x, y) points (1, 1), (0, 0), and (-1, -1). Say I use the method from the paper and initially choose a data point (1.32, 0.98) and mark it as the center of cluster #1. According to the paper, I choose the next centroid based on distance, so the next point will be around (-1, -1). Say the data point chosen for cluster #2 is (-1.12, -0.89). These first two steps make sense, but now I continue on to cluster #3 and again I chose based on distance so I'll end up putting another cluster center very close to cluster #2's center. What am I missing here? Shouldn't the centers be chosen based on the sum of distances from the already initialized cluster centers?
EDIT:
Initially, I randomly choose a data point to mark as center of cluster #1. I choose the red point. Now I calculate the distances between red point and all other data points and choose the furthest point away as center of cluster #2. This is the green point. My question is: according to the paper, I repeat this and calculate distances from the red point to all remaining points and take the furthest away, but this puts me back near the green point, but I was trying to get to the center cluster.
AI: The nth centroid is chosen from a distribution proportional to $D(x)^2$, but pay careful attention to how $D(x)$ is defined. From the paper (top of page 3):
In particular, let $D(x)$ denote the shortest distance from a data point to the closest center we have already chosen.
Notice that $D(x)$ is the distance from $x$ to the nearest centroid. Compare that to the description of how you are choosing the points:
I repeat this and calculate distances from the red point to all remaining points and take the furthest away, but this puts me back near the green point, but I was trying to get to the center cluster.
Instead of computing the distance between all remaining points and the red point, you should instead compute the distance between each point and its nearest centroid.
For the points in the cluster around (1, 1) you'll compute the distance from the red point, and all the distances will be relatively small. Similarly, for the points around (-1, -1), you'll compute the distance from the green point, and all these distances will be pretty small.
However, for points around (0, 0), some of them will be closer to the red centroid and some will be closer to the green centroid, but they are not very close to either one. This means that $D(x)$ will be large for points in the (0, 0) cluster, and you are likely to choose one of these points as the next centroid.
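If it helps, here is a minimal numpy sketch of that selection step (X and centroids are assumed arrays; the next centroid is drawn with probability proportional to D(x) squared, as in the paper):
import numpy as np

rng = np.random.default_rng(0)

def choose_next_centroid(X, centroids):
    # D(x): distance from each point to its *nearest* already-chosen centroid
    dists = np.min(np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2), axis=1)
    probs = dists ** 2 / np.sum(dists ** 2)     # weight proportional to D(x)^2
    return X[rng.choice(len(X), p=probs)]
Points in the untouched middle cluster get large D(x) and are therefore very likely to be picked next.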
Does that make sense? |
H: What is the approx minimum size of dataset required to build 90% correct model?
I am working with a financial dataset whose size is around 3000. I have attempted supervised-learning regression techniques and am not able to go beyond 70% accuracy.
Features: 10
Data size:3700
Models attempted: Decision Trees, Random forest, Lasso Regression, Ridge regression, Linear regression
I am of the opinion that the dataset size is too small to expect any good results beyond 65%, since machine learning algorithms are data-hungry in nature. However, in a generic sense, is there a lower bound on the dataset size that has been found to achieve 90% accuracy?
Such a theory would also help me to gather data until I reach that point and then do some productive work.
Any help is appreciated.
AI: There is no theory or general case that sets the size of dataset required to reach any target accuracy. Everything is dependent on the underlying, and usually unknown, statistics of your problem.
Here are some trivial examples to illustrate this. Say want to predict the sex of a species of frog:
It turns out the skin colour is a strong predictor for the species Rana determistica, where all males are yellow and all females blue. The minimal dataset to get 100% accuracy on the prediction task is data for two frogs, one of each sex.
It turns out the skin colour is uncorrelated for the species Rana stochastica, where 50% of each sex are yellow and the other 50% are blue. There is no size of dataset of frog colour labelled with sex that will get you better than 50% accuracy on the task.
However, Rana stochastica does have eye colouration with almost determistic relationship to the sex of the creature. It turns out that 95% of males have orange eyes and 95% of females have green eyes (with only those two eye colours possible). Those are predictive variables that are strong enough that you can get 95% accuracy if you can discover the relationship.
Some related theory worth reading to do with limitations of statistical models is Bayes error rate.
In the last case, simply predicting "male" for orange eyes and "female" for green eyes will give you 95% accuracy. So the question is what size of dataset would guarantee a model would both make those predictions, plus give you the confidence that you had beaten your 90% accuracy goal? It can be figured out, assuming you collect labelled sample data at random - note that there is a good chance that models trained on very little data would get 95% accuracy, but that it could take a lot more data for a test set in order for you to be confident that you really had a good enough result.
The maths to demonstrate even this simple case is long-winded and complex (if I were to outline the theory being used), and does not actually help you, so I am not going to try and produce it here. Plus of course I chose 95%, but if the eye colour relationship was only 85% predictive of sex, then you would never achieve 90% accuracy. With a real project you have many more variables and at best only a rough idea on how they might correlate with the target variable or each other in advance, so you cannot do the calculation.
I think instead it is more productive to look at your reason for wanting a theory to choose your dataset size:
Such a theory would also help me to gather data until I reach that point and then do some productive work.
Sadly you cannot do this theoretically. However, you can do a few useful things:
Plot a learning curve against data set size
I'd recommend this as your approach here. The driving question behind your question is: Will collecting more data improve my existing model?
Using the same cross-validation set each time, train your model with increasing amounts of data from your training set. Plot the cross-validated accuracy against number of training samples, up to the whole training set that you have so far.
If the graph has an upward slope all the way to end, then this implies that collecting more data will improve accuracy for your current model.
If the graph is nearly flat, with accuracy not improving towards the end, then it is unlikely that collecting more data will help you.
This does not tell you how much more data you need. An optimistic interpretation could take the trend line over the last section of the graph and project it to where it crosses your target accuracy. However, normally the returns for more data will become less and less. The training curve will asymptotically approach some maximum possible accuracy for the given dataset and model. What plotting the curve using the data you have does is allow you to see where you are on this curve - perhaps you are still in the early parts of it, and then adding more data will be a good investment of your time.
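For example, with scikit-learn you can compute such a curve directly (the estimator here is just a placeholder; plug in whichever model you're actually using):
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import learning_curve

# X, y are assumed to be your existing features and target
sizes, train_scores, val_scores = learning_curve(
    RandomForestRegressor(n_estimators=100, random_state=0),
    X, y, train_sizes=np.linspace(0.1, 1.0, 8), cv=5)

plt.plot(sizes, val_scores.mean(axis=1), marker='o')
plt.xlabel('Number of training samples')
plt.ylabel('Cross-validated score')
plt.show()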
Reassess your features and model
If your learning curve is not promising, then you need to look in more detail. Here are some questions you can ask yourself, and maybe test, to try and progress.
Your features:
Are there more or different features that you could collect, instead of focussing on collecting more of the same?
Would some feature engineering help - e.g. is there any theory or domain expert knowledge from the problem that you can turn into a formula and express as a new feature?
Your model:
Are there any hyper-parameters you can tune to either get more out of the existing data, or improve the learning curve so that is worth going back to get more data?
Would an entirely different model help? Deep learning models are often top performers only when there is a lot of data, so you might consider switching to a deep neural network and plotting a learning curve for it. Even if the accuracy on your current dataset is worse, if the learning curve shows a different model type might have the capacity to go further, it might be worth it.
Do note however, that you could just end up with the same maximum accuracy as before, after a lot of hope and effort. Unfortunately, this is hard to predict, and you will need to make careful decisions about how much of your time is worth sinking into solving the original problem.
Check confidence limits to choose a minimum test dataset size
Caveat: This is a guide to thinking about data set sizes, especially test set sizes. I have never known anyone use this to actually select some ideal data set size. Usually it happens the other way around, you have some size of test data set made available to you, and you want to understand what that tells you about your accuracy measurements.
You could determine a test set size that gives you reasonable confidence bounds on accuracy. That will mean, when you measure your 90% accuracy (or better) that you can be reasonably certain that the true accuracy is close to it. You can do this using confidence intervals on the accuracy measure.
As an example from the above link, you could measure a 92% accuracy on your test set, and want to know whether you are confident in that result. let's say you want to be 95% certain that you really do have accuracy > 0.9 . . . how should you choose N, the size of your test set?
You know that you are 0.02 over the desired accuracy by measurement, and you want to know if this enough that you can claim to be certain that you have 90% accuracy:
$$0.02 > 2 \sqrt{\frac{0.92 \times 0.08}{N}}$$
Therefore you need
$$N > \frac{0.92 \times 0.08}{0.0001}$$
$$N > 736$$
This is the minimum test data set size that would give you confidence that you have met your target of 90% accuracy, provided that
you have actually measured 92% or higher accuracy
you have selecting test data at random from the target population
that you have not used the test data set to select a model (by e.g. doing this test multiple times until you got a good result)
Typically you don't work backwards like this to figure N for a specific accuracy, but it is useful to understand the limits of your testing. You should generally consider how the size of the test dataset limits the accuracy by which you can confirm your model.
The formula above also has limitations when measuring close to 100% accuracy, and this is because the assumptions behind it fail - you would need to switch to more complex methods, perhaps a Bayesian approach, to get a better feel for what such a result was telling you, especially if the test sample size was small.
After you have established a minimum test dataset size, you could use that to guide data collection. For instance, the typical train/cv/test dataset might be 60/20/20, so with your result above you could choose an overall dataset size of 5 times 736, let's round up and call it 4000. In general this sets a lower bound on the size of dataset, as it says nothing about how hard it would be to learn a specific accuracy. |
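If it's useful, the arithmetic above is easy to reproduce programmatically (a small sketch; 0.92 is the measured accuracy and 0.02 the margin over the 90% target):
import math

measured_acc, margin = 0.92, 0.02

# Required test-set size so that a ~95% (2-sigma) interval around the
# measured accuracy stays above the 90% target.
n_min = measured_acc * (1 - measured_acc) / (margin / 2) ** 2
print(math.ceil(n_min))   # 736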
H: Classification accuracy of a Random Multi-label Classifier
What is the exact accuracy of a random classifier which has n labels (say 1000) where k labels (say 50) are true?
Can I say the accuracy of a random classifier has an upper bound of k/n?
-Edit-
I am interested in a numerical figure or an upper bound for a random classifier (for example the accuracy of a classifier randomly picking 50 classes out of 1000 where some 50 classes are correct).
Sorry if my previous question is not fully understood.
AI: I'll assume you want accuracy as @AlexanderRoiatchki defined it (see also CV.SE), and want exactly $k$ distinct labels, both true and predicted.
Letting $X$ denote the random variable of the number of correct labels for an instance, we have that
$$ \mathbb{P}(X=x) = \binom{k}{x} \binom{n-k}{k-x} / \binom{n}{k} $$
(choose the correct labels and the incorrect ones separately). The accuracy score from a given instance is
$$ \frac{|T_i \cap P_i| }{ |T_i \cup P_i| } = \frac{ X}{2k-X}. $$
So the expected value of the accuracy is
$$ \mathbb{E}\left[\frac{X}{2k-X}\right] = \frac{1}{\binom{n}{k}} \sum_{x=0}^k \frac {x \binom{k}{x} \binom{n-k}{k-x}} {2k-x}.$$
That doesn't seem to have a nice closed form. When $k\ll \sqrt{n}$, this seems to be roughly $k/2n$, but I've made a big assumption about the non-dominating terms of the summation being negligible.
In the specific case you mentioned, $k=50, n=1000$, I get 0.0259.
Edit, in response to comment: if you want to define accuracy as just $X/k$, then the expected accuracy of the random classifier simplifies nicely to $k/n$:
$$ \begin{align*}
\frac{1}{\binom{n}{k}} \sum_{x=0}^k \frac{x}{k} \binom{k}{x} \binom{n-k}{k-x}
&= \frac{1}{k\binom{n}{k}} \left( k \binom{n-1}{k-1} \right) & \text{combinatorial identity*} \\
&= \frac{ (n-1)! / [(k-1)! (n-k)!] }{ n! / [k! (n-k)!] } & \text{just expand coefficients}\\
&= \frac{k}{n} & \text{simplify factorials}
\end{align*}
$$
(* I can't help but elaborate. In the current context, this is the number of choices with $X=x$ with the additional decoration of a special correct label (one of the $x$). To get the RHS then, pick the decorated label first ($k$), then choose the remaining labels without regard to how many are correct ($\binom{n-1}{k-1}$). ) |
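As a sanity check, a quick Monte Carlo simulation should land close to both values derived above (X, the number of correctly guessed labels, follows a hypergeometric distribution here):
import numpy as np

n, k, trials = 1000, 50, 200_000
rng = np.random.default_rng(0)

# X = number of correct labels when k of n labels are guessed at random
X = rng.hypergeometric(ngood=k, nbad=n - k, nsample=k, size=trials)

print(np.mean(X / (2 * k - X)))   # ~0.026 (intersection-over-union accuracy)
print(np.mean(X / k))             # ~0.05  (= k/n)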
H: Need explanation of a matrix multiplication
I'm reading the Deep Learning book by MIT.
On page 172, there's a part like this:
$$
f^{(1)}(x)=h=W^Tx \tag{1}
$$
$$
f^{(2)}(h)=h^Tw \tag{2}
$$
Substitute (1) into (2), they got:
$$
f(x)=w^TW^Tx
$$
Since I'm not so familiar with linear algebra stuff, I infer that something like below is valid:
$$
A^TB=B^TA \tag{3}
$$
So what is the property in (3) called?
AI: Let's take it step by step.
$$
f^{(1)}(x)=h=W^Tx \tag{1}
$$
$$
f^{(2)}(h)=h^Tw \tag{2}
$$
We substitute h.
$$
f(x)=(W^Tx)^Tw \tag{3}
$$
To make it work, we'll make a little trick.
The transpose of a transposed matrix is the original matrix.
$$
(A^T)^T = A
$$
We substitute w.
$$
f(x)=(W^Tx)^T (w^T)^T \tag{4}
$$
Now we use following:
$$
(AB)^T=B^T A^T
$$
The transpose of two matrices multiplied together is the same as the product of their transpose matrices in reverse order.
et voila
$$
f(x)=w^TW^Tx
$$ |
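A quick numerical check of the derivation with random matrices (the shapes are chosen arbitrarily, purely for illustration):
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # first layer weights
w = rng.normal(size=4)        # second layer weights
x = rng.normal(size=3)        # input

lhs = (W.T @ x).T @ w         # f(x) written as h^T w with h = W^T x
rhs = w.T @ W.T @ x           # f(x) = w^T W^T x
print(np.allclose(lhs, rhs))  # True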
H: How to scale a variable when not knowing the maximum
I have a dataset with different features where some of them are not categorical, so they need to be scaled or normalized (especially the target).
However, normalizing between 0 and 1, for instance, means that the variable's maximum value will be mapped to 1 and its minimum to 0.
Now if I receive a new example never seen before, and this example has a value higher than the max of the training examples, how should this value be normalized?
EDIT
As an example. If my maximum value is 150, it will be scaled to 1.0. Now if I receive a new example, with a value equal to 320, how should it be scaled ?
AI: If your model is running in production, you should not refit your scaler; you should transform the new example as if 150 were still the maximum value. (This will give you a value higher than 1, which is a bit problematic, but a possible solution is below.)
However, you can still label those examples as outliers.
Possible solution for that case: if you have a high number of outliers/leverage points, you should consider tree ensembles and/or regularized models.
If your predictor is not in production yet, just add those examples to your training set and fit again, since the sample used in the first training would otherwise differ from reality.
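A small sketch of what this looks like with the numbers from the question (a MinMaxScaler fitted on training data whose maximum is 150; the new value 320 simply maps above 1, which you can keep, clip, or flag as an outlier):
import numpy as np
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
scaler.fit(np.array([[0.0], [75.0], [150.0]]))   # training data, max = 150

new_value = scaler.transform([[320.0]])
print(new_value)                    # ~2.13, i.e. larger than 1
print(np.clip(new_value, 0, 1))     # optionally clip (or flag as an outlier)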
H: SHAP value can explain right?
I face a problem when using SHAP values to interpret a tree-based model.
(https://github.com/slundberg/shapsd)
First, I have input around 30 features and I have 2 features that have high positive correlation between them.
After that, I train the XGBoost model (Python) and look at the SHAP values of the 2 features; the SHAP values have a negative correlation.
Could you explain to me why the SHAP values of the 2 features don't have the same correlation as the input features? And can I trust the output of SHAP or not?
=========================
The correlation between input: 0.91788
The correlation between SHAP values: -0.661088
2 features are
1) Population in province and
2) Number of families in province.
Model Performance
Train AUC: 0.73
Test AUC: 0.71
Scatter plot
Input scatter plot (x: Number of families in province, y: Population in province)
SHAP values output scatter plot (x: Number of families in province, y: Population in province)
AI: I guess what you meant by correlation between SHAP values is "SHAP Interaction Value".
SHAP values measure how feature values contribute to the target variable at the observation level. Likewise, SHAP interaction values take the target into account, while the correlation between features (Pearson, Spearman, etc.) does not involve the target values at all; therefore they might have different magnitudes and directions.
The features may grow together, but their contributions to the target variable may reverse direction over different intervals.
You may want to check docs and this beautiful work. |
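If you want to check this on your own data, a rough sketch could look like the following (it assumes an already-fitted XGBoost model named model, a feature DataFrame X, and hypothetical column names for the two features; for binary XGBoost models shap_values is typically a single (n_samples, n_features) array):
import numpy as np
import shap

# model is your fitted XGBoost model, X the feature DataFrame (assumptions)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

i = list(X.columns).index('population_in_province')        # hypothetical names
j = list(X.columns).index('number_of_families_in_province')

print(np.corrcoef(X.iloc[:, i], X.iloc[:, j])[0, 1])             # raw feature correlation
print(np.corrcoef(shap_values[:, i], shap_values[:, j])[0, 1])   # correlation of SHAP values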
H: How does $\chi^2$ feature selection work?
I can't find information on how $\chi^2$ is used to select numerical features for a model.
For instance, if I employ the sklearn library:
from sklearn.datasets import load_iris
from sklearn.feature_selection import chi2
iris = load_iris()
X, y = iris.data, iris.target # X contains 4 features, y does 3 classes
I can build the following table for the dataset
pivot_table = np.zeros((X.shape[1], len(np.unique(y))))
pivot_table[:, 0] = X[y==0].sum(axis=0)
pivot_table[:, 1] = X[y==1].sum(axis=0)
pivot_table[:, 2] = X[y==2].sum(axis=0)
print(pivot_table.T)
array([[250.3, 171.4, 73.1, 12.3],
[296.8, 138.5, 213. , 66.3],
[329.4, 148.7, 277.6, 101.3]])
Then applying the contingency table to it:
from scipy.stats import chi2_contingency
chi2_contingency(pivot_table, correction=False)
Output:
125.58397740261773
1.0924329599765022e-24
6
[[213.82265358 301.31664021 361.36070621]
[111.8757204 157.6540915 189.0701881 ]
[137.51492279 193.78458652 232.40049069]
[ 43.88670323 61.84468177 74.168615 ]]
But this only provides information about the dependence of the variables. And they are dependent, because the p-value is quite low.
Then I use the other function and I get what I want:
chi2(X,y)
(array([ 10.81782088, 3.7107283 , 116.31261309, 67.0483602 ]),
array([4.47651499e-03, 1.56395980e-01, 5.53397228e-26, 2.75824965e-15]))
But how does this work? What math lies at the heart of it?
AI: Just go and look inside sklearn's chi2 function.
It builds an "observed" class-by-feature table of sums (like the contingency table you computed), compares it with the counts expected under independence of class and feature, and from that calculates a chi2 statistic and a p-value for each feature.
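Roughly, that boils down to the following (a sketch that mirrors the logic rather than the exact implementation):
import numpy as np
from scipy.stats import chi2 as chi2_dist
from sklearn.datasets import load_iris
from sklearn.preprocessing import LabelBinarizer

X, y = load_iris(return_X_y=True)
Y = LabelBinarizer().fit_transform(y)                # (n_samples, n_classes)

observed = Y.T @ X                                   # per-class feature sums (3 x 4)
expected = np.outer(Y.mean(axis=0), X.sum(axis=0))   # class prob * feature total

chi2_stats = ((observed - expected) ** 2 / expected).sum(axis=0)
p_values = chi2_dist.sf(chi2_stats, Y.shape[1] - 1)

print(chi2_stats)   # ~[ 10.82, 3.71, 116.31, 67.05 ]
print(p_values)
These match the values returned by chi2(X, y) above.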
H: How to select multiple columns in a RDD with Spark (pySpark)?
Let's say I have an RDD that has comma-delimited data. Each comma-delimited value represents the number of hours slept on a given day of the week.
So for example: [8,7,6,7,8,8,5]
How can I manipulate the RDD so it only has Monday, Wednesday, Friday values? There are no column names by the way. But the PySpark platform seems to have _co1,_co2,...,_coN as columns.
AI: I don't know which version you are using, but I recommend DataFrames, since most of the new improvements are coming to DataFrames. (I prefer Spark 2.3.2.)
First convert rdd to DataFrame:
df = rdd.toDF(["M","Tu","W","Th","F","Sa","Su"])
Then select days you want to work with:
df.select("M","W","F").show(3)
Or directly use map with lambda:
rdd.map(lambda x: [x[i] for i in [0, 2, 4]])
Hope it helps! |
H: Apply LSTM to each matrix element with Keras
I'm trying to apply a LSTM/GRU to each entry of a matrix $X$
note: Each matrix element is a time-series, so shape of X is (batch_size, rows, cols, time_steps, dims)
$
y_{i,j}=
\begin{cases}
0, & \text{if}\ x_{i,j}\small[0\small] = 0 \\
LSTM(x_{i,j}), & \text{otherwise}
\end{cases}
$
, where:
$x_{i,j} := \text{element (i,j) of matrix X. This element is a time series}$
$x_{i,j}\small{[0]} := \text{ First time step of time series $x_{i,j}$ }$
$y_{i,j} := \text{LSTM output for $x_{i,j}$}$
PROBLEM
I've managed to evaluate the condition and to apply the LSTM to each entry, but I cannot build a Keras model with these outputs, because of the typical error:
AttributeError: 'NoneType' object has no attribute '_inbound_nodes'
CODE
import keras
def convert_to_layer(X):
return X
def gather(r):
def gather_(X):
return X[:,r,:]
return gather_
n_rows = 2
n_cols = 2
time_steps = 20
dims = 2
units = 32
M = keras.layers.Input(batch_shape=(1,n_rows,n_cols,time_steps,dims))
# Serialize inputs
M2 = keras.layers.Reshape((n_rows*n_cols,time_steps,dims))(M)
# Gather element 0 (only work on this element for simplicity...)
seq = keras.layers.Lambda(gather(0))(M2)
# Create condition
cond = keras.backend.equal(keras.backend.sum(keras.backend.abs(seq),axis=-1),0)
# Apply if statement
y = keras.backend.switch(cond,
lambda: keras.layers.GRU(units,return_sequences=True)(seq),
lambda: keras.backend.zeros(shape=(1,time_steps,units)))
y = keras.layers.Lambda(convert_to_layer)(y)
print(y)
model = keras.models.Model(M,y)
OUTPUT
Tensor("lambda_311/Identity:0", shape=(1, 20, 32), dtype=float32)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-76-983e9545cca1> in <module>()
29 print(y)
30
---> 31 model = keras.models.Model(M,y)
5 frames
/usr/local/lib/python3.6/dist-packages/keras/engine/network.py in build_map(tensor, finished_nodes, nodes_in_progress, layer, node_index, tensor_index)
1378 ValueError: if a cycle is detected.
1379 """
-> 1380 node = layer._inbound_nodes[node_index]
1381
1382 # Prevent cycles.
AttributeError: 'NoneType' object has no attribute '_inbound_nodes'
QUESTIONS
The problem is with the keras.backend.switch (if the switch is removed and the output of the model is the output of the GRU cell unconditionally, the model can be built)
Why isn't the model compiling?
Any other alternative way of computing the LSTM over the matrix $X$
AI: SOLUTION
Replace:
# Apply if statement
y = keras.backend.switch(cond,
lambda: keras.layers.GRU(units,return_sequences=True)(seq),
lambda: keras.backend.zeros(shape=(1,time_steps,units)))
y = keras.layers.Lambda(convert_to_layer)(y)
with:
y = keras.layers.Lambda(lambda x: keras.backend.switch(cond,
lambda: keras.layers.GRU(units,return_sequences=True)(x),
lambda: keras.backend.zeros(shape=(1,time_steps,units))))(seq) |
H: What is Pruning & Truncation in Decision Trees?
Pruning & Truncation
As per my understanding
Truncation: Stop the tree while it is still growing so that it does not end up with leaves containing very few data points. One way to do this is to set a minimum number of training inputs to use on each leaf.
Pruning is a technique in machine learning and search algorithms that reduces the size of decision trees by removing sections of the tree that provide little power to classify instances.
Can somebody explain the detailed implementation of these techniques in GBDT frameworks like XGBoost/LightGBM?
Which parameters are used in implementing these techniques?
AI: Your understanding is correct. XGBoost has a nice explanation of this in its docs.
Reading the original papers is always a great idea. Here's one for LGBM and here's one for XGBoost.
In practice, truncation corresponds to the pre-stopping parameters, e.g. max_depth and min_child_weight in XGBoost (num_leaves, max_depth and min_data_in_leaf in LightGBM), while pruning is driven by the minimum loss reduction required to keep a split, e.g. gamma/min_split_loss in XGBoost and min_gain_to_split in LightGBM.
As dessert, here's the CatBoost paper.
Boosting is by far one of the most important concepts in machine learning. It's good to know it by heart.
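As an illustration of where these knobs live in the two scikit-learn-style APIs (the values below are arbitrary placeholders, not recommendations):
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

# Truncation-style (pre-stopping) parameters limit how far a tree can grow;
# pruning-style parameters forbid or remove splits whose gain is too small.
xgb = XGBClassifier(
    max_depth=6,            # truncation: maximum tree depth
    min_child_weight=5,     # truncation: minimum hessian weight per leaf
    gamma=1.0)              # pruning: minimum loss reduction to keep a split

lgbm = LGBMClassifier(
    num_leaves=31,          # truncation: maximum number of leaves
    max_depth=6,
    min_child_samples=20,   # truncation: minimum samples per leaf (min_data_in_leaf)
    min_split_gain=0.1)     # pruning-like: minimum gain to split (min_gain_to_split)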
H: Classification of images of different size
I am doing image classification using Convolutional neural networks, but I have a problem, because the images I want to classify are all of different sizes. My code is the following:
import numpy as np
import tensorflow as tf
import keras
from keras.preprocessing.image import ImageDataGenerator
trainingset = '/content/drive/My Drive/Colab Notebooks/Train'
testset = '/content/drive/My Drive/Colab Notebooks/Test'
batch_size = 32
train_datagen = ImageDataGenerator(
rescale = 1. / 255,\
zoom_range=0.1,\
rotation_range=10,\
width_shift_range=0.1,\
height_shift_range=0.1,\
horizontal_flip=True,\
vertical_flip=False)
train_generator = train_datagen.flow_from_directory(
directory=trainingset,
target_size=(118, 224),
color_mode="rgb",
batch_size=batch_size,
class_mode="categorical",
shuffle=True
)
test_datagen = ImageDataGenerator(
rescale = 1. / 255)
test_generator = test_datagen.flow_from_directory(
directory=testset,
target_size=(118, 224),
color_mode="rgb",
batch_size=batch_size,
class_mode="categorical",
shuffle=False
)
num_samples = train_generator.n
num_classes = train_generator.num_classes
input_shape = train_generator.image_shape
classnames = [k for k,v in train_generator.class_indices.items()]
print("Image input %s" %str(input_shape))
print("Classes: %r" %classnames)
print('Loaded %d training samples from %d classes.' %
(num_samples,num_classes))
print('Loaded %d test samples from %d classes.' %
(test_generator.n,test_generator.num_classes))
and
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, Flatten,\
Conv2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras import regularizers
from keras import optimizers
def AlexNet(input_shape, num_classes, regl2=0.0001, lr=0.0001):

    model = Sequential()

    # C1 Convolutional Layer
    model.add(Conv2D(filters=96, input_shape=input_shape, kernel_size=(11,11),
                     strides=(2,4), padding='valid'))
    model.add(Activation('relu'))
    # Pooling
    model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid'))
    # Batch Normalisation before passing it to the next layer
    model.add(BatchNormalization())

    # C2 Convolutional Layer
    model.add(Conv2D(filters=256, kernel_size=(11,11), strides=(1,1), padding='valid'))
    model.add(Activation('relu'))
    # Pooling
    model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid'))
    # Batch Normalisation
    model.add(BatchNormalization())

    # C3 Convolutional Layer
    model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding='valid'))
    model.add(Activation('relu'))
    # Batch Normalisation
    model.add(BatchNormalization())

    # C4 Convolutional Layer
    model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding='valid'))
    model.add(Activation('relu'))
    # Batch Normalisation
    model.add(BatchNormalization())

    # C5 Convolutional Layer
    model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), padding='valid'))
    model.add(Activation('relu'))
    # Pooling
    model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid'))
    # Batch Normalisation
    model.add(BatchNormalization())

    # Flatten
    model.add(Flatten())
    flatten_shape = (input_shape[0]*input_shape[1]*input_shape[2],)

    # D1 Dense Layer
    model.add(Dense(4096, input_shape=flatten_shape, kernel_regularizer=regularizers.l2(regl2)))
    model.add(Activation('relu'))
    # Dropout
    model.add(Dropout(0.4))
    # Batch Normalisation
    model.add(BatchNormalization())

    # D2 Dense Layer
    model.add(Dense(4096, kernel_regularizer=regularizers.l2(regl2)))
    model.add(Activation('relu'))
    # Dropout
    model.add(Dropout(0.4))
    # Batch Normalisation
    model.add(BatchNormalization())

    # D3 Dense Layer
    model.add(Dense(1000, kernel_regularizer=regularizers.l2(regl2)))
    model.add(Activation('relu'))
    # Dropout
    model.add(Dropout(0.4))
    # Batch Normalisation
    model.add(BatchNormalization())

    # Output Layer
    model.add(Dense(num_classes))
    model.add(Activation('softmax'))

    # Compile
    adam = optimizers.Adam(lr=lr)
    model.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['accuracy'])

    return model
# create the model
model = AlexNet(input_shape,num_classes)
model.summary()
Now, I do the training as follows:
steps_per_epoch = train_generator.n // train_generator.batch_size
val_steps = test_generator.n // test_generator.batch_size + 1

try:
    history = model.fit_generator(train_generator, epochs=50, verbose=1,
                                  steps_per_epoch=steps_per_epoch,
                                  validation_data=test_generator,
                                  validation_steps=val_steps)
except KeyboardInterrupt:
    pass
and I get the following error message:
ValueError                                Traceback (most recent call last)
<ipython-input-11-70354a7752ae> in <module>()
      3
      4 try:
----> 5     history = model.fit_generator(train_generator, epochs=50, verbose=1,
                                          steps_per_epoch=steps_per_epoch,
                                          validation_data=test_generator,
                                          validation_steps=val_steps)
      6 except KeyboardInterrupt:
      7     pass

8 frames
/usr/local/lib/python3.6/dist-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    139                 ': expected ' + names[i] + ' to have shape ' +
    140                 str(shape) + ' but got array with shape ' +
--> 141                 str(data_shape))
    142     return data
    143

ValueError: Error when checking target: expected activation_9 to have shape (4,) but got array with shape (5,)
So, this should mean that the images I want to classify have different sizes. How can I do the classification in this case?
I think I should reshape the images so that they all have the same size.
I have looked on the internet for a solution, but I haven't found anything that works well. Can somebody please help me? Thanks in advance.
[EDIT] I am trying to do the following to resize the photos:
from PIL import Image
import os, sys
path = "/content/drive/My Drive/Colab Notebooks/Train"
dirs = os.listdir( path )
def resize():
    for item in dirs:
        if os.path.isfile(path+item):
            im = Image.open(path+item)
            f, e = os.path.splitext(path+item)
            imResize = im.resize((200,200), Image.ANTIALIAS)
            imResize.save(f + ' resized.jpg', 'JPEG', quality=90)

resize()
In particular, I run this code before building the network, but it still gives me the same error. I am really stuck on this.
[EDIT 2] I have also tried to apply this to the sub-folders, considering the sub-directories HAZE, SUNNY, CLOUDY and SNOWY individually, but it still does not work.
I don't see what I am doing wrong in the code above.
AI: I am not an expert in image classification, but from the little I know I can tell you that the images should all have the same size, because they are converted into a structured array of shape n (pictures) × width (px) × height (px) × 3.
The 3 comes from the RGB channels.
If width and height differ between images, the resulting arrays will not all have the same size.
There should be a package that helps you convert the images to the same size.
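For illustration, here is a minimal sketch of resizing every image in a folder with Pillow; the folder path and the (224, 224) target size are assumptions for the example, not values from your question:

import os
from PIL import Image

src = "/content/drive/My Drive/Colab Notebooks/Train/HAZE"   # hypothetical sub-folder
for name in os.listdir(src):
    full = os.path.join(src, name)            # join directory and file name safely
    if os.path.isfile(full):
        im = Image.open(full).convert("RGB")  # make sure every image has 3 channels
        im = im.resize((224, 224))            # same width and height for every image
        im.save(full, "JPEG", quality=90)     # overwrite the original file

That said, flow_from_directory with target_size already resizes images on the fly while loading batches, so an offline resize is often not strictly necessary.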
H: Lime Explainer: ValueError: training data did not have the following fields
I'm attempting to gather ID level drivers from my XGBoost classification model using LIME and I'm running into some odd errors. I'm using this link as a reference.
Here is the overall code that I'm using:
explainer = lime.lime_tabular.LimeTabularExplainer(Xs_train.values, class_names = [1.0, 0.0], kernel_width = 3)
predict_fn_xgb = lambda x: trained_model.predict_proba(x).astype(float)
data_point = Xs_val.values[5]
exp = explainer.explain_instance(data_point, predict_fn_xgb, num_features = 10)
exp.show_in_notebook(show_all = False)
Key:
trained_model: trained xgboost classification model
class names: This is a binary classification model
Xs_train: This is a (73548, 84) training set. It was used to fit trained_model.
Xs_val: This is a (4910, 84) validation set. The columns are the same in the training and validation sets.
data_point: one specific validation point
Now, when I run this code, I get the following error:
ValueError: expected res_time, email_views...training data did not have the following fields: f6, f49, f34, f21,...
I don't know where the f# column names are coming from. Seems really bizarre and I believe I'm following the example correctly.
Any help would be much appreciated. Let me know if any additional information is required.
AI: It looks like a DataFrame / numpy array mismatch: the xgboost model remembers the column names it was fit with, while LIME passes plain numpy arrays at prediction time, which xgboost labels with generic names (f0, f1, ...). Training the model on a numpy array instead of a DataFrame may fix the problem.
I've seen such errors in working with xgboost; see e.g.
https://github.com/dmlc/xgboost/issues/2334 and
https://stackoverflow.com/a/52578211/10495893 |
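A minimal sketch of that fix, reusing the variable names from your question and assuming an sklearn-style XGBClassifier plus a label vector y_train (not shown in the question):

import lime.lime_tabular
import xgboost as xgb

# Fit on plain numpy arrays so xgboost does not store dataframe column names
model = xgb.XGBClassifier()
model.fit(Xs_train.values, y_train.values)           # y_train is assumed to exist

predict_fn_xgb = lambda x: model.predict_proba(x).astype(float)

explainer = lime.lime_tabular.LimeTabularExplainer(
    Xs_train.values,
    feature_names=list(Xs_train.columns),             # keep readable names for the plots
    class_names=[1.0, 0.0],
    kernel_width=3)

exp = explainer.explain_instance(Xs_val.values[5], predict_fn_xgb, num_features=10)
exp.show_in_notebook(show_all=False)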
H: Classification accuracy based on top 3 most likely classifications
My goal is to recommend jobs to job seekers based on their skill set.
Currently I'm using an SVM for this, which is outputting one prediction, e.g. "software engineer at Microsoft". However, consider this: how significantly different are the skill sets of a software engineer at Microsoft and a software engineer at IBM? Probably not significantly different. Indeed, by inspection of my data set I can confirm this. Hence, the SVM struggles to discriminate in situations like this, of which there are many in my data set, and my classification accuracy is about 50%.
So I had an idea.
In scikit-learn, once you've trained a model, you can compute the probability that a particular input X belongs to each class.
So for each input X in my test set, I took the top 3 most likely classifications. Then I tested whether or not the correct label was among those top 3 predictions. If it was, I considered the prediction to be correct. In doing so, the classification accuracy increased to over 80%.
So my question is: is this a valid approach to measuring classification accuracy? If it is, then does it have a name?
In my mind, it is valid given my intended application, which is to recommend a selection of jobs to a job seeker, which are relevant to their skill set.
Cross posted from CS SE:
https://cs.stackexchange.com/questions/117695/classification-accuracy-based-on-top-3-most-likely-classifications
I'm interested to know what perspective data scientists have on this.
AI: Yes, it's common to consider that the prediction is made of multiple answers (typically top N most relevant answers) and use a performance measure based on that.
Currently you're treating the problem as a classification problem, but logically this is more like a recommendation or information retrieval problem (like results from a search engine). Usually for this kind of problem the gold answer would also consist of a list of several items, but apparently your dataset contains a single answer for every instance.
Answer to comment: a couple of papers using some top N performance measures (note: it's just a quick selection based on the keyword "information retrieval")
https://www.microsoft.com/en-us/research/publication/letor-benchmark-collection-research-learning-rank-information-retrieval/
http://informationr.net/ir/18-2/paper582.html
The CLEF series of Shared Tasks have proposed many datasets and evaluation measures across the years, it's probably a good source for resources and papers... if you have a bit of time to explore it ;) |
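By the way, the measure you describe is usually called top-k accuracy (k=3 here); in recommendation settings you will also see it as hit rate@k or recall@k. A minimal sketch of computing it from any scikit-learn classifier with predict_proba — the variable names are placeholders, and recent scikit-learn versions also ship sklearn.metrics.top_k_accuracy_score:

import numpy as np

def top_k_accuracy(proba, y_true, classes, k=3):
    # indices of the k highest-probability classes for each sample
    top_k = np.argsort(proba, axis=1)[:, -k:]
    # a prediction counts as correct if the true label is among those k classes
    hits = [y in classes[idx] for y, idx in zip(y_true, top_k)]
    return float(np.mean(hits))

# usage sketch:
# proba = clf.predict_proba(X_test)                  # shape (n_samples, n_classes)
# score = top_k_accuracy(proba, y_test, clf.classes_, k=3)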
H: Given n ordered sets, each containing 6 numbers, generate the next set in the sequence
I am facing the following problem in machine learning. Given n ordered sets, each containing 6 numbers, generate the next set in the sequence. The numbers in a set are not random.
For instance, it could be that these numbers are generated by an equation, e.g. $(1+x^y)$, so that they form the following sequence:
[{1, 2, 3, 4, 5, 6}, {1, 2, 5, 10, 17, 26}, {1, 2, 9, 28, 65, 126}, ..., {1, 2, 1 + 2**n, 1 + 3**n, 1 + 4**n, 1 + 5**n}]
But this equation or any other rule governing the order is not known. We are given a large n of these sets. What would be the most appropriate way to predict this in Keras?
AI: This should just be a sequence prediction problem, and a multivariate one. You can use LSTMs or GRUs in Keras to handle the sequences. In the simplest architecture, the inputs will be batches of (it seems) n-1 steps of 6 numbers, and the output will be the final 6 numbers in the sequence. This is pretty much how any RNN works. The loss will be mean_squared_error, I presume.
But given how much is unknown here about the distribution of the sequences, it's unclear whether this simple approach would work well, or whether even a more complex network can learn whatever patterns these follow. |
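A minimal Keras sketch of that simplest architecture — the layer size and the number of input steps below are assumptions for illustration:

from keras.models import Sequential
from keras.layers import LSTM, Dense

n_steps, n_features = 9, 6            # e.g. the 9 previous sets, each of 6 numbers

model = Sequential()
model.add(LSTM(64, input_shape=(n_steps, n_features)))   # 64 units chosen arbitrarily
model.add(Dense(n_features))          # regression output: the next set of 6 numbers
model.compile(optimizer='adam', loss='mean_squared_error')

# X has shape (num_sequences, n_steps, 6) and y has shape (num_sequences, 6)
# model.fit(X, y, epochs=100, batch_size=32)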
H: increase performances in neural networks
I am starting to get interested in neural networks, and I am writing some code with them. But, unlike methods such as support vector machines or random forests, to me they seem more like a black box which I can't control.
So my question is:
What are methods to increase accuracy and other performances in a neural network?
Thanks in advance.
AI: Basically, you should try regularization, dropout and batch normalization, and tune the hyper-parameters of your model.
A convenient way to find good hyper-parameter values is Randomized Search.
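As an illustration, here is a minimal sketch of a Keras classifier that uses L2 regularization, dropout and batch normalization, wrapped so that Randomized Search can tune it; the input dimension and the parameter ranges are arbitrary placeholders:

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.layers.normalization import BatchNormalization
from keras import regularizers, optimizers
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import RandomizedSearchCV

def build_model(units=64, dropout=0.3, l2=1e-4, lr=1e-3):
    model = Sequential()
    model.add(Dense(units, activation='relu', input_dim=20,        # 20 input features assumed
                    kernel_regularizer=regularizers.l2(l2)))
    model.add(BatchNormalization())
    model.add(Dropout(dropout))
    model.add(Dense(1, activation='sigmoid'))                      # binary classification assumed
    model.compile(optimizer=optimizers.Adam(lr=lr),
                  loss='binary_crossentropy', metrics=['accuracy'])
    return model

clf = KerasClassifier(build_fn=build_model, epochs=20, batch_size=32, verbose=0)
param_dist = {'units': [32, 64, 128],
              'dropout': [0.2, 0.3, 0.5],
              'l2': [1e-5, 1e-4, 1e-3],
              'lr': [1e-4, 1e-3, 1e-2]}
search = RandomizedSearchCV(clf, param_dist, n_iter=10, cv=3)
# search.fit(X_train, y_train)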
H: How regularization helps to get rid of outliers?
I have heard that regularization helps to get rid of outliers. How so?
My intuition is that regularization shrinks parameters, or even makes them zero, and hence large values will have less effect on the overall result.
Could you shed some more light on it?
AI: You don't get rid of the actual outliers (no data reduction). But statistical methods can be robust against outliers.
Robust statistics are statistics with good performance for data drawn
from a wide range of probability distributions, especially for
distributions that are not normal. Robust statistical methods have
been developed for many common problems, such as estimating location,
scale, and regression parameters. One motivation is to produce
statistical methods that are not unduly affected by outliers.
Source: wikipedia.
So, L1-based terms are more robust against outliers because they use the absolute value of the difference between the estimate and the target, so a large error only contributes linearly.
L2-based terms square that difference, so outliers blow it up and can dominate the fit.
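As a concrete illustration (standard notation, not taken from the question), compare the two penalized objectives:

$$\sum_i \left|y_i - \hat{y}_i\right| + \lambda \sum_j |w_j| \qquad\text{vs.}\qquad \sum_i \left(y_i - \hat{y}_i\right)^2 + \lambda \sum_j w_j^2$$

A single residual of 10 adds 10 to the absolute-error objective but 100 to the squared-error objective, so one outlier can dominate the squared version while barely moving the absolute one.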
H: NoSQL Comparison - Is this part of my Job?
For the more experienced data scientists here: I was asked to perform a case study on how Redis, HBase, etc. perform compared to each other. How does data science play a role in this? Note that there will be no actual data involved.
AI: I am not very familiar with Redis, but as far as I understand the two are based on different principles and used for different things. Redis is in-memory while HBase is not. They are solutions to different problems. Maybe you should start by describing what kind of problems they are suitable for. |