H: How to get model attributes in scikit learn (not hyper parameters)
How to get model attributes list (not hyper parameters passed to Estimator's class)?
For ex:
kmeans = KMeans(n_clusters=5)
kmeans.fit(X)
kmeans.labels_
How do I get the list of attributes like labels_ (those ending with _) from the model object?
AI: I believe you are trying to access "labels_" before fitting the data.
from sklearn.cluster import KMeans
import numpy as np
X = np.array([[1, 2], [1, 4], [1, 0],
[10, 2], [10, 4], [10, 0]])
kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
def get_properties(model):
    return [i for i in model.__dict__ if i.endswith('_')]

get_properties(kmeans)
['cluster_centers_',
 'labels_',
 'inertia_',
 'n_iter_'] |
H: Topics to learn in Neural Network
I have recently started learning deep neural networks and have been going through tutorials online. Everywhere I looked, the topics after image classification using CNNs are a little hazy. No one seems to follow a guide as to what topics one should learn after image classification.
I would really appreciate it if someone could guide me on which advanced topics to learn so that I can get started with research in this field.
The topics I have covered so far are:
Basic Machine Learning
Linear Regression
Logistic Regression
SVM
Decision Trees
Optimization Algorithms
Regularization Methods
Different Loss Functions
Different Metrics during Training
Feature Extraction
HyperParameter Tuning
Basic Neural Network
ANN
CNN
Image Classification
Image Augmentation before training
Visualizing Filters
I have started learning Object detection using CNN but I have this feeling that I am missing some topics which can strengthen my foundation.
Thanks in advance.
Any guidance is appreciated.
AI: You will notice a lack of blog posts on the topics beyond the list above, because the libraries don't offer very simple implementations for them, so you need to get your hands dirty yourself.
I am assuming you are planning to move into computer vision.
- Object detection
- Object segmentation
- Object recognition
- Try all of the above using some custom data.
- Face detection/Recognition
- Get the knowledge of Classical Image processing. You don't need DL every time.
- Learn OpenCV
- Video data analysis
- Follow the latest research papers or commercial products in CV
Jason Brownlee and Adrian Rosebrock have suggestions for CV on their blogs.
To add to this, I would like to point to a very beginner-friendly blog which helped me get an overall understanding -> blog
This covers the basics of computer vision from where you can have a better understanding of how to use CNN in image processing. |
H: Time series binary classification
Which deep learning architecture and algorithms do you most recommend for time series classification problem?
Besides the obvious LSTM, I am looking for state-of-the-art papers.
AI: https://paperswithcode.com/task/time-series-classification/latest -> You can find all the state-of-the-art papers along with code for Time Series Classification.
https://arxiv.org/pdf/1809.04356.pdf -> This paper provides an in-depth review of some deep learning architectures for Time Series Classification.
Hope it helps! |
H: pandas groupby and sort values
I am studying for an exam and encountered this problem from past worksheets:
This is the data frame called 'contest'; its granularity is one row per submission of a question by a contestant in the math contest. The worksheet question and its answer choices were given as an image (not reproduced here),
and the correct answer is marked in red. I get why that choice works, but why is the 4th choice wrong? I really can't figure it out - please help.
*please let me know if this is not allowed as a post in this community.
for full description of the problem:
In this question, we will be looking at the contest dataframe which contains data from a
math contest in 2019. In the contest, each participant had a total of five questions. The participants submit each question separately and each row of the DataFrame records a particular submission of one of the contestants by some participant. The Timestamp column specifies the
time a given problem is submitted by a participant; each timestamp is discretized to the minute
and has been properly converted to a Pandas datetime object with pd.to_datetime.
The Contestant column contains the id-name pair of each participant. The Question
column contains the question that was submitted. The Correct column tells us if the answer
given in the submission is correct (1) or not (0). Assume each participant can have several
submissions for the same problem, but they can only submit one question per minute.
AI: If a participant answered question 2 before question 1, you would lose the information about question 1 by using .agg("first") in the 4th option, because "first" keeps the first row per group in order of appearance rather than the row for question 1.
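To make that concrete, here is a minimal sketch with a hypothetical two-row slice of the data (not the real contest DataFrame):
import pandas as pd

# Hypothetical: one contestant submits question 2 before question 1
contest = pd.DataFrame({"Contestant": ["1 - Ann", "1 - Ann"],
                        "Question":   [2, 1],
                        "Correct":    [1, 0]})

# .agg("first") keeps the first row *in order of appearance* within each group,
# so the row for question 1 is silently dropped here.
print(contest.groupby("Contestant").agg("first"))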
H: Selecting threshold for F1 Score
When selecting a probability threshold to maximize the F1 score prior to deploying a model (based on the precision-recall curve), should the threshold be selected based on the training or holdout dataset?
AI: Ideally, the threshold should be selected on your training set. Your holdout set is just there to confirm that whatever worked on your training set generalizes to data outside of it.
This is the same reason why hyperparameter tuning utilities like GridSearchCV and RandomizedSearchCV in scikit-learn have a cv parameter to cross-validate between different folds of your training set, instead of letting you choose the best parameters based on a metric measured on the holdout set. |
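For example, a minimal sketch (on synthetic stand-in data, not your model) of picking the F1-maximizing threshold from out-of-fold predictions on the training set:
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=2000, random_state=0)   # stand-in for your data
clf = LogisticRegression(max_iter=1000)

# Out-of-fold probabilities on the training set, so the threshold is not tuned on the holdout data
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]

precision, recall, thresholds = precision_recall_curve(y, proba)
f1 = 2 * precision * recall / (precision + recall + 1e-12)
best_threshold = thresholds[np.argmax(f1[:-1])]   # the last precision/recall point has no threshold
print(best_threshold)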
H: Negatively correlated features
Is it ok to use negatively correlated features in data modeling? Say I have features A and B that have a correlation coefficient of 0.2 and features C and D with -0.2 correlation coefficient, is it fine to use features C and D in the model, since they have a low negative correlation? Also, does this have different effects on a regression vs. classification problem?
AI: Negative correlations are just as valid and useful as a positive correlation.
In your example, the 0.2 correlation and the -0.2 correlation have equal value in your model. A negative correlation just means that as one value goes up, the other goes down.
Also, the closer to 1 for a positive correlation and closer to -1 for a negative correlation, the more useful it will be for a modelling algorithm.
For most algorithms, the independent variables do not have to be un-correlated to be useful in a model. Most models will handle the cross-correlation between features, and in some cases, dropping one of them could actually be detrimental, possibly losing some information that would have been useful to the model.
Usually we drop features if we have too many of them, they are too sparse, or if our feature to row ratio is too high.
Both of these points apply equally to classification and regression. |
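As a quick illustration of the point about the sign, here is a small sketch: negating a feature flips its correlation with the target but leaves the model fit unchanged.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
x = rng.rand(200, 1)
y = 3 * x[:, 0] + 0.1 * rng.rand(200)

print(np.corrcoef(x[:, 0], y)[0, 1])    # strong positive correlation
print(np.corrcoef(-x[:, 0], y)[0, 1])   # same strength, negative sign

print(LinearRegression().fit(x, y).score(x, y))     # identical R^2 ...
print(LinearRegression().fit(-x, y).score(-x, y))   # ... whichever sign the feature has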
H: How to use df.groupby() to select and sum specific columns w/o pandas trimming total number of columns
I got Column1, Column2, Column3, Column4, Column5, Column6
I'd like to group Column1 and get the row sum of Column3,4 and 5
When I apply groupby() I get this, which is correct, but it leaves out Column6:
df = df.groupby(['Column1'])[['Column3', 'Column4', 'Column5']].sum()
I tried with this but it doesn't group according to Column1 and it doesn't sum anything, but I get all my columns:
df.sort_values(['Column1']).groupby(['Column3', 'Column4', 'Column5']).sum()
How can I use groupby() correctly in this case?
Thank you!
I add my code:
df = df.drop(['Position', 'Swap', 'S / L', 'T / P'], axis=1)
df = df.groupby(['Symbol']).agg({'Profit': ['sum'], 'Volume': ['sum'], 'Commission': ['sum'], 'Time': pd.Series.mode})
df['Comm. ratio'] = (df['Commission'] / df['Profit'])
df['Comm. ratio'] = df['Comm. ratio'].round(2)
df['Net profit'] = (df['Profit'] + df['Commission'])
df.loc['Total'] = pd.Series([df['Commission'].sum(),df['Profit'].sum(),df['Net profit'].sum()], index = ['Commission','Profit','Net profit'])
The output is:
As you can see it adds "sum" and "mode" rows that I'd like not to have.
Moreover, it ignores the df.loc['Total'] code and it leaves the Total row empty.
AI: I think the answer depends on what you want to do with Column6. Keep in mind that the values of Column6 may differ within each Column1 group, so you will need to decide which single value to display. Typically, when using a groupby, you need to include every column that you want in the result in either the groupby part or the statistics part of the query.
If you don't want to group by that column, you can just display its min or mode value. In general, if you want to calculate statistics on some columns and keep non-grouped columns in your output, you can use the agg function within the groupby.
Example with most common value for column6 displayed:
df.groupby('Column1').agg({'Column3': ['sum'], 'Column4': ['sum'], 'Column5': ['sum'], 'Column6': pd.Series.mode})
Full example with code:
If there is a tie for most common, with one Mary and one Jane both being Female Engineers, this will generate an error as mode doesn't reduce to a single answer:
You will need to use another aggregate in that case, such as min, which will choose Jane as an alphanumeric min:
If you don't like the look of the multi-index, you can remove it using as_index=False and replacing the column names with a list(map(...join...))
or remove it using to_flat_index() which gives a slightly different format for the columns: |
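Since the original screenshots are not reproduced here, a small self-contained sketch (with made-up stand-in data) of the agg call and of flattening the resulting MultiIndex columns with to_flat_index():
import pandas as pd

df = pd.DataFrame({"Column1": ["a", "a", "b"],
                   "Column3": [1, 2, 3],
                   "Column4": [4, 5, 6],
                   "Column5": [7, 8, 9],
                   "Column6": ["x", "x", "y"]})   # no ties, so mode reduces cleanly

out = df.groupby("Column1").agg({"Column3": ["sum"], "Column4": ["sum"],
                                 "Column5": ["sum"], "Column6": pd.Series.mode})

# Flatten the MultiIndex columns, e.g. ('Column3', 'sum') -> 'Column3_sum'
out.columns = ["_".join(col) for col in out.columns.to_flat_index()]
print(out)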
H: How is GPT able to handle large vocabularies?
From what I understand, GPT and GPT-2 are trained to predict the $N^{th}$ word in a sentence given the previous $N-1$ words. When the vocabulary size is very large (100k+ words) how is it able to generate any meaningful prediction? Shouldn't it become extremely difficult to correctly label the next word given that there are 100k possible labels to choose from? Even a large-scale classification problem like ImageNet has only 1k classes to choose from.
AI: GPT-2 does not use a word-level vocabulary but a subword-level vocabulary, specifically byte-pair encoding (BPE). This means that it does not predict the next word but the next subword token.
BPE tries to find the pieces of words that are most reusable. BPE also keeps character subwords (e.g. "a", "W").
The subword vocabulary used by GPT-2 is 50k tokens big. You can take a look at the vocabulary here. There, you can see that token #12175 is "retro" and token #11147 is "fit", so when tokenizing the word "retrofit", you would probably get those 2 subword tokens, #12175 and #11147. When tokenizing a low-frequency word (i.e. one that did not appear often in the BPE training data), you will probably end up with a tokenization into smaller subwords, e.g. "Kilimanjaro" --> "Kil", "iman", "jar", "o".
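If you want to see this in practice, here is a quick sketch with the Hugging Face transformers library (assuming it is installed and can download the GPT-2 vocabulary; the splits shown in the comments are illustrative):
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")   # loads the ~50k-token BPE vocabulary
print(len(tok))                               # vocabulary size
print(tok.tokenize("retrofit"))               # e.g. ['retro', 'fit']
print(tok.tokenize("Kilimanjaro"))            # a rarer word splits into smaller pieces
print(tok.encode("retrofit"))                 # the corresponding token ids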
Using a word-level vocabulary is very troublesome, because:
the number of existing word surface forms exceeds what is manageable for a neural net, even for a morphologically simple language like English. For morphologically-rich fusional languages and, especially, agglutinative languages, using word-level vocabularies is much more inconvenient.
language is "open", in the sense that new words can be created, either by stitching together existing words (e.g. manspreading), or by making up new words altogether (e.g. bazinga).
the less frequently a token appears in the training data, the less our net will learn to use it. If we have many many different tokens, the frequency of appearance of each token in the training data will be very low (i.e. data scarcity), so our net will not learn properly.
Therefore, subword vocabularies are the norm nowadays. |
H: First two principal components explain 100% variance of data set with 300 features
I am trying to do some analysis on my data set with PCA so I can effectively cluster it with kmeans.
My preprocessed data is tokenized, filtered (stopwords, punctuation, etc.), POS tagged, and lemmatized
I create a data set of about 1.2 million tweet vectors (300 features each) by taking the averaged word vectors multiplied by their tfidf scores, like so:
# trained with same corpus as tfidf
# size=300, epochs=5, and min_count=10
tweet_w2v = Word2Vec.load('./models/tweet2vec_lemmatized_trained.model')
tweet_tfidf = TfidfVectorizer()
with open('./corpus/ttokens_doc_lemmatized.txt', 'r') as infile:
    tweet_tfidf.fit(infile)
tweet_tfidf_dict = dict(zip(tweet_tfidf.get_feature_names(), list(tweet_tfidf.idf_)))
tfidf_tweet_vectors = []
with open('./corpus/ttokens_doc_lemmatized.txt', 'r') as infile:
    for line in infile:
        word_vecs = []
        words = line.replace('\n', '').split(' ')
        if len(words) == 0:
            continue
        for word in words:
            try:
                word_vec = tweet_w2v.wv[word]
                word_weight = tweet_tfidf_dict[word]
                word_vecs.append(word_vec * word_weight)
            except KeyError:
                continue
        if len(word_vecs) != 0:
            tweet_vec = np.average(np.array(word_vecs), axis=0)
        else:
            continue
        tfidf_tweet_vectors.append(tweet_vec)
I also tried the above code with just average tweet vectors (no tfidf), and my problem still ended up happening.
I am starting to think that maybe my data set just isn't big enough or I am not training my word2vec model properly? I have somewhere around 100 million tweets I can use, but after filtering out retweets and only getting english language, it comes to around 1.3 million.
I'm not sure what's happening and what step I should take next. Any explanation is appreciated.
# Load in the data
df = pd.read_csv('./models/tfidf_weighted_tweet_vectors.csv')
df.drop(df.columns[0], axis=1, inplace=True)
# Standardize the data to have a mean of ~0 and a variance of 1
X_std = StandardScaler().fit_transform(df)
# Create a PCA instance: pca
pca = PCA(n_components=20)
principalComponents = pca.fit_transform(X_std)
# Plot the explained variances
features = range(pca.n_components_)
plt.bar(features, pca.explained_variance_ratio_, color='black')
plt.xlabel('PCA features')
plt.ylabel('variance %')
plt.xticks(features)
AI: So the question is asking why the first two principal components of your encoded text data encapsulate all of the variation in the data.
One potential issue could be the averaging over word vectors.
Suppose that, for a particular feature f, the word vectors of a given post contain a mix of positive and negative values. When we then average over them, we could zero out that dimension and thus cause greater data sparsity, which could explain what you are seeing (this zero value will exist regardless of whether you multiply the average by the tf-idf weight or not). It could be that this sort of thing is happening across multiple dimensions of your text embeddings / feature vectors.
Given this, you might want to think of another way of deriving a text embedding; maybe use Doc2Vec instead, which follows the same principles as Word2Vec but derives document embeddings, which encapsulate the meaning of a whole section of text, instead of word embeddings, which encapsulate the meaning of an individual word within a section of text. |
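As a rough sketch of the Doc2Vec route with gensim (the documents below are toy stand-ins; in practice you would feed it your lemmatized tweet token lists):
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [["this", "is", "a", "lemmatized", "tweet"],
        ["another", "tokenized", "tweet"]]               # stand-ins for your tweets
tagged = [TaggedDocument(words=toks, tags=[i]) for i, toks in enumerate(docs)]

d2v = Doc2Vec(tagged, vector_size=300, min_count=1, epochs=5, workers=4)
tweet_vec = d2v.infer_vector(["some", "unseen", "tweet"])   # one 300-d vector per tweet
print(tweet_vec.shape)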
H: How to use id's in binary classification problem
I would like to predict for a given user (on a website) if he/she logs out from the website within ten minutes.
In terms of data, I have a user ID and timestamp of the latest post on the website.
example of an id: 54a47e7a9cd118513
It would be great to get advice on how to use this data to achieve the goal. Should I use the user ID as a feature during training? If so, how would I use this?
AI: So the question asks how to model the task of predicting whether a given user will log out within the next ten minutes (it is worth clarifying this, and the data you have available).
If so, as you suggest we represent this as a binary classification model (does log out within ten minutes / does not log out).
The modelling of the data itself is the next question and depends on the nature of your data.
If you have data which shows all timestamps of posts for particular users, then you can frame this as a sequence-to-one problem, whereby you encode the timestamps into a sequential encoder (e.g. RNN or LSTM). This encodes the data into a "hidden representation". Then, the hidden representation is decoded with a neural network with a final 2-node softmax layer, which produces a probability distribution over the two classes (logs out / does not log out).
When it comes to representing the input, to ensure that you are making this model user-specific, it would be worth concatenating this information (as a one-hot encoded vector) with the timestamp information.
If you only have the most recent post timestamps, then you could use a simple feedforward neural network, whereby you feed in the user ID along with timestamp through the model (with a final softmax layer as explained as above) to predict whether the user will log out or not within 10 minutes. |
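A rough Keras sketch of the sequence-to-one idea described above (all sizes are hypothetical; the timestamp sequence could be, for example, the time deltas between a user's recent posts):
from tensorflow.keras import layers, Model

n_users, seq_len = 1000, 20            # hypothetical: 1000 user IDs, last 20 posts per user

ts_in = layers.Input(shape=(seq_len, 1), name="timestamps")    # per-post timing features
user_in = layers.Input(shape=(n_users,), name="user_onehot")   # one-hot encoded user ID

h = layers.LSTM(32)(ts_in)                      # sequential encoder -> hidden representation
h = layers.Concatenate()([h, user_in])          # make the representation user-specific
h = layers.Dense(16, activation="relu")(h)
out = layers.Dense(2, activation="softmax")(h)  # [P(logs out within 10 min), P(does not)]

model = Model([ts_in, user_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()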
H: AttributeError: 'DataFrame' object has no attribute 'ix'
I was learning Classification-based collaboration system and while running the code I faced the error AttributeError: 'DataFrame' object has no attribute 'ix'. Here is the code I write until now.
X=bank_full.ix[:,(18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36)].values
AI: From the docs:
Warning: Starting in 0.20.0, the .ix indexer is deprecated, in favor
of the more strict .iloc and .loc indexers.
Just use .iloc instead (for positional indexing) or .loc (if using the values of the index).
To read more about loc/ilic/iax/iat, please visit this question on Stack Overflow. To quote the top answer there:
- loc: only works on the index
- iloc: works on position
- ix: you can get data from the dataframe without it being in the index
- at: get scalar values; it's a very fast loc
- iat: get scalar values; it's a very fast iloc
http://pyciencia.blogspot.com/2015/05/obtener-y-filtrar-datos-de-un-dataframe.html
Note: As of pandas 0.20.0, the .ix indexer is deprecated in favour of
the more strict .iloc and .loc indexers. |
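Applied to the snippet in the question (with a random DataFrame standing in for bank_full), the .ix call can be rewritten as:
import numpy as np
import pandas as pd

bank_full = pd.DataFrame(np.random.rand(5, 40))     # stand-in for the question's DataFrame

# Positional selection of columns 18 through 36 with .iloc:
X = bank_full.iloc[:, 18:37].values
# or, keeping an explicit list of positions:
X = bank_full.iloc[:, list(range(18, 37))].values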
H: Confusion between precision and recall
I have a machine learning model that tries to fingerprint the functions in a binary file against a corpus. The final output, upon inputting a binary file, is a table with a one-to-one mapping between the binary functions and the corpus functions, as follows:
As you can see from the names, some of the functions are correct while others are incorrect. Is there a way to calculate precision and recall for the above result? I understand that precision and recall make sense if I am doing other ML tasks such as image classification, where using a confusion matrix helps to calculate both metrics easily. However, I am confused and feel that I cannot compute such measures here, as it is just a one-to-one mapping which is either true or false. If precision and recall do not make sense, is there any other metric I could use to evaluate the model? Thank you!
AI: First of all, precision and recall are not specific to image classification; they are relevant wherever there are two distinct "positive" and "negative" classes (for example, when you test an e-mail for "spam/not-spam", or a blood sample for "has virus/does not have virus").
You can read more on this question on Cross Validated, but to sum it up - precision is the probability that a sample is positive if a test said it is, and recall is the probability that a positive sample will be reported as positive by the test. False positives mess up your precision, and false negatives mess up your recall.
Now, your task appears to be one of multi-class classification - with at least 17 classes, from your example. I wouldn't go with precision/recall for this - you can only do it pair-wise for pairs of classes. You can, however, plot a CxC confusion matrix (where C is the number of classes), and investigate where your models tend to miss. There's an implementation in SKLearn (link).
If you need a single-number metric, I'd start with just accuracy (and develop from there). Following Nuclear Wang's comment, I'd also suggest looking at Cohen's Kappa (see explanation on Cross Validated) to better account for class imbalance.
To read more on multi-class classification, see this question. I'd also recommend this blog post on Towards Data Science |
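For example, a minimal sketch of those metrics with scikit-learn, using made-up function names as the class labels:
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

# Hypothetical: true and predicted corpus-function names for five binary functions
y_true = ["memcpy", "strlen", "memcpy", "qsort", "strlen"]
y_pred = ["memcpy", "memcpy", "memcpy", "qsort", "strlen"]

print(accuracy_score(y_true, y_pred))
print(cohen_kappa_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred, labels=["memcpy", "qsort", "strlen"]))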
H: Count the max number of consecutive 1 and 0 in Pandas Dataframe
Hey I have the following Dataset
import pandas as pd
df = pd.DataFrame({
'column1': [0,0,1,0,1,0,0,1,1,0,1,1,1]})
I want to be able to count the number of consecutive 1 and 0 and generate 2 columns as such:
consec0: 1,2,_,1,_,1,2,_,_,1,_,_,_
consec1: _,_,1,_,1,_,_,1,2,_,1,2,3
I then want to take the max number of consecutive and create two lists:
max_consec0: 2,1,2,1
max_consec1: 1,1,2,3
My dataset in the end will be just max_consec0 and max_consec1
AI: To check if a value has changed, you can use .diff and check if it's non-zero with .ne(0) (the NaN in the top will be considered different than zero), and then count the changes with .cumsum, like this:
df['counter'] = df.diff().ne(0).cumsum()
Afterward, you can create a second dataframe, where the indices are the groups of consecutive values, and the column values are the value (0 or 1, in your case) and length (which is what you ultimately want):
df2 = df.groupby('counter')['column1'].min().to_frame(name='value').join(
df.groupby('counter')['column1'].count().rename('number'))
The resulting max_consec0, max_consec1 are just the values in the [number] column, filtered by the [value] column:
max_consec0 = df2[df2['value']==0]['number'].tolist()
max_consec1 = df2[df2['value']==1]['number'].tolist()
You can verify that the result is [2, 1, 2, 1] and [1, 1, 2, 3], as desired. |
H: How to select 'cutoff' of classifier probability
I have recently used xgboost to conduct binary classification in an nlp problem. The idea was to identify if a particular article belonged to an author or not, pretty standard exercise.
The results are output as a probability between 0 and 1, and there is the occasional article that is completely misclassified.
I would like to know if there is a statistical approach that gives me a confidence interval for the probability outputs (for example if I consider all articles with prediction of 0.4 I will get 95% of the articles that belong to the author), or something that helps me make decisions regarding the cut-offs.
AI: What you're looking for is something along the line of an ROC curve:
Using the threshold as a decision parameter, you can observe the trade-off between FPR (False Positive Rate: how many of the articles not belonging to the author will be correctly classified) and TPR (True Positive Rate, aka recall: how many of the articles which are really by the author will be classified as such).
When the parameter is at one end, you'll classify all documents as belonging to the author (100% recall, but pretty bad precision), and at the other end, you'll have 100% precision but pretty bad recall.
The plot will allow you to decide on a value that satisfies your requirements (i.e. how much will your precision suffer when you want 95% recall). You can select it based on your desired value in one metric (e.g. 95% recall), but really I'd just plot it and have a look. You can do it in SKLearn with plot_roc_curve. |
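For the specific goal of hitting a target recall (e.g. 95% as in the question), a small sketch of reading the threshold off roc_curve (the scores below are toy values, not your model's):
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])                # 1 = really by the author
y_score = np.array([.1, .4, .35, .8, .2, .7, .55, .3, .9, .6])   # model probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_score)
idx = np.argmax(tpr >= 0.95)       # first (largest) threshold whose recall is at least 95%
print(thresholds[idx], tpr[idx], fpr[idx])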
H: How is the fit function in SimpleImputer working to find the mean in the Salary column as well when just the Age column is given as its argument?
The only argument inside the fit function of SimpleImputer is: 'Age'. Yet the returned output worked on the 'Salary' column as well. That is what I am unable to understand.
Here is my code (considering all the necessary libraries imported):
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values = np.nan)
imputer = imputer.fit(df[['Age']])
X[:, 1:3] = imputer.fit_transform(X[:, 1:3])
print(X)
Dataset:
Country Age Salary Purchased
0 France 44.0 72000.0 No
1 Spain 27.0 48000.0 Yes
2 Germany 30.0 54000.0 No
3 Spain 38.0 61000.0 No
4 Germany 40.0 NaN Yes
5 France 35.0 58000.0 Yes
6 Spain NaN 52000.0 No
7 France 48.0 79000.0 Yes
8 Germany 50.0 83000.0 No
9 France 37.0 67000.0 Yes
Output:
[['France' 44.0 72000.0]
['Spain' 27.0 48000.0]
['Germany' 30.0 54000.0]
['Spain' 38.0 61000.0]
['Germany' 40.0 63777.77777777778]
['France' 35.0 58000.0]
['Spain' 38.77777777777778 52000.0]
['France' 48.0 79000.0]
['Germany' 50.0 83000.0]
['France' 37.0 67000.0]]
AI: imputer = imputer.fit(df[['Age']])
X[:, 1:3] = imputer.fit_transform(X[:, 1:3])
You are fitting again when you call fit_transform: the imputer is re-fitted on both columns of X[:, 1:3] (Age and Salary), so the earlier fit on 'Age' alone is simply discarded. That is why the Salary column gets imputed as well. If you keep a fit on one set of columns and call transform on a different one, you get a ValueError complaining that the number of features does not match, for example:
ValueError: X has 3 features per sample, expected 2 |
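For completeness, a sketch of the usual pattern - fit the imputer once on the columns you want to impute, then reuse it with transform (using a small stand-in for the question's X):
import numpy as np
from sklearn.impute import SimpleImputer

# Stand-in for X: columns are Country, Age, Salary
X = np.array([["France", 44.0, 72000.0],
              ["Germany", 40.0, np.nan],
              ["Spain", np.nan, 52000.0]], dtype=object)

imputer = SimpleImputer(missing_values=np.nan, strategy="mean")
imputer.fit(X[:, 1:3])                      # learn the column means once
X[:, 1:3] = imputer.transform(X[:, 1:3])    # apply them without re-fitting
print(X)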
H: What's the complexity of HDBSCAN?
I can't find any complexity information about HDBSCAN on Google or Wikipedia.
And how does it compare to OPTICS?
AI: By googling: HDBSCAN is $O(n^2)$, whereas OPTICS is $O(n \log n)$. |
H: decision -tree regression to avoid multicollinearity for regression model?
I read in some comments a recommendation to use decision trees instead of linear models (like neural networks) when the dataset has many correlated features, in order to avoid multicollinearity.
A similar question is already placed, but not really answered.
https://stats.stackexchange.com/questions/137573/do-classification-trees-need-to-consider-the-correlation-between-attributes
or here
In supervised learning, why is it bad to have correlated features?
https://www.quora.com/Is-multicollinearity-a-problem-in-decision-trees#:~:text=Decision%20trees%20follow%20the%20non%20parametric%20approach.&text=Though%20single%20tree%20leads%20to,robust%20to%20the%20multi%20collinearity%20.
My problem:
I have a dataset of about 30 columns. 10 columns have a high correlation with the target/dependent variable. The data are numerical. I would like to build a prediction (regression) model, including all variables if possible.
One big problem is avoiding multicollinearity.
Is a decision tree regression model good when 10 features are highly correlated? (The linked answers suggest so, but there is no really good explanation for it.)
Is there a scientific or mathematical explanation or recommendation (to use decision tree regression)?
AI: To answer your questions directly, first:
Is a decision tree regression model good when 10 features are
highly correlated?
Yes, definitely. But even better than a single decision tree is many decision trees (Random Forest, Gradient Boosting; XGBoost is popular). I think you'd be well served by learning about how decision trees split and how they naturally deal with collinearity. Maybe try this video. Follow the logic down to the second tier of splits, and you'll see how correlated variables suddenly stop being important, because each split is relative to the split above it.
Is there a scientific or math explanation or recommendation (to use
decision tree regression)?
The math explanation of why collinearity is "bad" for linear models, comes down to the coefficients and how you interpret them. One of the side effects is that they can undermine the statistical significance of a variable, as well as flip their coefficients the wrong direction. It usually doesn't affect the accuracy of the model very much, but most people want linear models so that they can interpret the coefficients (which is totally messed up with collinearity). I suggest reading maybe this article to start.
One of the things that you mentioned, "include all variables if possible", is not really something you should be concerned with. The goal of a model is to explain the most with the least. If you force as many variables as possible into the model, then it's possible that you'll be fooled into thinking a model is good, when in fact it isn't once you test it on new data. In fact, sometimes fewer variables will give you a better model. This is exactly the kind of problem that multicollinearity causes with linear models - you can't really judge very well which variables are significant or not. Stepwise selection doesn't work very well when there are correlated features.
In general, I think decision trees - especially Random Forests - will be a good start for you. But remember not to force all of the variables into the model just for the sake of it. Experiment with using less variables and manipulating the tree structure such as leaf size and max depth. And as always - test your model on validation data and holdout data so that you don't overfit a model and fool yourself into thinking it's a strong model. |
H: How do I reshape an image to fit my MNIST convolutional model?
I have done research but cannot seem to find what's wrong here.
I have created this model for MNIST digit classification:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D , MaxPool2D,Dropout,Flatten
from tensorflow.keras.datasets import mnist
model = Sequential()
model.add(Conv2D(filters = 32 , kernel_size = (3,3),activation ='relu' , input_shape = input_shape ))
model.add(Conv2D(64,(3,3),activation = 'relu'))
model.add(MaxPool2D(2,2))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128,activation = 'relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation ='softmax'))
and I have this image to feed into the model:
So i reshaped the image like this:
from skimage import data
from skimage.transform import resize
image = image_gray
image = resize(image, (28, 28,1)).shape
image
But when i try to predict using this image i get this error:
ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (3, 1)
I have tried multiple ways using skimage but I can't seem to get the model to fit. What can I do?
AI: model.predict(img) requires a batch axis, i.e. img needs to have a shape of (BS, H, W, C). Try model.predict(np.expand_dims(img,0)). |
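A sketch of the preprocessing, keeping the resized array itself (note that the snippet in the question assigns .shape to image, which throws the pixel data away); the input image below is a random stand-in and model refers to the network built in the question:
import numpy as np
from skimage.transform import resize

image_gray = np.random.rand(100, 100)      # stand-in for your grayscale digit image

img = resize(image_gray, (28, 28))         # resize to the training resolution, values in [0, 1]
img = img.reshape(1, 28, 28, 1)            # add batch and channel axes -> (1, 28, 28, 1)

pred = model.predict(img)                  # `model` is the network from the question
print(pred.argmax())                       # index of the most probable digit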
H: Cleaning a certain feature to predict salary using Machine Learning
Info:
I am working on a dataset, and i would like to create a model that would predict salary.
Columns are as follows:
Index(['ID', 'Salary', 'DOJ', 'DOL', 'Designation', 'JobCity', 'Gender', 'DOB',
'10percentage', '10board', '12percentage', '12board', 'CollegeTier',
'Degree', 'Specialization', 'collegeGPA', 'CollegeCityTier',
'CollegeState', 'English', 'Logical', 'Quant', 'Domain',
'ComputerProgramming', 'ElectronicsAndSemicon', 'ComputerScience',
'MechanicalEngg', 'ElectricalEngg', 'TelecomEngg', 'CivilEngg',
'conscientiousness', 'agreeableness', 'extraversion', 'nueroticism',
'openess_to_experience'],
dtype='object')
Issue:
The train dataset contains around 3000 entries. For the feature 'JobCity', there are around 200 unique values. Out of 3000 entries, 180 are cities whose value counts are below 10.
For example, the city 'Agra' occurs only twice and 'Ranchi' occurs only once.
I plan to encode all the cities to feed into the machine learning (polynomial regression) model.
How should the cities with value counts less than 10 be handled?
AI: There are two ways to handle this:
Aggregating values and feature engineering
Instead of simply inputting the individual city, you could try to aggregate this value into a new variable "JobCitySize" to cluster cities by inhabitant size or a variable "Region" to cluster by geographic region. This will increase cases and also help solve the question of how to deal with unseen data.
Defining NA or unseen values
With any model and categorical encoding you have to think ahead what you want to do with "unseen data" in your case cities that aren't present in your training data set. Do you want to set them as NA, give a standard value ("Other"), etc.? Whatever process you use for these values you could also use for any value in your data set that falls below a threshold of "usefulness". |
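Combining the two ideas, a small pandas sketch (with made-up city counts) that collapses cities occurring fewer than 10 times into an "Other" bucket and sends unseen cities there as well:
import pandas as pd

# Made-up training data: Bangalore is frequent, Agra and Ranchi are rare
train = pd.DataFrame({"JobCity": ["Bangalore"] * 12 + ["Agra", "Agra", "Ranchi"]})

counts = train["JobCity"].value_counts()
keep = counts[counts >= 10].index                      # cities with at least 10 rows
train["JobCity"] = train["JobCity"].where(train["JobCity"].isin(keep), "Other")
print(train["JobCity"].value_counts())

# At prediction time, send unseen cities to the same "Other" bucket
new = pd.DataFrame({"JobCity": ["Bangalore", "Pune"]})
new["JobCity"] = new["JobCity"].where(new["JobCity"].isin(keep), "Other")
print(new)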
H: Keras model.predict giving different shape from training label array
I'm using the following code to try and learn tensorflow. I've clearly specified the shapes of the training and validation X and y arrays.
import numpy as np
import tensorflow as tf
f = lambda x: 2*x
Xtrain = np.random.rand(400,1)
ytrain = f(Xtrain)
Xval = np.random.rand(200,1)
yval = f(Xval)
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(10, activation='relu')
])
model.compile(optimizer='adam',
loss=tf.keras.losses.MeanSquaredError()
)
model.fit(Xtrain, ytrain, epochs=50, verbose=0)
When I run yval.shape, model.predict(Xval).shape, I get the output ((200, 1), (200, 10)). I'm not able to understand where these extra 9 dimensions are coming from. Even the Xval prediction should have the shape (200, 1).
AI: The 10 outputs came from the fact that you have 10 neurons in the final layer of your network.
If you change your model to
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(1, activation='relu')
])
its output will have a shape of (200, 1). |
H: Layer weights don't match in keras
This question uses the following code:
Xtrain = np.random.rand(400,1)
ytrain = f(Xtrain)
Xval = np.random.rand(200,1)
yval = f(Xval)
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(10, activation='relu'),
#tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(1, activation='relu')
])
model.compile(optimizer='adam',
loss=tf.keras.losses.MeanSquaredError()
)
model.fit(Xtrain, ytrain, epochs=500, verbose=0)
Using the command model.layers[0].get_weights() I get the following output:
[array([[-0.43412966, -0.51346564, -0.14263666, 0.8693182 , -0.4930619 ,
1.249465 , -0.3924656 , -0.48984256, -0.55827504, 0.11134321]],
dtype=float32),
array([ 0. , 0. , 0. , 0.34663308, 0. ,
0.36201355, 0. , 0. , 0. , -0.11139664],
dtype=float32)]
And using model.layers[1].get_weights() I get this:
[array([[-0.04339373],
[ 0.19533908],
[-0.2295354 ],
[ 0.903574 ],
[-0.17581558],
[ 0.7272965 ],
[-0.69347996],
[ 0.02008992],
[-0.30351916],
[-0.29846227]], dtype=float32),
array([0.29466572], dtype=float32)]
I don't understand why the outgoing weights from layer 0 (second array in the model.layers[0].get_weights() list) don't match the incoming weights for layer 1 (first array in the model.layers[1].get_weights() list). And why is there an outgoing weight from layer 1? Isn't that supposed to be the final layer?
AI: Expected shape of parameter arrays
Each layer has two arrays:
one for the weights, which has a shape of (num_inputs, num_outpus)
one for the biases, which has a shape of (num_outputs)
Here the num_inputs is the number of input features to that layer and the num_outputs is the number of outputs from that layer (this is what you select when instantiating a layer).
Output of .get_weights()
Another important thing to note is the usage of .get_weights(). In fact there are two ways to use it:
From the model, i.e. model.get_weights(): This will return a flattened list containing all parameter arrays in order. For example, it could look like this: [layer1_weights, layer1_biases, layer2_weights, layer2_biases, ...]
From a layer, i.e. layer.get_weights(): This is what you used. Here it will return the parameter arrays for the given layer only. For example model.layers[1].get_weights() will return the parameter arrays for the second layer; if that layer has biases, this will return two arrays, one for the weights and one for the biases.
I took the liberty of changing your code a bit to make this a bit more clear.
import numpy as np
import tensorflow as tf
f = lambda x: 2*x
Xtrain = np.random.rand(400, 5) # 5 input features
ytrain = f(Xtrain)
Xval = np.random.rand(200, 5) # 5 input features
yval = f(Xval)
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(10, activation='relu'), # this layer has 5 inputs and 10 outputs
tf.keras.layers.Dense(1, activation='relu') # this layer has 10 inputs and 1 output
])
model.compile(optimizer='adam',
loss=tf.keras.losses.MeanSquaredError()
)
model.fit(Xtrain, ytrain, epochs=1, verbose=0)
# I will be calling .get_weights() directly from the model,
# so we expect 4 arrays: 2 for each layer.
print('First layer weights:', model.get_weights()[0].shape)
print('First layer biases:', model.get_weights()[1].shape)
print('Second layer weights:', model.get_weights()[2].shape)
print('Second layer biases:', model.get_weights()[3].shape)
The output:
First layer weights: (5, 10)
First layer biases: (10,)
Second layer weights: (10, 1)
Second layer biases: (1,) |
H: How to find correlation between categorical data and continuous data
I'm working on imputing null values in the Titanic dataset.
The 'Embarked' column has some. I do NOT want to just set them all to the most common value, 'S'.
I want to impute 'Embarked' based on its correlation with the other columns.
I have tried applying this formula to the 'Embarked' column:
def embark(e):
    if e == 'S': return 1
    if e == 'Q': return 2
    if e == 'C': return 3
    else: return 4
This allows me to check out data.corr(), but I think it's trickier than that since I'll get a different correlation with different value assignments (right??). I also thought about using a four-dimensional (for S,Q,C,NaN) one-hot vector, but I doubt that would work.
Is there a skLearn method that does this some way? Any further insights on the matter?
AI: I suggest trying the sklearn module KNNImputer. KNN will use clustering to calculate the null/missing values based on the data that is available (non-null). It should handle numerical and categorical data. You may need to do some encoding on the non-null values first.
You can also look at creating and modelling with multiple imputed datasets using different imputation settings/values and then compare or combine the results. This will help deal with some of the problems inherent in imputation, such as
randomness
high sampling variability
standard error estimates, etc
Other options:
RandomForest
fancyimpute
missingpy
It is fine to impute the data in your test dataset also. Just be sure not to include the label or response in any of the imputation, as that value won't be available in a new dataset.
Also, any imputation method you use should be calculated on the train dataset and then applied to the test dataset. This will prevent data or information leakage between the 2 datasets, and will simulate model performance on any future datasets you use the model on.
One more thing: after imputing, you should compare the distributions of the train and test datasets; you want them to match as closely as possible.
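A rough sketch of the KNNImputer route on a hypothetical slice of the Titanic data (the encoding step and the rounding back to a category are one possible choice, not the only way to do it):
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer
from sklearn.preprocessing import OrdinalEncoder

df = pd.DataFrame({"Pclass": [1, 3, 3, 2, 1],
                   "Fare": [71.3, 7.9, 8.1, 26.0, 52.0],
                   "Embarked": ["C", "S", None, "S", "C"]})

# Encode the non-null categories as numbers, keeping the missing entry as NaN
enc = OrdinalEncoder()
mask = df["Embarked"].notna()
df["Embarked_num"] = np.nan
df.loc[mask, "Embarked_num"] = enc.fit_transform(df.loc[mask, ["Embarked"]]).ravel()

# KNN-impute the missing code from the other numeric columns
# (in practice you would scale the numeric columns first)
imputed = KNNImputer(n_neighbors=3).fit_transform(df[["Pclass", "Fare", "Embarked_num"]])

# Round back to the nearest category code and decode to the letter
codes = np.round(imputed[:, -1]).reshape(-1, 1)
df["Embarked_filled"] = enc.inverse_transform(codes).ravel()
print(df)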
References:
https://towardsdatascience.com/the-use-of-knn-for-missing-values-cf33d935c637
https://scikit-learn.org/stable/modules/generated/sklearn.impute.KNNImputer.html
https://www.kaggle.com/c/house-prices-advanced-regression-techniques/discussion/80000
https://towardsdatascience.com/preprocessing-encode-and-knn-impute-all-categorical-features-fast-b05f50b4dfaa
https://statisticalhorizons.com/more-imputations |
H: Text classification with Word2Vec on a larger corpus
I am working on a small project and I would like to use the word2vec technique as a text representation method. I need to classify patents, but I have only a few of them labelled, and to increase the performance of my ML model I would like to enlarge the corpus/vocabulary of my model by using a large number of unlabelled patents. The question is: once I have trained my word embeddings, how do I use this larger corpus together with my training data - my labelled data?
My data set is composed of 2000 labelled patents.
The corpus used to train my word embeddings contains 3 million patents (some of my 2000 labelled patents are already included in this larger corpus), which I trained using Gensim.
Do you have any suggestions on how to do it?
Thank you very much in advance.
Rob
AI: Use the large amount of unlabelled data to fine-tune a BERT-based model: BERT can be trained in an unsupervised manner. Then use that BERT model to get embeddings of the text in your labelled data and train a classifier on top of them. |
H: Explanation on some steps of AdaBoost.R2
I am trying to understand AdaBoost.R2 in order to implement it and apply it to a regression problem. In these circumstances I need to understand it perfectly; however, there are some steps I don't really get.
The paper is available here, and Adaboost.R2 is presented in section 3:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.31.314&rep=rep1&type=pdf
In step 4, $\operatorname{sup}|.|$ is used; I've never seen that notation, what does it mean exactly?
In step 7, "** means exponentation", in that case that would mean $w_i\beta *\operatorname{exp}([1-L_i])$, right?
AI: Q1. $\operatorname{sup}$ is the supremum, or least upper bound. You may think of it as the maximum, but relaxed in case the set of values doesn't actually have a maximum. (And the pipes are just absolute values.) https://en.wikipedia.org/wiki/Infimum_and_supremum
Q2. No, they mean $w_i \cdot \beta^{1-L_i}$. See also https://stats.stackexchange.com/a/299266/232706. |
H: Why can't I specify the correct NumPy size?
In the network (a Keras Sequential model), the input layer must have 4 neurons. The input is one list of length 4, where each element is a number.
print("SHAPE:", np.array([1, 1, 1, 1]).shape)
self.model.fit(np.array([1, 1, 1, 1]),
self.rightAnswer,
epochs = 1,
batch_size = 1)
Here is the conclusion:
SHAPE: (4,)
ValueError: Error when checking input: expected dense_1_input to have shape (4,) but got array with shape (1,)
Why is this happening, and how can I fix it?
AI: Keras needs you to pass one more dimension than it says in the error message: the batch dimension.
I.e.
If you have a model that requires each sample to have an input shape of (4,) and you have 1000 training samples you need to feed it with an array of (1000, 4).
In your case, since you want to feed it with just one sample, you need to pass a shape of (1, 4). Try self.model.fit(np.array([[1, 1, 1, 1]])) (notice the one extra bracket, which makes the array's shape (1, 4)). |
H: Logistic regression cannot converge without poor model performance
I have a multi-class classification logistic regression model. Using a very basic sklearn pipeline I am taking in cleansed text descriptions of an object and classifying said object into a category.
logreg = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', LogisticRegression(n_jobs=1, C=cVal)),
])
Initially I began with a regularisation strength of C = 1e5 and achieved 78% accuracy on my test set and nearly 100% accuracy in my training set (not sure if this is common or not). However, even though the model achieved reasonable accuracy I was warned that the model did not converge and that I should increase the maximum number of iterations or scale the data.
ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
Changing max_iter did nothing, however modifying C allowed the model to converge but resulted in poor accuracy. Here are the results of testing varying C values:
--------------------------------------------------------------------------------
C = 0.1
Model trained with accuracy 0.266403785488959 in 0.99mins
maxCoeff 7.64751682657047
aveProb 0.1409874146376454
[0.118305 0.08591412 0.09528015 ... 0.19066049 0.09083797 0.0999868 ]
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
C = 1
Model trained with accuracy 0.6291798107255521 in 1.72mins
maxCoeff 16.413911220284994
aveProb 0.4221365866656076
[0.46077294 0.80758323 0.12618175 ... 0.91545935 0.79839096 0.13214606]
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
(no converge)
C = 10
Model trained with accuracy 0.7720820189274448 in 1.9mins
maxCoeff 22.719712528228182
aveProb 0.7013386216302577
[0.92306384 0.97842762 0.71936027 ... 0.98604736 0.98845931 0.20129053]
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
(no converge)
C = 100
Model trained with accuracy 0.7847003154574133 in 1.89mins
maxCoeff 40.572468674674916
aveProb 0.8278969567537955
[0.98949986 0.99777337 0.94394682 ... 0.99882797 0.99992239 0.28833321]
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
(no converge)
C = 1000
Model trained with accuracy 0.7796529968454259 in 1.85mins
maxCoeff 72.19441171771533
aveProb 0.8845385182334065
[0.99817968 0.99980068 0.98481744 ... 0.9999964 0.99999998 0.36462353]
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
(no converge)
C = 10000
Model trained with accuracy 0.7757097791798108 in 1.88mins
maxCoeff 121.56900229473293
aveProb 0.9351308553465546
[0.99994777 0.99999677 0.98521023 ... 0.99999987 1. 0.48251051]
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
(no converge)
C = 100000
Model trained with accuracy 0.7785488958990536 in 1.84mins
maxCoeff 160.02719692775156
aveProb 0.9520556562102963
[0.99999773 0.99999977 0.98558839 ... 0.99999983 1. 0.54044361]
--------------------------------------------------------------------------------
So as you can see, the model training only converges at values of C between 1e-3 to 1 but does not achieve the accuracy seen with higher C values that do not converge.
Update:
Here are learning curves for C = 1 and C = 1e5. As I mentioned in passing earlier, the training curve seems to always be 1 or nearly 1 (0.9999999) with a high value of C and no convergence, however things look much more normal in the case of C = 1 where the optimisation converges. This seems odd to me...
C = 1, converges
C = 1e5, does not converge
Here is the result of testing different solvers
--------------------------------------------------------------------------------
Solver = newton-cg
Model trained with accuracy 0.7810725552050474 in 6.23mins
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
Solver = lbfgs
Model trained with accuracy 0.7847003154574133 in 1.93mins
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Solver = liblinear
Model trained with accuracy 0.7779179810725552 in 0.27mins
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
ConvergenceWarning: The max_iter was reached which means the coef_ did not converge
"the coef_ did not converge", ConvergenceWarning)
Solver = sag
Model trained with accuracy 0.7818611987381704 in 0.47mins
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
ConvergenceWarning: The max_iter was reached which means the coef_ did not converge
"the coef_ did not converge", ConvergenceWarning)
Solver = saga
Model trained with accuracy 0.782018927444795 in 0.54mins
--------------------------------------------------------------------------------
Is this common behaviour? Based on this behaviour can anyone tell if I am going about this the wrong way?
AI: I've often had LogisticRegression "not converge" yet be quite stable (meaning the coefficients don't change much between iterations).
Maybe there's some multicolinearity that's leading to coefficients that change substantially without actually affecting many predictions/scores.
Another possibility (that seems to be the case, thanks for testing things out) is that you're getting near-perfect separation on the training set. In unpenalized logistic regression, a linearly separable dataset won't have a best fit: the coefficients will blow up to infinity (to push the probabilities to 0 and 1). When you add regularization, it prevents those gigantic coefficients. So, with large values of C, i.e. little regularization, you still get large coefficients and so convergence may be slow, but the partially-converged model may still be quite good on the test set; whereas with large regularization you get much smaller coefficients, and worse performance on both the training and test sets.
If you're worried about nonconvergence, you can try increasing max_iter, increasing tol, changing the solver, or scaling features (though with the tf-idf, I wouldn't think that'd help).
I'd look for the largest C that gives you good results, then go about trying to get that to converge with more iterations and/or different solvers. |
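As a concrete starting point for those suggestions, a variant of the pipeline with moderate regularization, more iterations, and the saga solver (the parameter values are just a sketch, not tuned for your data):
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import LogisticRegression

logreg = Pipeline([('vect', CountVectorizer()),
                   ('tfidf', TfidfTransformer()),
                   ('clf', LogisticRegression(C=10, max_iter=5000, solver='saga', n_jobs=1)),
                   ])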
H: Forecasting using Python
I have very few training observations (15). I need to predict 6 months into the future. What forecasting model is best suited for this scenario? Here is how my dataset looks:
Month | Response Rate |% Promoters |% Detractors |%Neutrals
2019-01-01 | 5% |60% |30% | 10%
2019-02-01
.....
2020-07-01
I need to predict Response Rate, % Promoters, % Detractors and % Neutrals all of which are numeric variables.
I am new to this forum, so pardon me if I have done any mistake while framing the question.
AI: If you use any forecasting model, it will overfit your data set, as you have just 19 observations. At least 1000 observations is a good point to start applying any machine learning model.
I need to predict Response Rate, % Promoters, % Detractors and %
Neutrals all of which are numeric variables.
You can do data exploration instead: start with bar charts and line charts and try to find the pattern for each of those variables. Using this, while not precise, you will be able to get a rough forecast for the upcoming months.
There are even built-in forecasting functions in Excel. Check this out: https://www.excel-easy.com/examples/forecast.html |
H: How do stacked CNN layers work?
The internet is full of pictures like this:
But how are the second/third/etc CNN layers able to extract features when the features are already extracted by the previous layers?
For example, the mid-level feature in the picture has a nose. When we apply this "nose" filter, the output feature map will be an image without the nose, right?
Then, this feature map is passed to the next CNN layer, but how is it able to extract the "high-level feature" if the feature map given to it doesn't contain the nose? And the more layers we stack in CNN, the less meaningful data will be extracted in the latter layers.
AI: I think you might be misunderstanding the phrase "extract" here. Think of it as "gets activated by" instead.
For example, the "nose filter" gets activated by inputs which look like a human nose (more precisely: it gets activated by activation maps of previous layers which correspond to a nose in the input image). And simply put, the following higher level feature maps in your example, which might encode a human face, then get activated if the previous layers contain activated feature maps for a human nose, mouth, eye etc. This article explains it well.
As NNs in general, this process is loosely inspired by how visual perception in humans and other animals works. |
H: Why is batch size limited by RAM?
The parameters of the network are changed to minimize the loss on the mini-batch, but usually the loss on the mini-batch is just the (weighted) sum of losses on each datum individually. Loosely, I would represent this as
$$ dT = \frac{1}{\text{batch\_size}} \sum_{i \in \text{batch}} dT_i$$
Where $dT$ is the update of the net parameters for the batch and $dT_i$ is only for one training example.
Why can't $dT$ be calculated 'on-line' then, where the only RAM needed is on the partial sum for $dT$ and whichever $dT_i$ you are working on at that moment?
AI: Something similar to what you describe is frequently used in some domains and it is called gradient accumulation. In layman terms, it consists of computing the gradients for several batches without updating the weight and, after N batches, you aggregate the gradients and apply the weight update.
This certainly allows using batch sizes greater than the size of the GPU ram.
The limitation to this is that at least one training sample must fit in the GPU memory. If this is not the case, other techniques like gradient checkpointing can be used. |
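For example, a minimal PyTorch sketch of gradient accumulation (tiny hypothetical model and data, just to show the pattern):
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
loader = DataLoader(TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))),
                    batch_size=4)

accum_steps = 8                 # effective batch size = accum_steps * 4 = 32
optimizer.zero_grad()
for i, (xb, yb) in enumerate(loader):
    loss = criterion(model(xb), yb) / accum_steps   # scale so the accumulated sum is a mean
    loss.backward()                                 # gradients add up in each parameter's .grad
    if (i + 1) % accum_steps == 0:
        optimizer.step()                            # one weight update per accum_steps mini-batches
        optimizer.zero_grad()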
H: How do we get the coefficients and intercept in Logistic Regression?
I'm using Codecademy to learn about logistic regression and there are some holes in my understanding of this topic.
import numpy as np
hours_studied = np.array([[ 0],[ 1],[ 2],[ 3],[ 4],[ 5],[ 6],[ 7],[ 8],[ 9],[10],[11],[12],[13],[14],[15],[16],[17],[18],[19]])
calculated_coefficients = np.array([[0.20678491]])
intercept = np.array([-1.76125712])
def log_odds(features, coefficients, intercept):
    return np.dot(features, coefficients) + intercept
# z = b0 + b1x1 + b2x2 + ....
# log odds measures how likely it is that the data sample belongs to the positive class
def sigmoid(z):
    denominator = 1 + np.exp(-z)
    return 1/denominator
## In order to map predicted values to probabilities, we use the sigmoid function.
# The function maps any real value into another value between 0 and 1.
# In machine learning, we use sigmoid to map predictions to probabilities
# Create predict_class() function here
# features = matrix
# coefficients = vector
def predict_class(features, coefficients, intercept, threshold):
    calculated_log_odds = log_odds(hours_studied, calculated_coefficients, intercept)
    probabilities = sigmoid(calculated_log_odds)
    return np.where(probabilities >= threshold, 1, 0)
# If a value in array_to_check is above threshold, the output is 1. If a value in array_to_check is below threshold, the output is 0.
# Make final classifications on Codecademy University data here
final_results = predict_class(hours_studied, calculated_coefficients, intercept, .5)
print(final_results)
hours_studied, calculated_coefficients and intercept were given by codecademy.
I'm not sure how to get these inputs myself to plug into the model, and Codecademy didn't explain either; hours_studied I understand.
Could someone explain the steps that come before defining log_odds, sigmoid, etc., i.e. how the coefficients and intercept are obtained?
AI: Maybe this part of the code is trying to teach you the activities that come after ML training, i.e. prediction.
The coefficients and the intercept are the parameters of the model. They are determined from the training data (features and labels) by the training process.
You follow these steps (very high level):
Get data - X, Y
Define the model, i.e. logistic regression
Train the model using the data - this is where you get the coefficients/intercept
Predict using the model
In your work, you are at the last step.
You have a model ready with its parameters, and you also have the data, i.e. the features (X).
It is as good as calling a function get_answer(x), where get_answer is your model with its parameters.
Then you say:
if answer > THRESHOLD (normally 0.5, but we find the best value by trial) it's Class A, else it's Class B
You may learn logistic regression from scratch from the blog machinelearningmastery. |
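For completeness, a sketch of the training step that produces those values with scikit-learn (the pass/fail labels below are made up; Codecademy supplies the real ones):
import numpy as np
from sklearn.linear_model import LogisticRegression

hours_studied = np.arange(20).reshape(-1, 1)
passed_exam = np.array([0]*10 + [1]*10)        # hypothetical 0/1 labels

model = LogisticRegression()
model.fit(hours_studied, passed_exam)          # the training step

print(model.coef_)        # plays the role of calculated_coefficients
print(model.intercept_)   # plays the role of intercept
print(model.predict(hours_studied))            # class predictions at the default 0.5 threshold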
H: Function extrapolation
I have a list
[1.0, 0.488, 0.300, 0.213, 0.163, 0.127]
Plot (I don't have enough reputation to post an image).
I need to extrapolate this function 15 more points further; the asymptote is 0.
I figured out that maybe I need a custom kernel for SVR (something like y=1/x), but I haven't found any examples of how to implement it.
I will be very grateful for any examples/links/tips.
AI: Your data looks logarithmic. Try using the scipy.optimize curve_fit() function to find the approximate log coefficients. I tried several built-in python functions, but couldn't get a good fit on any of them, but they can be used as a starting point.
Finally, I ran the curve_fit() function on the data you posted, and after making a few adjustments, I was able to produce the green line on this chart. The blue line is your original data. They are so close together that it is difficult to see the difference. The yellow line is the one that the curve_fit gave me for a best fit.
I also included the extrapolated points (green line again) and the values for the next 15 points below. As you can see, the line is asymptotic to the x axis.
FYI, My adjustments included adding a np.log() to the denominator to convert the base of the log. Also, I changed the intercept to 1 as that is where your data starts. If you want to automate this, you could write some functions to iterate over different values of the log base and the intercept. I included comments in the code for more details.
import matplotlib.pyplot as plt # Matlab-style plotting
import numpy as np
import scipy
from scipy.optimize import curve_fit
x = [1.0,2.0,3.0,4.0,5.0,6.0]
x2 = np.arange(0, 22.0, 1.0)
y = [1.0, 0.488, 0.300, 0.213, 0.163, 0.127]
x = np.array(x)
y = np.array(y)
plt.plot(x,y)
# get the approximate values for the log function
data = scipy.optimize.curve_fit(lambda t,a,b: a+b*np.log(t), x, y, p0=(1, 1))
# plot those values, the fit is not very good
y1 = data[0][0] + data[0][1]*np.log(x) # yellow
plt.plot(x,y1)
# original coefficients were .91 and -.48
# adjust the 2 coefficents until you get a better fit
# first coefficient is like the intercept but for the line of x of 1, not the y axis
# I also changed the log base value by including the np.log() on the denominator,
# this gives the line a larger curve than the original
y2 = 1.0 - np.log(x2)/np.log(x2+1.9) # green
plt.plot(x2,y2)
plt.show()
data
(array([ 0.91083753, -0.48242954]),
array([[ 0.00543936, -0.00380311],
[-0.00380311, 0.00346828]]))
y2
array([ inf, 1. , 0.49069866, 0.30871637, 0.21896813,
0.16675119, 0.13310187, 0.10985155, 0.09295092, 0.08018302,
0.0702405 , 0.06230656, 0.05584666, 0.05049745, 0.04600401,
0.04218259, 0.03889766, 0.03604721, 0.03355312, 0.03135462,
0.02940378, 0.02766231]) |
H: What does it mean that an hypotesis is consistent?
I am studying concept learning, and I am focusing on the concept of consistency for a hypothesis.
Consider a hypothesis $h$; I have understood that it is consistent with a training set $D$ iff $h(x)=c(x)$, where $c(x)$ is the concept, and this has to be verified for every sample $x$ in $D$.
For example consider the following training set:
and the following hypothesis:
$h_1=<?,?,?,Strong,?,?>$
I have that this is not consistent with $D$ because for the example $3$ in $D$ we have $h(x) \neq c(x)$.
I don't understand why this hypothesis is not consistent.
In fact, consider the following hypothesis:
$h=<Sunny,Warm,?,Strong,?,?>$
this is consistent with $D$ because for each example in $D$ we have $h(x)=c(x)$.
But why is the first hypothesis $h_1$ not consistent while the second, $h$, is consistent?
Can somebody please explain this to me?
AI: I'm not especially familiar with this but from the example provided we can deduce that:
An hypothesis is a partial assignment of values to the features. That is, by "applying the hypothesis" we obtain a subset of instances for which the features satisfy the hypothesis.
An hypothesis is consistent with the data if the target variable (called "concept" apparently, here EnjoySport in the example) has the same value for any instance in the subset obtained by applying it.
First case: $h_1=<?,?,?,Strong,?,?>$. All 4 instances in the data satisfy $h_1$, so the subset satisfying $h_1$ is the whole data. However the concept EnjoySport can have two values for this subset, so $h_1$ is not consistent.
Second case: $h_2=<Sunny,Warm,?,Strong,?,?>$. This hypothesis is more precise than $h_1$: the subset of instances which satisfy $h_2$ is $\{1,2,4\}$. The concept EnjoySport always has value Yes for every instance in this subset, so $h_2$ is consistent with the data.
Intuitively, the idea is that an hypothesis is consistent with the data if knowing the values specified by the hypothesis gives a 100% certainty about the value of the target variable. |
H: Tensorflow.Keras: How to get gradient for an output class w.r.t a given input?
I have implemented and trained a sequential model using tf.keras. Say I am given an input array of size 8X8 and an output [0,1,0,...(rest all 0)].
How to calculate the gradient of the input w.r.t to the given output?
model = ...
output = np.asarray([0, 1, 0, 0, 0, 0, 0, 0, 0, 0])
input = np.random.randn(1,64)
pred = model.predict(input)
gradient = somefunction(model,input,output,pred)
Is there any way to achieve that?
I am trying to implement a project similar to Google Deep Dreams, so that given a random picture, if I mention a digit, then with each iteration I will update the input with its gradient to make the picture more like the digit that was asked.
I tried to follow other StackOverflow answers and Keras documentation. But as they are not mentioning the input shape, when someone writes input[:,:,something] I am having a hard time translating it to my requirement.
Can you provide a simple generic answer so that given an array as input and an expected output array, one can calculate the gradient of loss w.r.t the input for that exact output.
AI: You can get that from the weight updates, not sure if it is the best approach
-Save the model
-Save the weights of the first layer
-Load the model and Compile the model with SGD w/o momentum
-Set all the weights = that of the previous model
-Train with the input and output i.e. the Array for epoch=1 and batch_size=1
-Get the weights again
-Calculate the respective gradient using two weights of 1st layer
-If Gradient is very small, it might become zero due to matching digits of both the weights
w0 = model.get_weights().copy()
w0_0 = w0[0]
optimizer = tf.keras.optimizers.SGD( learning_rate=0.01, momentum=0.0, nesterov=False, name="SGD")
model.compile(optimizer=optimizer, loss='mse')
model.fit(input, output, epochs=1, verbose=1, batch_size=1)
w0_new = model.get_weights().copy()
w0_0_new = w0_new[0]
'''
w_new = w0 - learning_rate * g
g = (w0 - w_new)/learning_rate
'''
grad = (w0_0 - w0_0_new)/0.01 |
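As an alternative sketch (not the approach above, and assuming model is your trained tf.keras model), tf.GradientTape can compute the gradient of the loss with respect to the input directly, which is closer to what Deep-Dream-style input updates need:
import numpy as np
import tensorflow as tf
x = tf.Variable(np.random.randn(1, 64), dtype=tf.float32)       # the input we want to update
target = tf.constant([[0., 1., 0., 0., 0., 0., 0., 0., 0., 0.]])
with tf.GradientTape() as tape:
    pred = model(x)
    loss = tf.keras.losses.categorical_crossentropy(target, pred)
grad = tape.gradient(loss, x)    # d(loss)/d(input), same shape as x
x.assign_sub(0.01 * grad)        # one gradient step on the input itself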
H: Test for feature dependencies in time series modelling
I have time-series data that track event occurrence in 3 locations. Here's a sample:
Count Total
Location A B C
Date
2018-06-22 0 1 1 2
2018-06-23 2 1 0 3
2018-06-24 0 0 1 1
2018-06-25 2 2 1 5
2018-06-26 0 3 1 4
I would like to use the data to predict the total number of event occurrences at a given date in the future. How do I test if an event happening in one location has an impact on events happening in another location (dependency)? I believe that if an event happening in locations B and C are dependant, I should sum the 2 columns together as 1 feature in my model.
AI: How do I test if an event happening in one location has an impact on events happening in another location (dependency)?
Pearson correlation between the two columns would already give you a simple indication of whether there is a dependency relation.
A $\chi$-square test would tell you whether there is a significant difference between an observed variable (e.g. count in one location) and an expected variable (count in the other location). In other words, it can tell you whether the variables are independent or not.
The conditional probability $p(A|B)$ of a variable A given the other variable B tells you how likely the event A is assuming the event B happens. $A$ and $B$ are independent if $p(A|B)=p(A)$ (note that it's unlikely to be exactly equal in the case of a real sample).
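A minimal sketch of the first two checks, assuming the counts are in a pandas DataFrame like the one shown in the question:
import pandas as pd
from scipy.stats import pearsonr, chi2_contingency
df = pd.DataFrame({"A": [0, 2, 0, 2, 0], "B": [1, 1, 0, 2, 3], "C": [1, 0, 1, 1, 1]})
r, p = pearsonr(df["B"], df["C"])                                         # correlation between counts in B and C
chi2, p, dof, expected = chi2_contingency(pd.crosstab(df["B"], df["C"]))  # independence test
In practice you would run this on the full history, not just a handful of days.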
I believe that if an event happening in locations B and C are dependant, I should sum the 2 columns together as 1 feature in my model.
Unless you have a specific reason to do that (e.g. you want to consider a large area which includes locations B and C), this doesn't make a lot of sense:
first dependency is not "all or nothing", two variables can have a certain degree of dependency but it doesn't mean that they follow each other exactly. Therefore you would lose some information by merging them into one feature.
this would make it impossible to predict future events for a specific location, for instance B, if the two values for B and C are combined together. |
H: What does it exactly mean when we say that PCA and LDA are linear methods of learning data representation?
I have been reading on representation learning and I have come across this idea that PCA and LDA are linear methods of data representation, however, auto-encoders provide a non-linear way. Does this mean that the embedding learned by PCA can be transformed only linearly to reproduce data points ?
AI: LDA is a linear classifier since the discriminant function in LDA has the following form:
$$ \delta_i(x) = x^T\Sigma^{-1}\mu_i - \frac{1}{2}\mu_i^T\Sigma^{-1}\mu_i + \log\Pi_i $$
As you can see, the above equation is linear in $x$. In the above equation, ${\Pi_i}$ is the prior probability of class $i$, which is estimated from the training data, and ${\mu_i}$ is the mean of class $i$, also estimated from the training data. ${\Sigma}$ is the common covariance matrix of all classes (it is assumed to be the same in all classes in LDA, and that's why we get a linear boundary; if you don't assume the same covariance matrix, the classification boundary no longer remains linear in $x$).
PCA on the other hand is not a regression/classification algorithm. It is rather a feature extraction/ dimensionality reduction method, which helps you to represent your data in lower dimensions. It generally extracts the most significant $k$ features of your data. The value of $k$ is decided by you based on how many features you want to keep in your data. Or in what dimension you want to represent your data. PCA is the best representation of your data in to a lower dimensional space. This is linear transformation because of following equation :
$$ y \space = \space W^Tx $$
As you can see, this equation is also linear in x. In this equation, the matrix W is a matrix obtained from the covariance matrix of your data. The first row of matrix $W$ corresponds to the eigen vector of covariance matrix of $x$ corresponding to highest eigen value. This is because that eigen vector gives the maximum variation in your data. (This is proved in the optimization of PCA). The second row corresponds to second max eigen value because it represents direction with second highest variation in your data. You can refer to the notes of Prof. Ali Ghodsi or Prof. Andrew NG for the proof of why we select the eigen vectors in this order in PCA.
Now coming back to your question about reproducing the data only linearly. Yes, you can only reproduce your data back by a linear transformation in PCA. Why do you want to reproduce your data non-linearly when your transformation was linear? Even if you want to reconstruct your data non-linearly from the transformed data $y$, do you have enough information to do so? You just have a matrix $W$ with which you linearly transformed your data $x$ into a lower dimensional space. A non-linear reconstruction might look like this:
$$ x \space = \space y^TW_1^Ty \space+\space W^{-T}y $$
This is a non-linear reconstruction. But do you know what $W_1$ is here? Even if you try to estimate $W_1$, you'll end up getting it as a zero matrix since $y = W^Tx$ and so $W^{-T}y = x$. So, the estimation will give you nothing but zeros as the coefficients of the other non-linear factors (factors like $y^Ty$).
H: Mathematics: Can the result of a derivative for the Gradient Descent consist of only one value?
I have a problem with a task that uses the gradient descent formula:
Perform two steps of the gradient descent towards a local minimum for the function given below, using a step size of 0.1 and an initial value of [1, 1]
I only get as result of the derivative 0.4x1, because x2 does not exist.
Is this correct or should the result for the derivative be (0.4, 0)?
Note: Sorry, if my equation of the derivative is bad. I'm not a mathematician. Please, correct me, if my equation is absolutely wrong.
AI: Gradient, g(x) = 0.4*x
At [x1 = 1], Gradient,
g(1) = 0.4
x2 = x1- step*gradient
=>x2 = 1 - 0.1*0.4
=>x2 = 0.96
At [x2 = 0.96], Gradient,
g(0.96) = 0.4*0.96 = 0.384
=> x3 = 0.96 - 0.1*0.384 = 0.9216
Continue following the same steps and will reach near the minima. |
H: What kind of data (in context of trends in data) is Logistic Regression appropriate for?
I'm not able to visualise what kind of 'trends' I would have to observe in multi-featured data to be able to say 'Logistic Regression would work well here'.
For example if I have only 1 feature and if the data is something like all negative data is class 0 and all positive data is class 1. Then I can clearly say that Logistic Regression works well here.
So what kind of 'analysis' of the data (multi-featured) would I have to do to decide whether Logistic Regression would work well or not?
AI: As you may be aware, logistic regression is a standard linear model, so the following steps would be helpful:
Plot the input data and observe what the decision boundary would probably look like.
If it's completely non-linear, then other models will give you better results than logistic regression.
If the boundary is likely to be linear, then you can go with logistic regression.
H: How does the validation set get used in the training phase?
I am confused about how the validation set is used during the training phase (neural network like CNN)?
In a platform like Matlab or python(Keras), I split my dataset into train set, validation set and test set.
I knew that validation set is used to tune hyperparameters(like the number of neurons and learning rate), suppose SDG optimizer is used, how the tuning hyperparameters happened based on validation set?
Does the validation set just give an indicator of how the neural network performs on unseen data(validation set) then based on that I manually set hyperparameters? or something automatically(the optimizer) tune hyperparameters?
AI: The three way - train, validation(dev), test split helps in unbiased evaluation of the model on unseen data , that is the test set. The train and validation sets are used for training weights and hyperparameter tuning respectively. In Keras if you have set the 'validation_split' parameter in 'model.fit' then you can look at the performance of your model on the validation set and tune the hyperparameters manually in order to get a good validation set performance.Once you are satisfied with the performance of your model on the validation set, you finally evaluate your model on the test set.
Hyperparameter tuning is done manually and not automatically by Keras. |
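A minimal sketch, assuming X_train/y_train and an already compiled Keras model:
history = model.fit(X_train, y_train, epochs=50, batch_size=32, validation_split=0.2)  # last 20% held out for validation
print(history.history["val_loss"])  # inspect validation loss per epoch, then adjust hyperparameters manually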
H: Is using samples from the same person in both trainset and testset considers being a data leakage?
Suppose a neural network is built for a binary classification problem such as recognize the face as a smiley face or not, by using a dataset of 1000 persons and each person has ten images of his face.
If the dataset randomly spilt into trainset and testset by a ratio of 70:30, in this case, there is a big chance face image of same persons will be used in both the trainset and testset, so is this considered to be data leakage (train-test contamination)?
AI: Yes, this is a form of data leakage. The testing data should not be linked to the training data in any way.
Another way to think of it is, if someone were to try replicating your results with their own test set, would your test set have given you an advantage such that your results are generally better than theirs? |
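One common way to enforce a person-level split in practice is a group-based splitter; a minimal sketch, assuming person_ids holds one person identifier per image:
from sklearn.model_selection import GroupShuffleSplit
gss = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(gss.split(X, y, groups=person_ids))  # all images of a person stay on one side
X_train, X_test = X[train_idx], X[test_idx]
y_train, y_test = y[train_idx], y[test_idx]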
H: What is the most straightforward way to visualize color-coded clusters along with the cluster centers?
I have applied the kMeans Clustering algorithm to a dataframe and have gained cluster labels for each row. I had selected only two features.
There are 4 clusters.
I want to visualize the datapoints in 2D plane with color-coded clusters which I want to look like this-
Ignore the labels. I would like to plot cluster centers instead.
I have looked in many blog posts, articles etc. None was helpful.
What is the most straightforward to achieve this?
AI: There can be multiple ways, one can be -
- Plot the points with hue=cluster_number
- Plot the Centroid with a different markers
Code for 3 Clusters on 2 Iris Features -
from sklearn import datasets
import matplotlib.pyplot as plt
import seaborn as sns
iris = datasets.load_iris()
X = iris.data
y = iris.target
X=X[:,:-2]
X = (X - X.mean())/X.std()
def create_cluster(k=3):
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=k)
kmeans.fit(X)
return kmeans
kmeans = create_cluster()
y_pred = kmeans.predict(X)
centroid = kmeans.cluster_centers_
_, ax = plt.subplots(1,1,figsize=(10,6))
color = ["#e74c3c", "#34495e", "#2ecc71"]
sns.scatterplot(x=X[:,0], y=X[:,1], hue=y_pred, palette=sns.color_palette(color), ax=ax)
sns.pointplot(x=centroid[:,0], y=centroid[:,1], markers='^', join=False, ax=ax)
$\hspace{2cm}$Output - Ignore the convergence quality
$\hspace{2cm}$ |
H: Is there a deep learning method for 3D labels?
As the question says, I want to feed labels into a neural net that are three dimensional. Let's say that I have 3 possible labels and each one of my data points corresponds to a percentage of those labels. e.g, my first datapoint contains 20% of label A, 30% of label B, and 50% of label C.
Is there any architecture able to deal with this shape of label data?
AI: Since the probabilities sum up to one, you can simply treat it as a multi-class problem and use a network with a softmax at the end.
Last layer and compile -
model.add(keras.layers.Dense( 3, activation="softmax"))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
Metrics - accuracy is not appropriate here. Define a custom metric based on the interpretation of the 3 probabilities.
The labels will be as per the probability-
e.g. This is for MNIST 10 digits -
Digit 1 - [0.05, 0.55, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05]
Prediction - [0.064, 0.356, 0.059, 0.069, 0.068, 0.050, 0.044, 0.122, 0.064, 0.101]
Code for MNIST - Colab link |
H: Training CNN on a huge data set
I am trying to train an AlexNet image model on the RVL-CDIP Dataset. The dataset consists of 320,000 training images, 40,000 validation images, and 40,000 test images.
Since the dataset is huge I started training on 500 (per class) sample from the training set. The result is below:
we can see from the graph above that the validation loss started decreasing at a much slower rate around epoch 20 while training loss continued decreasing the same. This means our model started overfitting the data? I assume that this is probably because the data i have in the training set is not enough to get better results on the validation set? (validation data is also a 500 (per class) sample from the whole validation set)
is it a correct approach to train the model on a small sample (eg. 500 images per class), save the model, load the saved model weights and then train again with a larger sample (eg 1000 images)? My intution is that this way the model would have new data every new run that helps it to learn more about the validation set. And if this approach is correct, when training the model for the second time with a larger sample, should the training sample include images (some or all) that were trained in the first model?
You can find the full code with results here
AI: It reminds me of this question, the training loss is decreasing faster than the validation loss. I understand there is some overfitting, as the model is learning some patterns that are only in the training set, but the model is still learning some patterns that are more general, as the validation loss is decreasing as well. To me it would be more of an issue if the validation loss increased, but it is not the case.
Edit
Usually neural networks are trained with all the data; training with mini-batch gradient descent already does what you mention in your approach, without the need to save and reload the model. So I would train with as much data as possible, to have a model with the least possible variance. If the whole dataset doesn't fit into memory and you are not feeding the data using generators, I recommend using them, or train with as large a sample as possible given your memory limitations.
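A minimal sketch of generator-based training (the directory paths are hypothetical; flow_from_directory expects one sub-folder per class):
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_gen = ImageDataGenerator(rescale=1./255).flow_from_directory(
    "rvl-cdip/train", target_size=(227, 227), batch_size=32, class_mode="categorical")
val_gen = ImageDataGenerator(rescale=1./255).flow_from_directory(
    "rvl-cdip/val", target_size=(227, 227), batch_size=32, class_mode="categorical")
model.fit(train_gen, validation_data=val_gen, epochs=30)  # streams batches from disk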
H: How to compare two clustering solutions when their labelling differs
I am planning to test the reliability of a clustering approach for some data. My plan is to repeatedly (with replacement) draw a number of random subsample pairs (e.g. 2x 10% of the total data), run the clustering on both individually, and then compare the results. The issue is that I am using HDBSCAN, which not only creates a non-fixed number of clusters (for different sets of data but same params), but it also therefore labels clusters differently since k is not defined, and the input data will always have slightly different structure due to variability.
I tested this by using the same HDBSCAN parameters on two subsamples (A, B) of my data, and my issue is quite easy to see. The cluster labels with corresponding samples for A were:
{-1: 4306, 0: 1737, 1: 2999, 2: 72068, 3: 20628, 4: 3120}
while for B they were:
{-1: 4478, 0: 1711, 1: 3048, 2: 72089, 3: 3123, 4: 20408}.
From this, it seems that the solution is very close until we compare label 3. It looks like label 3 of A corresponds to label 4 of B.
My initial thought was that I could just relabel them both in order of each cluster's sample size. But this assumes that the two solutions will be similar across many tests (which is ultimately the whole point of the testing in the first place). So my next thought is I could set the constraints that (1) there should be a "similar" number of samples in the noise group, and (2) there should be the same number of clusters found. If these two conditions are met then I could relabel the clusters by order of their sample size, and then make my comparison using ARI or AMI.
I am doubtful that this is good, because I don't believe it is necessarily true that (even given the two constraints) two clusters labelled the same on the basis of their sample should necessarily correspond to the same "global" cluster. It therefore seems problematic to me but I can't think of an alternative.
Is the above approach generally reasonable? If not, is there something else I could do to assess the reliability/stability of HDBSCAN solutions? As an alternative, would it be better to just compute the DBCV score, %noise, and the number of clusters, and then use this as an indication of the quality of the clustering?
AI: This is only a partial answer since I'm not familiar with HDBSCAN, hopefully somebody else can provide a more complete answer.
As far as I understand you need to find which cluster in A corresponds to which cluster in B, i.e. an alignment between the clusters labels of A and of B. It's not recommended to match based only on the size, since it could happen by chance that a cluster in A has the same size (or similar size) as a cluster in B.
Since the instances are different, you would have to rely on how the method represents the clusters.
For example probabilistic clustering methods represent each cluster as a distribution over the features, so one can use a distance/similarity measure between these distributions.
With k-means one would compare the centroids and match the pairs of clusters for which the distance is the shortest.
I'm not familiar with HDBSCAN so I don't know how the clusters are represented inside the model: whatever this is, the idea would be to compare each internal representation of a cluster in A vs. the same in B. |
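As a sketch of the alignment idea (assuming you can compute one representative point per cluster for both solutions, e.g. the mean of each cluster's members, stored in hypothetical arrays centroids_a and centroids_b of shape (k, n_features)), the Hungarian algorithm gives the one-to-one matching that minimises the total distance:
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist
cost = cdist(centroids_a, centroids_b)     # pairwise distances between cluster representatives
rows, cols = linear_sum_assignment(cost)   # optimal one-to-one matching
mapping = dict(zip(rows, cols))            # label i in A corresponds to label mapping[i] in B
You could then relabel B using this mapping when comparing cluster-level statistics between the two runs.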
H: how to create sklearn pipeline object using predtrained standardscalar object
I have a pretrained sklearn model and a pre-trained StandardScaler object saved as pickles, and now I want to create a sklearn pipeline using both of them.
I need a sklearn pipeline to convert it into ONNX format.
I couldn't do it, because the pipeline takes the StandardScaler class and then we need to fit the pipeline using data, but in my case the model and scaler are both already fitted.
AI: This works the way you would want out of the box.
pipeline takes standard scaler class
No, pipelines get initialized with estimator instances, not the classes. (This is why you need the parentheses in the steps, e.g. StandardScaler().)
That is, the following works:
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
X, y = load_breast_cancer(return_X_y=True)
scaler = StandardScaler()
lr = LogisticRegression()
X_sc = scaler.fit_transform(X)
lr.fit(X_sc, y)
pipe = Pipeline(steps=[('scale', scaler),
('lr', lr)])
# Predicting would fail if the pipeline had unfitted estimators:
pipe.predict_proba(X) |
H: Does the test set has to be in [0,1] range?
I have standardized training set using
mean = XTrain.mean()
XTrain-=mean
std = XTrain.std()
XTrain/=std
And then used mean and std to standardize validation and test sets. The training and validation sets have values that are greater than 1 and less than zero is that okay?
AI: Standardization centers the values around a mean of $0$ with standard deviation $1$. Therefore, having values smaller than $0$ or greater than $1$ is to be expected. If you want to make sure values are between $0$ and $1$ you need to normalize the data instead.
The two procedures (described, for example, in "Python Machine Learning" by Raschka) are standardization, $x' = \frac{x - \mu}{\sigma}$, and min-max normalization, $x' = \frac{x - x_{min}}{x_{max} - x_{min}}$.
Be aware though to apply the procedure to your test data with parameters obtained from the training data (in case of standardization: mean and std. dev. of train data).
Sklearn has methods for standardization and normalization which you might want to have a look at. |
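A minimal sketch using the scikit-learn classes mentioned above (XVal and XTest stand for your validation and test arrays):
from sklearn.preprocessing import StandardScaler, MinMaxScaler
scaler = StandardScaler()               # or MinMaxScaler() to map values into [0, 1]
XTrain = scaler.fit_transform(XTrain)   # fit on the training data only
XVal = scaler.transform(XVal)           # reuse the training mean/std
XTest = scaler.transform(XTest)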
H: Fully-Connected DNN: Compute the numbers of free parameter in a DNN
A fully-connected DNN has layer sizes of 3-3-4-2, where the first layer size represents the input layer. We assume that all layers are affine ones (no ReLU). Give the dimensions of all weight matrices and all bias vectors in the network and compute the total number of free parameters in this DNN.
According to this task the first layer size represents the input layer, so it must be 3.
a(0) = x = 3
If I know the input layer, so 3-4-2 are also the sizes of the bias vectors.
I know now the dimension of the input layer and the bias vector. The columns of W must be also 3, because of the size of the input layer. The rows of W must be equal to the size of the bias vector.
So I computed the following dimensions:
W(1) = W33, b3, a(1) = 3
W(2) = W43, b4, a(2) = 4
W(3) = W24, b2, a(3) = 2
But how to compute the total number of free parameters in this DNN?
AI: Weights at first layer will depend on the Dimension of the input too.
Let's assume, This is your Network. Input has 5 Features
$\hspace{6cm}$ Image credit - ML Visuals by dair.ai
Per layer weights and biases
\begin{array} {|r|r|}
\hline
&Weights &Biases\\
\hline
Layer-1 &5*3 = 15 &3\\
\hline
Layer-2 &3*3 = 9 &3\\
\hline
Layer-3 &3*4 = 12 &4\\
\hline
Layer-4 &4*2 = 8 &2\\
\hline
\end{array}
Total free parameters (with the assumed 5-dimensional input): (15+3) + (9+3) + (12+4) + (8+2) = 56.
Check with Code
import tensorflow as tf
from tensorflow import keras
model = keras.Sequential()
model.add(keras.layers.Dense( 3, activation="linear",input_shape=(5,)))
model.add(keras.layers.Dense( 3, activation="linear"))
model.add(keras.layers.Dense( 4, activation="linear"))
model.add(keras.layers.Dense( 2, activation="linear"))
model.summary()
#model.get_weights() #Can check weights matrix with this
Output |
H: LSTM Sequential Model question re: ValueError: non-broadcastable output operand with shape doesn't match broadcast shape
This is probably a very simplistic question but I have not been able to find resources that directly address this. I know I must be understanding this incorrectly; I'm not quite sure how.
I've noticed that if the number of units in the last Dense output layer of my LSTM sequential model does not equal the number of features (columns), I get an error.
If you wanted to output 1 feature in the output (Dense) layer, and you had several input features, how would you:
Do that without errors
identify which feature is being outputted, or does Keras provide outputs for each feature and you have to identify which one you want?
I want to train the model with multiple features, but I'm only interested in one feature's prediction.
Example: I have data with 'open', 'low', 'high' and 'close' and 'volume columns (5 features). If I set the number of units in the last Dense layer to anything other than 5, I get a broadcast error telling me I have inconsistent shapes in the model. If I put 5 units in the last Dense output layer, I get no errors.
Example:
def create_model(self, epochs, batch_size):
model = Sequential()
# Adding the first LSTM layer and some Dropout regularisation
model.add(LSTM(units=128, return_sequences=True,
batch_size=batch_size, input_shape=(TIME_STEP, self.X_train.shape[2])))
# Adding a second LSTM layer and some Dropout regularisation
model.add(LSTM(units=128, return_sequences=True))
model.add(Dropout(DROPOUT))
# Adding a third LSTM layer and some Dropout regularisation
model.add(LSTM(units=128, return_sequences=True))
model.add(Dropout(DROPOUT))
# Adding a fourth LSTM layer and some Dropout regularisation
model.add(LSTM(units=128, return_sequences=False))
model.add(Dropout(DROPOUT))
# Adding the output layer
model.add(Dense(units=5))
model.summary()
# compile model
adam = optimizers.Adam(lr=LR)
model.compile(optimizer=adam, loss='mae')
model.fit(self.X_train, self.y_train, epochs=EPOCHS, batch_size=BATCH_SIZE)
If I enter 'units=1' in the Dense layer, I get the following error:
ValueError: non-broadcastable output operand with shape (11784,1) doesn't match the broadcast shape (11784,5)
AI: The final dense layer's units should be equal to the number of features in your y_train.
Suppose your y_train has shape (11784,5) then dense layer's units should be 5 or if y_train has shape (11784,1), then units should be 1. Model expects final dense layer's units equal to the number of output features.
You have to identify which features you need in input and output.
'open', 'low', 'high' and 'close' and 'volume' - these are your features.
What do you want to predict? Is this a classification or regression problem? What is your problem statement?
Based on these, identify your input and output features.
Put input features in x_train and output features in y_train.
Now, in the final dense layer, you should use units equal to the number of features in y_train. |
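A minimal sketch, assuming 'close' is the feature you want to predict and that y_train_all is your current (n_samples, 5) target array (both names are placeholders for your own variables):
close_idx = 3                                       # position of 'close' among the 5 features
y_train = y_train_all[:, close_idx:close_idx + 1]   # shape (n_samples, 1)
model.add(Dense(units=1))                           # final layer now matches the single target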
H: Is providing class weight to neural network enough for imbalanced binary classification?
I have a highly imbalanced binary classification problem, probably 95:5 for two classes. I don't want to perform resampling as the data is already huge and training it would just take more time. (I'm also aware of down sampling)
But my question is , is providing class weights (let's say computed by scikit-learn's compute class weight) enough?
or there is any other method ?
model.fit(X,y,class_weight=class_weight)
AI: Referring to an answer to a similar question, you don't have any reason to handle unbalance from the beginning. An imbalance of 95:5 isn't that big, I'd start with the regular training and if that doesn't work try more sophisticated things. |
H: Should you use random state or random seed in machine learning models?
I'm starting to study machine learning. All the examples I saw, the person that created the ML model used a random state or a random seed to stop the randomness of the process. But, in real life, when you're trying to apply a machine learning model into an actual project of a company, should you use any random state or seed? Is it right (in data science terms) to set a random state to the machine learning model and make reproducible results?
AI: Your intuition is correct. You can set the random_state or seed for a few reasons:
For repeatability, if you want to publish your results or share them with other colleagues
If you are tuning the model, in an experiment you usually want to
keep all variables constant except the one(s) you are tuning.
I usually set the random_state variable, not the random seed while tuning or developing, as this is a more direct approach. When you go to production, you should remove the random_state and/or random_seed settings, or set to None, then do some cross validation. This will give you more realistic results from your model. |
H: Fitting multiple line
Short version:
How can I find a function that maps X to Y when data looks like this.
Note:
For a pair of emissivity and distance relation between temperature and
raw_thermal_data is linear.
Long Version:
I am working on a project which uses thermal(IR) camera. Now we extract temperature from sensor reading (raw thermal data )
For some reason I need to find a function that maps temperature data to raw thermal data.
Now,
temperature = F ( raw_thermal_data, emissivity, distance )
I am trying to find,
raw_thermal_data = F1 ( temperature, emissivity, distance )
For a pair of emissivity and distance relation between temperature and raw_thermal_data is linear.
Looks like for every pair of emissivity and distance, intercept of the line is different.
Any thoughts?
AI: Not knowing the data in detail, this looks like a linear model with "dummies".
A standard linear model looks like ($\beta_0$ is the intercept, $\beta_1$ is the slope):
$$ y = \beta_0 + \beta_1 x + u.$$
Now, when you have two distinct "groups" for which there is a "flag" in the data, you can assign a indicator variable or "dummy" (say $d$, a vector with =1 "true" or =0 otherwise). You can add this to your linear model:
$$ y = \beta_0 + \beta_1 x + \beta_2 d + u. $$
$d$ introduces a separate intercept for the assigned group ($d=1$).
You can also add "interaction terms" to allow for a separate slope for group $d$ by simply multiplying $x$ and $d$.
$$ y = \beta_0 + \beta_1 x + \beta_2 d + \beta_3 x d + u. $$
Note that since there already is one intercept in the model (the $\beta_0$), you can only add "contrasts" to the intercept. So when you have $i$ groups for which you want an individual intercept, you would add $i-1$ indicators/dummies to the model. For the "reference group", the intercept will be $\beta_0$ and for the group identified by the "dummy" ($d$ above), the intercept would be $\beta_0 + \beta_2$.
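A minimal sketch with statsmodels' formula API, assuming a dataframe df with hypothetical columns raw (raw_thermal_data), temp (temperature) and group (a label for each emissivity/distance pair):
import statsmodels.formula.api as smf
m1 = smf.ols("raw ~ temp + C(group)", data=df).fit()   # separate intercept per group
m2 = smf.ols("raw ~ temp * C(group)", data=df).fit()   # separate intercept and slope per group
print(m2.summary())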
H: Pandas copy() different columns from different dataframes to a new dataframe
I have 2 dataframes that are coming from 2 different Excel files. I want to extract some columns from one file and other columns from the second file to print a new dataframe with the copied columns.
I copied 2 columns from different dataframes (df1 and df2) but I get print only one of them (the last one) in df3. How can get all of them in the df3?
import pandas as pd
df1 = pd.read_csv('ASX_public_DB.csv', sep=';', skiprows=5)
df2 = pd.read_excel('ASX_delisted_6months.xlsx')
df3 = df1[['ticker_symbol']].copy()
df3 = df2[['ASX code']].copy()
print(df3)
AI: You are overwriting df3.
Method-1:
import pandas as pd
df1 = pd.read_csv('ASX_public_DB.csv', sep=';', skiprows=5)
df2 = pd.read_excel('ASX_delisted_6months.xlsx')
df3 = df1[['ticker_symbol']].copy()
df3['ASX code'] = df2[['ASX code']].copy()
print(df3)
Method-2:
import pandas as pd
df1 = pd.read_csv('ASX_public_DB.csv', sep=';', skiprows=5)
df2 = pd.read_excel('ASX_delisted_6months.xlsx')
df3 = pd.DataFrame()
df3['ticker_symbol'] = df1['ticker_symbol'].copy()
df3['ASX code'] = df2['ASX code'].copy()
print(df3)
H: Mathematics: Writing down a three-class classifier confusion matrix
Confusion matrix: A three-class classifier is evaluated on a test set of 900 samples which contains all three classes in equal proportions.
• Classes 2 and 3 are always classified correctly
• Class 1 is confused with class 2 in 50% of the cases,
and with class 3 in 20% of the cases.
Write down the confusion matrix!
Like the name "confusion matrix" already says, I'm confused.
I'm using a binary classification confusion matrix which looks like that
I show you this matrix, because there are a lot of different looking classification matrices with a different order of the true positives, true negatives and so on.
According to this task, I must use a three-class confusion matrix. So first I divided the 900 samples by 3, getting 300 for each class. Then I subtracted 50% and 20% of 300 to get 90 correctly classified samples for the first class.
I read, the row index is for the true class and the column index is, what the classifier says.
My confusion Matrix looks like this:
I think, the confusions of class 1 are false positives.
Did I write the matrix correctly?
AI: No, you did it incorrectly, you need to switch rows with columns (transpose). Next time, please check if a similar question has not been asked before. For the explanation, see https://stats.stackexchange.com/questions/318780/how-to-understand-confusion-matrix-for-3x3 . |
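For reference, assuming the convention rows = true class and columns = predicted class, the numbers from the exercise give:
            pred 1   pred 2   pred 3
true 1        90      150       60
true 2         0      300        0
true 3         0        0      300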
H: Handling features with multiple values per instance in Python for Machine Learning model
I have a dataset which contains medical data about children and I am developing a predictive machine learning model to predict adverse pregnancy outcomes. The dataset contains mostly features with a single value per child, e.g. gender = ["Male", "Female].
However, I have some features that have multiple values per child, such as the abdominal circumference which has been recorded multiple times per child, as such:
ChildID abdomcirc
0 1 273
1 1 267
2 1 294
3 2 136
4 2 248
So in the above table child 1 has 3 values for abdomcirc and child 2 has two values for abdomcirc. Adding this feature to the remaining dataset (comprised of single observational features) will result in nearly duplicate rows, apart from the different values for abdomcirc, like so:
ChildID gender diabetes birthroute abdomcirc
0 1 Male No Normal 273
1 1 Male No Normal 267
2 1 Male No Normal 294
3 2 Female Yes csection 136
4 2 Female Yes csection 248
I am unsure what the best way to deal with these features is, without merging the data and having near-duplicate rows. I have considered the following:
Using python list type for abdomcirc. However, I do not know if a machine learning model can handle this data type. So my data will look something like this
ChildID gender diabetes birthroute abdomcirc
0 1 Male No Normal [273, 267, 294]
1 2 Female Yes csection [136, 248]
Transforming abdomcirc into a single observational feature by calculating the mean (although I am not sure how useful this information would be for my predictive model) like so:
ChildID gender diabetes birthroute abdomcirc
0 1 Male No Normal 278
1 2 Female Yes csection 192
I have tried looking for resources to help me with this but have not been very successful, maybe because I am not typing the correct keywords or something. So, I would appreciate your opinions and helpful resources. Many thanks!
AI: A possible resource is featuretools, they do feature engineering on data that has many records. Their examples are not from medical cases but I think it should work for you too.
You can also manually build several features. For instance, given a list of abdomcirc, you can compute its:
mean
maximum
minimum
variance
difference from minimum to maximum
last value (if they are sorted by date)
number of unique values
These features would get most of the information of the abdomcirc list, and this should help your modelling.
I wouldn't go for the first approach of giving lists to the algorithm, although it is possible, I think it is a relatively advanced thing and I wouldn't go for it unless the simpler approaches don't work. |
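A minimal pandas sketch of the manual approach, assuming the long-format table shown in the question is in a dataframe df:
import pandas as pd
agg = (df.groupby("ChildID")["abdomcirc"]
         .agg(mean="mean", max="max", min="min", var="var",
              range=lambda s: s.max() - s.min(), last="last", n_unique="nunique")
         .reset_index())
# merge the aggregated features back onto the one-row-per-child table (children_df is hypothetical)
children = children_df.merge(agg, on="ChildID", how="left")
Missing aggregates (e.g. the variance for a child with a single measurement) can then be imputed or flagged.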
H: Does label encoding an entire dataset cause data leakage?
I have a dataset on which one of the features has a lot of different categorical values. Trying to use a LabelEncoder, OrdinalEncoder or a OneHotEncoder results in an error, since when splitting the data, the test set ends up having some values that are not present in the train set.
My question is: if I choose to encode my variables before splitting the data, does this cause data leakage?
I'm aware that I shouldn't perform any normalization or educated transformations on the data before splitting the dataset, but I couldn't find a solution for this problem inside scikit-learn.
Thanks in advance for any responses.
Edit: This particular features has very high cardinality, with around 60k possible values. So using scikit-learn's OneHotEncoder with handle_unknown set to ignore would introduce too many new columns to the dataset.
AI: The cleanest solution would be to apply scikit's OneHotEncoder with the handle_unknown parameter set to "ignore":
handle_unknown{‘error’, ‘ignore’}, default=’error’
Whether to raise an error or ignore if an unknown categorical feature is present during transform (default is to raise). When this parameter is set to ‘ignore’ and an unknown category is encountered during transform, the resulting one-hot encoded columns for this feature will be all zeros. In the inverse transform, an unknown category will be denoted as None.
Other manual solutions are described in this and this question on Stackoverflow, for example.
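A minimal sketch, fitting the encoder on the training split only (the column name is hypothetical):
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder(handle_unknown="ignore")
enc.fit(X_train[["category_col"]])                     # learn categories from training data only
X_train_enc = enc.transform(X_train[["category_col"]])
X_test_enc = enc.transform(X_test[["category_col"]])   # unseen categories become all-zero rows
Note that OneHotEncoder returns a sparse matrix by default, which helps with the very high cardinality mentioned in the edit.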
H: Which is the best method for Neural Network Layers in Keras
In keras we can create neural network layers in many ways.
1. Sequential API.
for example
model = Sequential()
2. Functional
for example
x1 = Input(shape=(2,))
x2=Dense(2)(x1)
3. Subclassing
for example
class Mymodel(keras.layers.Layer):
def __init__(self,output):
My Question is:
1. Which one should I use, and what are their pros and cons?
AI: The sequential API is the simplest. It allows you to declare a sequence of layers with an single input and a single output. Naturally, this API is a good choice for sequential networks - those where data flows through each layer in sequence.
Pros: straightforward to read and write.
Cons: Difficult to accommodate multiple inputs/outputs. Cannot define arbitrary acyclic networks. No layer reuse.
The functional API allows you more freedom in how you connect inputs and outputs of different layers. You can route the output of any layer to any other layer, which allows you to define arbitrary acyclic graphs. Use this API when you need a network architecture which is not sequential.
Pros: More flexibility. Can define arbitrary networks with multiple inputs/outputs, residual layers, etc.
Cons: More effort to read/write. Higher potential for bugs (since you have to connect layers manually)
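A minimal sketch of the same small network written both ways (the layer sizes are arbitrary):
from tensorflow import keras
from tensorflow.keras import layers
# Sequential API: a plain stack of layers
seq_model = keras.Sequential([
    layers.Dense(8, activation="relu", input_shape=(2,)),
    layers.Dense(1, activation="sigmoid"),
])
# Functional API: the same network, but layers are wired explicitly
inputs = keras.Input(shape=(2,))
x = layers.Dense(8, activation="relu")(inputs)
outputs = layers.Dense(1, activation="sigmoid")(x)
func_model = keras.Model(inputs=inputs, outputs=outputs)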
Subclassing is actually a different mechanism than the sequential or functional API. The APIs above are used to build custom networks by linking together pre-existing layers.
By contrast, subclassing keras.layers.Layer is used to build your own custom layers. This is useful if you are doing cutting-edge research or if you are trying to implement some technique from the literature that is not part of the standard Keras library. After you have created a custom layer, you can use it with either the sequential or functional APIs to include your layer in a network.
Pros: Ultimate flexibility - you can make your layer do whatever you like.
Cons: Massive effort compared to using pre-built layers. |
H: Any useful tips on transfer learning for a text classification task
I am doing a supervised binary text classification task.
I want to classify the texts from site A, site B, and site C.
The in-domain performance looks OK for texts of each site. (92%-94% accuracy).
However, if I applied the model trained on texts of one site directly onto texts of another site(without fine-tuning), the performance downgrades a lot. (7%-16% downgrade for accuracy).
Approaches I already tried:
Doc2vec embedding(trained on texts from one site) + logistic regression.
Bert embedding + logistic regression. (Using bert-as-a-service to generate the embeddings based on google pre-trained bert models).
TF-IDF + logistic regression.
Pre-trained Word2vec embedding(average word embedding for text) + logistic regression.
All of those approaches don't work very well.
I knew that the performance downgrade is unavoidable, but I would like to get a maybe 3% - 5% downgrade.
AI: Generally the task of recognizing one type of text against "anything else" is a quite difficult problem, since there is so much diversity in text that there cannot be any good representative sample of "anything else".
Typically this problem is treated as a one-class classification problem: the idea is for the learning algorithm to capture what represents the positive class only, considering anything else as negative. To my knowledge this is used mostly for author identification and related stylometry tasks. The PAN workshop series offers a good deal of state-of-the-art methods and datasets around these tasks.
It is also possible to frame the problem as binary classification, but then one must be very creative with the negative instances in the training set. Probably the main problem with your current approach is this: your negative instances are only "randomly selected among all other topics of the site". This means that the classifier knows only texts from the site on which it is trained, so it has no idea what to do with any new text which doesn't look like anything seen in the training data. A method which has been used to increase the diversity of the negative instances is to automatically generate google queries with a few random words which appear in one of the positive instances, then download whatever text Google retrieves as negative instance.
Another issue with binary classification is the distribution of positive/negative instances: if you train a model with 50/50 positive/negative, the model expects that by default there is 50% chance for each. This can cause a huge bias when applied to a test set which contains mostly negative instances, especially if these don't look like the negative instances seen during training.
Finally, be careful about the distinction between semantic topic and writing style, because the features for these two are usually very different: in the former case the stop words are usually removed and the content words (nouns, verbs, adjectives) are important (hence one uses things like TFIDF). In the latter it's the opposite: stop words and punctuation should be kept (because they are good indicators of writing style) whereas content words are removed because they tend to bias the model towards the topic instead of the style. In stylometry, features based on character n-grams have been shown to perform well... even though it's not very clear why it works!
H: how come accuracy_score recognizes the positive label and precision_score does not?
I am executing this code which works perfectly for me:
(I only have 'positive' and 'negative' sentiments):
from sklearn import metrics
print('Accuracy:',metrics.accuracy_score(test_sentiments, predicted_sentiments))
print('Precision:',metrics.precision_score(test_sentiments, predicted_sentiments, pos_label='positive'))
My question is: how come accuracy_score recognizes the positive label and precision_score does not?
ps: if I execute:
print('Precision:',metrics.precision_score(test_sentiments, predicted_sentiments))
or
print('Accuracy:',metrics.accuracy_score(test_sentiments, predicted_sentiments, pos_label='positive'))
They both fail.
AI: Accuracy is symmetric in the naming of positive/negative classes, but precision is not: for accuracy, it doesn't matter which class is "positive." So accuracy_score doesn't have a parameter pos_label, and will error if you try to pass that parameter; meanwhile precision_score has default pos_label=1, so if your labels don't include 1 and you leave the parameter to the default, you'll get an error. |
H: Back propagation through a simple convolutional neural network
Hi I am working on a simple convolution neural network (image attached below). The input image is 5x5, the kernel is 2x2 and it undergoes a ReLU activation function. After ReLU it gets max pooled by a 2x2 pool, these then are flattened and headed off into the fully connected layer. Once through the fully connected layer the outputs are converts into Softmax probabilities. I've propagated froward through the network and am now working on back propagation steps. I have taken the derivative of cross entropy and softmax, and calculated the weights in the fully connected layer.
Where I get confused is how to preform back propagation through Max pooling and then ultimately find the derivatives of the weights in the convolution layer.
What I've found online is that you need to find the derivative of the loss with respect to flattened layer, but I am unsure on how you do that.
If I could get some help with an explanation, ideally with equations it would be awesome.
Cross posted in stack overflow (https://stackoverflow.com/questions/63022091/back-propagation-through-a-simple-convolutional-neural-network)
AI: The backpropagation algorithm attributes a penalty per weight in the network. To get the associated gradient for each weight we need to backpropagate the error back to its layer using the derivative chain rule.
Flattening layer
The derivative of a layer depends on the function that is being applied. In the case of the flattening layer it is simply reshaping (a mapping) the values. Thus no additional loss will be added at this layer. All you need to know is how the flattening occurs.
For example if the forward pass flattening is
$flatten\begin{pmatrix}
a & b\\
c & d
\end{pmatrix} = \begin{pmatrix}
a \\
b \\
c \\
d
\end{pmatrix}$,
then you can easily map the associated cost so far back to the $2 \times 2 \times 1$ feature map.
Max pooling layer
In the forward pass the max pooling layer is taking the maximum value in a $3 \times 3$ window that is passed along your image. For example the bold values in the first $3 \times 3$ window would have a maximum of $11$.
$maxpooling \begin{pmatrix}
\bf{1} & \bf{2} & \bf{3} & 4 \\
\bf{5} & \bf{6} & \bf{7} & 8 \\
\bf{9} & \bf{10} & \bf{11} & 12 \\
13 & 14 & 15 & 16
\end{pmatrix} = \begin{pmatrix}
\bf{11} & 12\\
15 & 16
\end{pmatrix}$
Thus the resulting error backpropagation would only pass through the maximum values which were passed down by the forward pass. For all other values the error term would not backpropagate. Thus the current error matrix you had backpropagating until this point would be multiplied by
$\begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 \\
0 & 0 & 1 & 1
\end{pmatrix}$
Thus only 4 error terms would continue onto earlier layers.
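A small NumPy sketch of this routing, using the same 4x4 input and 3x3 window with stride 1 as above (the upstream 2x2 error values are made up):
import numpy as np
x = np.arange(1, 17, dtype=float).reshape(4, 4)   # forward-pass input to the pooling layer
grad_out = np.array([[0.1, 0.2], [0.3, 0.4]])     # error arriving from the next layer
grad_in = np.zeros_like(x)
k = 3                                             # pooling window size
for i in range(grad_out.shape[0]):
    for j in range(grad_out.shape[1]):
        window = x[i:i+k, j:j+k]
        r, c = np.unravel_index(np.argmax(window), window.shape)
        grad_in[i + r, j + c] += grad_out[i, j]   # error flows only to the max position
Only the positions that held 11, 12, 15 and 16 end up with a non-zero gradient.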
Convolutional layers
I have gone in detail about how to do backpropagation through convolutions here: CNN backpropagation between layers. |
H: Can features negatively correlated with the target be used?
In feature selection (for a regression problem), can features that are negatively correlated with the target variable be chosen to predict the target?
I don't think negative correlation means the predictor does not provide any information about the target.
Some feature selection methods (like Filter method) are based on using only those predictors that have high correlation to the target variable, and dropping those with low correlation.
My question is - shouldn't negative correlated features also be considered? I think the problem of feature selection should be whether a feature is "simply correlate " with the target or not, rather than whether it is a Positive/Negative correlation. Am I right? Can someone please clear my confusion?
AI: You are correct, don't mistake a low correlation (absolute value close to 0) with a negative correlation.
A large negative correlation is just as useful as a large positive correlation. The only difference is that for a positive correlation, as the feature increases, the target will increase. For a negative correlation, as the feature decreases, the target will increase.
Any model you choose should be able to handle the correlation sign (+/-).
If you are looking at feature reduction, select features with a correlation close to 0. This means that the feature does not have a useful relationship to the target, and won't help with any prediction. |
H: Random Forest with 2D features
I try to predict the position of a specific point (crest) in a 1D signal (elevation profile). Until now, I computed gradient at every point of my signal and combined that with additional features or heuristics to find approximate position of the expected output (position of the crest).
But there are some limits of this approach and I've found that ML techniques, and especially Random Forest classifiers could perform well in this kind of situation.
I would like to train my RF to find the most probable point (point_index) being the "output" based on a profile input.
Yet, I only found examples of training RF models with 1D inputs (like a time series). In my case, I have 2D input data (one signal is composed of N points with 2 features associated to each point) like the following dataframe :
profile_index point_index z z' crest
0 0 1 -0.885429 0 false
1 0 2 -0.820151 0.02 false
2 0 3 -0.729671 -0.1 true
3 0 4 -0.649332 0.1 false
4 1 1 -0.692186 0 false
5 1 2 -0.885429 0.1 true
6 1 3 -0.820151 -0.05 false
3 1 4 -0.649332 0.2 false
I can map my data to split the dataframe for every profile, and get the output point_index as a feature, but how do I manage the fact that 2 of my features are arrays ?
Edit: here is another representation for my data
profile_index points_z points_z_prime crest_index
0 [-0.05, ..., 2.36] [0, ..., -0.01] 150
1 [-0.02, ..., 4.41] [0, ..., -0.02] 162
(this is probably irrelevant regarding the method, but I work with Python and scikit-learn)
AI: If the number of points is constant in your array, you can flatten the array and use each element as a feature in your RF. I worked on a similar problem (if I understood your problem correctly) where I predict the return of a stock based on its returns over a window of a fixed number of days; I used the RF this way and it performs pretty well.
If your number of points isn't fixed, then I suggest you use an LSTM neural network, where you can feed in a sequence of data (which could be arrays) and it can predict the output that you are looking for.
H: Logistic regression vs Random Forest on imbalanced data set
I have an imbalanced data set where positives are just 10% of the whole sample. I am using logistic regression and random forest for classification. While comparing the results of these models, I have found that the probability output of logistic regression ranges between [0,1] while that of random forest ranges between [0, 0.6].
I cannot share the data set but my doubt is around the working of these algorithms. How can random forest generate probability less than 0.6?
AI: To output a probability of 1, a RF must be able to construct leaves containing only positive samples. Since it doesn't, this means that your features are not explaining the variance of the output or that your algorithm is under-fitted.
I suggest that you try to optimize the hyper-parameters of your RF using cross-validation and use some oversampling to reduce the bias in your dataset.
H: How to handle sparsely coded features in a dataframe
I have a dataset that contains information regarding diabetes patients, like so:
id diabetes diet insulin lifestyle
0 No NaN NaN NaN
1 Yes Yes Yes NaN
2 No NaN NaN NaN
3 Yes NaN NaN NaN
4 Yes Yes NaN Yes
5 Yes Yes Yes Yes
Features diet, insulin and lifestyle have a high percentage of missing data (around 95% each). So initially, I excluded these features from my dataset. However, after taking a closer look at the data, I found the values for diet, insulin and lifestyle to be associated with the value for diabetes feature. This makes sense as diabetes patients would be recommended treatment relating to diet, insulin intake and lifestyle changes.
So in cases where diabetes='No', values for features diet, insulin and lifestyle are missing.
And in cases where diabetes='Yes', I have found that in most cases, at least one feature from diet, insulin and lifestyle to have a value of 'Yes', and the remaining values are missing.
After some reading, I believe features diet, insulin and lifestyle are Missing at Random (MAR), and clearly not missing completely at Random (MCAR) as is explained here.
Anyway, so my question is, should the nature of the missing data here change my decision to remove these features from the dataset due to their high percentage of missing values. Or, should I impute the data for these features, by filling in missing values with "No", like so:
imputer = SimpleImputer(strategy='constant', fill_value='No')
x[:, 2:5] = imputer.fit_transform(x[:, 2:5])
AI: The first question you have to answer is whether these are actually missing or simply sparsely coded!
E.g. if the variables are "supposed" to only show whether doctors recommended a diet, insulin or a lifestyle change, then we could naturally conclude that any NaN is actually a "No", in which case you do not even have to impute the data but rather straight-up replace with Yes = 1 and NaN = 0.
How do you differentiate this case (sparsely coded) from the other case: actual missing values? Besides applying your domain knowledge you should also test whether the following rules apply:
If diabetes == No then all treatment variables are missing
If diabtes == Yes then at least one treatment variable is "Yes"
Treatment variables are always only "Yes" or NaN
If this is the case your dataset is likely sparsely coded, otherwise you have actual missing values.
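A quick pandas sketch of these three checks (column names taken from the question; the small frame below is only illustrative):
import pandas as pd
df = pd.DataFrame({
    'diabetes':  ['No', 'Yes', 'No', 'Yes', 'Yes', 'Yes'],
    'diet':      [None, 'Yes', None, None, 'Yes', 'Yes'],
    'insulin':   [None, 'Yes', None, None, None, 'Yes'],
    'lifestyle': [None, None, None, None, 'Yes', 'Yes'],
})
treat = ['diet', 'insulin', 'lifestyle']
# Rule 1: non-diabetics have no treatment values at all
rule1 = df.loc[df['diabetes'] == 'No', treat].isna().all(axis=1).all()
# Rule 2: every diabetic has at least one treatment marked 'Yes'
rule2 = (df.loc[df['diabetes'] == 'Yes', treat] == 'Yes').any(axis=1).all()
# Rule 3: treatment columns contain only 'Yes' or NaN
rule3 = (df[treat].isin(['Yes']) | df[treat].isna()).all().all()
print(rule1, rule2, rule3)
# If all three hold, the data is sparsely coded and you can recode directly:
# df[treat] = df[treat].fillna('No')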
Take note however that should the values be actual missing values (e.g. because you identify some cases where non-diabetics have a treatment, or some cases labeled "No" in the treatment variables) you can assume they are MAR but not MNAR. In this case I would recommend removing these variables, because imputing "No" or 0 does not make sense if you do not have any information about what the NaN actually means.
H: Is it possible to have stratified train-test split of a set based on two columns?
Consider a dataframe that contains two columns, text and label. I can very easily create a stratified train-test split using sklearn.model_selection.train_test_split. The only thing I have to do is to set the column I want to use for the stratification (in this case label).
Now, consider a dataframe that contains three columns, text, subreddit, and label. I would like to make a stratified train-test split using the label column, but I also want to make sure that there is no bias in terms of the subreddit column. E.g., it's possible that the test set has way more comments coming from subreddit X while the train set does not.
How can I do this in Python?
AI: One option would be to feed an array of both variables to the stratify parameter which accepts multidimensional arrays too. Here's the description from the scikit documentation:
stratify array-like, default=None
If not None, data is split in a stratified fashion, using this as the class labels.
Here is an example:
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
# create dummy data with unbalanced feature value distribution
X = pd.DataFrame(np.concatenate((np.random.randint(0, 3, 500), np.random.randint(0, 10, 500)),axis=0).reshape((500, 2)), columns=["text", "subreddit"])
y = pd.DataFrame(np.random.randint(0,2, 500).reshape((500, 1)), columns=["label"])
# split stratified to target variable and subreddit col
X_train, X_test, y_train, y_test = train_test_split(
X, pd.concat([X["subreddit"], y], axis=1), stratify=pd.concat([X["subreddit"], y], axis=1))
# remove subreddit cols from target variable arrays
y_train = y_train.drop(["subreddit"], axis=1)
y_test = y_test.drop(["subreddit"], axis=1)
As you can see the split is stratified to subreddit too:
Train data shares for subreddits
X_train.groupby("subreddit").count()/len(X_train)
gives
text
subreddit
0 0.232000
1 0.232000
2 0.213333
3 0.034667
4 0.037333
5 0.045333
6 0.056000
7 0.056000
8 0.048000
9 0.045333
Test data shares for subreddits
X_test.groupby("subreddit").count()/len(X_test)
gives
text
subreddit
0 0.232
1 0.240
2 0.208
3 0.032
4 0.032
5 0.048
6 0.056
7 0.056
8 0.048
9 0.048
Naturally, this only works if you have sufficient data to stratify to subreddit and the target variable at the same time. Otherwise scikit learn will throw an exception. |
H: Measure correlation for categorical vs continous variable
Given a categorical variable that depends on continuous variables, I would like to know how to check whether these continuous variables explain the categorical one.
So:
Y = categorical
X1 = continuous
X2 = continuous
X3 = continuous
I'd start with a correlation, but which one? I've seen How to get correlation between two categorical variable and a categorical variable and continuous variable? but there it is explained whether there is a difference in categorical variables explaining a continuous variable, so I think that's another topic?
I'm fine with tool advices in R and python as well.
edit: I'm not sure whether categorical is correct here. The values of $ Y $ are $ 0, 1, 2, 3 $ but I could also use $ A, B, C, D $. They represent a classification of the measure of cleanliness of a room.
AI: By saying you want to "explain Y by X", it sounds like you are trying to build a classifier F that can map X values into the expected Y: F(X) --> Y.
If so, you don't necessarily have to search for "correlation".
There are various methods to build such a classifier. You can use logistic regression / SVM / a neural network / etc.
Besides, if it makes more sense to you, you can always first discretize the continuous variables into categorical ones and then also use other methods such as decision trees / Naive Bayes and more.
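If you go the classifier route, a minimal scikit-learn sketch could look like this (the data frame here is random placeholder data; swap in your own Y and X1..X3 columns):
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 3)), columns=['X1', 'X2', 'X3'])
df['Y'] = rng.integers(0, 4, size=200)   # four cleanliness classes 0..3
clf = LogisticRegression(max_iter=1000)  # handles multi-class targets
scores = cross_val_score(clf, df[['X1', 'X2', 'X3']], df['Y'], cv=5)
print(scores.mean())  # compare against the share of the most frequent class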
H: Binary classification using images and an external dataset
I currently have a project in which I must create a binary classifier to detect defective products. I have image data which has already been labeled (each part has been labeled as a pass or fail), as well as an external dataset which has specific measurements for each image (it contains parameters which could not be accurately captured by a camera, such as temperature, length of a particular piece, etc.). I have achieved decent accuracy by applying basic classification algorithms such as logistic regression and support vector machines (in scikit-learn) to only this external dataset, but I would like to incorporate the image data to improve accuracy. I have read about training a CNN on the images, but I do not know how to incorporate the external dataset to train the model (in fact, I don't even know if this is possible). Is there any way to use both the image data as well as the external dataset to classify these images? Any help is much appreciated.
AI: Yes, you can most certainly combine the two types of information in a CNN.
Let's take a look at a CNN:
Roughly speaking, CNNs consist of two parts:
one that has convolutional layers and optionally pooling ones,
and one that has fully connected layers.
The first is used for extracting features from the image, while the second uses these extracted features to classify the image. This last part is essentially like a regular fully-connected neural network.
If you want to add more information to the CNN, which is not an image, you can add this to the first fully connected layer of the network. This way this information bypasses the feature extraction step (which is relevant only for imaging data) and is used only for classification. |
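As a hedged sketch of what such a two-input network could look like with the Keras functional API (all layer sizes, the image shape and the 5 external measurements are placeholder assumptions, not your actual setup):
from tensorflow import keras
from tensorflow.keras import layers
image_in = keras.Input(shape=(128, 128, 3))
x = layers.Conv2D(32, 3, activation='relu')(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation='relu')(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
tabular_in = keras.Input(shape=(5,))            # e.g. temperature, length, ...
combined = layers.concatenate([x, tabular_in])  # bypasses the conv feature extractor
z = layers.Dense(64, activation='relu')(combined)
out = layers.Dense(1, activation='sigmoid')(z)
model = keras.Model(inputs=[image_in, tabular_in], outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit([image_array, measurement_array], labels, ...)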
H: When does Adam update its weights?
I have a dataset with at least 70% of labels incorrect.
I'd expect that the incorrect labels would compensate for each other, while the true labels would be learned properly (given a very large dataset).
For example, if I have 300 samples saying a => -1 and 300 samples saying a => 1, the result for the input "a" eventually will be 0 (for a regression problem).
If I use Adam for the example above, won't it affect the results for the inputs with noisy labels due to its adaptive nature? Won't it be better to use SGD instead and decay the learning rate, or does Adam change its weights only at the end of every epoch?
AI: Adam works in the same way as SGD does in this regard, it updates the weights at the end of each iteration, so at the end of an epoch multiple weight updates have been applied.
Inherently neither Adam nor SGD do anything to counteract the noisy labels, they just try to find the best parameters that minimize a loss function. I don't think anyone can answer apriori if it will be better to use Adam or SGD for your problem. |
H: What is the right way to store datasets for a CNN project
Our image classification project has thousands of raw photos, masks and reshaped images. We store source code in git, but datasets don't belong in source code version control. How should we store these sets of images?
AI: You can use Google Drive to store these and mount the drive in your code.
H: Similarity Measure between two feature vectors
I have a face identification system with the following details:
VGG16 model for feature extraction
512 dimensional feature vector (normalized)
I need to calculate a similarity measure between two feature vectors. So far I have tried the following as difference measures:
Pairwise cosine, euclidean distance
Dot product (both vectors are normalized, so their dot product should be in the range [-1, 1])
These methods work fine when I want to find the closest feature vector from a set of feature vectors, by sorting the difference (distance) measure. But this is a relative measurement.
I want to have a percentage of similarity, so I can say a given image is Person XXX with a degree of certainty of x percent.
Let's say that I have 2 feature vectors (of persons a and b).
I want to calculate the accuracy level (as a percentage) that person a is
indeed person b, i.e. with a probability of some percentage.
AI: So, thank you for clarifying the question. Just to confirm: the question is asking how to set an appropriate threshold for face feature vectors (represented as a and b, for example).
What I would recommend is to look at either cosine similarity or euclidean distance, which you have implemented. From here, I would then look at the distribution of the similarity metric over all face vector pairings. This can give you an idea of the general level of similarity over all face vector pairings (e.g. if most of the distribution is skewed towards lower facial similarities, then this could indicate an overall lack of similarity between face vector pairings).
If the distribution is skewed to higher similarity values, then you could take, say, the 90th percentile over this distribution as the threshold for determining whether a == b.
Another idea would be to use the idea of softmax in the following way. This method would allow for others to replicate your methodology precisely:
For each face $f_i \in F$, you compute the softmax over the face vector pairings between $f_i$ and $f_j \in F, i \neq j$. Then, from this, you select the index $j$ which has the highest value, which can then be assumed to be such that $f_i$ == $f_j$ (a == b) |
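A small NumPy sketch of this softmax idea, assuming L2-normalised 512-d embeddings (the random vectors below are placeholders for your VGG16 features):
import numpy as np
def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()
embeddings = np.random.randn(10, 512)
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)
i = 0
sims = embeddings @ embeddings[i]   # cosine similarity, since vectors are normalised
sims[i] = -np.inf                   # exclude the self-match
probs = softmax(sims)
best_j = probs.argmax()
print(best_j, probs[best_j])        # closest face and its softmax 'confidence'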
H: Pandas/Python - comparing two columns for matches not in the same row
I have this data:
I wanted to compare A and B for matches not by row, but rather to search whether A0 is in column B, and so on. Moreover, I wanted to ignore the .AX in column A because it would not find any matches in column B anyway.
I used this, but it matches values row by row and it returns False or True. I would like to print the matches in a new Column C:
df3['match'] = df3.A == df3.B
Thank you.
AI: To clarify, this question is about comparing two columns to check if the 3-letter combinations match.
So, I would approach this in the following manner:
# Extract the 3-letter combinations from column a
df3["a normalised"] = df3["a"].str[:3]
# Then check if what is in `a normalised` is in column b
b_matches = list(df3[df3["b"].isin(list(df3["a normalised"]))]["b"].unique())
df3.loc[:, "match"] = False
b_match_idx = df3[df3["a normalised"].isin(b_matches)].index
df3.loc[b_match_idx, "match"] = True
EDIT: The parentheses have now been resolved. Also the .loc warning can now be mitigated. |
H: what is the difference in terms namely Correlation, correlated and collinearity?
A website says correlation refers to an increase/decrease in a dependent variable with an increase/decrease in an independent variable, while collinearity refers to two or more independent variables acting in concert to explain the variation in a dependent variable. Could someone clarify the terms?
AI: Collinearity usually refers to any linear relationship or association between 2 or more features.
Correlation and correlated are more general, and can refer to any type of relationship between features and responses, including log, exponential and linear associations.
The word "correlation" is a noun, and its strength is measured by a specific formula that depends on the data type and on assumptions such as parametric or non-parametric.
The word "correlated" is an adjective and indicates a loose association between two variables, i.e. it does not imply a causal relationship.
H: How to grid search feature selection and neural network hyperparameters in the same grid?
I'm using the GridSearchCV() class from scikit-learn to perform hyperparameter optimization in a sequential neural network. I've built a pipeline to also find the best number of features by putting a feature selector inside the pipeline. The problem is how to define the input_shape, since this depends on the k parameter of the feature selector. Is it possible to set the value of classifier__input_shape to be the same value (at all times) as feature_selector__feature_selector__k?
I've provided the correspondent piece of code below.
def create_model (input_shape, learn_rate = 0.01, dropout_rate = 0.0, weight_constraint = 0):
model = Sequential ()
model.add (Dense (units = 64, activation = 'relu',
input_shape = (input_shape, )))
model.add (Dropout (dropout_rate))
model.add (Dense (32, activation = 'relu'))
model.add (Dense (1, activation = 'sigmoid'))
model.compile (loss = 'binary_crossentropy',
optimizer = Adam (lr = learn_rate),
metrics = ['accuracy'])#, metrics.CategoricalAccuracy ()])
return model
standard_scaler_features = remaining_features
my_scaler = StandardScaler ()
steps = list ()
steps.append (('scaler', my_scaler))
standard_scaler_transformer = Pipeline (steps)
my_feature_selector = SelectKBest ()
steps = list ()
steps.append (('feature_selector', my_feature_selector))
feature_selector_transformer = Pipeline (steps)
clf = KerasClassifier (build_fn = create_model, verbose = 2)
clf = Pipeline (steps = [('scaler', my_scaler),
('feature_selector', feature_selector_transformer),
('classifier', clf)],
verbose = True)
param_grid = {'feature_selector__feature_selector__score_func' : [f_classif],
'feature_selector__feature_selector__k' : [7, 9, 15],
'classifier__input_shape' : [7, 9, 15],
'classifier__epochs' : [2, 3, 4]}
cv = RepeatedStratifiedKFold (n_splits = 5, n_repeats = 1, random_state = STATE)
grid = GridSearchCV (estimator = clf, param_grid = param_grid, scoring = 'f1',
verbose = 1, n_jobs = 1, cv = cv)
grid_result = grid.fit (X_train_df, y_train_df)
And the error:
ValueError: Input 0 of layer sequential_9 is incompatible with the layer: expected axis -1 of input shape to have value 9 but received input with shape [None, 7]
AI: I see two solutions:
either you pass a list of dictionaries to param_grid, avoiding irrelevant combinations,
or you use a single variable in your pipeline for feature_selector__feature_selector__k and classifier__input_shape
First solution: you can generate the right list of combinations using something close to this:
param_grid = [
{
'feature_selector__feature_selector__score_func' : [f_classif],
'feature_selector__feature_selector__k' : [k],
'classifier__input_shape' : [k],
'classifier__dropout_rate' : [0.0, 0.5]
}
for k in [7, 9, 15]
]
Second solution: you can use a specific class that creates your model at fit time, based on the shape of X. Here is a code sample:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import Adam
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import Pipeline
from sklearn.model_selection import RepeatedStratifiedKFold, GridSearchCV

class MyKerasClf():
def predict(self, X):
y_pred_nn = self.clf.predict(X)
return np.array(y_pred_nn).flatten()
def create_model(self, learn_rate = 0.01, weight_constraint = 0 ):
model = Sequential ()
model.add (Dense (units = 64, activation = 'relu',
input_shape = (self.input_shape, )))
model.add (Dropout (self.dropout_rate))
model.add (Dense (32, activation = 'relu'))
model.add (Dense (1, activation = 'sigmoid'))
model.compile (loss = 'binary_crossentropy',
optimizer = Adam (lr = learn_rate),
metrics = ['accuracy'])
return model
def fit(self, X, y, **kwargs):
self.input_shape = X.shape[1]
self.clf = KerasClassifier(build_fn = self.create_model, verbose = 2)
self.clf.fit(X, y, **kwargs)
def set_params(self, **params):
if 'dropout_rate' in params:
self.dropout_rate = params['dropout_rate']
else:
self.dropout_rate = 0.0
Then you can use the class in your pipeline
X, y = make_classification(n_features=50, n_redundant=0, n_informative=2,
random_state=42, n_clusters_per_class=1)
my_scaler = StandardScaler ()
steps = list ()
steps.append (('scaler', my_scaler))
standard_scaler_transformer = Pipeline (steps)
my_feature_selector = SelectKBest ()
steps = list ()
steps.append (('feature_selector', my_feature_selector))
feature_selector_transformer = Pipeline (steps)
# Create a specific clf
my_clf = MyKerasClf( )
pip_clf = Pipeline (steps = [('scaler', my_scaler),
('feature_selector', feature_selector_transformer),
('classifier', my_clf)],
verbose = True)
param_grid = {'feature_selector__feature_selector__score_func' : [f_classif],
'feature_selector__feature_selector__k' : [7, 15],
'classifier__dropout_rate' : [0.0, 0.5]
}
cv = RepeatedStratifiedKFold (n_splits = 5, n_repeats = 1, random_state = 42)
grid = GridSearchCV (estimator = pip_clf, param_grid = param_grid, scoring = 'f1',
verbose = 1, n_jobs = 1, cv = cv)
grid_result = grid.fit(X, y)
Note: I also added the dropout rate to be tested in the grid search, as an example.
H: How can i tell if my model is overfitting from the distribution of predicted probabilities?
Hi all,
I am training a LightGBM model and have used all of the necessary parameters to help with overfitting. I plot the distribution of the predicted probabilities (i.e. the probability of having cancer) from the model (after calibrating with a calibrated classifier), i.e. their histogram or KDE. As you can see below, the probabilities for my class 1 are concentrated at the upper and lower ends.
I have tried playing around with the bandwidth to smooth this a little, but it doesn't smooth the bumps much. What do you think this shows about my model? Isn't it a good thing that the model assigns a greater probability to class 1 (has cancer)?
I am unsure how to interpret this or where I could be going wrong.
The red curve is the positive class (has cancer) and the blue curve is the negative class. Below is the code used to generate the plot.
import matplotlib.pyplot as plt
results = df[['label','predicted_prob']]
colors = ['b', 'r']
for label in [0, 1]:
results[results['label'] == label]['predicted_prob'].plot.kde(bw_method=0.35,color=colors[label])
plt.xlim(0,1)
AI: Such a plot doesn't really tell you much about overfitting.
First, check that your calibration has worked well; it's possible that an incorrect calibration has pushed the probabilities to the extremes. Otherwise, the distribution of probabilities being so extreme suggests the data just naturally separates into a segment of easy-to-detect cancers and the rest. Among the latter, it looks like you get reasonably good but not great rank-ordering of cases. |
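To check the calibration itself, one option is a reliability plot; a minimal sketch, assuming df is the same dataframe as in your snippet:
from sklearn.calibration import calibration_curve
import matplotlib.pyplot as plt
frac_pos, mean_pred = calibration_curve(df['label'], df['predicted_prob'], n_bins=10)
plt.plot(mean_pred, frac_pos, marker='o')
plt.plot([0, 1], [0, 1], linestyle='--')  # a well-calibrated model lies on the diagonal
plt.xlabel('Mean predicted probability')
plt.ylabel('Fraction of positives')
plt.show()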
H: Is always converting a input vector into matrix and apply cnn's good idea?
I know the benefits of using CNNs (reduced-size weight matrices). Is it a good idea to convert an input vector (which is not an image) into a matrix and apply a CNN? My understanding is that it should not be done, because this would enforce a relationship between input vector values that doesn't actually exist.
Am I correct, or is there some way we can apply CNNs to reduce computation?
If CNNs can't be applied, what methods could reduce computation for a very high-dimensional input?
AI: This is a job for dimensionality reduction techniques. PCA is a simple and intuitive method that you should probably try first. If you find it is insufficient or ineffective, you could try using an autoencoder.
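For instance, a minimal PCA sketch (the random matrix is a placeholder for your high-dimensional input):
import numpy as np
from sklearn.decomposition import PCA
X = np.random.randn(1000, 5000)       # placeholder for your data
pca = PCA(n_components=0.95)          # keep enough components for 95% of the variance
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)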
As you said, using a CNN would imply a locational relationship between variables. If such a relationship does not exist, the CNN could have difficulty finding suitable weights. It is possible that forcing the CNN to link seemingly unrelated variables could help the model generalize better or reveal underlying structure that you cannot perceive, but there are other problems that you will run into. For example, CNNs generally use pooling layers as a form of nonlinear dimensionality reduction. However, if your variables are completely independent and unrelated, you would simply be throwing away possibly critical information using this down-sampling technique.
So while it might be feasible to use a CNN, you should probably use the right tool for the job. |
H: Machine learning detect changes in components
I am a student who will finish my studies next year and I want to analyze the job market in advance. I have found an interesting job where it says:
The department of assemblies and systems of the X deals with the simulation, testing and evaluation of components, assemblies and complete systems mainly from the automotive sector. In the future, machine learning methods will be used to detect changes in components or test benches at an early stage
I find the last sentence very interesting, but I don't quite understand it. If you expect a change to a component (e.g. the component starts to break), you can hard code it to detect this change. Could somebody explain to me with a simple example how machine learning can be helpful in this area?
AI: Defect Detection and Failure Detection/Predictive Maintenance are two areas of modern manufacturing and engineering where machine learning can provide substantial and even radical solutions.
For example, intelligent systems may detect defects at an early stage with enough accuracy and generality so as to save both costs and increase reliability.
On the other hand, intelligent systems may monitor operation logs of systems (eg an aircraft) and predict component failure exactly when needed with accuracy and generality so as to reduce downtime, save costs and increase safety.
Both these approaches require sophisticated learning methods (ie machine, deep, learning) so as to handle complexity and at the same time be general enough and accurate enough for practical purposes.
Sample references:
AI-Based Visual Inspection For Defect Detection
Study on Machine Learning Based Intelligent Defect Detection System
How to Find the Right Machine Learning Techniques for Predictive Maintenance?
Machine Learning Techniques for Predictive Maintenance |
H: How is this score function estimator derived?
In this paper they have this equation, where they use the score function estimator, to estimate the gradient of an expectation. How did they derive this?
AI: This is simply a special case (where $p_\psi = N(0,1)$) of the general gradient estimator for Natural Evolution Strategies (proved in another reference, look it up):
Outline of derivation based on the general formula for the gradient estimator:
$$\nabla_\psi E_{\theta \sim p_\psi} \left[ F(\theta) \right] = E_{\theta \sim p_\psi} \left[ F(\theta) \nabla_\psi log({p_\psi}(\theta)) \right]$$
If
$$\epsilon \sim \mathbb{N}(0, 1) = \frac{1}{\sqrt{2 \pi}}e^{-\frac{\epsilon^2}{2}}$$
then
$$\psi = \theta + \sigma \epsilon \sim \mathbb{N}(\theta, \sigma) = \frac{1}{\sigma\sqrt{2 \pi}}e^{-\frac{(\psi-\theta)^2}{2\sigma^2}}$$
Thus: $\psi = \theta + \sigma \epsilon \sim \mathbb{N}(\theta, \sigma) \Longleftrightarrow \epsilon = \frac{\psi-\theta}{\sigma} \sim \mathbb{N}(0,1)$
So:
$$\begin{align}
\nabla_\theta E_{\psi \sim N(\theta,\sigma)} \left[ F(\theta + \sigma \epsilon) \right] &= E_{\psi \sim N(\theta,\sigma)} \left[ F(\theta + \sigma \epsilon) \nabla_\theta (-\frac{(\psi-\theta)^2}{2\sigma^2}) \right] \\
&= E_{\epsilon \sim N(0,1)} \left[ F(\theta + \sigma \epsilon) \nabla_\epsilon (-\frac{\epsilon^2}{2}) \frac{d(\frac{\psi-\theta}{\sigma})}{d\theta} \right] \\
&= \frac{1}{\sigma} E_{\epsilon \sim N(0,1)} \left[ F(\theta + \sigma \epsilon) \epsilon \right] \\
&= \nabla_\theta E_{\epsilon \sim N(0,1)} \left[ F(\theta + \sigma \epsilon) \right]
\end{align}$$
note: scalar variables were considered in above steps for simplicity, but easy to extend/derive for vector variables |
H: what make lightGBM run faster than XGBoost?
I am curious about which implementation differences allow the speed-up of LightGBM over XGBoost, sometimes up to orders of magnitude.
AI: First of all, both GBM methods are great, and the superiority of either algorithm depends on the data.
The major difference between the two is that LightGBM uses a novel technique called Gradient-based One-Side Sampling (GOSS) to filter out the data instances for finding a split value, while XGBoost uses a pre-sorted algorithm and a histogram-based algorithm for computing the best split.
GOSS assumes that data points with small gradients tend to be already well trained. This means that it is more efficient to concentrate on data points with larger gradients. To attenuate the problem of a biased sample, it also randomly samples from the data points with small gradients.
To read more you can refer:
Paper
LightGBM_Documentation |
H: How do we know a neural network test accuracy is good enough when results vary with different runs?
In every paper I read about prediction models, the training accuracy and the test accuracy (sometimes also the validation accuracy) are stated as single, discrete numbers. However, in practice, depending on how the weights are initialized, different training runs result in different testing results.
How does a standard data science researcher pinpoint the accuracy metric to be written in a paper, and how do they become certain about its validity?
AI: So, the question is about how to report test accuracy, etc., when you see variation over executions.
As @Nikos M. has alluded to, you typically train and test the model at least 3 times and then report the average test evaluation metrics (along with the standard deviation to show the level of variation). This is why you are seeing 'discrete' values reported in papers: they are averaged over several train/test runs.
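A minimal sketch of that reporting pattern (train_and_evaluate is a hypothetical stand-in for your own training/evaluation routine):
import numpy as np
test_accuracies = []
for seed in [0, 1, 2]:
    acc = train_and_evaluate(seed=seed)   # hypothetical function: train + test once
    test_accuracies.append(acc)
print(f'{np.mean(test_accuracies):.3f} +/- {np.std(test_accuracies):.3f}')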
H: pytorch convolution with 0-stride along one dimension
For some square images, I'd like to use torch.nn.Conv2d with the kernel as a vertical block. That is, the kernel size is defined as the max value of the first dimension by 1. Since the first dimension has no more room, I'd like to have 0 stride along that first dimension. I tried using the following definition:
self.conv1 = torch.nn.Conv2d(3, 32, (max_dim_0, 1),
stride=(0, 1),
padding=0,
dilation=1,
groups=1,
bias=True,
padding_mode='zeros')
However, this causes my training loop to throw a RuntimeError: non-positive stride is not supported exception. Is there a way to define the kernel as a vertical block?
AI: Setting stride to 0 is not necessary, torch will simply compute with respect to the input tensor sizes, so you can set stride to (1,1).
For x of size (batch_size, 3, max_dim_0, max_dim_0) (square image) the tensor output will be of size (batch_size, 32, 1, max_dim_0). |
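Putting the fix into the layer definition from the question (max_dim_0 is a placeholder value here):
import torch
max_dim_0 = 28   # placeholder for the height of your square images
conv1 = torch.nn.Conv2d(3, 32, kernel_size=(max_dim_0, 1), stride=(1, 1))
x = torch.randn(8, 3, max_dim_0, max_dim_0)   # (batch, channels, height, width)
print(conv1(x).shape)                         # torch.Size([8, 32, 1, 28])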
H: "Change the features of a CNN into a grid to be fed into an RNN encoder?" What is meant by that?
So in the paper on OCR / LaTeX formula extraction from images, What You Get Is What You See: A Visual Markup Decompiler, they pass the features of the CNN into an RNN encoder. But rather than passing the features directly, they propose arranging them into a grid.
Extract the features from the CNN and then arrange those extracted features in a grid to pass into an RNN encoder. This is the exact language they have used.
What is meant by that? Theoretically speaking, if I have a CNN without any Dense/Fully Connected layer that produces an output of [batch, m*n*C], then how can I change it into the form of a grid? Please see the picture below. So after getting the output from the CNN, they have changed it somehow before passing it to the RNN. What method can one use to get this transformation?
So if I have to pass something to keras.layers.RNN()(that_desired_grid_format), what should this grid format be and how can I change it?
AI: It seems they use a shared RNN which processes each row sequentially, over the sequence of concatenated channels of the individual pixels. From the paper:
Implementation with channels last
Let the output of the ConvNet be of size (batch_size, height, width, channels). The RNN expects an input of size (batch_size, sequence_length, input_size). So you have to reshape it with the following correspondence.
batch_size*height -> batch_size
channels -> input_size
width -> sequence_length
And process each row (along the height dimension) with the same RNN and concatenate the results.
To do that, we simply reshape to merge the batch and height axes into one dimension, so that our RNN processes each row independently. Note that keras.layers.Reshape keeps the batch dimension, so the merge is done with a backend reshape, and keras.layers.RNN expects a cell instance, so a concrete recurrent layer such as SimpleRNN is used here.
rnn_input = keras.layers.Lambda(lambda t: keras.backend.reshape(t, (-1, width, channels)))(convnet_output)
rnn_output = keras.layers.SimpleRNN(hidden_dim, return_sequences=True)(rnn_input)
rnn_output will have shape (batch_size*height, width, hidden_dim). You can then combine this tensor into a context vector using a dense layer with tanh activation, as it is written in the paper.
The paper also uses a trainable initial state for the RNN; you might be interested in this library to implement it.
Implementation with channels first
If you set your Conv2D layer to "channels_first", the output convnet_output will be of size (batch_size, channels, height, width). Therefore you first need to permute the dimensions before reshaping (Permute indexing starts at 1 and excludes the batch dimension).
convnet_output = keras.layers.Permute((2, 3, 1))(convnet_output)
After this step, convnet_output has dimension (batch_size, height, width, channels). You can then proceed as previously, reshaping and feeding to the RNN.
rnn_input = keras.layers.Lambda(lambda t: keras.backend.reshape(t, (-1, width, channels)))(convnet_output)
rnn_output = keras.layers.SimpleRNN(hidden_dim, return_sequences=True)(rnn_input)
H: What criteria use in order to select the best internal validation for clustering?
I am doing homework on how to evaluate clustering algorithms, both hierarchical and partitional.
For doing this I have a dataset that I can plot as you can see:
The clustering algorithms that I am using are K-Means, Gaussian mixture, and Agglomerative.
The problem is that I don't know which criteria to use for selecting an internal validation measure for k-means. For example, I read that silhouette is better for convex clusters, but I am not able to see in the image whether any of the groups are convex or not.
I am read some papers like:
http://datamining.rutgers.edu/publication/internalmeasures.pdf
https://arxiv.org/pdf/1905.05667.pdf
but the problem is that I have not found any criteria for selecting method A or B for internal evaluation based on facts; they only test all vs all.
Where can I find, or what are, the criteria for choosing one or the other evaluation method?
AI: There is no catch-all metric that can be used for evaluation (internal or otherwise) of the clustering achieved. This is why machine learning is also art. There are no hard limits, many things depend on application, domain, and data themselves.
TL;DR
The purpose of the homework is to familiarise yourself with the problem of clustering, but also with the fact that there is no definite best method nor evaluation metric as panacea for all cases.
As such, you can try various criteria (see below references) on your data and algorithm results and ponder on their effectiveness for your problem at hand.
An Impossibility Theorem for Clustering
Although the study of clustering is centered around an intuitively
compelling goal, it has been very difficult to develop a unified
framework for reasoning about it at a technical level, and profoundly
diverse approaches to clustering abound in the research community.
Here we suggest a formal perspective on the difficulty in finding
such a unification, in the form of an impossibility theorem: for a set
of three simple properties, we show that there is no clustering
function satisfying all three. Relaxations of these properties expose
some of the interesting (and unavoidable) trade-offs at work in
well-studied clustering techniques such as single-linkage,
sum-of-pairs, k-means, and k-median.
Wikipedia has a nice summary of internal evaluation metrics:
Therefore, the internal evaluation measures are best suited to get
some insight into situations where one algorithm performs better than
another, but this shall not imply that one algorithm produces more
valid results than another. Validity as measured by such an index
depends on the claim that this kind of structure exists in the data
set. An algorithm designed for some kind of models has no chance if
the data set contains a radically different set of models, or if the
evaluation measures a radically different criterion. For example,
k-means clustering can only find convex clusters, and many evaluation
indexes assume convex clusters. On a data set with non-convex clusters
neither the use of k-means, nor of an evaluation criterion that
assumes convexity, is sound.
More than a dozen of internal evaluation measures exist, usually based
on the intuition that items in the same cluster should be more similar
than items in different clusters. For example, the following methods
can be used to assess the quality of clustering algorithms based on
internal criterion:
Davies–Bouldin index
The Davies–Bouldin index can be calculated by the following formula:
$$DB={\frac {1}{n}}\sum _{i=1}^{n}\max _{j\neq i}\left({\frac {\sigma _{i}+\sigma _{j}}{d(c_{i},c_{j})}}\right)$$
where $n$ is the number of clusters, $c_{x}$ is the centroid of cluster $x$, $\sigma _{x}$ is the average distance of all elements in
cluster $x$ to centroid $c_{x}$, and $d(c_{i},c_{j})$ is the distance
between centroids $c_{i}$ and $c_{j}$. Since algorithms that produce
clusters with low intra-cluster distances (high intra-cluster
similarity) and high inter-cluster distances (low inter-cluster
similarity) will have a low Davies–Bouldin index, the clustering
algorithm that produces a collection of clusters with the smallest
Davies–Bouldin index is considered the best algorithm based on this
criterion.
Dunn index
The Dunn index aims to identify dense and well-separated clusters. It is defined as the ratio between the minimal inter-cluster distance
to maximal intra-cluster distance. For each cluster partition, the
Dunn index can be calculated by the following formula:
$$D={\frac {\min _{1\leq i<j\leq n}d(i,j)}{\max _{1\leq k\leq n}d^{\prime }(k)}}\,,$$
where $d(i,j)$ represents the distance between clusters $i$ and $j$, and $d'(k)$ measures the intra-cluster distance of cluster $k$.
The inter-cluster distance $d(i,j)$ between two clusters may be any
number of distance measures, such as the distance between the
centroids of the clusters. Similarly, the intra-cluster distance
$d'(k)$ may be measured in a variety ways, such as the maximal
distance between any pair of elements in cluster $k$. Since internal
criterion seek clusters with high intra-cluster similarity and low
inter-cluster similarity, algorithms that produce clusters with high
Dunn index are more desirable.
Silhouette coefficient
The silhouette coefficient contrasts the average distance to elements in the same cluster with the average distance to elements in
other clusters. Objects with a high silhouette value are considered
well clustered, objects with a low value may be outliers. This index
works well with k-means clustering, and is also used to determine the
optimal number of clusters.
Furthermore:
An Evaluation of Criteria for Measuring the Quality of Clusters
An important problem in clustering is how to decide what is the best
set of clusters for a given data set, in terms of both the number of
clusters and the member-ship of those clusters. In this paper we
develop four criteria for measuring the quality of different sets
of clusters. These criteria are designed so that different criteria
prefer cluster sets that generalise at different levels of
granularity. We evaluate the suitability of these criteria for
non-hierarchical clustering of the results returned by a search
engine. We also compare the number of clusters chosen by these
criteria with the number of clusters chosen by a group of human
subjects. Our results demonstrate that our criteria match the
variability exhibited by human subjects, indicating there is no
single perfect criterion. Instead, it is necessary to select the
correct criterion to match a human subject's generalisation needs.
Evaluation Metrics for Unsupervised Learning Algorithms
Determining the quality of the results obtained by clustering
techniques is a key issue in unsupervised machine learning. Many
authors have discussed the desirable features of good clustering
algorithms. However, Jon Kleinberg established an impossibility
theorem for clustering. As a consequence, a wealth of studies have
proposed techniques to evaluate the quality of clustering results
depending on the characteristics of the clustering problem and the
algorithmic technique employed to cluster data.
Understanding of Internal Clustering Validation Measures
Clustering validation has long been recognized as one of the vital
issues essential to the success of clustering applications. In
general, clustering validation can be categorized into two classes,
external clustering validation and internal clustering validation. In
this paper, we focus on internal clustering validation and present a
detailed study of 11 widely used internal clustering validation
measures for crisp clustering. From five conventional aspects of
clustering, we investigate their validation properties. Experiment
results show that _ is the only internal validation measure
which performs well in all five aspects, while other measures have
certain limitations in different application scenarios. |
H: Why naive bayes is "naive"
Some articles say that naive Bayes is naive because of the "independence of attributes", whereas others say "independence of attributes within a class". Can anybody please clear up this confusion?
Thanks
AI: Naive Bayes doesn't assume (unconditional) independence of attributes... It assumes conditional independence (or what you call independence within a class). This allows us to write the likelihood in Bayes' rule, P(X | Y), as the product of all P(Xi | Y), where X = (X1, ..., Xi, ..., Xn) and n is the number of attributes.
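In symbols, the conditional-independence ("naive") assumption is
$$P(X_1, \dots, X_n \mid Y) = \prod_{i=1}^{n} P(X_i \mid Y)$$
whereas unconditional independence, $P(X_1, \dots, X_n) = \prod_{i} P(X_i)$, is not assumed.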
H: Logic behind the Statement on Non-Parametric models
I am currently reading 'Mastering Machine Learning with scikit-learn', 2E, by Packt. In Lazy Learning and Non-Parametric models topic in Chapter 3- Classification and Regression with k-Nearest Neighbors, there is a paragraph stating-
Non-parametric models can be useful when training data is abundant and you have little prior knowledge about the relationship between the response and the explanatory variables. kNN makes only one assumption: instances that are near each other are likely to have similar values of the response variable. The flexibility provided by the non-parametric models is not always desirable; a model that makes assumptions about the relationship can be useful if training data is scarce or if you already know about the relationship.
My doubt is that I am able to reason out the logic behind the statement- "a model that makes assumptions about the relationship can be useful if training data is scarce or if you already know about the relationship." But, I am not able to see any logic for the first statement- "Non-parametric models can be useful when training data is abundant and you have little prior knowledge about the relationship between the response and the explanatory variables."
Please help me to find out the logic behind it other than for computation. Thanks for your time and consideration.
AI: Non-parametric machine learning algorithms make predictions for new data based on the patterns observed in similar training instances. By not making assumptions about the functional form, they are free to learn any functional form from the training data.
Unlike parametric approaches, where the number of parameters is fixed,
in non-parametric approaches the number of parameters grows with the
training data.
If your data set is too small or otherwise is a set that is not representative of the entire population, then your result will be biased in more ways than possible with parametric methods.
So, to get better results with non-parametric machine learning algorithms, we need a large amount of data in settings where the relationships between features are not known.
While non-parametric machine learning algorithms are often slower and require large amounts of data, they are rather flexible as they minimize the assumptions they make about the data. |
H: Python writing to Excel file: writerow() takes no keyword arguments
I have this script:
import requests
from requests import get
from bs4 import BeautifulSoup
import csv
import pandas as pd
f = open('olanda.csv', 'wb')
writer = csv.writer(f)
url = ('https://www......')
response = get(url)
soup = BeautifulSoup(response.text, 'html.parser')
type(soup)
table = soup.find('table', id='tablepress-94').text.strip()
print(table)
writer.writerow(table.split(), delimiter = ',')
f.close()
When it writes to a CSV file it writes everything in a single cell like that:
Sno.,CompanyLocation,1KarifyNetherlands,2Umenz,Benelux,BVNetherlands,3TovertafelNetherlands,4Behandeling,BegrepenNetherlands,5MEXTRANetherlands,6Sleep.aiNetherlands,7OWiseNetherlands,8Healthy,WorkersNetherlands,9&thijs,|,thuis,in,jouw,situatieNetherlands,10HerculesNetherlands, etc.
I wanted to have the output in a single column and each value (separated by comma) in a single row.
I tried to use delimiter = ',' but I got:
TypeError: a bytes-like object is required, not 'str'.
Screenshot of table:
How can I do this? Thanks!
AI: Please see doc of csv.writer
The delimiter param is used when instantiating the writer, not on the writerow method. Also note that the TypeError comes from opening the file in binary mode ('wb'); in Python 3 the csv module expects a text-mode file, e.g. open('olanda.csv', 'w', newline='').
Furthermore, from what I understand, your approach is wrong.
You want to scrape the HTML table and store it as a CSV file.
But what you do is get the text from the table
table = soup.find('table', id='tablepress-94').text.strip()
which is a concatenated string of all columns with no structure, and then try to split it by whitespace, which is wrong,
while you should be getting a dict of table values, eg using:
table = soup.find('table', id='tablepress-94')
# assuming table has a
# <thead><tr><td>Column 1</td>..<td>Column n</td></tr></thead>
# section with column headers, else adjust accordingly
column_headers = [td.get_text() for td in table.find('thead').find('tr').find_all('th')]
print(column_headers)
data = {}
row_index = 0
for row in table.find('tbody').find_all('tr'):
row_index += 1
column_index = 0
columns = row.find_all('td')
for td in columns:
if column_index >= len(column_headers):
column_headers.append('Column_'+str(column_index+1))
if column_headers[column_index] not in data:
data[column_headers[column_index]] = []
data[column_headers[column_index]].append(td.get_text())
column_index += 1
print(data)
# now output your scarped table data, data into csv properly
writer.writerow(column_headers) # print headers
# print each row of data
for row in range(row_index):
writer.writerow([data[column][row] for column in column_headers]) |
H: Why Adaboost SAMME needs f to be estimable?
I am trying to understand the mathematics behind SAMME AdaBoost:
At some stage, the paper adds a constraint for f to be estimable:
I do not understand why this is required. Can someone explain a bit better why this restriction is needed?
Also, would it be possible to use a different constraint than the one added in the paper that would make f estimable?
AI: Think about it for a while: if $f$ is not estimable, any constant can be added to it with no difference in the result of the process.
This means that if no other constraints are imposed, $f$ is not well/uniquely defined and in fact represents a whole class of functionals. This obviously needs to be fixed and some natural constraint needs to be added (to uniquely fix $f$).
On the other hand, the additional constraint can also be seen as a free parameter of the process which nevertheless needs to be fixed somehow in a concrete instance of the process in order to take place.
In an analogy with physics, the algorithm is "gauge-invariant", but for any concrete physical problem to be solved some "gauge" needs to be chosen and fixed.
They choose to impose the symmetric constraint which reduces to the usual AdaBoost in the 2-class case.
One can impose another constraint on $f$ (than the symmetric one), choosing to satisfy other criteria if so desirable (as long as it can uniquely fix $f$).
For example, the general non-symmetric constraint is also valid:
$$f_1 + \dots + f_K = c$$
for arbitrary constant $c$. This also fixes $f$ uniquely and it can introduce bias favoring certain classes over others (eg for an imbalanced problem). Additionaly, unless $c=0$, does not reduce to the (symmetric) AdaBoost in 2-classes case (which may or may not be desirable). |
H: Using word embeddings with additional features
I have a set of queries for a classification task using scikit-learn's Gradient Boosting Classifier. I want to enrich the model by feeding in additional features along with GloVe. How should I approach scaling in this case? GloVe is already well scaled; however, the additional features are not.
I have tried StandardScaler, but this reduced the performance in comparison with just using GloVe. The problem may be with the feature itself; however, I need your opinion on scaling strategies in the case of GloVe plus dummy variables.
AI: My first comment would be that you have to remember that tree-based models are not scale-sensitive, and therefore scaling should not affect the model's performance; so, as you mention, it is likely a problem with the feature itself.
If you still want to scale all your features, you could use MinMaxScaler with the min and max values being the min and max of the GloVe vectors, so that all the features are on the same scale.
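A sketch of that idea with a ColumnTransformer, scaling only the non-GloVe columns into the GloVe value range (the column names and the toy frame are placeholder assumptions):
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import MinMaxScaler
rng = np.random.default_rng(0)
glove_cols = [f'glove_{i}' for i in range(300)]
X = pd.DataFrame(rng.normal(size=(100, 300)), columns=glove_cols)
X['query_length'] = rng.integers(1, 40, size=100)   # hypothetical extra features
X['num_tokens'] = rng.integers(1, 12, size=100)
lo, hi = X[glove_cols].values.min(), X[glove_cols].values.max()
pre = ColumnTransformer(
    [('extra', MinMaxScaler(feature_range=(lo, hi)), ['query_length', 'num_tokens'])],
    remainder='passthrough')   # leaves the GloVe columns untouched
X_scaled = pre.fit_transform(X)   # note: the scaled columns come first in the output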
H: Learn (common) grammar / pattern from set of sample strings?
So I currently have a text pattern detection challenge to solve at work. I am trying to make an outlier detection algorithm for a database, for string columns.
For example let's say I have the following list of strings:
["abc123", "jkj577", "lkj123", "uio324", "123123"]
I want to develop an algorithm that would detect common patterns in the list of strings, and the indicate which strings are not in this format. For example, in the example above, I would like this algorithm to detect the following regular expression:
r"[a-z]{3}\d{3}"
given that the majority of the entries in the list obey this pattern, except the last one, which should be marked as an outlier.
The first idea that come to my mind was to use a genetic algorithm to find the regular expression pattern, where the fitness function is the number of entries on the list that match the pattern. I haven't worked out the details (crossvers function, etc..), and there is already the difficulty in the sense that the pattern ".*" will match everything, hence will always maximize the fitness function.
Anybody already worked on a similar problem? What are my options here? Thank you!
AI: The problem you face is part of what is called in literature grammar learning or grammar inference which is part of both Natural Language Processing and Machine Learning and in general is a very difficult problem.
However for certain cases like regular grammars/languages (ie learning regular expressions / DFA learning) there are satisfactory solutions up to limitations.
A survey and references on grammar inference and inference of regular grammars:
Learning DFA from Simple Examples
Efficient learning of DFA is a challenging research problem in
grammatical inference. It is known that both exact and approximate
(in the PAC sense) identifiability of DFA is hard. Pitt, in his
seminal paper posed the following open research problem:“Are DFA
PAC-identifiable if examples are drawn from the uniform distribution,
or some other known simple distribution?”. We demonstrate that the
class of simple DFA (i.e., DFA whose canonical representations have
logarithmic Kolmogorov complexity) is efficiently PAC learnable
under the Solomonoff Levin universal distribution. We prove
that if the examples are sampled at random according to the
universal distribution by a teacher that is knowledgeable about the
target concept, the entire class of DFA is efficiently PAC learnable
under the universal distribution. Thus, we show that DFA are
efficiently learnable under the PACS model. Further, we prove
that any concept that is learnable under Gold’s model for
learning from characteristic samples, Goldman and Mathias’
polynomial teachability model, and the model for learning from
example based queries is also learnable under the PACS model
An $O(n^2)$ Algorithm for Constructing Minimal Cover Automata for Finite Languages
Cover automata were introduced in [1] as an ecient representation of
finite languages. In [1], an algorithm was given to transforma DFA
that accepts a finite language to a minimal deterministic finite
cover automaton (DFCA) with the time complexity $O(n^4)$, where $n$ is
the number of states of the given DFA. In this paper, we introduce a
new efficient transformation algorithm with the time complexity
$O(n^2)$, which is a significant improvement from the previous
algorithm.
There are even libraries implementing algorithms for grammar-inference and DFA learning:
libalf
gitoolbox for Matlab
source: stackoverflow |
H: What is Happening in the training process when we are fitting a model to the data
In any prediction task, the process of “fitting” a model to the data observed in the training process can be best described as...
Assessing all observations available and then backsolving for the dimensionality of best fit.
Making use of all observations available to create a model, then using distance or other measures to assess how well the model characterizes or "fits" its data.
Starting with a linear model and then expanding the dimensionality of that model until it fits the data
Starting with a linear model and then regressing the shape of the line until it fits the data.
These are the four points I am confused about. Which one/ones is/are true?
AI: Options 2 and 4 are correct.
When fitting a model, the training process makes use of all observations available to create a model, then uses distance or other measures to assess how well the model characterizes or "fits" its data. It can also be described as starting with a linear model and then regressing the shape of the line until it fits the data.
H: Wrong result when solving: "chance that two random cards differ in color and value?"
I'm trying to build a simulation for this question:
"There are 50 cards of 5 different colors. Each color has cards
numbered between 1 to 10. You pick 2 cards at random. What is the
probability that they are not of same color and also not of same
number?"
(From Glassdoor)
I should have a result like "73%" but with my code I get (consistently) "72%" or "71.8%".
Here is my code:
import numpy as np
# Building a deck of 10 cards for each of the 5 colors
cards = np.array([c+str(n) for c in ("A", "B", "C", "D", "E") for n in range(1, 11)])
def random_cards_differ():
"""Returns True if two random cards differ"""
a, b = np.random.choice(cards, 2, replace=False)
if a[0] != b[0] and a[1] != b[1]:
return True
else:
return False
nb_success = 0
nb_tries = 100000
for i in range(nb_tries):
if random_cards_differ():
nb_success += 1
print(nb_success / nb_tries)
>>> 0.71892
Is this normal? Is there a mistake in my code or is it a "random gotcha" caused by some seed or something else?
AI: There's an error in your code:
cards = np.array([c+str(n) for c in ("A", "B", "C", "D", "E") for n in range(1, 11)])
will produce "A10" and "A1" among other values, and
if a[0] != b[0] and a[1] != b[1]:
will wrongly return False when, for example, a="A10" and b="B1": the colors differ and the numbers 10 and 1 differ, but a[1] and b[1] are both '1'. This is why you're probably consistently underestimating the number of differences.
An easy fix would be to use:
cards = np.array([c+str(n) for c in ("A", "B", "C", "D", "E") for n in range(0, 10)])
instead, which is more readable anyway. But if I were doing this, I might use itertools.product.
Even after this fix it's normal to not always get exactly the theoretical value, but it's bad if there's a bias (i.e. consistent under- or over- estimation). |
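For reference, a sketch of the itertools.product variant with the fixed single-digit encoding:
import itertools
import random
cards = [c + str(n) for c, n in itertools.product('ABCDE', range(10))]
def differ(a, b):
    return a[0] != b[0] and a[1] != b[1]
trials = 100_000
hits = sum(differ(*random.sample(cards, 2)) for _ in range(trials))
print(hits / trials)   # theoretical value: 36/49 ≈ 0.735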
H: Multiple output size in neural network
In the paper "A NOVEL FOCAL TVERSKY LOSS FUNCTION WITH IMPROVED ATTENTION U-NET FOR LESION SEGMENTATION", the authors use deep supervision by outputting multiple output masks which have different scales.
I do not understand how this can work with regard to the loss function: y_pred and y_true don't share the same dimensions except for the final output.
model = Model(inputs=[img_input], outputs=[out6, out7, out8, out9])
The input seems to only be the one with the real resolution.
I checked the code (https://github.com/nabsabraham/focal-tversky-unet/blob/master/newmodels.py) and I haven't seen anything special that would make it work. The loss function doesn't explicitly handle it either.
AI: As you can see in lines 286-296 in newmodels.py the model can use two different loss functions for the four different outputs.
loss = {'pred1':lossfxn,
'pred2':lossfxn,
'pred3':lossfxn,
'final': losses.tversky_loss}
loss_weights = {'pred1':1,
'pred2':1,
'pred3':1,
'final':1}
model.compile(optimizer=opt, loss=loss, loss_weights=loss_weights,
metrics=[losses.dsc])
The first three outputs of the model, out6, out7, and out8, use the loss function lossfxn, which is given as the third argument to the attn_reg function. In isic_train.py, this happens to be the Focal Tversky Loss. For the final output of the UNet (out9) the Focal Tversky Loss is always used. The total loss for the model is then the weighted sum of the different losses for the four model outputs, which is simply equal to the sum, given that all weights are set to 1 in loss_weights.
H: Which deep learning network would be suitable for classifying this kind of text data?
I have some experience with images and have played around with image classification using CNN's but have limited knowledge when it comes to text data.
The input that I currently want to classify is written as:
hjkhghkgfghjkhghkgfghfefdefdcdefghjkjh-hjhgfe
fdcd-dd-fdc-dad-ad-dfe-cde-dggf-ghd-gg-bcd
hjkhghkgfghjkhghkgfghfefdefdcdefghjkjh-gh-gfed
dh-hg-gf-gh-dh-hg-gf-gh-hkhg-kh-hg-gf-gh-hkhg-kh-hg-gf-ghh-hgfg-dfd-dc-fgf-gh
I have over 2000 rows of this data that need to be classified. I know that for regular text data, RNN networks and LSTM cells are known to be very effective. Using RNN+LSTM, good results can be achieved by pre-processing the data with the usual approaches such as stemming, lemmatization, stop word filtering, tokenization, etc. But the same cannot be applied to the text data I have.
Would RNN and LSTM still work on my data? If not which networks do you guys suggest I explore for such a task?
AI: You need character embeddings. I assume you are already familiar with word2vec technology. Its goal is to make a model "learn" the relative meaning of words, placing them into a high-dimensional space.
The same can be done with single characters, instead of whole words. The preprocessing steps you need will be a little bit different, but the embedding technique is the same. In that way, you can generate representations of characters, feed their sequences into some RNN model, and perform the final classification task.
Therefore, RNNs are perfectly suitable for this task. If you are working with tensorflow.keras you can simply tokenize characters and feed them through an Embedding() layer that will do the job for you. An alternative to RNNs is 1D convolutional layers, which can do the job instead of recurrent cells. That's up to your preference.
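A rough tf.keras sketch of such a character-level model (the vocabulary size, sequence length and layer sizes are assumptions, not values from your data):
from tensorflow import keras
from tensorflow.keras import layers
vocab_size = 30   # assumed number of distinct characters (letters plus '-')
max_len = 120     # assumed padded sequence length
model = keras.Sequential([
    keras.Input(shape=(max_len,)),                          # integer-encoded characters
    layers.Embedding(input_dim=vocab_size, output_dim=16),  # learned character embeddings
    layers.LSTM(64),
    layers.Dense(1, activation='sigmoid'),                  # adjust for your number of classes
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()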
H: Why does Davies-Bouldin choose a higher number of clusters than Silhouette or Calinski-Harabasz?
I am using several metrics in order to find the correct number of clusters. To do this I selected 3 clustering algorithms and 3 internal evaluation metrics: Silhouette, Calinski-Harabasz and Davies-Bouldin.
The results were the following:
S CH DB
Kmean 3 3 9
Agglo 2 2 9
Gauss 3 3 10
The original dataset has 3 groups, and in general S and CH work well. The question is: why does DB always return a high value for the number of clusters?
Thanks
AI: Each clustering evaluation metric follows a different ideology:
Silhouette analysis can be used to study the separation distance between the resulting clusters.Silhouette coefficients near +1 indicate that the sample is far away from the neighboring clusters. A value of 0 indicates that the sample is on or very close to the decision boundary between two neighboring clusters and negative values indicate that those samples might have been assigned to the wrong cluster.
Calinski–Harabasz rewards clusterings in which the cluster centroids are far apart and the cluster members are close to their respective centroids.
Davies-Bouldin is defined as the average similarity measure of each cluster with its most similar cluster, where similarity is the ratio of within-cluster distances to between-cluster distances. Thus, clusters which are farther apart and less dispersed will result in a better score. It prefers clusters equally-distanced from each other.
We can't really say which clustering quality measure is good or not. It depends on what you want to evaluate. You have to look at which one seems relevant for the kind of clustering you are doing.
For more info, refer to this discussion.
H: How to cluster government census data in order to group Metropolitan statistical areas
I have collected a bunch of census data from 2012 - 2018. I wanted to apply some clustering algorithms in order to compare Metropolitan Statistical Areas (MSAs). Ideally, once I run the clustering algorithm, I would like to see which MSAs are comparable to one another.
The features that I am choosing to govern the clustering is below:
'Bachelors+',
'Estimate Total $10,000 to $14,999',
'Estimate Total $100,000 to $124,999',
'Estimate Total $125,000 to $149,999',
'Estimate Total $15,000 to $19,999',
'Estimate Total $150,000 to $199,999',
'Estimate Total $20,000 to $24,999',
'Estimate Total $200,000 or more',
'Estimate Total $25,000 to $29,999',
'Estimate Total $30,000 to $34,999',
'Estimate Total $75,000 to $99,999',
'Median Age',
'Median Gross rent as % of household inc',
'Number of educational and health service workers',
'Number of finance and real estate workers',
'Number of people in management, business, science, and arts',
'Number of service workers',
'Number of tech workers',
'Pct Asian',
'Pct Black',
'Pct Other Race',
'Pct White',
'Total Population',
'Total Population over 25'
Now, a question I have: the data is on the tract level for every MSA in the United States from 2012 - 2018. Would I first need to aggregate the data so that I have the above features by their associated MSA, and then run the clustering algorithm from there?
From there how do I identify the MSAs by cluster?
AI: If you want to identify the distance between MSAs, then yes, I think it would be best to first aggregate your features such that each instance (row) represents an MSA. From there you will have an $n\times m$ matrix, where $n$ is the number of MSAs and $m$ is the number of features you end up with.
You can then apply your clustering algorithm; there are many to choose from. Among my favorites that I always try are:
K-means
Agglomerative (hierarchical) clustering
Spectral clustering
DBSCAN
Others can be found here.
Once you train the clustering algorithm, you will get an associated cluster label for each of the $n$ rows in your input matrix. With this you will know which MSAs are similar in nature given the selected set of features.
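As a rough sketch (assuming your tract-level data sits in a pandas DataFrame df with an "MSA" identifier column; the aggregation rules and number of clusters below are illustrative assumptions):

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Aggregate tract-level rows up to one row per MSA (sums for counts,
# means for rates and ages; adjust per feature as appropriate)
msa_df = df.groupby("MSA").agg({
    "Total Population": "sum",
    "Median Age": "mean",
    "Pct White": "mean",
    # ... and so on for the remaining feature columns
})

# Scale the features so that no single feature dominates the distances
X = StandardScaler().fit_transform(msa_df)
msa_df["cluster"] = KMeans(n_clusters=5, random_state=0).fit_predict(X)

# MSAs that share a cluster label are the comparable ones
print(msa_df.sort_values("cluster"))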
H: How to Make Meaningful Conclusions here?
I recently appeared for an Interview for my college and I was asked the following question. The Interviewer said that this question was a Data Science question.
The question-
Suppose 7.5% of the population has a certain Bone Disease. During the COVID pandemic, you go to a hospital and see the records: 25% of the COVID-infected patients also had the Bone Disease. Can we say for sure whether the Bone Disease is a symptom of COVID-19?
My Reponse-
I said no, and explained that it's not necessarily COVID-19 causing this: it could very well be that the 7.5% of the country's population which already had the disease is more susceptible to the virus due to lowered immunity. Hence, drawing conclusions is not possible.
Then the interviewer asked me: how can we be sure whether it is a symptom or not?
I replied saying we could go to more hospitals, collect more data and see if it correlates everywhere.
The interviewer then said: if we have the same results everywhere, will you conclude it's a symptom?
I had no good answer, but I replied that correlation of data alone is not sufficient; we also need to check whether the people who have COVID-19 had the bone disease prior to getting infected, and see if that percentage correlates as well.
Here he stopped questioning; however, I couldn't judge if I was right or wrong.
I am in Grade 12, so I have no experience in Data Science as such. I do know a fair bit of statistics, but I have never solved such questions. Can someone provide me with insights on how to solve such questions and make meaningful conclusions?
AI: It is very difficult (arguably impossible, if you want to get philosophical about it) to be absolutely, 100%, for sure about anything. For this reason, we talk in terms of probability/significance/confidence sets. A refresher on statistical hypothesis testing might help.
So to answer this type of question, people would usually compare against a well-agreed-upon significance threshold: if the p-value falls below it we reject the null hypothesis, and above it we fail to reject the null hypothesis. The null hypothesis in this case is that the bone disease is not a symptom of COVID, and the alternative is that it is a symptom.
Edit for demonstration as requested in the comments:
Firstly, these methods are purely for association analysis, not for proving whether bone disease is a symptom of COVID—again, correlation $\ne$ causation!
Given that we are dealing with binary variables, you could use the Phi coefficient to measure the association of bone disease with COVID.
Consider this contingency matrix:
|----------|---------|---------|-----------|
|          | Bone =0 | Bone =1 | total     |
| COVID =0 | A       | B       | I=A+B     |
| COVID =1 | C       | D       | J=C+D     |
|----------|---------|---------|-----------|
| total    | K=A+C   | L=B+D   | E=I+J=K+L |
|----------|---------|---------|-----------|
This overlap could also be represented visually with a Venn diagram. From the table, you can then calculate
$ \phi=\frac{AD-BC}{\sqrt{IJKL}}=\frac{ED-JL}{\sqrt{IK(E-I)(E-K)}} $
This is related to the Chi-squared test: $ \phi= \sqrt{\frac{\chi^2}{n}} $ So you can easily retrieve the p-value given that you know the degrees of freedom (in this case, it is just 1).
And you interpret it similarly to the Pearson correlation coefficient (both are from the same statistician, Pearson).
So given that $7.5\%$ of the total population has the bone disease, and $25\%$ of COVID patients have it, we can fill in our contingency table in terms of $N$ (the number of samples), where $Q$ is the fraction of people with COVID:
$ I=N(1-Q)= $ number of people without COVID
$ J=NQ= $ number of people with COVID
$ K=N(1-0.075)=$ number of people without bone disease
$ L=N(0.075)=$ number of people with bone disease
We know that 25% of people with COVID also have the bone disease, so $D=0.25J$, and the grand total is simply $E=N$. Substituting into the formula above gives:
$\phi=\frac{N(0.25J)-JL}{\sqrt{IJKL}}$
Finally, plugging in the expressions for $I$, $J$, $K$ and $L$, we can calculate:
$\phi=\frac{0.25\,N^2Q-0.075\,N^2Q}{\sqrt{N^4\,Q(1-Q)\cdot 0.075\cdot 0.925}}=\frac{0.175\,Q}{\sqrt{0.069375\,Q(1-Q)}}\approx 0.66\sqrt{\frac{Q}{1-Q}}$
From here, using the relation $\chi^2 = N\phi^2$ with 1 degree of freedom, we can find the associated p-value by looking it up in a chi-square p-value table, such as this one: http://chisquaretable.net/. Then you can accept/reject the null hypothesis given your predefined $\alpha$ threshold.
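In practice you would let software do the arithmetic. A minimal sketch in Python with scipy, assuming a hypothetical sample of N = 1000 people of whom 10% have COVID (the cell counts are built from the marginals derived above):

import numpy as np
from scipy.stats import chi2_contingency

N, Q = 1000, 0.10
D = round(0.25 * N * Q)        # COVID and bone disease
C = round(N * Q) - D           # COVID, no bone disease
B = round(0.075 * N) - D       # bone disease, no COVID
A = N - B - C - D              # neither

table = np.array([[A, B],
                  [C, D]])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
phi = np.sqrt(chi2 / N)
print(phi, p)                  # association strength and its p-value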
H: How to encode ordinal data before applying linear regression in STATA?
I have a data set that has student performance marks (continuous and dependent variable), Teacher Qualification (Ordinal and independent variable containing categories: Masters, Bachelors, High School). I want to apply the regression analysis to check the impact of teacher qualification on student's marks.
How can I encode ordinal data before applying linear regression?
AI: I think the best way is to dummy-encode teacher qualification, so that each level of qualification enters the regression with a separate intercept term. Note that dummy-encoding always works against a base (reference) level: when "Masters" is the base level, you will see the effect of "Bachelors" compared to "Masters", etc.
You can dummy-encode in Stata by using the i. prefix, e.g. summarize i.size. In a regression you would use reg y i.x.
See the Stata docs for details. |
H: Anybody know what this type of visualisation is called?
I think this is a pretty cool way to visualise changes in values but I can’t find any name for this type of visualisation.
Source: https://www.economist.com/graphic-detail/2020/07/28/americans-are-getting-more-nervous-about-what-they-say-in-public
AI: This type of visualization is called Arrow and Range plot.
You can refer here |
H: Consolidating multivariate time-series information from many data sets
I am having trouble setting up a problem with regards to time series analysis. I have 30 data sets, where each set corresponds to a certain project. Each project has 7 features, and each feature has time series information sampled every week from 2018 to today.
One of the features is how much the project is under/over budget and I wish to use this as the label.
If I were learning on a single project, it would be a straightforward multivariate time series. As an example, I could transform the data from this:
Week X1 Y1
1 0.5 3
2 1 5
3 1.5 8
to this:
X1 X2 X3 Y
- - 0.5 3
0.5 3 1 5
1 5 1.5 8
Then I would use the second table as my input data. However, with 30 different projects all with the same time-steps I'm not sure how to combine this information so a single model could learn it. One solution I thought I could do was a bagging approach. I would train 30 models and I could do a voting/weighted average for predictions, but I feel like this isn't the best approach. If anyone has dealt with a problem like this before, please let me know. Thanks in advance!
AI: Let a single observation in your dataset be the target variable y for the jth project in the ith week. Then, use a categorical variable to indicate the jth project, along with other features you think are relevant (lagged values, for instance, are likely to be important to include). Finally, for extrapolation purposes and possible seasonality, include features that represent time in some way. A natural variable to include is of course the observed week, but perhaps also the month or even the year (if you have data that spans many years, for example). Other variables might be holiday indicators, etc.
Overall, your data could look like this. Assume you have three projects and observed values for 3 weeks.
| Week | Project | ... | y |
|------|---------|-----|---|
| 1 | 1 | ... | 3 |
| 1 | 2 | ... | 8 |
| 1 | 3 | ... | 9 |
| 2 | 1 | ... | 3 |
| 2 | 2 | ... | 7 |
| 2 | 3 | ... | 4 |
| 3 | 1 | ... | 7 |
| 3 | 2 | ... | 0 |
| 3 | 3 | ... | 1 |
And so on. "..." refers to other features you can add to capture autocorrelation and/or seasonality like I described above (and as you have demonstrated), along with the 7 other features you said you have. The one-week lagged effects that you presented will of course be the lagged values for the same jth project (i.e. group by project number when you calculate these features; see the sketch below). In total, you will have 30 * N observations, where N is the total number of weeks you have per single series.
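A minimal pandas sketch of building those per-project lags (assuming a long-format DataFrame df with columns "Project", "Week" and the target "y"):

import pandas as pd

df = df.sort_values(["Project", "Week"])

# Lagged target per project, so values never leak across projects
for lag in (1, 2, 3):
    df[f"y_lag{lag}"] = df.groupby("Project")["y"].shift(lag)

# Rough calendar feature for seasonality (4 weeks ~ 1 month)
df["month"] = ((df["Week"] - 1) // 4) % 12 + 1

df = df.dropna()   # drop the first weeks of each project that have no lags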
As an aside, this is how the Rossmann and the recent M5 competitions were presented, although in the latter case the data was hierarchical as well.
H: How do we decide between XGBoost, RandomForest and Decision tree?
What do we take into consideration while deciding which technique should be used for a particular dataset? I understand that there isn't any hard and fast rule for this. Do we use XGBoost only when there are a lot of features in the dataset and RandomForest otherwise? Or are we supposed to use trial and error and pick whichever gets us better results every time?
AI: Decision Tree is very useful if you want to be able to explain where your result comes from: you can often print the tree and see how your model came to its answer.
Random Forest can also provide such information, but you'll have to browse all the trees and aggregate some statistics over them, which is not as easy. But Random Forest often gives better results than a Decision Tree (except on easy and small datasets).
Finally, XGBoost can give better results than Random Forest if well-tuned, but you can't explain its predictions easily. It's also harder to tune and takes a lot more time to train. If you don't mind about explaining the results, I'd suggest you try both XGBoost and Random Forest, with a bit of tuning, to see which one fits your dataset best.
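If you want to try them side by side, a minimal sketch (assuming a generic tabular dataset X, y and that the xgboost package is installed) could be:

from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

models = {
    "decision_tree": DecisionTreeClassifier(max_depth=5),
    "random_forest": RandomForestClassifier(n_estimators=300),
    "xgboost": XGBClassifier(n_estimators=300, learning_rate=0.05),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(name, scores.mean())   # pick by accuracy, interpretability, runtime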
H: How is the backbone of two neural networks trained?
Suppose I have a backbone network (a convolutional neural network). After this network ends, the output is fed into two neural networks, both building on the outputs of the feature extractor (the CNN). Now, if I want to train this complete network from scratch on two different tasks, the weights of the layers after the backbone network can be updated easily, but how should I update the weights of the backbone network? I mean, I can compute gradients with respect to the two losses; shall I take the mean of the gradients in the backbone, or does it have to be some weighted sum? If it is a weighted sum, then how would the parameters of the weighted sum be updated?
Thanks
AI: In general, any sort of gradient-based learning is done on scalar functions, i.e. functions f: ℝ^n ↦ ℝ (this is, in fact, what a gradient is defined for). So if you want to define any minimization problem, you need a single value to minimize, not more.
This means: ultimately your loss always has to be a scalar (a single number). Combining the gradients in the middle (i.e. before backpropagating into your backbone) would ultimately be equivalent to just combining the losses, and a weighted loss is easier to implement (see the sketch below).
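A minimal PyTorch sketch of such a weighted-loss setup (the layer sizes, task weights and dummy data below are assumptions for illustration only):

import torch
from torch import nn

backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Flatten())
head_a = nn.Linear(16 * 32 * 32, 10)   # e.g. a classification head
head_b = nn.Linear(16 * 32 * 32, 1)    # e.g. a regression head

params = list(backbone.parameters()) + list(head_a.parameters()) + list(head_b.parameters())
optimizer = torch.optim.SGD(params, lr=0.01)

x = torch.randn(8, 3, 32, 32)          # dummy batch
y_a = torch.randint(0, 10, (8,))       # dummy labels for task A
y_b = torch.randn(8, 1)                # dummy targets for task B

features = backbone(x)                 # shared forward pass
loss_a = nn.CrossEntropyLoss()(head_a(features), y_a)
loss_b = nn.MSELoss()(head_b(features), y_b)

w_a, w_b = 1.0, 0.5                    # fixed task weights (a hyperparameter)
loss = w_a * loss_a + w_b * loss_b     # single scalar loss
optimizer.zero_grad()
loss.backward()                        # gradients flow into both heads and the backbone
optimizer.step()

The task weights can be fixed hyperparameters as here, or themselves learned (e.g. via the uncertainty-based weighting of Kendall et al.), but the backward pass is always taken on one combined scalar.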
For reference, you can look at this talk by A. Karpathy on how Tesla does multi-task learning on a single backbone and how they deal with combining different losses.
H: For NLP, is GPT-3 better than RoBERTa?
I am learning deep learning and I want to get into NLP. I have done LSTM, and now I am learning about vectorisation and transformers. Can you please tell me which algorithm is more effective and accurate?
AI: They are meant for different purposes and they are hardly comparable.
RoBERTa is meant for text classification and tagging tasks. The idea is that you take a pretrained RoBERTa model and finetune it on your (potentially small) classification or tagging dataset. Some examples of tasks where RoBERTa is useful are sentiment classification, part-of-speech (POS) tagging and named entity recognition (NER).
GPT-3 is meant for text generation tasks. Its paradigm is very different, normally referred to as "priming". You basically take GPT-3, give it some text as context and let it generate more text. The context should give GPT-3 the "pattern" of what it must generate. You don't finetune it, just give it some example of acceptable text generation pattern and then let it generate more alike. |
H: How do I know what number of layers is required to achieve the highest accuracy?
I'm learning from Udacity using this video.
I saw this piece of code:
from torch import nn, optim
# `trainloader` is the MNIST DataLoader defined earlier in the lesson

model = nn.Sequential(nn.Linear(784, 128),
                      nn.ReLU(),
                      nn.Linear(128, 64),
                      nn.ReLU(),
                      # nn.Linear(64, 32),
                      # nn.ReLU(),
                      nn.Linear(64, 10),
                      nn.LogSoftmax(dim=1))

criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)

epochs = 5
for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        # Flatten MNIST images into a 784-long vector
        images = images.view(images.shape[0], -1)

        # TODO: Training pass
        optimizer.zero_grad()
        output = model(images)
        loss = criterion(output, labels)
        loss.backward()
        optimizer.step()

        if e == 1 and running_loss == 0:
            print(loss)
        running_loss += loss.item()
    else:
        # The for/else clause runs once per epoch, after the inner loop finishes
        print(f"Training loss: {running_loss/len(trainloader)}")
Then, I was playing around with layers and neurons.
I have tried:
Adding 1 more layer (64, 32) between (128, 64) and (64, 10), and changing the (64, 10) to (32, 10), increasing the epochs.
Replacing the (128, 64) with (128, 32), increasing the epochs
Adding 1 layer (784, 256), (256, 128), increasing the epochs
Etc.
From what I saw here, they can all achieve good accuracy with enough epochs of training.
Then, my question is how can I find the best number of layers and neurons in each layer (architecture) which can:
Achieve the highest accuracy
Still be the simplest model (least number of parameters)
AI: The number of layers and the number of nodes in each layer are model hyperparameters that you must specify. So we cannot directly say a priori that a particular number of hidden layers, number of hidden units, or learning rate is better. But we can use some techniques:
Intuition:
So based on the complexity of your model, you need to choose a value that you feel suits it the most.
Experiment:
Now, if your intuition fails, you should experiment and study the behavior of the model: try plotting the cost function for different values of the hyperparameter, and from that behavior you can usually narrow down a good value.
Generally, if your model is expected to learn complex patterns, then you should go for depth.
For no of neurons in a layer, there are some rule-of-thumb methods:
The number of neurons should be between the size of the input layer and the size of the output layer. (or mean of both)
The number of neurons should be : (2/3 the size of the input layer + the size of the output layer)
There are some other rules and formulas, but I think these should give you a good start. Still, the key is your intuition and experimentation; a simple search like the sketch below is often enough to compare a few candidate architectures.
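For example, a naive search over a few candidate architectures in PyTorch could look like the following (assuming the trainloader from the lesson plus a held-out valloader; the candidate list and training budget are arbitrary assumptions):

import torch
from torch import nn, optim

candidates = [[128, 64], [128, 32], [256, 128, 64]]    # hidden layer sizes to try

def build(hidden):
    sizes = [784] + hidden + [10]
    layers = []
    for i in range(len(sizes) - 2):
        layers += [nn.Linear(sizes[i], sizes[i + 1]), nn.ReLU()]
    layers += [nn.Linear(sizes[-2], sizes[-1]), nn.LogSoftmax(dim=1)]
    return nn.Sequential(*layers)

for hidden in candidates:
    model = build(hidden)
    optimizer = optim.SGD(model.parameters(), lr=0.003)
    criterion = nn.NLLLoss()
    for e in range(5):                                  # short, equal training budget
        for images, labels in trainloader:
            images = images.view(images.shape[0], -1)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    correct = total = 0
    with torch.no_grad():
        for images, labels in valloader:
            images = images.view(images.shape[0], -1)
            correct += (model(images).argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
    n_params = sum(p.numel() for p in model.parameters())
    print(hidden, correct / total, n_params)            # accuracy vs. model size

Whichever candidate reaches comparable validation accuracy with the fewest parameters satisfies both of your criteria.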