H: GridSearchCV Decrease performance RF
Can GridSearchCV parameters perform worse than the default RF?
RF with default values gives rmse_train=4886, r^2_train=0.84, rmse_test=11008, r^2_test=0.22.
RF after GridSearchCV tuning performs worse on the train set (rmse_train=9104, r^2_train=0.45, rmse_test=11091, r^2_test=0.21). This is the code (my first ML algorithm implementation):
#data
import numpy as np
import pandas as pd
from sklearn import metrics
from sklearn.metrics import r2_score
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split, GridSearchCV
features = pd.read_csv("dati_nn.csv")
labels = np.array(features['Cost_damage']) #regression problem
features = features.drop('Cost_damage', axis=1)
features = np.array(features)
train_features, test_features, train_labels, test_labels = train_test_split(features, labels, test_size=0.2, random_state = 123)
#RF default
rf = RandomForestRegressor(random_state = 123)
rf.fit(train_features, train_labels)
pred_train = rf.predict(train_features)
predictions = rf.predict(test_features)
# compute rmse and r_score
rmse_train = np.sqrt(metrics.mean_squared_error(train_labels, pred_train))
rmse_test = np.sqrt(metrics.mean_squared_error(test_labels, predictions))
r2_train = r2_score(train_labels, pred_train)
r2_test = r2_score(test_labels, predictions)
#GridSearch
param_grid = {
'bootstrap': [True],
'max_depth': [20, 30],
'max_features': ['auto', 'sqrt'],
'min_samples_leaf': [1, 2, 3],
'min_samples_split': [5, 8, 10, 12],
'n_estimators': [500, 600, 700, 800, 1000]
}
rf = RandomForestRegressor(random_state = 123)
grid_search = GridSearchCV(estimator = rf,
param_grid = param_grid,
scoring = 'neg_mean_squared_error',
cv = 10,
n_jobs = -1,
verbose = 2
)
grid_search.fit(train_features, train_labels)
best_grid = grid_search.best_estimator_
pred_train = best_grid.predict(train_features)
predictions = best_grid.predict(test_features)
#rmse and r2_score as above
In my opinion, the code is correct but I don't understand why rmse grows.
I searched for many days on the internet to find an answer...no success. I tried to use RandomizedSearchCV, same problem.
Any suggestions? Is my workflow correct?
Thanks in advance!!
AI: These results are different for a valid reason.
The standalone model is overfitting. You can see that from the R^2 score of 0.84 on the train set vs 0.22 on the test set.
The reason is that the standalone model grows its trees to full depth, so it overfits the train data and fits the test data badly.
max_depth : int, default=None
The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
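A quick way to see this is to compare cross-validated error instead of training error. The following is a minimal sketch reusing the variable names from the question (it assumes grid_search has already been fitted):
from sklearn.model_selection import cross_val_score
# 10-fold CV error of the default, unconstrained forest
default_mse = -cross_val_score(RandomForestRegressor(random_state=123),
                               train_features, train_labels,
                               scoring='neg_mean_squared_error', cv=10).mean()
print("default RF CV RMSE:", np.sqrt(default_mse))
# 10-fold CV error of the best configuration found by the grid search
print("tuned RF CV RMSE:  ", np.sqrt(-grid_search.best_score_))
print("best params:", grid_search.best_params_)
If the tuned model has the lower cross-validated RMSE, the grid search is doing its job even though its training RMSE is higher.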
With the grid search, on the other hand, max_depth is constrained (to 20 or 30), so the tuned model cannot memorize the training set. Its training metrics look worse, but they are the more honest picture; note that the test performance is essentially unchanged. |
H: Multiple solutions with same minima in MLP with same weights
I came across an exercise on deep learning from here.
It goes as follows:
Consider a simple MLP with a single hidden layer of $d$ dimensions in the hidden layer and a single output. Show that for any local minimum there are at least $d!$ equivalent solutions that behave identically.
As the network is a MLP with one hidden layer, the equation would be:
$O = W^{(2)}(W^{(1)}x + b_1) + b_2$
Assuming I am correct, where do I need to go from here to get to the solution?
AI: If one permutes the hidden units ($d!$ ways to do that) and moves and renames the connections appropriately, then one effectively has the same MLP with the exact same minima, yet the parameter configuration has changed (in a trivial sense). Thus there are (at least) $d!$ configurations, only trivially different, with the exact same minima.
To see it in your notation, effectively the hidden layer output is the following sum:
$$O_{\text{hidden}} = \sum_{i=1}^d \left( w_{\pi_i} \cdot x + b_{\pi_i} \right)$$
where $\pi_i$ is some ordering of the connections, for example $\pi_i = i$.
But for $d$ items there are $d!$ permutations, thus $d!$ ordering functions $\pi(i) = \pi_i$. Yet the difference is only a re-ordering of the configuration and is trivial. The rest follows from that.
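To write the argument out a bit more explicitly (my own notation, with $\sigma$ denoting the hidden activation, $W^{(1)}_i, b^{(1)}_i$ the parameters of hidden unit $i$, $W^{(2)}_i$ its output weight, and $\pi$ any permutation of $\{1,\dots,d\}$):
$$\sum_{i=1}^{d} W^{(2)}_{i}\,\sigma\left(W^{(1)}_{i} x + b^{(1)}_{i}\right) + b_2 \;=\; \sum_{i=1}^{d} W^{(2)}_{\pi(i)}\,\sigma\left(W^{(1)}_{\pi(i)} x + b^{(1)}_{\pi(i)}\right) + b_2$$
Both sides are the same function of $x$ because a finite sum does not depend on the order of its terms, so every permutation of the hidden units yields a distinct parameter configuration with exactly the same loss, giving at least $d!$ equivalent solutions.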
This is one reason why Neural Networks are non-convex models.
See also: Explanation of why Neural Networks are non convex |
H: Syntax error in function but everything seems correct
I am defining a custom function; the following is my function, but I get an error as below.
def saveSlice(img, fname, path):
    img = np.uint8(img * 255)
    fout = os.path.join(path, f'{fname}.png')
    cv2.imwrite(fout, img)
    print(f'[+] Slice saved: {fout}', end='\r')
I get the error message below:
File "ipython-input-79-e8d731623c8b", line 3
fout = os.path.join(path, f'{fname}.png')
^
SyntaxError: invalid syntax
What am I doing wrong?
AI: If you use a Python version earlier than 3.6, f-strings are not available and that code has invalid syntax. In that case you can use the strings' format method instead, like this (2 lines are modified):
def saveSlice(img, fname, path):
    img = np.uint8(img * 255)
    fout = os.path.join(path, '{}.png'.format(fname))
    cv2.imwrite(fout, img)
    print('[+] Slice saved: {}'.format(fout), end='\r') |
H: BERT Self-Attention layer
I am trying to use the first individual BertSelfAttention layer of the BERT-base model, but the model I am loading from torch.hub seems to be different from the one used in huggingface transformers.models.bert.modeling_bert:
import torch, transformers
tokenizer = transformers.BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
torch_model = torch.hub.load('huggingface/pytorch-transformers', 'model', 'bert-base-uncased')
inputs = tokenizer.encode_plus("Hello", "World", return_tensors='pt')
output_embedding = torch_model.embeddings(inputs['input_ids'], inputs['token_type_ids'])
output_self_attention = torch_model.encoder.layer[0].attention.self(output_embedding)[0]
# compare output with using the huggingface model directly
bert_self_attn = transformers.models.bert.modeling_bert.BertSelfAttention(torch_model.config)
# transfer all parameters
bert_self_attn.load_state_dict(torch_model.encoder.layer[0].attention.self.state_dict())
# <All keys matched successfully>
output_self_attention2 = bert_self_attn(output_embedding)[0]
output_self_attention != output_self_attention2 # tensors are not equal?
Why is output_self_attention2 different from output_self_attention? I thought they would give the same output given the same input.
AI: The problem is that you are using the models in training mode (the default mode), and the stochastic elements like dropout are active in training mode, therefore you obtain different results, not only between different models, but also on different runs of the same model.
You should invoke model.eval(), which makes some elements like dropout or batch normalization behave in inference mode, i.e. dropout is disabled altogether, batch-normalization uses the remembered statistics.
Also, it is better to avoid gradients being computed with torch.no_grad(), as you do not need them. The same effect can be achieved with torch.set_grad_enabled(False).
import torch, transformers
torch.set_grad_enabled(False) # avoid wasting computation with gradients
tokenizer = transformers.BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
torch_model = torch.hub.load('huggingface/pytorch-transformers', 'model', 'bert-base-uncased')
torch_model.eval() # <------ set model in inference mode
inputs = tokenizer.encode_plus("Hello", "World", return_tensors='pt')
output_embedding = torch_model.embeddings(inputs['input_ids'], inputs['token_type_ids'])
output_self_attention = torch_model.encoder.layer[0].attention.self(output_embedding)[0]
# compare output with using the huggingface model directly
bert_self_attn = transformers.models.bert.modeling_bert.BertSelfAttention(torch_model.config)
bert_self_attn.load_state_dict(torch_model.encoder.layer[0].attention.self.state_dict())
bert_self_attn.eval() # <------ set model in inference mode
output_self_attention2 = bert_self_attn(output_embedding)[0]
torch.allclose(output_self_attention, output_self_attention2)  # True: the outputs now match |
H: How to create multi-hot encoding from a list column in dataframe?
I have dataframe like this
Label IDs
0 [10, 1]
1 [15]
0 [14]
I want to create a multihot encoding of the feature IDs. It should look like this
Label ID_10 ID_1 ID_15 ID_14
0 1 1 0 0
1 0 0 1 0
0 0 0 0 1
The goal is to use them as features in Keras. So it is also acceptable to make this transformation using a Keras API.
AI: You can try using sklearn's MultiLabelBinarizer (https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MultiLabelBinarizer.html):
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer

# d is the dataframe from the question, with columns 'Label' and 'IDs'
mlb = MultiLabelBinarizer()
mlb.fit(d['IDs'])
new_col_names = ["ID_%s" % c for c in mlb.classes_]
# Create a new DataFrame with the multi-hot encoded IDs
ids = pd.DataFrame(mlb.transform(d['IDs']), columns=new_col_names, index=d['IDs'].index)
# Concat with the original `Label` column
pd.concat([d[['Label']], ids], axis=1) |
H: GPA prediction of college student
I have a dataset consisting of 8 columns and 15600 rows, with the following columns:
1. Entry_academic_year, which has 5 discrete values (2558, 2559, 2560, 2561, 2562)
2.Faculty (It is the faculty that student has taken like engineering)
3.branch (It is the branch that student has taken like software engineering)
4.Admission type (how the student enter the college)
5.Graduated_high_school (it is the high school where student got graduated)
6.province_of_school
7.GPA_high_school(It is the GPA of student in high school)
8.GPA_college(It is the GPA of the student during college)
I am trying to predict the student's GPA at college by dividing the GPA into 4 quartiles with respect to the percentiles (25, 50, 75). The problem I face is that the Graduated_high_school column has around 1732 unique values, with some schools appearing in only one row, which keeps the prediction accuracy around 30-35%.
Any idea on how to fix it?
AI: Perhaps you can see if Graduated_high_school is correlated in any way to GPA_college? If there is no correlation, you can try to fit a model by dropping the Graduated_high_school column.
Else, you can try to drop rows belonging to under-represented high schools. However, one problem I foresee is that future predictions might have Graduated_high_school that are unseen in the training dataset, leading to problems (e.g. schools that weren't mentioned in the dataset, or if someone decides to use your model, on a dataset from another country). So, if the Graduated_high_school is not important, I would consider dropping it altogether.
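If you decide to keep the column, one pragmatic alternative to dropping rows (a sketch I am adding here, assuming the data sits in a pandas dataframe called df; the threshold of 20 is an arbitrary choice) is to collapse rare schools into a single 'Other' category:
min_count = 20  # schools with fewer rows than this become 'Other'
counts = df['Graduated_high_school'].value_counts()
rare = counts[counts < min_count].index
df.loc[df['Graduated_high_school'].isin(rare), 'Graduated_high_school'] = 'Other'
This also gives schools that are unseen at prediction time a natural bucket to map to.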
Or, maybe you can change Graduated_high_school to something else that is related, such as number of teachers in high school, teacher-student ratio etc. |
H: Exploration in Q learning: Epsilon greedy vs Exploration function
I am trying to understand how to make sure that our agent explores the state space enough before exploiting what it knows. I am aware that we use epsilon-greedy approach with a decaying epsilon to achieve this. However I came across another concept, that of using exploration functions to make sure that our agent explores the state space.
Q Learning with Epsilon Greedy
$sample = R(s,a,s') + \gamma \max_{a'}Q(s',a')$
$Q(s,a) = (1 - \alpha)*Q(s,a) + \alpha*sample$
Exploration function
$f(u,n) = u + k/n$
$Q(s,a) = R(s,a,s') + \gamma \max_{a'} f(Q(s',a'), N(s',a'))$
Where N(s',a') counts the number of times you have seen this (s',a') pair before.
Now my question is, are these 2 above strategies just 2 different ways of Q learning? Because I can't seem to find much details on exploration function but can find plenty on epsilon greedy Q learning. Or can the exploration function be used along with the epsilon greedy Q learning algorithm as a form of some optimization? I am confused as to where exactly would we make use of this exploration function Q learning strategy. Any help/suggestions are much appreciated!
AI: Any exploration function that ensures the behaviour policy covers all possible actions will work in theory with Q learning. By covers I mean that there is a non-zero probability of selecting each action in each state. This is required so that all estimates will converge on true action values given enough time.
As a result, there are many ways to construct behaviour policies. It is possible in Q learning to use equiprobable random action choice - ignoring current Q values - as a behaviour policy, or even with some caveats learn from observations of an agent that uses an unknown policy.
There are practical concerns:
If the behaviour policy is radically different from optimal policy, then learning may be slow as most information collected is not relevant or very high variance once adjusted to learning the value of the target policy.
When using function approximation - e.g. in DQN with a neural network - the distribution of samples of state, action pairs seen has an impact on the approximation. It is desirable to have similar population of input data to that which the target policy would generate*.
In some situations, a consistent policy over multiple time steps gives better exploration characteristics. Examples of this occur when controlling agents navigating physical spaces that may have to deviate quite far from current best guess at optimal in order to discover new high value rewards.
These concerns drive designs of different exploration techniques.
The epsilon-greedy approach is very popular. It is simple, has a single parameter which can be tuned for better learning characteristics for any environment, and in practice often does well.
The exploration function you give attempts to address the last bullet point. It adds complexity, but may be useful in a non-stationary environment since it encourages occasional exploration paths far away from the current optimal one.
Now my question is, are these 2 above strategies just 2 different ways of Q learning?
Yes.
Or can the exploration function be used along with the epsilon greedy Q learning algorithm as a form of some optimization?
Yes it should be possible to combine the two approaches. It would add complexity, but it might offer some benefit in terms of learning speed, stability or ability to cope with non-stationary environments.
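Purely as an illustration (a minimal tabular sketch of my own; the bonus constant k, the epsilon value and the +1 in the denominator are arbitrary choices, not from any reference), a combined action-selection rule could look like this:
import numpy as np

def select_action(Q, N, state, epsilon=0.1, k=1.0):
    # epsilon-greedy over the optimistic values f(Q, N) = Q + k / (N + 1)
    n_actions = Q.shape[1]
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)        # occasional uniform exploration
    f = Q[state] + k / (N[state] + 1.0)            # bonus for rarely tried actions
    return int(np.argmax(f))                       # greedy w.r.t. the optimistic estimate
Here Q and N are |S| x |A| arrays holding the action-value estimates and the visit counts; N[state, action] would be incremented after each step, so the bonus fades as a pair is tried more often.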
Provided the behaviour policy covers all possible actions over the long term, then your choice of how exploration for off-policy reinforcement learning works is one of the hyperparameters for a learning agent. Which means that if you have an idea for a different approach (such as combining two that you have read about), then you need to try it to see if it helps for your specific problem.
* But not necessarily identical, because the approximate Q function would then lose learning inputs that differentiate between optimal and non-optimal behaviour. This can be a tricky balance to get right, and is an issue with catastrophic forgetting in reinforcement learning agents. |
H: Machine Learning - Euclidian Distance Classifier exercise
I'm taking part in an elective subject at university which mainly focuses on the foundations of Machine Learning.
Now we got our first exercise - this task should be done practically in any language (I've chosen Python). Our teacher doesn't explain the relations between theory and practice well, so it's hard for all of us to follow along, which is why I decided to post this question here. I don't want anyone to give me a solution; I just don't understand what he wants, and maybe a hint on how to approach this question would help:
Here is the full exercise:
Euclidean distance classifier
Develop an Euclidian distance classifier as below: Generate 1000
random points corresponding to each class out of 3 classes with
feature size 2 for a 3-class classification problem. For the
simplicity consider the classes following N([0 1 2], I), N([0 0 1], I)
and N([1 0 0],I) respectively.
Generate the output an 1000-dimensional vector whose ith component
contains the class where the corresponding vector is assigned,
according to the minimum Euclidean distance classifier.
I understand that I should generate random points with two features belonging to one of the three classes - okay. But I don't get the second part of the sentence. The classes are normally distributed with a mean(?) vector of [0, 1, 2], [0, 0, 1] and [1, 0, 0]?
What does the second parameter $I$ stand for in the normal distribution?
Does the vector stand for the position/mean of the multivariate normal distribution?
How would you approach this question? Using a k nearest neighbor algorithm?
Thanks for any helpful answers!
Max
AI: Clearly your exercise is not very clear, at least not to me.
I guess you should consider $I$ as the variance or std deviation of the variable, just make sure that is a parameter of your code so you can change it later in case it is not what you assumed it was. Check up in your course if it corresponds to something, but it doesn't seem anything familiar to me.
Not sure what your second question means, but here are the vectors that need to be created according to me :
Vector of points (1st question), each points lives in 2-D space (feature size).
Vector of classes (2nd question), this vector looks something like [1, 0, 2, 2, 1, 1, ...]. For this vector, it would mean that first value of your vector of points belongs to class 1, second value to class 0 , third to class 2, ...
As the exercise is asking about euclidian distance classifier, you just need to create an algorithm that takes a point, computes the euclidian distance with each class center, and classify it in the closest class. (Not sure that kind of classifier can be considered Machine Learning but w/e).
You probably noticed it, but the mean values given are 3-D vectors, which is quite weird since I said our points live in 2-D space. I guess the last value of each mean vector is the class of the generated point (just my guess, not blaming your teacher, but as I said your exercise isn't very clear). So according to that interpretation, the classes would be distributed as follows:
[0, 1], I variance or std deviation for class 2.
[0, 0], I variance or std deviation for class 1.
[1, 0], I variance or std deviation for class 0.
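Under that interpretation, a minimal numpy sketch could look like the following (my own variable names; $I$ is taken as the 2x2 identity covariance):
import numpy as np

means = np.array([[1.0, 0.0],   # class 0
                  [0.0, 0.0],   # class 1
                  [0.0, 1.0]])  # class 2
cov = np.eye(2)                 # "I": identity covariance

# 1000 points per class, stacked into one (3000, 2) array with the true labels
X = np.vstack([np.random.multivariate_normal(m, cov, 1000) for m in means])
y_true = np.repeat([0, 1, 2], 1000)

# minimum Euclidean distance classifier: assign each point to the closest class mean
dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)   # shape (3000, 3)
y_pred = dists.argmin(axis=1)
print("accuracy:", (y_pred == y_true).mean())
Here y_pred is the vector of assigned classes the exercise asks for, with one entry per generated point.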
Hope it helps, if you have any remaining questions, feel free to ask. |
H: How to interpret the rec curve for a regression task?
I am using the forest fire dataset and applied a neural network model. I tried to generate the REC curve; this is how it looks. Pretty weird!!!
I have also applied XGBoost but the REC curve is almost parallel to X-axis.
I don't understand how to make sense out of it?
I also want to understand how to interpret REC curve in general?
AI: The title of the graph says "REC curve for various models" but this curve is for a single model. It shows as Y the percentage of "correct" predictions by the model depending on what "correct" means, which is given as X: for example if correct means an error less than 10 ha (on X axis), then the model is correct around 55% of the time (on Y axis). In other words the graph splits the instances between correct/incorrect ones based on whether the error value (difference between true and predicted value) is lower than a threshold X.
In particular this graph shows that:
this model predicts very few values with an error less than 8
this model very often (around 50%) predicts values which are 8-9 ha off the true value.
there are couple more values that the model predicts with an error around 15.
more than 40% of the predicted values are not visible on this graph, which means that their error is higher than 20.
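If it helps, an REC curve is easy to compute yourself; here is a sketch assuming y_true and y_pred are numpy arrays of true and predicted values:
import numpy as np
import matplotlib.pyplot as plt

errors = np.sort(np.abs(y_true - y_pred))                 # absolute error per instance
fraction = np.arange(1, len(errors) + 1) / len(errors)    # cumulative share of instances

plt.plot(errors, fraction)    # x: error tolerance, y: fraction of predictions within it
plt.xlabel("error tolerance (ha)")
plt.ylabel("fraction of predictions within tolerance")
plt.show()
Reading the plot is then exactly as above: pick a tolerance on the x-axis and read off how often the model is "correct" under that tolerance.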
I'd suggest plotting the true vs. predicted value, it's easier to interpret (although it can be harder to see where are the large groups of instances). |
H: how to calculate similarity between users based on movie ratings
Hi, I am working on a movie recommendation system and I have to find the similarity between the main user and other users. For example, the main user watched 3 specific movies and rated them 8, 5, 7. A user who happened to watch the same movies rated them 8, 2, 3, another user of the same kind rated those movies 7, 6, 6, and some other user only watched the first two movies and rated them 8, 5. Now the question is: to which user is the main user closest? I tried to come up with functions, but they are prone to fail. Can you help me?
AI: Use the Euclidean distance
$d(p,q) = \sqrt{\sum_{i=1}^{n}(q_i - p_i)^2}$
and check main user vs all other users. |
H: Is probabilistic machine learning just the mathematical background of machine learning?
I wanted to get started with machine learning. I went through the contents of the ML course by Andrew Ng and found that, though the course was based on mathematics, it wasn't heavy on probability or statistics.
But in many university curricula the book "Probabilistic Machine Learning: An Introduction" by K. Murphy is used, and it seems to be full of probability, statistics, etc.
I wanted to know whether the ML being used in industry - like neural networks or other ML techniques - has a statistical or probabilistic foundation that is often not bothered with, or whether it is a newly emerging field whose mathematical basis is not from probability or statistics.
I really want to know if the ML in academia is really the foundation part of the ML used in the industry or is it the ML in its old form?
And finally, are the contents of the book "Probabilistic Machine Learning: An Introduction" relevant to the ML being used today?
I am aware that these are quite broad and open-ended questions from someone entering the field; I would be happy with informative answers or links.
AI: Knowledge of probability and statistics is absolutely important. It might feel like you can do quite a bit without it and can get things to work using the high-level ML libraries that we have today. However, if you are a serious practitioner, you will figure out that beyond a point your knowledge of how these techniques work and are tuned is very hollow, and that will automatically lead you to the more mathematically inclined books that give you the foundational knowledge. |
H: Negative log-likelihood not the same as cross-entropy?
The negative log-likelihood
$$
-\sum_{i=1}^{m}\log p_{model}(\mathbf{y}^{(i)} | \mathbf{x}^{(i)} ; \boldsymbol{\theta})
$$
can be multiplied by $\frac{1}{m}$ after which the law of large numbers can be used to get
$$
-\frac{1}{m} \sum_{i=1}^{m}\log p_{model}(\mathbf{y}^{(i)} | \mathbf{x}^{(i)} ; \boldsymbol{\theta}) \rightarrow -E(\log p_{model}(\mathbf{y} | \mathbf{x} ; \boldsymbol{\theta}))
$$ as the sample size $m$ tends to infinity. This limiting quantity is the "cross-entropy".
Now here comes my question: the book I am reading (Deep Learning by Goodfellow et al.) mentions several attractive properties of using the negative log-likelihood (like consistency). But meanwhile, it also uses cross-entropy directly as the loss function of maximum likelihood estimators.
This doesn't make sense to me - to talk about negative log-likelihood and cross-entropy as being identical. It would make sense for me to talk about NLL as an approximation of the cross-entropy.
I mean, they give different results - so why use one over the other? This seems like a valid question when they do not give the same results and must thus also affect the performance. Like, I am only aware of neural networks that use cross-entropy and not ones that use NLL - how come?
Maybe cross-entropy even holds other properties than negative log-likelihood?
AI: I think the issue is that while
$$
-\frac{1}{m} \sum_{i=1}^{m}\log p_{model}(\mathbf{y}^{(i)} | \mathbf{x}^{(i)} ; \boldsymbol{\theta}) \rightarrow -E(\log p_{model}(\mathbf{y} | \mathbf{x} ; \boldsymbol{\theta}))
$$
is true, the right side is not what is being referred to as the cross-entropy. Indeed, we have no access to the expectation over the true population distribution / data generating process; a few paragraphs down, we find:
We can thus see maximum likelihood as an attempt to make the model distribution match the empirical distribution $\hat{p}_{data}$. Ideally, we would like to match the true data-generating distribution $p_{data}$, but we have no direct access to this distribution.
They also distinguish throughout that the expectations are over $\hat{p}_{data}$, the training data empirical distribution, not $p_{data}$, the underlying population distribution.
The left side of your limit is already an expectation, but over the finite training data, and that is what is referred to as the cross-entropy. ("Cross-entropy" is a broader term, for any pair of probability distributions.) Goodfellow et al note this (my emphasis):
Any loss consisting of a negative log-likelihood is a cross-entropy between the empirical distribution defined by the training set and the probability distribution defined by model.
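Spelled out with the empirical distribution $\hat{p}_{data}$, which puts mass $\frac{1}{m}$ on each training pair, the identity is:
$$-\frac{1}{m}\sum_{i=1}^{m}\log p_{model}\left(\mathbf{y}^{(i)} \mid \mathbf{x}^{(i)};\boldsymbol{\theta}\right) = -\,\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim \hat{p}_{data}}\left[\log p_{model}(\mathbf{y}\mid\mathbf{x};\boldsymbol{\theta})\right],$$
which is exactly the cross-entropy between $\hat{p}_{data}$ and $p_{model}$, not an approximation of it.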
So, the answer to your questions is that the premise is incorrect: (this) cross-entropy is the same as negative log-likelihood. Taking your questions with the limiting and population cross-entropy instead, the answer is "we don't have access to the latter". It would be the better target to be sure, but our lack of that information is the point of modeling in the first place. |
H: Error is occurring setting an array element with a sequence
I have the columns in my Data Frame as shown below:
Venue city Venue Categories
Madison London [1, 1, 1, 1, 0, 0, 0, ...,0,0]
WaterFront Austria [0, 1, 1 0, 0, 0, 0, ....0,1]
Aeronaut Marvilles [0, 0, 0, 0, 1, 1, 1, ....1,1]
Aeronaut Paris [0, 1, 1, 0, 0, 0, 0, ....1,1]
Gostrich New York [0, 0, 1, 0, 0, 0, 0, ....1,0]
I am passing this data to my machine learning model , but model.fit is not accepting the input , My code is shown below , that I am trying ,
labelencoder = LabelEncoder()
dff['Venue']=labelencoder.fit_transform(dff['Venue'])
dff['city']=labelencoder.fit_transform(dff['city'])
Let's say I want to increase the number of features and add more columns; then I have to write everything again for each feature, just like shown below, if I want to add a type column and an owner column:
city = dff['city'].values
owner = dff['owner'].values
type = dff['type'].values
categories = dff['Venue Categories'].values
labels = np.array(dff['Venue'].values)
data = np.array([make_sample(city[i], owner[i], type[i], categories[i]) for i in range(len(city))])
This looks weird. I want to make it generic, meaning there should be no need to touch the code if we increase the number of columns.
AI: I managed to make it work, by combining the city column with the venue categories column into a 2D (numpy) array which can be used by the RandomForestClassifier of sklearn.
Example code:
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import numpy as np
def make_data(df, target_column='Venue', categories_column='Venue Categories'):
    categories = df[categories_column].values
    other_features = []
    for col in df.columns:
        if col in [categories_column, target_column]: continue
        other_features.append(df[col].values)
    return np.array([[col[i] for col in other_features] + categories[i] for i in range(len(categories))]), np.array(df[target_column].values)
labelencoder = LabelEncoder()
dff['Venue'] = labelencoder.fit_transform(dff['Venue'])
dff['city'] = labelencoder.fit_transform(dff['city'])
data, labels = make_data(dff, 'Venue', 'Venue Categories')
train_data,test_data,train_labels,test_labels = train_test_split(data,labels,test_size=0.20)
model = RandomForestClassifier()
model.fit(train_data,train_labels)
Note: make sure that the Venue Categories column has the same number of elements in each row of data, or else a new problem will arise. If needed, fill with dummy values. |
H: ideal algorithms to demonstrate overfitting or underfitting
When one tries to look up concepts such as overfitting and underfitting, the most common thing that pops up is polynomial regression. Why is polynomial regression often used to demonstrate these concepts? Is it just because it can be easily visualised like the graphs here:
https://scikit-learn.org/stable/auto_examples/model_selection/plot_underfitting_overfitting.html
But then most ml algorithm such as kmeans clustering can also be used. Then why is it usually polynomial regression only? Are there any other similar algorithms that could be used?
AI: Linear algebra tells us that N linearly independent vectors span all of N-dimensional space. In a regression setting this translates into the fact that if you have N observations and N features per observation, then your regression model has a fair chance of achieving 100% accuracy on the NxN training data. And your chances are even better if the NxN feature set is mostly composed of noise, because N randomly generated N-dimensional vectors are (almost surely) linearly independent.
The high accuracy on the train set is because the model has overfit on the random noise in the data. Such models almost never generalize well on an out of sample test set.
So, what's happening in polynomial regression is that as you add new features, they may add more noise (potentially useless information), but that noise is very capable of being used by the model to explain the variance in the train set, and never on the test set.
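A classic way to see this in a few lines (a sketch with made-up noisy data; fitting a degree-9 polynomial to 10 points forces near-interpolation):
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, 10)   # signal + noise
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

for degree in (1, 3, 9):
    coefs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(degree, round(train_mse, 4), round(test_mse, 4))
Degree 1 underfits (both errors high), while degree 9 drives the training error towards zero by fitting the noise and the test error blows up, which is exactly the picture in the scikit-learn example linked above.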
That's why it is the ideal choice: it allows you to easily add new features derived from existing ones and demonstrate overfitting on the train set. |
H: What if irrelevant features impact the outputs?
We know that weather conditions in China are not very useful for predicting the price of a house in Spain (from life experience). However, when we drop the China weather feature, the accuracy drops a lot. Should we keep it?
AI: It would mean that the model trained with some irrelevant features is overfit: a good model uses only features which have some predictive power on the target variable, so if the sample is representative enough any irrelevant feature is ignored or assigned very little weight. However if the training dataset is too small or the model too complex then it uses details which happen by chance, so it could use some irrelevant features: this is overfitting.
In case the performance on the test set was high despite the model being overfit, it's likely that there is some data leakage additionally. |
H: Low scale ML/statistical techniques for data poor settings
I have two separate problems, but both suffer from a paucity of data problems:
logistic regression
Time series prediction
For logistic regression, I have a tiny dataset with 10 observations which have variables such as:
age, Marital_Status, income, gender, and car_purchase_status (outcome flag with Yes/No values).
Now I have a new 11th customer with variables such as Age, gender, Marital_Status, and income. Now I would like to know whether this 11th customer will buy a car or not.
Should I spend resources to influence him to buy a car? Am I spending my resources on the right customer? For example: Is there any way that I can find out that the 11th customer has 70% or 80% pc chance of buying a car?
So spending some marketing efforts such as calls can help us convince him to buy a car (100%). So, how can I do this? Any advice, please? Should I just give up straight away that this problem is impossible to solve with such low data, or are there any simple statistical techniques that can help me gain some insights about the 11th customer?
For Time series prediction, I have only 10 observations, each spaced at 20 days gap. For ex: I have their revenue generated for day1,day21,day41,day61,day81.....day201.
Now, with the given 10 observations, I would like to predict the revenue generated for day500, day321, day621 etc.
So, is it possible to run time series forecasting with such a short series? Can you guys guide me on this, please? Here, also, should I give up because of low data points, or are there any methods that I can use to predict the future timestamp points based on short input time series?
Can you guys please help me with the list of steps/topics that can help me do this?
AI: Generally, I'd pick a very simple, transparent/explainable model and use the results in a semi-automated way. That is, do not just derive a prediction but rather insights. You could, for example, use a (or multiple) decision tree(s) which you pre or post prune. The result could be a tree with, let's say, just 1-3 features to find simple rules like "if a customer is married and at least X years old, they have a high chance of making a purchase". With logistic regression you may use coefficients to identify features which influence the dependent variable the most. These (qualitative or semi-quantitative) rules should then be validated with domain experts.
Moreover, you need to be transparent about the accuracy and precision of your estimates. In the above example, leaf purity would provide some intuition. If you report any quantitative measures (which I'd be careful with), you may want to consider confidence intervals (see here or chapter 5 in Tom Mitchell's "Machine Learning", for example). (With only 10 samples typical assumptions about normal distribution will not hold here though)
Regarding the time series I would start even simpler. Depending on the number of customers, I'd start by visualizing some or all historical data in a line plot (sales per customer over time) and check the min, max and mean per customer. This gives some intuition regarding potential trends. For example, if all observations remain constant over time for a given customer, whether there is an upward/downward trend or if the data has high variance with no clear trend. Also, there may be clusters of customers which show similar patterns. Obviously, this is neither Machine Learning nor a rigorous statistical analysis but rather a pragmatic approach supported by some basic data analytics.
What you need to be very careful with is the time horizon of any quantitative prediction: based on 10 observations at $t \in \{1,21,...,201\}$ you may derive some conclusions for, let's say, $t<=301$ (to make up a total out of the blue ballpark figure) but $t=621$ is very far in the future. Also, you need to keep seasonality in mind. For example: If your observations are all from October to April of a given year and assuming you have a winter/summer seasonal pattern. Then you cannot infer a lot for months May to September. To understand the limitations and forecast potential of your time series better, I'd speak to subject matter experts, e.g. in sales & marketing. It could also be helpful to understand their forecasting approach and cross check any insights you derive with their predictions.
But, as Erwan pointed out, be very careful when deriving conclusions. Applying some "ML magic" will not find a useful pattern if there is insufficient data to find any signal. And of course additional data collection would be reasonable if that is an option. |
H: Keras: apply multiple filters to each feature map in CNN
I am new to Keras, and I want to do the following: take a 2D image, and apply four 2D convolution kernels to it, giving four 2D feature maps. I could accomplish this. But then I want to apply two distinct 2D convolutions to each of those 4 maps, giving 8 feature maps. Is that possible?
Here's what I have so far:
import keras
from keras.layers import Conv2D
input_img = keras.Input(shape=(N_rows, N_cols, 1))
x = Conv2D(4, (3,3))(input_img)
But then I don't know how to apply 2 kernels to each of the 4 channels, so that I have eight 2D maps.
AI: You may try Keras DepthwiseConv2D layer
Depthwise Separable convolutions consist of performing just the first step in a depthwise spatial convolution (which acts on each input channel separately). The depth_multiplier argument controls how many output channels are generated per input channel in the depthwise step.
It will convolve each channel separately, as shown in this depiction.
Image credit - Blog by Chi-Feng Wang
With the depth_multiplier argument, you can add more filters, i.e. more copies of the "triplet" shown in the depiction.
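For the setup in the question, a minimal sketch (reusing input_img, x, N_rows and N_cols from the question's code; with 4 input maps and depth_multiplier=2 you get the 8 maps you want):
from keras.layers import Conv2D, DepthwiseConv2D
import keras

input_img = keras.Input(shape=(N_rows, N_cols, 1))
x = Conv2D(4, (3, 3))(input_img)                     # 4 feature maps
x = DepthwiseConv2D((3, 3), depth_multiplier=2)(x)   # 2 kernels per input map -> 8 maps
For reference, the documentation describes depth_multiplier as follows: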
depth_multiplier:
The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to filters_in * depth_multiplier. |
H: Optimal points of $f(x,y)=x^2 + y^2 + \beta xy + x + 2y$
I am self-learning basic optimization theory and algorithms from "An Introduction to Optimization" by Chong and Zak. I would like someone to verify my solution to this problem, on finding the minimizer/maximizer of a function of two variables, or any tips/hint to proceed ahead.
For each value of the scalar $\beta$, find the set of all stationary points of the following two variables $x$ and $y$
$$f(x,y) = x^2 + y^2 + \beta xy + x + 2y$$
Which of those stationary points are local minima? Which are global minima and why? Does this function have a global maximum for some value of $\beta$.
Solution.
We have:
\begin{align*}
f(x,y) &= x^2 + y^2 + \beta xy + x +2y\\
f_x(x,y) &= 2x+\beta y + 1\\
f_y(x,y) &= \beta x + 2y + 2\\
f_{xx}(x,y) &= 2\\
f_{xy}(x,y) &= \beta \\
f_{yy}(x,y) &= 2
\end{align*}
By the first order necessary condition(FONC) for optimality, we know that if $\nabla f(\mathbf{x})=0$, then $\mathbf{x}$ is a critical point.
Thus,
\begin{align*}
f_x(x,y) &= 2x+\beta y + 1 = 0\\
f_y(x,y) &= \beta x + 2y + 2 = 0
\end{align*}
Solving for $x$ and $y$, we find that:
\begin{align*}
x = \frac{\begin{array}{|cc|}
-1 & \beta \\
-2 & 2
\end{array}}{\begin{array}{|cc|}
2 & \beta \\
\beta & 2
\end{array}}=\frac{-2+2\beta}{4-\beta^2}=\frac{2\beta-2}{4 -\beta^2}
\end{align*}
\begin{align*}
y = \frac{\begin{array}{|cc|}
2 & -1 \\
\beta & -2
\end{array}}{\begin{array}{|cc|}
2 & \beta \\
\beta & 2
\end{array}}=\frac{-4+\beta}{4-\beta^2}=\frac{\beta -4}{4 - \beta^2}
\end{align*}
The second order necessary and sufficient conditions for optimality are based on the sign of the quadratic form $Q(\mathbf{h})=\mathbf{h}^T \cdot Hf(\mathbf{a}) \cdot \mathbf{h}$.
The Hessian of $f$ is given by,
$$Hf(\mathbf{x})=\begin{bmatrix}
2 & \beta \\
\beta & 2
\end{bmatrix}$$
Thus the leading principal minors are $d_1 = 2 > 0$ and $d_2 = 4 - \beta^2$, so $f$ has a local minimizer if and only if $4 - \beta^2 > 0$. $g(\beta) = 4 - \beta^2$ is a downward-facing parabola, so this expression is positive if and only if $-2 < \beta < 2$. The function $f$ has no global maximum.
Question. How do I find the actual global minima?
AI: If the function is convex, then every local minimum is a global minimum.
If $4-\beta^2 > 0$, the Hessian is positive definite, so the function is convex and the local minimum found above is indeed the global minimum.
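Concretely, plugging the stationary point you already computed back in: for $-2 < \beta < 2$ the global minimizer is
$$(x^*, y^*) = \left(\frac{2\beta-2}{4-\beta^2},\; \frac{\beta-4}{4-\beta^2}\right),$$
and the global minimum value is simply $f(x^*, y^*)$.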
If $\beta = \pm 2$, then we have $f(x,y)=(x\pm y)^2+x+2y$, $f(x, \mp x)=x\mp2x$ of which we can make it arbitrary large or small.
If $4-\beta^2 < 0$, the Hessian is indefinite and the stationary point is a saddle point.
Since the Hessian can never be negative definite, the function has no global maximum. |
H: Basic doubt regarding "training" of a YOLO model
So I have just recently started exploring machine learning, and for a project I was required to train the YOLO v5 model. I first tried it on the coco128 dataset: https://www.kaggle.com/ultralytics/coco128.
repository of the yolo v5:
https://github.com/ultralytics/yolov5
I followed this tutorial: https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data step by step and managed to train the model successfully; it is detecting objects as I intended it to.
I wanted to have a very basic overview an what is basically happening when we are "training" the model.
The command used for training was:
python train.py --img 640 --batch 16 --epochs 5 --data coco128.yaml --weights yolov5s.pt
From what I understand, this tells the command line to run the file "train.py" (which uses the argparse library) with the following arguments: 640 (image size), 16 (batch size for batch gradient descent), 5 (no. of epochs), coco128.yaml (the config file), and yolov5s.pt (the weights file).
Doubts
The tutorial mentions that the weights are pretrained. This is confusing me because I thought the entire objective of "training" a model is to find the correct weights for the parameters. If the weights are already supplied, what exactly is the training achieving? It has been fed a bunch of images along with the labels for the bounding boxes for that images. What does it do with these images and labels to become capable of predicting objects?
The coco128.yaml mentions a path for the "validation" folder. What exactly is a "validation" folder used for?
AI: About Pre-trained model : This is a very common practise (especially in image recognition) and here is how we use it.
Let's imagine you want to recognize different types of food (beef, pork, vegetables, ...). You know some networks already exist that recognize all types of objects (boats, cars, food, sofas, ...). The objective of transfer learning is to use these models and adapt them to your task, so you pick an already trained model and train it again, but only with your data (in our example, types of food). Being able to recognize and distinguish different objects is not useless for your task, so even if your network doesn't do the same thing as the initial model, what the initial model has learned is used as a foundation to learn your task.
We call this fine-tuning the model, it is basically using a complicated network that would be way too heavy to train because the dataset is not big enough, and just adjusting it so it performs our specialized task.
In your tutorial, the dataset Coco128 is made of 128 images, which is not enough to train a model as big as YOLOv5. So you use the pretrained weights and fine-tune them with your dataset. As far as I understand, the objective of the tutorial is to see how the model overfits these 128 images.
About validation, we usually splits datasets into 3 parts :
Training set : used to train the models. (~80% dataset)
Testing set : used to test the trained models, this part is the one used to estimate the accuracy of the model. (~15% dataset)
Validation set : used at the very end, to validate the fact that our model works correctly. (~5% dataset)
In many cases, Testing and validation datasets are the same (because validation dataset often is not necessary).
In your tutorial, what they call the validation set is what I usually call the testing dataset, and it is the same as the training set, so basically all your sets are the same. (It is very, very uncommon to do so, but since the goal of the tutorial is to see how we overfit this set, it makes sense here.) |
H: in binary classification where class labels are {-1, 1} is preprocessing needed?
In machine learning we convert string labels using LabelEncoder, e.g. {"malignant", "benign"} -> {0, 1}.
I am wondering whether converting labels to any other numbers matters, in my scenario to {-1, 1}.
If it matters, a reason would be helpful (or a pointer to a helpful resource)!
Thanks in advance!
AI: You need {0,1} or {-1,1} labels depending on the output of your model. If you have a Sigmoid output use {0,1}, while TanH outputs work with {-1,1}.
No label choice is inherently right or wrong, as long as it's compatible with your model architecture and gives you good results.
EDIT:
In case of logistic regression you must use {0,1}, that is because this class of models has a Sigmoid output. Sigmoid function is always bounded in [0,1] and it can't take values outside of that range. It could never reach -1 and training won't work. |
H: reprocessing steps for images before training classification models
I have a data set of images for classification task.
I read some articles about image reprocessing (before training CNN models) which summarize in those steps:
scale image values (img / 255.0)
remove noise (using Gaussian blur)
morphology
I'm not sure when to use each step and what is the right order of those steps:
Do we need to remove noise before scaling images ? (or it dosn't matter) ?
I didn't found many articals about morphology step:
2.1 When will we use morphology ? (Is it always right to use this reprocessing step ?
2.2 What is the right order ? use morphology after scaling and removing noise ?
AI: I assume you are talking about preprocessing rather than reprocessing.
Dividing the image by 255.0 is a normalization technique called min/max normalization. Like the other normalization methods, min/max normalization is used to improve the performance of CNNs.
Image processing methods like Gaussian blur, average filters etc. are used to remove noise, as you said. You might want to denoise your input image to avoid performance losses.
The purpose of using morphology methods depends on what your data is and what you want to do to your dataset with these methods. You can refer to this link to see the methods that cv2 provides, with code snippets.
Greetings |
H: How does the random forest vote work?
I have a question: how is the voting done in random forests? I can't quite understand it rationally: since we have bootstrap samples drawn and have built decision trees based on them, where is the new data point taken from to do the voting and extract the results?
AI: Always separate training a model from using the model to make predictions. During training, bootstrap samples are drawn and trees built using those resamples. Voting only occurs when predicting, and all of the trees are used, whether your datapoint is from the original training set or not.
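As a tiny illustration of the prediction-time vote (my own toy numbers, for a classification forest):
import numpy as np

# suppose 5 trained trees each predict a class for one new data point
tree_predictions = np.array([1, 0, 1, 1, 0])

# the forest's prediction is the majority vote
forest_prediction = np.bincount(tree_predictions).argmax()   # -> 1
In scikit-learn the fitted trees are available through the forest's estimators_ attribute; note that its RandomForestClassifier actually averages the trees' predicted class probabilities (a soft vote) rather than counting hard votes, but the idea is the same.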
There is a bit of a caveat here though, because we may be interested in the "out-of-bag" scores of the random forest, in which case we want to let the trees that didn't see a given training datapoint vote on the outcome for that datapoint. In that case, we need to keep track of which bootstrapped samples contain which datapoints, and then predict for each using the appropriate subset of trees. |
H: Criteria for saving best model during training neural network?
I am doing 4-class semantic segmentation with U-net using generalised dice loss as loss function.
The general approach to saving the best model during training is to monitor the validation loss at each epoch and save the model if the validation loss drops below the previous minimum.
But I am interested in the model which gives the best average dice score over the 4 classes.
During training in my case, using validation loss as the criterion doesn't lead to the best average dice score. So which should I consider the best model: the one with the lowest validation loss or the one with the highest validation dice score?
Below is my validation loss and avg dice score after each epoch. Out of this which epoch gives the best model?
epoch 1/10 validation loss: 0.95 avg dice score: 0.17
epoch 2/10 validation loss: 0.86 avg dice score: 0.23
epoch 3/10 validation loss: 0.77 avg dice score: 0.34
epoch 4/10 validation loss: 0.74 avg dice score: 0.40
epoch 5/10 validation loss: 0.71 avg dice score: 0.45
epoch 6/10 validation loss: 0.69 avg dice score: 0.34
epoch 7/10 validation loss: 0.79 avg dice score: 0.45
epoch 8/10 validation loss: 0.75 avg dice score: 0.51
epoch 9/10 validation loss: 0.76 avg dice score: 0.36
epoch 10/10 validation loss: 0.75 avg dice score: 0.38
If I go by validation loss as the criterion, epoch 6 gives the best model; if I choose average dice score as the criterion, epoch 8 gives the best model. How do I choose?
AI: The loss is mostly a measure that helps the model learn and is not looked at too much when deciding which model to select. A more business oriented measure is often used for this, e.g. accuracy. Since in this case you are mostly interested in the dice score it would make most sense to select the model from epoch 8. |
H: Slight confusion on the learning process
Hi guys I have a slight confusion on the learning process of neural networks.
The input layer receives inputs, which go through the hidden layers and then into the output layer. How does the neural network know that the outputs at the output layer are incorrect?
When the error is calculated at the output layer, it needs the predicted output which is fine but how is the actual output found?
Thanks
AI: Well, you make a very good point: this is the most painful part of machine learning, we have to label data to make it usable for training models. By labelling the data, we mean going through the dataset and saying what the output of each input is. It means that to each input we attribute a true output.
The inputs of the learning function in scikit-learn, for example, are (almost) always of the form model.fit(X, Y), with X the inputs and Y the corresponding expected outputs.
Labelling is a tedious and long task that must be done by hand most of the time. Some websites even pay people to label data (especially for image classification or segmentation), such as Amazon Mechanical Turk. |
H: Linear Learning Machines
I was reading about Linear Learning Machines (LLMs) and learned that they are closely related to SVMs. I would like to know an example of a concrete problem that can be classified by an LLM, as I couldn't find any promising example or explanation of this.
Appreciate your time.
AI: Not sure at all since I am not very familiar with these old concepts.
But I think what you call LLMs (not a very common concept, it seems) are algorithms that solve linearly separable classification problems. SVMs are algorithms that look for a hyperplane that maximizes the margin to the data and can thus be considered LLMs. A quick reminder of how an SVM solves a linearly separable problem:
We escape LLM's grasp when trying to solve a nonlinear classification problem, as represented on the left of the following picture.
SVMs can still solve this type of classification problem with the kernel trick, which applies a non-linear function to turn the classification into a linear problem.
Hope this can help your understanding of the difference, all images are from Wikipedia. My apologies for not knowing how to resize these images :( |
H: What's the issue with my code for visualizing linear regression in 3 dimensions with matplotlib?
I am trying to use linear regression that takes two variables, "Idade" and "LF", and tries to predict a third one, "DGAF". I'm trying to plot both the scatterplot of the observations and the model prediction on the same graph. Below is the code I used (in Python).
X = df[["Idade","LF"]]
y = df["DGAF"].values.reshape(len(df["DGAF"]),1)
reg = LinearRegression()
reg.fit(X, y)
fig = plt.figure()
ax = Axes3D(fig)
ax.scatter(X['Idade'],X['LF'], y,s=10)
x_pred = np.linspace(0,100,1000)
y_pred = np.linspace(0,100,1000)
xx_pred, yy_pred = np.meshgrid(x_pred, y_pred)
model_viz = np.array([xx_pred.flatten(), yy_pred.flatten()]).T
ax.set_xlabel('Years since oil change')
ax.set_ylabel('LF')
ax.set_zlabel('DGA Score')
ax.plot(model_viz[0], model_viz[1], reg.predict(model_viz),color='red')
plt.show()
I get this error:
ValueError: input operand has more dimensions than allowed by the axis remapping
I am able to do the scatterplot, the issue seems to be with plotting the prediction. How can I solve this?
AI: Following the suggestion given by @Ubikuity, I changed the ax.plot to ax.scatter, but there were still some issues with the dimensions. It was due to the transposition of the matrices, that was being done in the wrong place. The code below works correctly.
X = df[["Idade","LF"]]
y = df["DGAF"].values.reshape(len(df["DGAF"]),1)
reg = LinearRegression()
reg.fit(X, y)
fig = plt.figure()
ax = Axes3D(fig)
ax.scatter(X['Idade'],X['LF'], y,s=10)
x_pred = np.linspace(0,100,100)
y_pred = np.linspace(0,100,100)
xx_pred, yy_pred = np.meshgrid(x_pred, y_pred)
model_viz = np.array([xx_pred.flatten(), yy_pred.flatten()])
ax.set_xlabel('Years since oil change')
ax.set_ylabel('LF')
ax.set_zlabel('DGA Score')
ax.scatter(model_viz[0], model_viz[1], reg.predict(model_viz.T),color='red')
plt.show() |
H: Can someone explain to me how to use a predictive model to predict something other than the training set
So let's say I create a logistic model to predict who will open a loan, based on an email list that includes who opened and who didn't, and the model is 90% accurate. The model says age, income, and bank engagement are three key variables that decide who opens a loan. Is there a way to apply this model to a different email list with the same variables to predict who will open a loan? Or what percentage of the people will open the loan? Or will I just need to analyze the data on the list myself to determine this?
Sorry, probably a dumb question, but it's one thing I've struggled to figure out on my data scientist journey.
AI: Yes, you can predict who will open a loan with this model. You have to keep several things in mind. The model expects the new data to be from the same distribution as the training data, meaning the variables have the same distribution and targets as well.
You can not train on one set where 3% opened a loan and then predict on a set where you expect that 90% open a loan. This data is then too different for the model to accurately predict on.
Remember that your model only has 90% accuracy, so it is not guaranteed that a specific customer will open a loan when the model predicts so. |
H: What exactly is convergence rate referring to in machine learning?
My understanding of the term "Convergence Rate" is as follows:
Rate at which maximum/Minimum of a function is reached, so in logistic
regression rate at which gradient decent reaches global minimum.
So by convergence rate I am guessing it is measure of:
time measured from start of gradient descent until it reaches global maximum.
the average distance our model went downhill (I don't know the technical term...) per iteration.
Can someone verify whether one of my guesses is true, and if not, explain what it means?
AI: A rate is always a gain per some amount of time/steps. A rate can exist even if the optimum is never reached. In supervised learning a loss function is defined, which is expected to have a global minimum that we try to reach by gradient descent. How much closer we get with each timestep/iteration/epoch/batch is the rate of convergence. |
H: Best parameters to try while hyperparameter tuning in Decision Trees
I want to post-prune my decision tree as it is overfitting. I can do this using cost-complexity pruning by adjusting the ccp_alpha parameter; however, this does not seem very intuitive to me.
From my understanding there are some hyperparameters such as min_samples_split, max_depth, min_impurity_split, min_impurity_decrease that will prune my tree to reduce overfitting.
Since I am working with a large dataset, it takes a long time to train, so I don't want to just do trial and error.
What are some possible combinations of the above-mentioned hyperparameters that will prune my tree? The reasoning behind choosing a particular combination would also be helpful.
Thanks in advance!
AI: There are no combinations that work for all cases; hyperparameter tuning is still something that is mostly done by trial and error. Things like grid search and random search exist, though.
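For example, a randomized search over the parameters you listed could look like this (a sketch assuming a classifier, swap in DecisionTreeRegressor otherwise; the candidate values are arbitrary starting points, not recommendations):
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import RandomizedSearchCV

param_distributions = {
    "max_depth": [3, 5, 10, 20, None],
    "min_samples_split": [2, 10, 50],
    "min_samples_leaf": [1, 5, 20],
    "ccp_alpha": [0.0, 0.001, 0.01],
}
search = RandomizedSearchCV(DecisionTreeClassifier(random_state=0),
                            param_distributions, n_iter=20, cv=5, n_jobs=-1)
# search.fit(X_train, y_train); print(search.best_params_)
Randomized search only tries n_iter of the possible combinations, which keeps the cost manageable on a large dataset compared to an exhaustive grid search.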
A good start is always the default setting. An idea if performance is an issue is to tune on a small percentage of the training set to later switch to the full set. |
H: Using vgg16 or inception with wights equals to None
When using pre-trained models like VGG16 or Inception,
it seems that one of the benefits of using a pre-trained model is to save training time.
Is there a reason to use these pre-trained architectures without loading the weights (i.e. with random weights)?
AI: The advantage of using a pre-trained model without loading the weights (which would mean you are only using the model architecture, not a pre-trained version) is that you can easily take an existing model architecture and apply it to your problem. This can save you quite some time since you don't have to build the model architecture yourself in tensorflow/keras/pytorch and can go straight to applying the model to your data. |
H: Grouping with non-sequential index (datetime) [Pandas] [Python]
I am working in Python, I have a DataFrame whose index is of type DateTime, the times of the index are not continuous. You can see that the first three data are in sequence and after the third data it goes directly to minute 50. The entire DataFrame has this feature.
datos_frecuencia
<class 'pandas.core.frame.DataFrame'>
State
2021-05-07 19:45:00 1.0
2021-05-07 19:46:00 -1.0
2021-05-07 19:47:00 0.0
2021-05-07 19:50:00 -1.0
2021-05-07 19:51:00 1.0
2021-05-07 19:52:00 1.0
2021-05-07 19:55:00 1.0
2021-05-07 19:56:00 -1.0
2021-05-07 19:57:00 -1.0
2021-05-07 20:00:00 -1.0
2021-05-07 20:01:00 1.0
2021-05-07 20:02:00 -1.0
2021-05-07 20:05:00 -1.0
2021-05-07 20:06:00 1.0
2021-05-07 20:07:00 -1.0
2021-05-07 20:10:00 0.0
2021-05-07 20:11:00 -1.0
2021-05-07 20:12:00 1.0
2021-05-07 20:15:00 -1.0
2021-05-07 20:16:00 -1.0
2021-05-07 20:17:00 -1.0
2021-05-07 20:20:00 -1.0
2021-05-07 20:21:00 -1.0
2021-05-07 20:22:00 -1.0
2021-05-07 20:25:00 -1.0
2021-05-07 20:26:00 0.0
2021-05-07 20:27:00 -1.0
I need to group this DataFrame in groups of 3 by 3, to perform the sum of the column "State".
I have tried using resample (), as follows.
datos_frecuencia["State"].resample("3min").sum()
2021-05-07 19:54:00 0
2021-05-07 19:57:00 -1
2021-05-07 20:00:00 -1
2021-05-07 20:03:00 -1
2021-05-07 20:06:00 0
2021-05-07 20:09:00 -1
2021-05-07 20:12:00 1
2021-05-07 20:15:00 -3
2021-05-07 20:18:00 -1
2021-05-07 20:21:00 -2
2021-05-07 20:24:00 -1
2021-05-07 20:27:00 -1
But the result is not what is expected, since in this way resample() uses times that do not exist in the original DataFrame. For example, resample shows 2021-05-07 20:03:00 -1, when minute 3 of hour 20 is not found in the main DataFrame.
You would need it to be grouped as follows, taking the sum of the "state" column:
State
[2021-05-07 19:45:00 1.0
2021-05-07 19:46:00 -1.0
2021-05-07 19:47:00 0.0]
[2021-05-07 19:50:00 -1.0
2021-05-07 19:51:00 1.0
2021-05-07 19:52:00 1.0]
[2021-05-07 19:55:00 1.0
2021-05-07 19:56:00 -1.0
2021-05-07 19:57:00 -1.0]
[2021-05-07 20:00:00 -1.0
2021-05-07 20:01:00 1.0
2021-05-07 20:02:00 -1.0]
[2021-05-07 20:05:00 -1.0
2021-05-07 20:06:00 1.0
2021-05-07 20:07:00 -1.0]
[2021-05-07 20:10:00 0.0
2021-05-07 20:11:00 -1.0
2021-05-07 20:12:00 1.0]
[2021-05-07 20:15:00 -1.0
2021-05-07 20:16:00 -1.0
2021-05-07 20:17:00 -1.0]
[2021-05-07 20:20:00 -1.0
2021-05-07 20:21:00 -1.0
2021-05-07 20:22:00 -1.0]
[2021-05-07 20:25:00 -1.0
2021-05-07 20:26:00 0.0
2021-05-07 20:27:00 -1.0]
The end result should be a dataFrame with the following data:
State
2021-05-07 19:45:00 0.0
2021-05-07 19:50:00 1.0
2021-05-07 19:55:00 -1.0
2021-05-07 20:00:00 -1.0
2021-05-07 20:05:00 -1.0
2021-05-07 20:10:00 0.0
2021-05-07 20:15:00 -3.0
2021-05-07 20:20:00 -3.0
2021-05-07 20:25:00 -2.0
Do you know of some function in Pandas with which I can obtain this result?
AI: Try this:
First reset the index, so that the index becomes a plain integer range and the datetime moves into a regular column; integer division of the new index by 3 then groups the rows into consecutive triples:
df = datos_frecuencia.reset_index()  # the datetime index becomes a column (named "index" if the index has no name)
df.groupby(df.index // 3).agg({"index": "min", "State": "sum"})
Output: |
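(Exact column/index names depend on the name of your datetime index, but the aggregated values should match the expected result from the question: 0.0, 1.0, -1.0, -1.0, -1.0, 0.0, -3.0, -3.0, -2.0, each labelled with the first timestamp of its group of three.)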
H: Checking trained CNN on the images
I trained my CNN (model) classifier and want to check it on some new images. I have image x, so this syntax works for me for one image:
torch.argmax(model(x))
What if I want to classify 2 more images (of different classes), say images y and z? Should I write a new line for every image, or is it possible to pass all 3 together to the code above?
AI: torch.argmax has an extra argument dim which you can specify so that the maximum is taken over a specific dimension. If you stack the outputs for several images into one tensor and take the argmax over the class dimension, it will return an array of indices with one value per image. For example:
import torch
# 3 images with 5 classes
t = torch.randn(3, 5)
# tensor([[-1.2917, 1.3740, 0.6967, -0.0575, 0.3702],
# [ 0.5428, 1.0863, 0.3951, 1.8535, 1.0926],
# [ 0.5865, 0.8522, -0.6858, 0.5297, -0.1320]])
# get the argmax over dim=1 (the class dimension), which gives one index per image
torch.argmax(t, dim=1)
# tensor([1, 3, 1]) |
H: Do Activation Functions map to Higher Dimensions?:
I just started learning tensorflow and I have a question regarding activation functions used in neural networks. I watched a 3b1b video a while ago, and it presented them as squashing values into an interval (like sigmoid does by squashing its input between 0 and 1) so that we can make more concrete comparisons. However, while watching a tutorial today, the instructor said that they project the data points into higher-dimensional spaces. I didn't really get how that's the case, as it seems the value is being converted to a scalar. Is there an interpretation/example for the latter claim?
This is the timestamped URL he talks about it during the couple of minutes following this.
AI: We explain AI with intuition rather than maths most of the time, so everyone has their own explanation and representation of things. Here is how I would explain activation functions (I'll try to make it clearer rather than just adding a third version to the two you already know):
Just in case you don't know what a basis is, you should have a look at Wikipedia since it is the core concept of your problem (to my eyes at least).
You need to consider the inputs as a whole, not just their individual values. Each input value is a real number, so its dimension is 1, yet if we consider them all together, their dimension is N (the number of inputs), and a basis of this space is $(Input_1, Input_2, ..., Input_N)$.
Neurons are made of 2 parts :
linear combination of the inputs.
activation function
Let's imagine we use neurons that are only a linear combination of the inputs $(Output = WA + B)$. Then we project our initial points into a space that can be described with the same basis as the input space (since all outputs are linear combinations of the inputs, if we forget about biases). So this action is just a remapping of the inputs into the same space, but with a different basis. This may help if your problem is linearly separable, but it doesn't suffice if your problem is not. That's why we use activation functions.
Now consider neurons with an activation function. The particularity of activation functions is that they are non-linear. So the use of activation functions maps your inputs into a different space: the basis needs to be different since the outputs are no longer linear combinations of the inputs. So this time the output space is different from the initial one.
The way I see it is that activation functions do not generate information from nothing, but they allow remapping the available information into a higher-dimensional space where the problem is easier to solve.
Hope this helps, feel free to ask if you have any remaining question. |
H: Flag consecutive dates by group
Below is an example of my data (Room and Date). I would like to generate the variables Goal1, Goal2 and Goal3. Every gap in the Date variable means that the room was closed. My goal is to identify consecutive dates by room.
Room Date Goal1 Goal2 Goal3
1 Upper A 2021-01-01 1 2021-01-01 2021-01-02
2 Upper A 2021-01-02 1 2021-01-01 2021-01-02
3 Upper A 2021-01-05 2 2021-01-05 2021-01-05
4 Upper A 2021-01-10 3 2021-01-10 2021-01-10
5 Upper B 2021-01-01 1 2021-01-01 2021-01-01
6 Upper B 2021-02-05 2 2021-02-05 2021-02-07
7 Upper B 2021-02-06 2 2021-02-05 2021-02-07
8 Upper B 2021-02-07 2 2021-02-05 2021-02-07
df <- data.frame("Area" = c("Upper A", "Upper A", "Upper A", "Upper A",
"Upper B", "Upper B", "Upper B", "Upper B"),
"Date" = c("1/1/2021", "1/2/2021", "1/5/2021", "1/10/2021",
"1/1/2021", "2/5/2021", "2/6/2021", "2/7/2021"))
df$Date <- as.Date(df$Date, format = "%m/%d/%Y")
AI: Using dplyr:
R> df %>%
group_by(Area) %>%
mutate(Goal = cumsum(c(1, diff(Date) > 1))) %>%
group_by(Area, Goal) %>%
mutate(Goal2 = min(Date), Goal3 = max(Date))
# # A tibble: 8 x 5
# # Groups: Area, Goal [5]
# Area Date Goal Goal2 Goal3
# <chr> <date> <dbl> <date> <date>
# 1 Upper A 2021-01-01 1 2021-01-01 2021-01-02
# 2 Upper A 2021-01-02 1 2021-01-01 2021-01-02
# 3 Upper A 2021-01-05 2 2021-01-05 2021-01-05
# 4 Upper A 2021-01-10 3 2021-01-10 2021-01-10
# 5 Upper B 2021-01-01 1 2021-01-01 2021-01-01
# 6 Upper B 2021-02-05 2 2021-02-05 2021-02-07
# 7 Upper B 2021-02-06 2 2021-02-05 2021-02-07
# 8 Upper B 2021-02-07 2 2021-02-05 2021-02-07
The Goal column is calculated in the following way: use diff on Date column to identify gaps (difference > 1), afterwards apply cumsum to obtain the consecutive values. |
H: How to implement random cropping during training?
I'm developing a U-net like model which segments the damaged tissue of the brain between two time-points in Multiple Sclerosis patients. The model is given the baseline and follow-up images as x and the segmentation mask as y. The images are 3D (192, 218, 192) and the model input size is (128, 128, 128) and is being trained by the dice loss. The model is based on this paper: https://www.sciencedirect.com/science/article/abs/pii/S0895611120300732
What I'm currently doing is a centre crop for each image before the training, and the training dice score seems to be learning properly, but in the validation data it is not.
I have read that random cropping 10 times or so helps to reduce overfitting, but I don't exactly know how to implement it. I have written this function to do the random cropping:
def random_crop(img_bl, img_fu, mask, width=128, height=128, depth=128):
x = random.randint(0, img_bl.shape[1] - width)
y = random.randint(0, img_bl.shape[0] - height)
z = random.randint(0, img_bl.shape[2] - depth)
img_bl = img_bl[y:y + height, x:x + width, z:z + depth]
img_fu = img_fu[y:y + height, x:x + width, z:z + depth]
mask = mask[y:y + height, x:x + width, z:z + depth]
return img_bl, img_fu, mask
Do I just apply this function 10 times before the training for each image? Or is there a way to include the random cropping inside the model and overlap the predictions of these 10 subvolumes?
Any tips, links to articles or insights are greatly appreciated. Thanks!
AI: For segmentation, random cropping cannot be included inside the model definition, since a random crop with the same state also has to be applied to the mask. I suggest that before each epoch you apply the 'random_crop' function to the entire dataset and shuffle (very important) the dataset.
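For illustration, a rough sketch of that idea with a generic training loop; the raw image lists, the two-input Keras model and the batch size are placeholders for your own pipeline:
import numpy as np
n_epochs = 50
for epoch in range(n_epochs):
    X_bl, X_fu, Y = [], [], []
    # raw_bl, raw_fu, raw_masks: lists holding the full-size volumes (placeholders)
    for img_bl, img_fu, mask in zip(raw_bl, raw_fu, raw_masks):
        bl_c, fu_c, mask_c = random_crop(img_bl, img_fu, mask)  # the function from the question
        X_bl.append(bl_c)
        X_fu.append(fu_c)
        Y.append(mask_c)
    # shuffle consistently across inputs and targets (very important)
    idx = np.random.permutation(len(Y))
    X_bl, X_fu, Y = np.asarray(X_bl)[idx], np.asarray(X_fu)[idx], np.asarray(Y)[idx]
    model.fit([X_bl, X_fu], Y, epochs=1, batch_size=2)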
H: KNN Variance using a high value of K and cross-validation
It has come to my understanding that a value of K=1 gives high variance because we are using only one data point, hence we are very likely to model the noise in that training example.
Bias: It will take the value of point 3 as it's the closest one. It looks much better here (I know it's not the best example, but you can consider another example such as house price, with the axes being locality and price). It makes more sense that the point will be similar to the closest points compared to a point far away.
Variance: Let’s say point 11 was at age 32, then the closest point would have been 9 and hence the new predicted value would have been much different than the current one. Hence, it has a high variance.
They used a different data point from the test data, but why is point 11 at age 32? What makes the two data points related? Surely if I use age 50, I do not expect
a prediction close to the prediction at age 30. Do they assume they have the same height? If so, why are we using our dependent variable to predict our dependent variable?
Can someone provide me answers
Much appreciated
AI: It's just an example in the article trying to make the point that a small change with regards to the age of $p_{11}$ from $30$ to $32$ has relatively large impact on the prediction since with $k=1$ for
$age_{p_{11}}=30$, the model would predict $heigth_{p_{11}}\leftarrow heigth_{p_{3}}$ and for
$age_{p_{11}}=32$, the model would predict $heigth_{p_{11}}\leftarrow heigth_{p_{9}}$.
Moving $p_{11}$ to $age = 50$ would just not serve as a good example for the case they are making because it's not surprising that a model predicts different height values for $age=30$ and $age=50$.
To phrase it differently, the relation between $p_{11}$ being at $age=30$ or $age=32$ is that it's only a small difference in terms of age. |
H: The affect of bootstrap on Isolation Forest
I've been using Isolation Forest for anomaly detection and reviewing its parameters in scikit-learn (link). Looking at "bootstrap", I'm not quite clear what effect using bootstrap would have. For supervised learning, this should reduce overfitting, but I'm not clear what the effect on anomaly detection should be.
I think it would require the trees to achieve more "consensus" about what the anomaly is, therefore reducing the effect of any single feature. I.e., an anomalous observation would probably need to be anomalous consistently and over a number of features (?).
Is this a correct interpretation of this parameter?
AI: This is well explained in the original paper, Section 3.
As in the supervised Random Forest, Isolation Forest makes use of sampling on both features and instances; the latter in this case helps alleviate 2 main problems:
Swamping
Swamping refers to wrongly identifying normal instances as anomalies. When normal instances are too close to anomalies, the number of partitions required to separate anomalies increases – which makes it harder to distinguish anomalies from normal instances.
Masking
Masking is the existence of too many anomalies concealing their own presence.
Contrary to existing methods where large sampling size is more desirable, isolation method works best when the sampling size is kept small. Large sampling size reduces iForest’s ability to isolate anomalies as normal instances can interfere with the isolation process and therefore reduces its ability to clearly isolate anomalies. Thus, sub-sampling provides a favourable environment for iForest to work well. Throughout this paper, sub-sampling is conducted by random selection of instances without replacement.
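In scikit-learn terms, bootstrap=True simply means that each tree's sub-sample of max_samples instances is drawn with replacement instead of without replacement. A minimal sketch (the parameter values and the feature matrix X are only illustrative):
from sklearn.ensemble import IsolationForest
iso = IsolationForest(
    n_estimators=100,
    max_samples=256,   # small sub-sample per tree, in the spirit of the paper
    bootstrap=True,    # draw each sub-sample with replacement
    random_state=0
)
iso.fit(X)
scores = iso.decision_function(X)   # lower scores = more anomalous
labels = iso.predict(X)             # -1 for anomalies, 1 for normal points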
H: What happens when the vocab size of an embedded layer is larger than the text corpus used in training?
Full disclosure this question is based on following this tutorial: https://tinyurl.com/vmyj8rf8
I am trying to fully understand embedded layers in Keras. Imagine having a network to try and understand basic sentiment analysis as a binary classifier (1 positive sentiment and 0 negative sentiment). The toy dataset for this is as follows:
# Define 10 restaurant reviews
reviews =[
'Never coming back!',
'horrible service',
'rude waitress',
'cold food',
'horrible food!',
'awesome',
'awesome services!',
'rocks',
'poor work',
'couldn\'t have done better'
]#Define labels
labels = array([1,1,1,1,1,0,0,0,0,0])
This data can be used to train a really simple network as follows:
Vocab_size = 50
model = Sequential()
embedding_layer = Embedding(input_dim=Vocab_size,output_dim=8,input_length=max_length)
model.add(embedding_layer)
model.add(Flatten())
model.add(Dense(1,activation='sigmoid'))
model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['acc'])
print(model.summary())
In order to feed this data into he network, we can one hot encode it using Keras one_hot as follows:
encoded_reviews = [one_hot(d,Vocab_size) for d in reviews]
print(f'encoded reviews: {encoded_reviews}')
We get the following output:
encoded reviews: [[14, 45, 43], [8, 2], [6, 43], [24, 1], [8, 1], [11], [11, 21], [16], [34, 40], [2, 25, 36, 15]]
I understand that the purpose of setting Vocab_size = 50, even though there are only around 20 unique words in the corpus is to give a large enough hashing space for the hashing algorithm behind one_hot to avoid collisions when the text is encoded.
If I train the model on these words (assume fixed length input and padding) and then get the weights of the embedded layer:
print(embedding_layer.get_weights()[0].shape)
(50, 8)
We can see this it is an array of 50 vectors that look like this as an example:
[ 0.17051394 0.13659576 -0.05245572 -0.12567708 0.06743167 0.05893507
-0.14506021 0.06448647]
My understanding is that each of these vectors corresponds to a word embedding for each word in the corpus. But if there are only 20 unique words in the corpus and Vocab_size is set larger than this then that can't be completely true? If Vocab_size > corpus_vocab_size, then what do these embeddings represent? Any help would be appreciated.
AI: tf.keras.layers.Embedding(..., embeddings_initializer="uniform", ..., **kwargs)
All the weights are initialized with the chosen initialization strategy.
All weights that actually receive input learn their optimum values with backprop.
Embedding rows whose index never appears in the input never receive a gradient, hence no learning for them.
Hence these extra weights will remain at their initialization value.
You may check these extra weights before and after.
weight = model.layers[0].get_weights() # Save before training
history = model.fit(x, y)
# These two should be the same, since that row was never updated
weight[0][-1] # Last weight - Before
model.layers[0].get_weights()[0][-1] # Last weight - After |
H: Handling categorical data with more over 100 unique classes
I am working with a purely categorical data set, and some categorical features have more than 100 unique values.
I could not find any appropriate encoding possibility, so I created a SQL table where each value got its own ID. Then I extracted the IDs and used them in ML classification.
However the results are poor.
So anyone has an idea how to encode such values better?
AI: Even though you have 100 unique values, do you see a pattern where perhaps 80 of those values appear only once or twice, and the remaining 20 occur much more often?
In that case you may want to encode those 80 rare values with a special "Other" value. By doing this you lose information (you can no longer distinguish between the 80 categories), but perhaps it doesn't matter to your problem; a small sketch of this is shown below.
The other option is to convert your categorical data into an integer ID (perhaps the ID approach you are talking about?) and run a decision-tree model (e.g. Random Forests) on it. My experience with Random Forests is that they work pretty well on such data.
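For illustration, a minimal pandas sketch of grouping rare categories into "Other"; df, the column name and the frequency threshold are placeholders:
threshold = 10   # keep only categories that appear at least this often
counts = df['my_categorical_col'].value_counts()
rare_values = counts[counts < threshold].index
df['my_categorical_col'] = df['my_categorical_col'].where(
    ~df['my_categorical_col'].isin(rare_values), 'Other')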
H: Transformer: where is the output of the last FF sub-layer of the encoder used?
In the "Attention Is All You Need" paper, the decoder consists of two attention sub-layers in each layer followed by a FF sub-layer.
The first is a masked self attention which gets as an input the output of the decoder in the previous step (and the first input is a special start token).
The second, 'encoder-decoder', attention sub-layer gets as an input queries from the lower self-attention sub-layer and keys & values from the encoder.
I do not see the use of the output of the FF sub-layer in the encoder; can someone explain where it is used?
Thanks
AI: We can see this in the original Transformer diagram:
The output of the last encoder FF layer is added to the original input of the same layer, then layer normalization is applied and that is the output of the whole encoder, which is used as keys and values for the encoder-decoder attention blocks in the decoder. |
H: pandas groupby.count doesn't count zero occurrences
I am using groupby.count on 2 columns to get value occurrences under a class constraint. However, if a value $x$ of the feature never occurs with class $y$, then this pandas method returns only the non-zero frequencies. Is there any solution or alternative method?
The script is like:
combine = np.vstack([X_train[:,-1], y_train]).T
combine_df = pd.DataFrame(combine, columns = ['feature','class'])
class_count_groupby = combine_df[combine_df['class'] == 2]['class'].groupby(combine_df['feature']).count()
AI: Use crosstab instead, which also includes the zero counts:
pd.crosstab(combine_df['class'], combine_df['feature'])
H: Pretrained models for Propositional logic
Are there any pretrained models which understand propositional logic?
For example, the t5 model can do question-answering. Given a context such as "Alice is Bob's mother. Bob is Charlie's father", t5 can answer the question "Who is Charlie's father" correctly, but it cannot say "Who is Charlie's grandmother".
Is there any model that has been/can be trained to do this kind of deduction and answer the question?
AI: As far as I know, automated reasoning with propositional logic can be done with a solver like Prolog (and it's not new). I don't know if it has been done but I don't think it would make sense to train a ML model for propositional logic, since it's entirely symbolic (as opposed to statistical): the right answer can be found deterministically.
The question of logic reasoning from text like in the proposed example is a bit different, because it involves a step of representing the text. I think it could make sense to train a model which converts a text into a formal logic proposition (and back). The logic reasoning should still be done with a tool meant specifically for that imho. Note that question answering doesn't involve any logical reasoning, even if it might look this way to a user. As far as I know a QA system learns patterns about matching a type of question to its corresponding answer, the system is completely oblivious to the meaning. |
H: How to know the state-of-the-art recommended approaches for data science?
Data science, AI, NLP, and visualization are changing so fast. I wonder if there is a source/blog that shares the latest updates and recommends which techniques to use and which to avoid. For example, many NLP books are old and provide examples using TF-IDF. However, nowadays there are much better approaches, but they are also changing fast. I am hoping to find some source that would say "use these new techniques and avoid those old techniques". Searching the web can help, but will bring back a lot of noise.
AI: A few comments:
There is no scientific domain of "data science", instead there are multiple fields which are related to data science: statistics, ML, NLP, computer vision, signal processing... and a lot of other fields which overlap and/or focus on specific applications, for instance bioinformatics. All of these domains are highly active and specialized, so it would just be impossible to monitor every possible advance.
There is no unique recommended way: first, people disagree all the time about the best way to do X. Second, it's very rare that a method would become completely obsolete. For example TFIDF still makes sense in many use cases, with low-resource languages or when there are efficiency constraints for instance.
In order to comprehensively follow the state of the art, one would have to follow the research publications. At best it's doable for a specific domain, for example one can more or less get an idea of what happens in NLP by browsing through the main conferences. A more realistic option is to wait for the advances to reach the mainstream professionals, for example by browsing regularly through DataScienceSE and/or CrossValidated.
Final suggestion: old books are very useful to fully understand why/how things are done a certain way. We often see errors here on DataScienceSE which are due to people trying to apply methods without understanding them. |
H: Model that predicts probability of correctness of another model
Problem:
Given a neural network for image classification with $1000$ classes, the objective is to create another model which will output the probability of the neural network giving the correct prediction for a specific input image.
Thoughts:
My ideas so far have been:
Creating a convolutional network and training it with raw images together with a label which indicates if the first NN predicted it correct or not.
Creating a fully connected network and training it with outputs of hidden layers/features of the first NN instead of the raw image, together with the labels as before.
Creating a fully connected network and training it with the top-k outputs of the softmax layer of the first NN together with the labels.
The first method yielded an accuracy of $0.51$, the second $0.58$ and the last one $0.79$.
Can you suggest another method (or a modification of one from the above) which can achieve greater accuracy?
AI: This is a problem known as quality estimation (QE) in the application of Machine Translation (there's a regular Shared Task about it). The goal is to train a model to predict the quality of the output of a ML system.
There is an important theoretical obstacle to this kind of task: if such a QE model was able to predict perfectly, then in theory it would be possible to create a perfect model for the original task by trying many different models/parameters until the QE system predicts a high level of quality. What this means practically is that a QE model can only be as good as the system it's trying to estimate since it relies mostly on the same information, otherwise the system is clearly sub-optimal. Essentially the best the QE system can do is to determine how hard it is to correctly predict a particular input image, but it cannot predict with any certainty whether the actual prediction is correct or not. |
H: Why does this paper claim to have found a minimal width of $d_{in}+1$?
Why does this paper (click the link) claim to have found a minimal width of $d_{in}+1$ in the abstract? I mean, if you read the main result, it seems like they only find a universal approximator with width $d_{in} + d_{out}$. What am I missing or did they make a mistake? I do not completely understand all the math but I do think I get the gist of it, and also $d_{in}+1$ is not even mentioned when showing a bound of the difference between the function approximated by the neural network and the actual function...
AI: From the abstract of the paper:
Specifically, we answer the following question: for a fixed $d_{in} \ge 1$,
what is the minimal width $w$ so that neural nets with ReLU
activations, input dimension $d_{in}$, hidden layer widths at most $w$,
and arbitrary depth can approximate any continuous, real-valued
function of $d_{in}$ variables arbitrarily well?
"Real-valued function" means that the output dimension $d_{out}=1$. Their Theorem 1 is more general and says that for any $d_{in}, d_{out} \ge 1$ you will need $d_{in}+1 \le w \le d_{in}+d_{out}$. |
H: Data Leakage when preprocessing categorical features?
I am fairly new to machine learning. I came across the concept of Data Leakage. The article says to always split the data before performing preprocessing steps.
My question is, do steps such as discretization, grouping categories to a single category to reduce cardinality, converting categorical variables to binary variables, etc. lead to Data Leakage?
Should I split the data to train and test set before applying these steps?
Also, which are the main preprocessing steps I really need to be cautious of in order to avoid data leakage?
AI: Think of the data being organized in rows (instances) and columns (features). Any pre-processing step which mixes information between/across rows can lead to data leakage.
A typical example is standardization or normalization. Applying min-max scaling, for example, on the whole dataset would leak information since it is an aggregation across rows/instances. Another typical example is encoding categorical variables: when you train your encoder before the split, you're taking into consideration values which you should not have seen yet, because when the model is applied on real data, you might be presented with unknown values too.
The guiding principle here is, that your test strategy should resemble the real application in order to estimate the ability of your model to generalize to unseen data.
The safest way to avoid data leakage is to split the data before applying any pre-processing. Moreover, this approach supports designing the pipeline in a way that you can apply it to validation/test and later production data the same way you applied it on training data. |
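For illustration, a minimal sketch of fitting a scaler on the training split only and reusing it on the test split (X and y are placeholders):
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)   # statistics learned from the training data only
X_test_scaled = scaler.transform(X_test)         # the same transformation applied to unseen data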
H: How does epochs related with converging the model?
I have read on the Internet that epochs are used to give the model time to converge, but I don't know how. I was thinking that epochs are used to train the model a sufficient number of times. How does model convergence relate to epochs? Also, why are epochs useful?
AI: I don't know if I understood the question correctly, but basically, epochs are the unit of measure of the time spent training a machine learning model.
An epoch is conventionally considered a full pass over the dataset, meaning that for each epoch you feed the data to the model from the beginning to the end of the dataset. Once an epoch is complete, you can start another epoch, training your model again from the beginning of the dataset to the end, and so on.
The convergence of the model's performance is of course dependent on the epochs, because usually what happens is that the more epochs you train your model for, the less noisy (more stable) the predictions will be.
Hope I answered your question. |
H: Feature importance with Text features
I would like to determine features importance in several models:
support vector machine
logistic regression
Naive Bayes
random forest
I read that I will need a model-agnostic method, so I have thought to use permutation_importance (in Python).
My features look like
Text (e.g., The pen is on the table, the sky is blue,...)
Year (e.g., 2019, 2020,...)
#_of_characters (e.g., 34, 67,...): this value comes from Text
Party (e.g., National, Local, Green, ...)
Over18 (e.g., 1, 0, ...) : this is a Boolean variable
My target variable is Voted.
In the pre-processing phase, I am using BoW and TF-IDF for Text, OneHotEncoder for Party, SimpleImputer for numerical.
Using the following:
from sklearn.inspection import permutation_importance
import matplotlib.pyplot as plt
result = permutation_importance(clf, X_test, y_test, n_repeats=5, random_state=42, n_jobs=2)
sorted_idx = result.importances_mean.argsort()
plt.boxplot(result.importances[sorted_idx].T,
vert=False, labels=X.columns[sorted_idx]);
I am getting a similar output like the below(I forgot to include Over18, but it is just to give an idea of the output):
Although I have difficulties in interpreting the results, especially circles and negative values, I would like to understand if, in case of text classification, it makes sense to have Text for the importance and not, for example, the single words (unigrams, bigrams,...).
For instance, in my example, I have ['The','pen','is','on','table','sky','blue']. Would it make more sense to understand how much each word contributes to the model instead of Text as a whole, or is this already captured in Text (where there are many words that contribute to the model), which is the most significant feature in the model?
UPDATE: for the different features I am using the following pre-processors:
categorical_preprocessing = OneHotEncoder(handle_unknown='ignore')
numeric_preprocessing = Pipeline([
('imputer', SimpleImputer(strategy='mean')
])
# CountVectorizer
text_preprocessing_cv = Pipeline(steps=[
('CV',CountVectorizer())
])
# TF-IDF
text_preprocessing_tfidf = Pipeline(steps=[
('TF-IDF',TfidfVectorizer())
])
and then
preprocessing_cv = ColumnTransformer(
transformers=[
('text',text_preprocessing_cv, 'Text'),
('category', categorical_preprocessing, categorical_features),
('numeric', numeric_preprocessing, numerical_features)
], remainder='passthrough')
clf_nb = Pipeline(steps=[('preprocessor', preprocessing_cv),
('classifier', MultinomialNB())])
AI: permutation_importance considers the top-level features: it permutes each one sequentially and measures the resulting importance.
So the inner encoding, i.e. OHE/TF-IDF, is not visible to it.
To get the importance of the components of a top-level feature, you should encode it separately and then pass the encoded data to permutation_importance
Get the pre-processed data using preprocessing_cv.fit_transform(X_train)
Call your permutation_importance code on the above data and any model of your choice
Edit
Adding the snippet. I am excluding ColumnTransformer as it is causing some issues.
import numpy as np
import pandas as pd
from sklearn.preprocessing import OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.feature_extraction.text import CountVectorizer
data = {'Number':[1,2,3], 'Text':['pen is table', 'sky is blue','Sun is kool'], 'Cat':['A','B', 'C']}
df = pd.DataFrame(data)
categorical_preprocessing = OneHotEncoder(handle_unknown='ignore')
numeric_preprocessing = SimpleImputer(strategy='mean')
text_preprocessing_cv = CountVectorizer()
text_tfid = text_preprocessing_cv.fit_transform(df['Text']).toarray()
num = numeric_preprocessing.fit_transform(df['Number'].values.reshape(-1, 1))
cat = categorical_preprocessing.fit_transform(df['Cat'].values.reshape(-1, 1)).toarray()
data = np.concatenate((cat,num,text_tfid), axis=1)
cols = np.concatenate((categorical_preprocessing.get_feature_names(), ['Num'], text_preprocessing_cv.get_feature_names())) # new column names, in the same order as the concatenated data (cat, num, text)
df = pd.DataFrame(data, columns=cols) # Encoded DataFrame with col name
from sklearn.inspection import permutation_importance
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier().fit(df, [1,0,1])
result = permutation_importance(clf, df, [1,0,1], n_repeats=2, random_state=42)
sorted_idx = result.importances_mean.argsort()
plt.boxplot(result.importances[sorted_idx].T,
vert=False, labels=df.columns[sorted_idx]); |
H: Normalization vs standardization for image classification problem
For day and night image classification, is it better to normalize or standardize images? In general, when should I use each method? I am interested in examples of why one method is preferred over the other.
Here, by normalization, I mean dividing pixel values by 255. Standardization means subtracting the mean value of pixels and then dividing by standard deviation. See the code samples below
# normalization
datagen = ImageDataGenerator(rescale=1.0/255.0)
or
# stardartization
datagen = ImageDataGenerator(samplewise_center=True, samplewise_std_normalization=True)
AI: To be able to answer your question, firstly we should take into account the differences between Normalization and Standardization.
Normalization: normalized_value=(raw_value-min)/(max-min)
Standardization: standardized_value=(raw_value-μ)/σ
Also, there is one additional well-known method, centering, where you subtract the mean from the pixels.
In general, all of these serve one common and crucial purpose: to provide "fair conditions" for all features. I.o.w. since, for each feature, we have many shared parameters (e.g. learning rate) instead of custom parameters, we need to have similar ranges. Therefore, sharing of parameters would not be optimal because a parameter (e.g. learning rate) would not have the same power over features with different ranges. For example, converging to the global optimum value might be effective for one of the features, however, another one might require a smaller learning rate value because of its range.
Additionally, smaller values speed up the computation and the convergence. Also, interpretation and comparison of weights is easier if the features are on a standard scale. If ranges are dissimilar, one of the coefficients might be very big while another one might be comparably very small.
However, I want to draw attention to your Normalization technique where you divide pixel values by 255, which just lowers the range from 0-255 to 0-1, but the spread of pixel values over the range is kept the same. Normalization is mostly used when we want to have a bounded range for features; however, RGB values are already bounded to 0-255. Still, theoretically, this would help to speed up the training process.
It is hard to say that one of these (Normalization or Standardization) is better than the other because one might beat the other depending on the scenario. Commonly, both techniques are tried and compared to see which one performs better.
In many cases, to have a more accurate model, usage of these techniques is a must. Let's say you have two images where one of the images is dark where all pixels have values close to each other (narrow range, small deviations, high kurtosis), but another one is a complex image where parts of the image dissimilar to each other in terms of pixel values (wide range, large deviations, low kurtosis). In such a case, using normalization would help a model to better understand and capture the relations inside of the images. |
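For illustration, a minimal NumPy sketch of the two options applied per image, equivalent in spirit to the ImageDataGenerator settings in the question (img is a placeholder image array, e.g. of shape (H, W, 3)):
import numpy as np
img = img.astype('float32')
img_normalized = img / 255.0                         # rescaling to [0, 1]
img_standardized = (img - img.mean()) / img.std()    # zero mean, unit variance (sample-wise)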
H: How the Support Vector Machine will perform if the bias b = 0 in the equation of hyperplane?
We have a soft margin linear SVM and the equation is as follows :
How will the SVM perform if b = 0, meaning the hyperplane passes through the origin?
AI: I think that it will simply behave as a normal SVM, with the difference that, since the hyperplane is anchored to the origin, you are blocking one degree of freedom, and as such the optimization algorithm used to find the parameters w might not find the best possible solution.
At this point, I am wondering why you should set b=0. |
H: How does a random forest algorithm deal with a few irrelevant input variables
I have a list of variables from which I would like to train a Random Forest algorithm. I suspect that a few of my input variables, which have noisy distributions, won't be able to predict much. Can I use them anyway, knowing the algorithm will eliminate them in the process, or should I beware that these variables may bias my model?
AI: Random Forest tends to be not too sensitive to features with low predictive power. The reason is that RF looks for a "best split" given a subset of features (columns) and observations (rows) at each node. So "weak" features will likely be ignored in most cases (splits).
However, removing the $x$ percent weakest features may increase the model's performance. In case you use sklearn, there are convenience functions to do this, e.g. SelectFromModel(). See the docs for more details.
>>> from sklearn.ensemble import ExtraTreesClassifier
>>> from sklearn.datasets import load_iris
>>> from sklearn.feature_selection import SelectFromModel
>>> X, y = load_iris(return_X_y=True)
>>> X.shape
(150, 4)
>>> clf = ExtraTreesClassifier(n_estimators=50)
>>> clf = clf.fit(X, y)
>>> clf.feature_importances_
array([ 0.04..., 0.05..., 0.4..., 0.4...])
>>> model = SelectFromModel(clf, prefit=True)
>>> X_new = model.transform(X)
>>> X_new.shape
(150, 2) |
H: How do you choose an appropriate $k$ to achieve $k$-anonymity for data?
How do you choose an appropriate $k$ to achieve $k$-anonymity for a data? What methods exist that are agnostic to the business context for the problem?
AI: In most cases $k$ emerges from the volume and nature of the data, plus the anonymity method used. Rarely does one have explicit control over $k$, except implicitly through these options.
Think of $k$ as a score instead of as a parameter.
It is possible, for example, that some records will have higher $k$-anonymity than others. Then the average $k$ counts, or even the minimum.
If anonymity is a requirement, then the highest possible value of $k$ is what is needed. Since for each record there are only $k-1$ similar records, exhaustive methods can be used to try to recover the anonymised info, so the highest possible $k$ is needed in order to slow down this process and make it practically impossible.
Of course the maximum $k$ is achieved when all data columns are anonymised, but this creates useless data, so the tradeoff between useful data and maximum anonymity results in a range of $k$ values to achieve (and this depends on the actual nature and volume of data). |
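For illustration, a minimal pandas sketch that treats $k$ as a score: group the records by their quasi-identifier columns and inspect the group sizes (df and the column names are placeholders):
quasi_identifiers = ['zip_code', 'age_band', 'gender']   # placeholder column names
group_sizes = df.groupby(quasi_identifiers).size()       # size of each equivalence class
print('minimum k (the k the dataset actually achieves):', group_sizes.min())
print('average group size:', group_sizes.mean())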
H: Do generative model produce varying outputs for same input
I am new to data science. I believe generative models generate responses on-the-fly for a valid user input. Is it correct to assume that such models would generate different responses for the same question?
For example, suppose we trained the model on medical data. Now if user 1 asked "what is fever" and user 2 asked the same question, could it be that user 1 and 2 will receive different answers? If so, how can this problem be circumvented?
thanks in advance
AI: This depends entirely on the specific model. There are generative models, like most Generative Adversarial Networks (GANs) that receive a random number and generate data. There are other generative models that generate a probability distribution over the output space (e.g. text generation models), and therefore whether the model generates data deterministically depends on the inference procedure (e.g. greedy, sampling, beam search).
If you want your model to generate outputs deterministically, you just select a model and inference method that assures that.
In your example, you may have a normal seq2seq model (e.g. Transformer) and use beam search for decoding, and the outputs will be the same given the same input. |
H: How to mantain the nested structure of a tf.dataset after applying map?
I'm creating a tf.dataset object containing 2 images as inputs and a mask as target. All of them are 3D in grayscale. After applying a custom map, the shape of the object changes from ((TensorSpec(shape=(), dtype=tf.string, name=None), TensorSpec(shape=(), dtype=tf.string, name=None)), TensorSpec(shape=(), dtype=tf.string, name=None)) to (TensorSpec(shape=<unknown>, dtype=tf.float32, name=None), TensorSpec(shape=<unknown>, dtype=tf.float32, name=None), TensorSpec(shape=<unknown>, dtype=tf.int32, name=None)), losing the nested structure. When I fit the data, my model throws an error because it only detects one input instead of 2.
Here is what I'm doing:
x, y = get_filenames(train_data_path, img_type='FLAIR')
z = get_filenames(train_data_path, img_type='mask')
path_dataset = tf.data.Dataset.from_tensor_slices((x, y))
mask_dataset = tf.data.Dataset.from_tensor_slices(z)
dataset = tf.data.Dataset.zip((path_dataset, mask_dataset)).shuffle(50).repeat(10)
ds = dataset. \
map(lambda xx, zz: ((tf.py_function(load, [xx], [tf.float32, tf.float32])),
tf.py_function(load_mask, [zz], [tf.int32])),
num_parallel_calls=tf.data.AUTOTUNE)
ds = ds.map(lambda xx, zz: (tf.py_function(random_crop_flip, [xx, zz],
[tf.float32, tf.float32, tf.int32])),
num_parallel_calls=tf.data.AUTOTUNE)
ds = ds.batch(2)
ds = ds.prefetch(tf.data.AUTOTUNE)
I can't map separately the images and the masks because they need the same seed for the random cropping and flipping. Is it possible to change the shape after the map so that I can feed it to my 2 input model?
My random_crop_flip function is as follows:
def random_crop_flip(images, mask, width=128, height=128, depth=128):
img_bl, img_fu = images
img_bl = img_bl.numpy()
img_fu = img_fu.numpy()
mask = mask.numpy()
x_rand = random.randint(0, img_bl.shape[2] - width)
y_rand = random.randint(0, img_bl.shape[1] - height)
z_rand = random.randint(0, img_bl.shape[3] - depth)
img_bl_f = img_bl[:, y_rand:y_rand + height, x_rand:x_rand + width, z_rand:z_rand + depth, :]
img_fu_f = img_fu[:, y_rand:y_rand + height, x_rand:x_rand + width, z_rand:z_rand + depth, :]
mask_f = mask[:, y_rand:y_rand + height, x_rand:x_rand + width, z_rand:z_rand + depth, :]
flip_x = random.choice([True, False])
flip_y = random.choice([True, False])
flip_z = random.choice([True, False])
if flip_x:
img_bl_f = np.flip(img_bl_f, axis=2)
img_fu_f = np.flip(img_fu_f, axis=2)
mask_f = np.flip(mask_f, axis=2)
if flip_y:
img_bl_f = np.flip(img_bl_f, axis=1)
img_fu_f = np.flip(img_fu_f, axis=1)
mask_f = np.flip(mask_f, axis=1)
if flip_z:
img_bl_f = np.flip(img_bl_f, axis=3)
img_fu_f = np.flip(img_fu_f, axis=3)
mask_f = np.flip(mask_f, axis=3)
return (img_bl_f, img_fu_f), mask_f
The tuple in the output isn't solving my problem. Is it possible to modify the return to get my desired output?
AI: I have managed to solve this problem by "flattening" (eliminating the parentheses) the return of random_crop_flip, and applying another map on top of it, where I specified the shapes and returned my desired structure ((x, y), z):
def _set_shapes(img_bl, img_fu, mask):
img_bl.set_shape([128, 128, 128, 1])
img_fu.set_shape([128, 128, 128, 1])
mask.set_shape([128, 128, 128, 1])
return (img_bl, img_fu), mask
Then my code looks like this:
x, y = get_filenames(train_data_path, img_type='FLAIR')
z = get_filenames(train_data_path, img_type='mask')
path_dataset = tf.data.Dataset.from_tensor_slices((x, y))
mask_dataset = tf.data.Dataset.from_tensor_slices(z)
dataset = tf.data.Dataset.zip((path_dataset, mask_dataset)).shuffle(50).repeat(10)
ds = dataset. \
map(lambda xx, zz: ((tf.py_function(load, [xx], [tf.float32, tf.float32])),
tf.py_function(load_mask, [zz], [tf.int32])),
num_parallel_calls=tf.data.AUTOTUNE)
ds = ds.map(lambda xx, zz: (tf.py_function(random_crop_flip, [xx, zz],
[tf.float32, tf.float32, tf.int32])),
num_parallel_calls=tf.data.AUTOTUNE)
ds = ds.map(_set_shapes)
ds = ds.batch(2)
ds = ds.prefetch(tf.data.AUTOTUNE) |
H: SGDClassifier - Why do I need to use argmax instead of argmin to find the lowest threshold satisfying given precision?
I am an experienced programmer, but new to Python and data science. I am following Aurelien Gerone's book and I don't understand one thing.
I create SGDClassifier and calculate its precision_recall_curve(). Then I am trying to find the lowest threshold to satisfy precision equal to 90%:
precisions, recalls, thresholds = precision_recall_curve(y_train, y_scores)
threshold_90_precision = thresholds[np.argmax(precisions >= 0.90)]
Why on earth am I searching with argmax if I need to find the minimum threshold value? If I try to use argmin I get the wrong value, with precision equal to 0.1.
As I understand this:
precisions >= 0.90 creates an array with precision scores only above or equal to 0.90,
argmax returns an index, at which I find the highest value in the given array (so this should be as far from 90% as possible, but it's not!),
then I choose a threshold with returned index.
What am I missing?
AI: Okay, I solved this myself.
precisions >= 0.90 doesn't create an array with only the precision scores above 90%; it transforms the array into an array of Booleans, where precisions below 90% become False and the others become True.
argmax, if there are multiple identical maximum values (and True is the maximum here), returns the index of the first occurrence. So it effectively returns the first index where the precision reaches 90%, i.e. the lowest such threshold.
I sometimes hate this book; why doesn't he just use a method like "array.first_equals(True)"?
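A tiny worked example of that behaviour (the precision values are made up):
import numpy as np
precisions = np.array([0.52, 0.74, 0.91, 0.95, 0.99])
mask = precisions >= 0.90    # array([False, False, True, True, True])
np.argmax(mask)              # 2 -> index of the first True, i.e. the lowest qualifying threshold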
H: Difference between PCA and regularisation
Currently, I am confused about PCA and regularisation.
I wonder what the difference is between PCA and regularisation, particularly lasso (L1) regression?
It seems both of them can do feature selection. I have to admit, I am not quite familiar with the difference between dimensionality reduction and feature selection.
AI: Lasso does feature selection in the way that a penalty is added to the OLS loss function (see figure below). So you can say that features with low "impact" will be "shrunken" by the penalty term (you "regulate" the features). Because of the L1 penalty, the $\beta_i$ can become zero (which is not the case with Ridge, L2). In the Lasso case you would "eliminate" a feature when it is "shrunken" to zero, and you could call this feature selection. Lasso can be used in "high dimensions", i.e. when you have many features ("columns") but not so many observations ("rows").
Principal components work in quite a different way. The first principal component is a normalised linear combination [of the original features] which has the largest variance. So you kind of "transform" the original features to a principal component (which is a "new feature" derived from the original ones), where you try to capture as much variance as possible in one principal component.
Principal components are uncorrelated (orthogonal). This can be very helpful when you do linear regression, in which (high) correlation between features can be a real problem. I see PCA as a tool for dimensionality reduction (not so much feature selection), since you can express many features in a (smaller) number of principal components.
So maybe a little too brief summary:
Lasso: "shrink" the estimated coefficients for features which are not too useful (but leaves the features as they are)
PCA: "combine" several features into one or more orthogonal "new" feature(s) (principal components) and use them in some type of model
For more details, refer to "Introduction to Statistical Learning" (available for free online). Chapter 6.2.2 covers the Lasso, chapter 10.2.1 covers PCA. |
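For illustration, a minimal scikit-learn sketch contrasting the two (X and y are placeholders, and the alpha / n_components values are only examples):
from sklearn.linear_model import Lasso
from sklearn.decomposition import PCA
# Lasso: some coefficients are shrunk exactly to zero -> implicit feature selection
lasso = Lasso(alpha=0.1).fit(X, y)
selected = [i for i, coef in enumerate(lasso.coef_) if coef != 0.0]
# PCA: the original features are combined into a few orthogonal components
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)          # new "features" = principal components
print(pca.explained_variance_ratio_)      # variance captured by each component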
H: LSTM classification with different sizes
I'm relatively new to the world of recurrent neural nets and I'm trying to build a classifier using an LSTM model to predict HIV activity from a given molecule (the original dataset can be found here ).
I have sequences of different lengths (from few dozen to almost 400 characters) but I'm not sure how to proceed. Let's say that I have a dataset structured like so:
import pandas as pd
import random
import string
random.seed(42)
seqs = [[random.choice(string.ascii_letters) for i in range(random.randint(1,10))] for i in range(5)]
classes = [random.randint(0,1) for i in range(5)]
df = pd.DataFrame({
"seq": seqs,
"class": classes
})
print(df)
seq class
0 [b, V] 0
1 [p, o, i, V, g] 0
2 [f, L, B, c, b, f, n, o, G] 1
3 [b, J, m, T, P, S, I, A, o, C] 0
4 [r, Z, a, W, Z, k, S, B, v, r] 0
I know I should:
one hot encode the elements,
perform masking and padding
But I don't know how to perform it in Keras/TF2 and I can't find any resources online that explain how to code something similar.
AI: masking and padding are indeed popular preprocessing choices when working with LSTM. The easiest way is to pad all the short sequences to the same length as the longest sequence.
Here is an example of masking and padding in Keras
https://www.tensorflow.org/guide/keras/masking_and_padding
Is that what you're looking for?
raw_inputs = [
[711, 632, 71],
[73, 8, 3215, 55, 927],
[83, 91, 1, 645, 1253, 927],
]
# By default, this will pad using 0s; it is configurable via the
# "value" parameter.
# Note that you could "pre" padding (at the beginning) or
# "post" padding (at the end).
# We recommend using "post" padding when working with RNN layers
# (in order to be able to use the
# CuDNN implementation of the layers).
padded_inputs = tf.keras.preprocessing.sequence.pad_sequences(
raw_inputs, padding="post"
)
print(padded_inputs)
Output:
[[ 711 632 71 0 0 0]
[ 73 8 3215 55 927 0]
[ 83 91 1 645 1253 927]]
Besides, you can also add start and end tokens to your sequences, like this example (https://www.tensorflow.org/tutorials/text/nmt_with_attention)
def preprocess_sentence(w):
w = unicode_to_ascii(w.lower().strip())
# creating a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
# Reference:- https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
# replacing everything with space except (a-z, A-Z, ".", "?", "!", ",")
w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
w = w.strip()
# adding a start and an end token to the sentence
# so that the model know when to start and stop predicting.
w = '<start> ' + w + ' <end>'
return w
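To also cover the encoding and masking part of the question in the model itself, here is a hedged sketch of a Keras classifier that maps characters to integer ids, pads with 0 and lets an Embedding layer with mask_zero=True handle the masking (this uses an Embedding on integer ids instead of explicit one-hot encoding, which is a common alternative; the vocabulary size and layer sizes are placeholders):
import tensorflow as tf
vocab_size = 30    # e.g. number of distinct characters + 1 for the padding id 0
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=16, mask_zero=True),
    tf.keras.layers.LSTM(32),                        # the mask is propagated automatically
    tf.keras.layers.Dense(1, activation='sigmoid')   # binary target ("class" column)
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# padded_seqs: integer-encoded sequences padded with 0, labels: array of 0/1
# model.fit(padded_seqs, labels, epochs=10, batch_size=32)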
H: Machine learning problem with only original data without test and validation data
I am new to machine learning and I am trying to solve a problem where I have to predict if a customer will buy a home insurance product or not.
I have got a dataset which tells me which of the bank's customers bought a mortgage from the bank.
I have got another dataset for the customers who bought the mortgage first: the bank then ran a campaign offering them home insurance at random, and this dataset tells me which of the mortgage customers actually bought home insurance from the bank.
Now my job is to predict which customers I should pick for the bank as having the highest probability of subscribing to the home insurance product.
I do not have separate train/test/validation datasets, just one dataset. How do I approach this problem? Should I create the validation and test data from the original dataset I have been given? How should I approach this problem to predict correctly?
AI: Usually, with a predictive model, you generate a train and test sample from the original dataset, e.g. by using sklearn.
import numpy as np
from sklearn.model_selection import train_test_split
X, y = np.arange(10).reshape((5, 2)), range(5)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html
Train some model on one set and test on the other by comparing predictions to actual (true) outcomes.
You could also use cross validation, in which case a model is trained on one part of the data (e.g. 4/5 of the data with 5-fold cv) and tested on the remaining 1/5 of the data. This is done for all of the „folds“. See Ch. 5.1. of „Introduction to Statistical Learning“ for more details on that. https://www.statlearning.com/
Simply make sure you obtain a test score based on data not used for training. |
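For illustration, a minimal sketch of 5-fold cross validation with scikit-learn (the estimator, X and y are placeholders):
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring='roc_auc')   # X, y: features and purchase labels
print(scores.mean(), scores.std())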
H: How useful is Bayesian Inference
Over the last few months, I have been exposed to Bayesian inference in an ML course.
With further investigation, I came across MCMC techniques to simulate the posterior distribution.
It seems interesting. However, I am not sure if it is really useful in industry.
Does anyone have experience with Bayesian inference in practice?
Take customer_lifetime_value as X for example
Basically, my main question is how Bayesian inference can be more useful than just plotting the frequency and cumulative frequency of historical X.
Because with the frequency, I can estimate the mean of X,
and with the cumulative frequency, I can estimate prob[X>x].
What is the advantage of trying to do Bayesian inference?
AI: This may be an unpopular opinion to some, but in my experience Bayesian statistics is not particularly useful in data science in industry, for a couple of reasons:
A Bayesian approach is very useful when our questions are about statistical inference. However, in data science, more often than not, we are dealing with prediction. There may be some situations where a Bayesian approach works better than a frequentist approach, but I can't think of any off hand, apart from where a conjugate prior is available, in which case we are probably dealing with a very simple model.
Bayesian statistics usually requires a sampling, such as Markov Chain or Hamiltonian Monte Carlo, and this can be extremely computationally intensive, even for relatively small datasets. In industry we are often dealing with "big data" and a Bayesian model that requires MCMC or HMC just isn't practical.
Edit: To address a comment to this answer:
I have a question about this, how statistical inference is different from prediction. As for my understanding, prediction is usually about getting P(Y=y | X), where X is our data, which is similar with statistical inference
Prediction and inference are completely different. With prediction, all we really care about is the accuracy of the predictions, and that is a relatively easy thing to determine - because the test/validation procedure is minimising some measure of prediction error. On the other hand, with inference, we care about the coefficient estimates and often their standard errors. Often a researcher forms a hypothesis based on a theory of causes - eg, does drinking coffee cause cancer, and they will use a model to determine whether or not the data supports their theory, and to what extent. This is much, much more difficult than prediction. For one thing, in causal inference the basic requrement is that we do not want the estimates for $\mathbf{X}$ to be biased - whereas in prediction we don't care if they are biased, provided that we get "good" predictions for $\mathbf{y}$, and there are many, many sources of bias in a regression model - bias can and does arise from confounding, mediation, differential selection and colliders. Usually, the crux of the issue is to decide which variables to include in the model in order to eliminate or reduce these biases. With prediction we can use automated variable selection procedure in order to choose which variables to include (feature selection). With inference this is pretty much impossible because automated procedures cannot generally handle the above mentioned biases.
There is a detailed discussion of these issues in my answer to this question:
How Do DAGs Help To Reduce Bias In Causal Inference ? |
H: Is there such thing as linear and non-linear data?
While doing machine learning projects, we've heard that logistic regression works well with "linear data" and decision trees work well with "non-linear data".
However, the concept of linear and non-linear data does not make sense to me. Only linearly separable data and non-linearly separable data make sense; it only makes sense to say logistic regression works well with "linearly separable data" since it is a linear function. In mathematics, linear functions are polynomials of degree one, and all other functions are considered non-linear.
What exactly is linear data and non-linear data?
AI: Calling data "linear" or "non-linear" is a bit misleading and wrong, I would say. Instead, speaking of a linear relation or a non-linear relation between variables would be better and more correct naming. It can easily be explained by examples from real life.
Non-linear relation: Any kind of relationship that is not linear can be put forward as an example (e.g. a quadratic relation: y = intercept + x^2, a square root relation: y = intercept + 6 * sqrt(x), etc.). Let's imagine a world where we are not aware of the free fall equation and we use ML to learn and predict it :). In that (specific and simplified) case, the algorithm would need to learn an equation of the following structure: s = v₀t + (1/2)gt², where t is our independent variable, s is our target or dependent variable, and v₀, g would be our coefficients to be learned. In other words, you would want to predict free fall (distance or height) using time. In that fictional world, Newton’s Law of Universal Gravitation (F = G * (M1 * M2) / r^2) and the area of a circle (A = π * r^2) would also be quadratic problems for ML. Linear functions do not meet the requirements of such cases unless you use some kind of transformation, say taking the square root of r^2 from Newton’s law or of t^2 from free fall and then using them in a linear function. The best-known example of a non-linear real-life relation is the prediction of height using age. Height changes continuously, but at different rates: until 13 it gradually increases, between 13-18 it significantly increases, between 18-25 it slightly changes, and after 25 it does not change. Thus, it is impossible to fit it into a linear equation, because it is not dependent on a single constant coefficient, or in other words, it does not fit into the formula height = intercept + b * age, because the coefficient (b) is not constant over time (age). Moreover, graph, tree, and other similar structures also fit a non-linear relationship. In cases where the result is highly dependent on ifs, linear algorithms are useless because you cannot fit the relationship into y = a + b * x. Let's say you want to predict who will win a chess game based on the moves they make; you would use a tree-like algorithm (e.g. alpha-beta pruning) to predict it.
Linear relation: These are simply the relations that can be explained by the formula y = a + b * x, or more generally y = a + b1*x1 + b2*x2 + .... The simplest example is predicting the amount spent in a bar (or any kind of entertainment venue): cost = intercept (imagine you pay an entrance fee) + (price of a drink) * (number of drinks bought) + (price of an appetizer) * (number of appetizers bought) would be our formula. |
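To make the distinction concrete, here is a minimal sketch (toy data and names are my own) showing that a model which is linear in its parameters can still capture a quadratic relation once the feature is transformed, as discussed above:
import numpy as np
from sklearn.linear_model import LinearRegression

# toy data following a quadratic relation y = 3 + 0.5 * x^2 plus noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 3 + 0.5 * x**2 + rng.normal(0, 1, size=200)

# fitting on the raw feature misses the curvature ...
raw_fit = LinearRegression().fit(x.reshape(-1, 1), y)
# ... while adding the transformed feature x^2 makes the relation linear in the parameters
X_poly = np.column_stack([x, x**2])
poly_fit = LinearRegression().fit(X_poly, y)

print(raw_fit.score(x.reshape(-1, 1), y), poly_fit.score(X_poly, y))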
H: Training set Distribution and Activation function/Loss function correlation
How should the probability distribution of the training set influence the choice of the activation function / loss function?
For instance if I have a Multinoulli distribution, which activation function should I choose? And why?
I can't get this correlation between the probability distribution of the training set and the choice of the activation function / loss function.
AI: The probability distribution of the training set normally has nothing to do with the activation function/loss function. Instead, the activation function of the last layer and the loss function are directly defined by what you are trying to predict.
For instance:
If you have a regression problem where the output values are not bounded, you would probably use no activation in the last layer and a mean squared error as loss function.
If you want your network to perform binary classification, you would use a sigmoid activation in the last layer (which outputs a value between 0 and 1, associated with the probability of belonging to one class or the other), with a binary cross-entropy as loss function.
If you want your network to select among N elements (e.g. multiclass classification), you would probably want your network to predict a probability distribution over those N discrete elements. This is precisely a categorical/multinoulli distribution over a discrete output space (i.e. only one element is selected from N possible discrete alternatives). In this case, the activation of the last layer of your network should be a softmax, and the loss must be the categorical cross-entropy, also referred to as negative log-likelihood. Be careful, however, with framework-specific details, because it is frequent that this pair of activation and loss is implemented with numerical stability considerations, and you need to select the appropriate implementations. For instance, in Pytorch, when you use NLLLoss, the last layer activation must be a LogSoftmax, not a normal Softmax. Alternatively, you can use CrossEntropyLoss, which combines NLLLoss and LogSoftmax into a single class. These aspects are normally described in the framework documentation. |
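As a minimal PyTorch sketch of the multinoulli case described above (batch size and number of classes are made up), the two equivalent setups are:
import torch
import torch.nn as nn

logits = torch.randn(8, 5)           # raw last-layer outputs: batch of 8, 5 classes
targets = torch.randint(0, 5, (8,))  # gold class indices

# Option 1: LogSoftmax as the last activation, paired with NLLLoss
log_probs = nn.LogSoftmax(dim=1)(logits)
loss1 = nn.NLLLoss()(log_probs, targets)

# Option 2: no activation; CrossEntropyLoss combines LogSoftmax and NLLLoss internally
loss2 = nn.CrossEntropyLoss()(logits, targets)

print(loss1.item(), loss2.item())  # identical up to floating point error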
H: CNN Design for Counting on Simple Images
This is the first CNN I'm designing, following college examples and assignments. I'm working on a CNN that I'd like to use to classify images by the number of shapes on them. My basic problem is that I can't seem to get the CNN to respond (accuracy and val_accuracy stay flat) after n epochs (I have varied n along with the steps and batch size). The images are 98 x 150 pixels and look like this:
This is 10 data squares on a single-color background. The image set has been built by varying the position of the shapes on the image, changing the background color (4 colors), changing the shape color (3 colors), changing the shape type (square, circle, hexagon), and flipping the images (vert/horiz). The images are stored in folders (1-26), with folder 1 having 1 shape per image and folder 26 having 26 shapes per image. There are 3300 images per folder, totaling ~86k images (I have experimented with a lower number of shapes). Here's an image of the training/validation file counts:
The problem I'm seeing is that the model never seems to "initialize"; in other words, the accuracy and validation accuracy do not change much while training the model:
The accuracy and val_accuracy behave very differently from any models I've worked with - normally I see a fast ramp rate which tapers off as the model trains. In this case both are flat and never increase, even after larger numbers of epochs.
Here's one of the models I've tried:
# Define the Model
model = models.Sequential([
layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(512, activation='relu'),
layers.Dense(num_classes)
])
I've tried several variants of this model along with an AlexNET all yield the same results.
I'm thinking that maybe the images are too simple/varied for the model to train? Maybe putting the shapes on a pixelated background would help make them more complex/varied? I'm also wondering whether initializing the model with random weights/biases might help? I'm using Tensorflow 2.x and don't know whether this is done automatically.
Does anyone have any thoughts/advice on how I may improve the model and at least see accuracy/validation increase rather than it being flat? I could deal with overfitting later (if it occurs).
I'd be very grateful for any advice - thanks in advance!
AI: This doesn't seem like a Deep Learning problem, this could be solved by classic Computer Vision techniques, so maybe you could inspire your architecture by those.
Check whether your model's convolutional layers' receptive field is large enough for the shapes you are trying to count.
You probably won't be able to do counting without some kind of post-processing.
Check YOLO strategies; you will probably be able to build a counter with them.
This is an object detection problem; you will often have poor performance trying to use an architecture designed for image classification, such as AlexNet, while framing this as a regression problem.
Your output should probably be a variable-length vector whose length is the desired count.
H: Is it better to use F1 score or AUC metric for imbalanced data classification?
I have a text classification problem, where the "positive" examples are the minority. What metric is better to use for binary classification for this case - F1-score or AUC?
AI: F1-score and AUC are two evaluation measures for binary classification but they are not comparable:
F1-score measures the performance of a hard classifier, i.e. a system which predicts a class for every instance. Through precision and recall it compares for every instance the predicted class vs. the gold-standard class.
AUC measures the performance of a soft classifier, i.e. a system which predicts a probability (or a score) for every instance. The difference is that the system doesn't decide which class the instance belongs to, so informally it can be seen as an "unfinished classifier". If one decides a threshold on the probability to separate the classes then it becomes a hard classifier.
Both are fine to be used with imbalanced data, that's not a reason to pick one or the other.
AUC is useful to study the general behaviour of a method without deciding a particular threshold. Sometimes the choice of a particular threshold can have a strong impact on performance, so using AUC avoids the issue completely. However strictly speaking AUC doesn't give the performance of a classifier. |
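To make the difference concrete, here is a small scikit-learn sketch (labels and scores are made up): AUC is computed from the scores directly, while F1 first requires committing to a threshold.
from sklearn.metrics import f1_score, roc_auc_score

y_true = [0, 0, 0, 0, 1, 1, 0, 1, 0, 0]   # imbalanced: only a few positives
y_score = [0.1, 0.3, 0.2, 0.4, 0.8, 0.6, 0.55, 0.9, 0.05, 0.2]

# AUC works on the soft scores, no threshold needed
print(roc_auc_score(y_true, y_score))

# F1 needs hard predictions, so a threshold must be chosen first
y_pred = [1 if s >= 0.5 else 0 for s in y_score]
print(f1_score(y_true, y_pred))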
H: Hidden Markov Model
I am trying to find answers to the following questions. Can someone please help. This is a Hidden Markov Model with 7 states and 4 observations. I have worked out the following solution but still need help with parts ii & iii.
Solution:
I.
GATTAG
= 1 * 1 * 0.5 * 0.25 * 0.2 * 0.5 * 0.4 * 0.15 * 0.6 * 0.25 * 1 * 0.5 * 1
= 0.00005625
II.
GTAAG
possible paths:
B -> S1-> S2 -> S4 -> S5 -> S7-> E
=>1 * 1 * 0.5 * 0.5 * 0.4 * 0.4 * 0.6 * 0.25 * 1 * 0.5 * 1 =
B -> S1-> S2 -> S4 -> S6 -> S7-> E
=> 1 * 1 * 0.5 * 0.5 * 0.4 * 0.4 * 0.4 * 0 * 0.7 * 0.5 * 1 = 0
B -> S1-> S3-> S4 -> S6 -> S7-> E
=> 0
B -> S1-> S3-> S4 -> S5 -> S7-> E
=> 1 * 1 * 0.5 * 0.3 * 0.4 * 0.4 * 0.6 * 0.25 * 1 * 0.5 * 1 =
III.
GTACGG
possible paths:
B -> S1-> S2-> S3-> S4 -> S6 -> S7-> E
B -> S1-> S2-> S3-> S4 -> S5 -> S7-> E
B -> S1-> S3 -> S2-> S4 -> S6 -> S7-> E
B -> S1-> S3 -> S2-> S4 -> S5 -> S7-> E
B -> S1-> S3 -> S3-> S4 -> S6 -> S7-> E
B -> S1-> S3 -> S3-> S4 -> S5 -> S7-> E
B -> S1-> S3 -> S4 -> S6 -> S6 -> S7-> E
B -> S1-> S2 -> S4 -> S6 -> S6 -> S7-> E
How do I calculate this probability?
AI: The total probability is simply the sum of all the probabilities from the different paths. In probability terms it's the union of disjoint events, which is why the probabilities can be summed. |
H: What exactly is the linear layer in the transformer model?
Please see this image:
There are linear layers to modify the query, key and value matrices, and one linear layer after the multi-head attention, as they also mention here:
Are these linear layers simply dense or fully connected layers? Let's consider the weight matrix $W_i^Q$. Does this represent a dense layer with "Q" nodes? As they are using matrices as input rather than 1D vectors, I am getting a little confused.
AI: In the original Transformer article, these linear layers are just matrix multiplications.
As described in the paragraph you referred to in your question $W^Q$ is a matrix of dimensions $d_{model} \times d_k$, that is, it is a fully connected layer with $d_k$ units.
In practical implementations, these have the optional addition of a bias vector. You can see their actual definition in the fairseq code. |
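As a hedged PyTorch sketch (not the fairseq code itself; the dimensions are the ones from the original paper), the query projection is simply a linear layer applied to the input:
import torch
import torch.nn as nn

d_model, d_k = 512, 64
W_q = nn.Linear(d_model, d_k)    # the optional bias mentioned above is included by default

x = torch.randn(2, 10, d_model)  # (batch, sequence length, d_model)
queries = W_q(x)                 # (batch, sequence length, d_k)
print(queries.shape)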
H: Understanding LSTM text input
I am a beginner in text generation and deep learning but I would like to get in touch with it. Currently I am learning about LSTM networks and VAEs for text generation. I would like to read a sequence at once and output another sequence. What I learned from this post is that the input shape should be 3D. Since my text data is 2D I want to reshape it. But this is now the confusing part. I have training data with the shape of (nb_sequences, max_seq_length) = (274, 95). As the post pointed out, it would be important to reshape the data to a format of input = (nb_sequence, nb_timestep, nb_feature) with nb_sequence being the number of sequences (274), nb_timestep the length of the sequence (95) and nb_feature the number of features describing a sequence at a timestep, right?
I have two questions here:
1). How to reshape properly? My vocab size is 359 which I imagined (obviously wrong) to be the number of the features. --> input = (274, 95, 359). This is not possible since an array of 274*95 = 26030 can't be reshaped into (274, 95, 359).
If I take the number of different words my sequences contain, wouldn't it lead to some errors since sentences do not naturally consist of the same words? Meaning that I would have a variable feature number for each sequence?
2). How would one realize the problem I am facing? I read about one-hot encoding the sequences. This might solve the problem I have regarding question 1), but as far as I understand this would lead me to a quite inefficient way of predicting sequences as I want to handle more data. Is there any suggestion how to do it in an efficient way?
AI: The input to an LSTM must be a batch of sequences of vectors of real numbers, i.e. a 3D tensor. Textual inputs are discrete tokens, so they are a batch of sequences of integers (indexes into the vocabulary table), i.e. a 2D tensor.
Before any reshaping, you must transform each integer value into a vector. For that, the usual approaches are one-hot encoding and embedding tables:
With one-hot encoding, you encode each integer index as a vector with 359 elements, where all are 0s except the position at the integer index, which is a 1.
With an embedding table, you keep a table with 359 vectors of an arbitrary dimensionality (the embedding size).
In either case, you get the "extra dimension" that you were lacking with integer indices. |
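A minimal Keras-style sketch of the embedding approach, assuming your vocabulary size of 359 and sequence length of 95 (the embedding size and LSTM width are arbitrary choices):
import numpy as np
from tensorflow.keras import layers, models

vocab_size, seq_len, emb_dim = 359, 95, 64

model = models.Sequential([
    layers.Embedding(input_dim=vocab_size, output_dim=emb_dim),
    layers.LSTM(128, return_sequences=True),
])

x = np.random.randint(0, vocab_size, size=(274, seq_len))  # 2D integer input
print(model(x).shape)  # (274, 95, 128): the embedding supplies the missing third dimension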
H: IS there any way we can add range in hyperparameter tuning of Decision Tree?
For example,
"min_samples_leaf":[1,2,3,4,5,6,7,8,9,10],
'criterion':['gini','entropy'],
"max_features":["auto","log2","sqrt",None],
"max_leaf_nodes":[None,10,20,30,40,50,60,70,80,90] }
but I want to add a range of 1 to 100 for max_depth. Is there any way I can do it?
Like shown below....
parameters={"max_depth" : range(1,100), ??????????
"min_samples_leaf":[1,2,3,4,5,6,7,8,9,10],
'criterion':['gini','entropy'],
"max_features":["auto","log2","sqrt",None],
"max_leaf_nodes":[None,10,20,30,40,50,60,70,80,90] }
AI: You can, for example, use "max_depth": np.arange(1, 101, 1) (note the upper bound of 101 so that 100 is included), which gives
array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26,
27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39,
40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52,
53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65,
66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78,
79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
92, 93, 94, 95, 96, 97, 98, 99, 100])
as a value range. |
H: Image classification convolutional neural networks
I am trying to solve this problem by using a convolutional NN to classify an image data set to check the type of disease it is. I have reached task 1b and am trying to implement the training loop. However, I am getting an error and can't understand how to implement it.
I am sharing the google drive link.
The code in task 1b where I am getting an error:
# news headlines text and its corresponding label.
# - sentence_len the maximum sentence length you want the
# tokenized to return. Any sentence longer than that should
# be truncated by the tokenizer. Any shorter sentence should
# padded by the tokenizer.
# We will be using the pretrained 'distilbert-base-uncased' transform,
# so please use the appropriate tokenizer for it. NOTE: You will need
# to include the relevant import statement.
I am stuck at this part. Can anyone suggest how to implement it?
AI: In the __getitem__ method you should return both the image and the label. In your current example you are trying to get the image by indexing the img_dir variable of your class, which won't work since this is of type Path. The general steps for the __getitem__ method are that you should (1) get the path of the image, (2) load the image and (3) convert the image to a tensor for pytorch to use. This would look something like this:
from torch.utils.data import Dataset
import torchvision.transforms as transforms
from pathlib import Path
import pandas as pd
from PIL import Image
class LesionDataset(Dataset):
    def __init__(self, img_dir, labels_fname):
        self.img_dir = Path(img_dir)
        # materialize and sort the paths so they can be indexed consistently
        self.img_paths = sorted(self.img_dir.glob("*"))
        self.labels_fname = pd.read_csv(labels_fname)

    def __len__(self):
        return len(self.labels_fname)

    def __getitem__(self, idx):
        img_path = self.img_paths[idx]        # (1) get the path of the image
        image = Image.open(img_path)          # (2) load the image
        image = transforms.ToTensor()(image)  # (3) convert the image to a tensor
        label = self.labels_fname.iloc[idx]   # row for this image; select the label column here if needed
        return image, label |
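You could then use the dataset like this (the paths are placeholders; depending on what your label column contains, you may want to return just that column as an integer so that the default DataLoader collation works):
from torch.utils.data import DataLoader

dataset = LesionDataset("path/to/images", "path/to/labels.csv")  # hypothetical paths
image, label = dataset[0]                                        # one (image_tensor, label) pair
loader = DataLoader(dataset, batch_size=32, shuffle=True)        # batches for the training loop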
H: is SST=SSE+SSR only in the context of linear regression?
the problem of regression is to minimize the sum of squared errors, i.e. $\sum\limits_{i=1}^n (y_i - \hat{y}_i)^2$.
But only in linear regression could you use the expression $\hat{y}_i = \beta_0 + \beta_1 x_i$, then minimize the sum of squared errors w.r.t. $\beta_0$ and $\beta_1$ to obtain the following constraints:
$$\sum\limits_{i=1}^n x_i(y_i - \hat{y}_i) = 0, \qquad \sum\limits_{i=1}^n \hat{y}_i(y_i - \hat{y}_i) = 0$$
and then these constraints are used to prove that the quantity $\sum\limits_{i=1}^n (\hat{y}_i - \bar{y})(y_i - \hat{y}_i) =0$, which yields SST=SSE+SSR.
So, my question is, if suppose regression wasn't linear, would:
SST=SSE+SSR still hold? If yes, why? If no, why?
If the answer to the above question is a no, then a follow-on question I would like to pose is: why is $R^2$ the go-to measure for assessing the regression performance of a model or a technique?
AI: if suppose regression wasn't linear, would SST=SSE+SSR still hold? If yes, why? If no, why?
Just to be clear: with linear regression it is perfectly OK to model nonlinear associations such as $y = 2x + 3x^2 + 17\log(x)$ simply by including the relevant nonlinear terms, because the model would still be linear in the parameters. I guess you are aware of this, but just wanted to make sure. In those cases, SST=SSE+SSR will hold.
Now, the crux of the matter is that SST=SSE+SSR is actually a special case that only holds when the model is linear in the parameters. When we are dealing with a nonlinear model such as logistic regression, or any Generalised Linear Model, the situation is quite different because we model the linear predictor's association with the response variable via a link function so that a simple sum of squared deviations does not meaningfully reflect the variability in the response because the variance of an individual response depends on its mean.
There is a very good further explanation of this on Cross Validated, by Ben, and so I will just post it here, for completeness:
The sums-of-squares in linear regression are special cases of the more general deviance values in the generalised linear model. In the more general model there is a response distribution with mean linked to a linear function of the explanatory variables (with an intercept term). The three deviance statistics in a GLM are defined as:
$$\begin{matrix}
\text{Null Deviance} \quad \quad \text{ } \text{ } & & \text{ } D_{TOT} = 2(\hat{\ell}_{S} - \hat{\ell}_0), \\[6pt]
\text{Explained Deviance} & & D_{REG} = 2(\hat{\ell}_{p} - \hat{\ell}_0), \\[6pt]
\text{Residual Deviance}^\dagger \text{ } & & \text{ } D_{RES} = 2(\hat{\ell}_{S} - \hat{\ell}_{p}). \\[6pt]
\end{matrix}$$
In these expressions the value $\hat{\ell}_S$ is the maximised log-likelihood under a saturated model (one parameter per data point), $\hat{\ell}_0$ is the maximised log-likelihood under a null model (intercept only), and $\hat{\ell}_{p}$ is the maximised log-likelihood under the model (intercept term and $p$ coefficients).
These deviance statistics play a role analogous to scaled versions of the sums-of-squares in linear regression. It is easy to see that they satisfy the decomposition $D_{TOT} = D_{REG} + D_{RES}$, which is analogous to the decomposition of the sums-of-squares in linear regression. In fact, in the case where you have a normal response distribution with a linear link function you get a linear regression model, and the deviance statistics reduce to the following:
$$\begin{equation} \begin{aligned}
D_{TOT} = \frac{1}{\sigma^2} \sum_{i=1}^n (y_i - \bar{y})^2 = \frac{1}{\sigma^2} \cdot SS_{TOT}, \\[6pt]
D_{REG} = \frac{1}{\sigma^2} \sum_{i=1}^n (\hat{y}_i - \bar{y})^2 = \frac{1}{\sigma^2} \cdot SS_{REG}, \\[6pt]
D_{RES} = \frac{1}{\sigma^2} \sum_{i=1}^n (y_i - \hat{y}_i)^2 = \frac{1}{\sigma^2} \cdot SS_{RES}. \\[6pt]
\end{aligned} \end{equation}$$
Now, the coefficient of determination in a linear regression model is a goodness-of-fit statistic that measures the proportion of the total variation in the response that is attributable to the explanatory variables. A natural extension in the case of a GLM is to form the statistic:
$$R_{GLM}^2 = 1-\frac{D_{RES}}{D_{TOT}} = \frac{D_{REG}}{D_{TOT}}.$$
It is easily seen that this statistic reduces to the coefficient of determination in the special case of linear regression, since the scaling values cancel out. In the broader context of a GLM the statistic has a natural interpretation that is analogous to its interpretation in linear regression: it gives the proportion of the null deviance that is explained by the explanatory variables in the model.
Now that we have seen how the sums-of-squares in linear regression extend to the deviances in a GLM, we can see that the regular coefficient of determination is inappropriate in the non-linear model, since it is specific to the case of a linear model with a normally distributed error term. Nevertheless, we can see that although the standard coefficient of determination is inappropriate, it is possible to form an appropriate analogy using the deviance values, with an analogous interpretation.
$^\dagger$ The residual deviance is sometimes just called the deviance.
Now as to the other part of the question:
If the answer to the above question is a no, then a follow-on question I would like to pose is: why is R2 the go-to measure for assessing the regression performance of a model or a technique?
My answer to this is that $R^2$ is NOT the go-to measure for assessing regression performance. On the contrary, $R^2$ is very poor. I would recommend reading this thread on CV
Is $R^2$ Useful or Dangerous
This has currently 10 answers, many of which are good, but please pay particular attention to the answer by whuber. |
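As a quick numerical sanity check of the decomposition in the linear case, the following sketch (toy data) fits an ordinary least squares model with scikit-learn and verifies SST = SSR + SSE:
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = 1.5 + X @ np.array([2.0, -1.0]) + rng.normal(size=100)

y_hat = LinearRegression().fit(X, y).predict(X)
sst = np.sum((y - y.mean())**2)
ssr = np.sum((y_hat - y.mean())**2)
sse = np.sum((y - y_hat)**2)

print(np.isclose(sst, ssr + sse))  # True for a model that is linear in the parameters (with intercept)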
H: How to train and evaluate machine learning models with growing/changing datasets over time
Assume that you have a classification machine learning model, and you start with an initial dataset that contains 3 classes. You split the initial dataset into training/testing splits, you train the initial model and evaluate it, and then over time you collect more data for your dataset. Now you have more data that you want to add to your initial training dataset. The question is: how do you organize your dataset and model training regimen so you can effectively quantify possible improvement between the initial model and the new model?
One possible solution: if you split the initial training dataset into training/testing data, once you have more data, you just split that data and then add the respective new training/testing splits into the original splits. But this approach feels a little too simplistic, and it could potentially lead to drift in the dataset itself.
This is similar to doing regression testing but for machine learning models, but I am not sure I'm using the correct technical jargon for it.
AI: The solution you described is definitely an option.
The other motivation for retraining is that the distribution of new data has drifted away from that of the original dataset and that warrants retraining to maintain the same level of model performance.
In such situations, it is possible that you may actually throw away some of your older training data because it may have become stale, and include a part of the newly available data for training. The remaining new data can then be used for out-of-sample testing and validation.
You also want to be able to figure out when to trigger the retraining. This is usually done by an independent drift detection process, which keeps track of some metric related to your Business and triggers retraining if it dips below a predefined threshold. |
H: Pytorch: Starting with a high loss value, but the loss converged at the end. I don't know if the model could start with a loss > 100. Help!
I have been trying to attempt plant disease detection using transfer learning methods. I chose ResNet50 first. I also performed a baseline model which is a CNN model. In resnet50, I used cross entropy loss and trained the model for 30 epochs. I did batch normalization too. Initially epoch 1 loss was 112.5250 and the training loss was 87.512. But, for the last epoch it was 2.1660 and Validation loss was 1.8905 with Validation accuracy as 0.995. The overall accuracy of the model was 98.8% and the model doesn't seem to overfit too.
Before training the model, I performed hyper parameter optimization where I optimized Learning rate and momentum using Bayesian optimization. I optimized weight decay using Cross validation. I performed batch normalization. Before that, I trained the model without any optimization. I just assumed the learning rate to be 0.001, same loss function and trained for 30 epochs. In this case, the model started with a loss value of 114.2 and validation loss was 130.46. It converged to 9.9038 with accuracy of 97%.
So, my question is: is it possible for the model to start with such a high loss value despite giving a good accuracy at the end? Does the magnitude of the loss have nothing to do with the correctness of the model or the accuracy? If the model started with a huge loss but gave a good accuracy, is my model bad?
First 5 epochs
EPOCH: 1
Epoch time taken: 28.61398434638977 seconds. Epoch 1 loss: 118.5250 Validation loss: 87.5214 Validation accuracy: 0.753
EPOCH: 2
Epoch time taken: 28.737553358078003 seconds. Epoch 2 loss: 68.8141 Validation loss: 50.3452 Validation accuracy: 0.858
EPOCH: 3
Epoch time taken: 28.45273518562317 seconds. Epoch 3 loss: 41.5181 Validation loss: 31.1520 Validation accuracy: 0.928
EPOCH: 4
Epoch time taken: 28.434630155563354 seconds. Epoch 4 loss: 26.8844 Validation loss: 21.1203 Validation accuracy: 0.948
EPOCH: 5
Epoch time taken: 28.762181282043457 seconds. Epoch 5 loss: 19.3646 Validation loss: 15.4843 Validation accuracy: 0.955
Last 5 epochs
EPOCH: 26
Epoch time taken: 28.35645818710327 seconds. Epoch 26 loss: 2.8168 Validation loss: 2.2771 Validation accuracy: 0.990
EPOCH: 27
Epoch time taken: 28.51978588104248 seconds. Epoch 27 loss: 3.3986 Validation loss: 2.2274 Validation accuracy: 0.992
EPOCH: 28
Epoch time taken: 28.51390767097473 seconds. Epoch 28 loss: 2.2707 Validation loss: 1.9976 Validation accuracy: 0.992
EPOCH: 29
Epoch time taken: 28.456573247909546 seconds. Epoch 29 loss: 2.5344 Validation loss: 1.9272 Validation accuracy: 0.990
EPOCH: 30
Epoch time taken: 28.486974239349365 seconds. Epoch 30 loss: 2.1660 Validation loss: 1.8905 Validation accuracy: 0.995
AI: The value of the loss in the first epochs is irrelevant. The network weights are initialized at random, so it is perfectly expected that the behavior of the network differs from the desired one.
However, this does not mean that you should only look at the last value of the training and validation loss. Instead, you should plot the whole curves, to understand the training dynamics. From the curves, you can see whether your network overfitted or underfitted, if there's still room for improvement if you train the model for longer, etc.
There are many questions in this site asking about how to interpret training/validation curves (e.g. this and this). You may take a look at them to understand how to interpret yours. From the values you posted, it seems there's still room for improvement if you train for more epochs. |
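If you collect the per-epoch values in lists during training, plotting the curves is straightforward; here is a minimal sketch with matplotlib, seeded with the first five epochs from the logs above:
import matplotlib.pyplot as plt

# in practice, append to these lists inside the training loop
train_losses = [118.5250, 68.8141, 41.5181, 26.8844, 19.3646]
val_losses = [87.5214, 50.3452, 31.1520, 21.1203, 15.4843]

epochs = range(1, len(train_losses) + 1)
plt.plot(epochs, train_losses, label="training loss")
plt.plot(epochs, val_losses, label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()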
H: How to classify a set of words into one of the given labels
I have three labels: amusement, calm and energetic.
I get sets of words like:
Set1 = {Cloud
Sky
People in nature
Plant
Flash photography
Happy
Shorts
Grass
Leisure
Recreation}
Set2 = {Plant
Green
Natural landscape
Natural environment
Branch
Tree
People in nature
Shade
Wood
Deciduous}
I want to classify these groups of words into one of the labels. What do you guys think? Set1 should be labelled energetic and Set2 should be labelled calm.
AI: There are probably many variants but here are two simple approches:
Using pretrained word embeddings, you can calculate the semantic similarity between two words. For example you could use cosine to measure the similarity between the vector of a target word (e.g. "calm") and every word in the set (e.g. "cloud"). Then the mean across words in the set gives how much the set is associated to the target, and you can pick the target which has the max similarity.
Using WordNet to directly obtain a semantic distance/similarity between words. The method is similar to the one above.
Note that there are many improvements that can be done to these basic ideas, for example you could use a predefined set of words related to "calm" instead of just the word "calm" (you can get the most similar words from WordNet for instance). There are also many options for the aggregation across the set of words. |
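A rough sketch of the first approach using spaCy's pretrained word vectors (the model name, label words and word list are assumptions; any embedding source would work):
import numpy as np
import spacy

nlp = spacy.load("en_core_web_md")  # assumed model with word vectors, installed separately
labels = ["amusement", "calm", "energetic"]
words = ["plant", "green", "branch", "tree", "shade", "wood"]  # hypothetical input set

def similarity(word, label):
    return nlp(word)[0].similarity(nlp(label)[0])

# mean similarity of the set to each label, then pick the best-scoring label
scores = {label: np.mean([similarity(w, label) for w in words]) for label in labels}
print(max(scores, key=scores.get))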
H: Can a machine learning model be used as some kind of compression?
I'm trying to understand how machine learning works. I read a lot, and now it came to my mind that it could be misused in a practical way. I also hope that this question is on topic here.
Please correct me if I have some wrong assumptions:
All models require sample data
The learning process is some kind of optimization of its functions aka neurons
The learning process is iterative to find the best values of the parameters to maximize the expected output of the neurons
To make this process faster you do some kind of reduction of the data, so that in the end the neurons do not need to look at every part of the input
You split up the training data into two buckets to find out whether the result became better through the optimization (I would call this verification, not sure what this is really called)
If your verification data are bad or equal to the input data, then the results can only be as good as your input data
If this is true, or at least most parts of it, I got this idea:
When I set the input data to be the same as the output data, then the model is doing nothing; for equal data I always get the same data back, right?
So when I train the model as in the table:
input  output
0      3
1      1
2      4
3      1
4      5
5      9
6      2
7      6
8      5
Then the model would return the digits of pi, based on the index (of course, for non-learned indexes the result would be random).
I absolutely know that this makes no sense at all, but I'm just curious if I understand it correctly that you could "compress" data in an ML model.
AI: I think there is a good bit of confusion in how you define Machine Learning:
There are different types of learning: if the model is trained with data annotated with the "correct" answers, then it's supervised learning.
There are many different "families" (methods) for supervised ML. Your description focuses on Neural Networks, but there are many others.
Your last two steps probably refer to validation (during training) and/or evaluation (after training and on a separate test set), but it's not very clear.
Supervised ML can be summarized as follows:
The goal is for the system to find a function $f$ which transforms an input $x$ (features) into an output $y$ (target variable)
The system is provided with a sample of pairs $(x,y)$ (training data). Note that this sample is only a subset of the population data: after the training the system must be able to find $y$ for any input $x$, not only the values of $x$ seen in the training data.
This is why the system must generalize from the data, i.e. find the patterns in $x$ which are useful to determine $y$. If the system doesn't generalize and only stores exactly which $x$ gives which $y$ then it cannot predict the answer $y$ for any value $x$ which was not seen in the training data, and that's useless.
In a sense the process of generalization can be seen as compressing the knowledge contained in the training data. However there are at least a couple major differences:
The goal of compression is to represent a specific input $x$ using as little space as possible in a way which makes it possible to re-obtain the same $x$, possibly with some loss. The goal of supervised ML is to predict an output $y$ for any $x$. First $y \neq x$, but more importantly the ML cannot produce back the data it was trained with (in general, see exception just below).
A ML model doesn't have to reduce the size of the training data. For example instance-based learning just stores all the instances in order to later use it for predicting. |
H: regression with noisy target vairable
How can I approach a regression problem where the input data is not noisy but the target variable is noisy? Are there any regression algorithms that are robust to a noisy target variable?
Also, is it possible to de-noise the target variable somehow? If so, how?
AI: It depends how much noise:
If it's only a little noise, say for instance 2% of the target values are off by a small value, then you can safely ignore it since the regression method will rely on the most frequent patterns anyway.
If it's a lot of noise, like 50% of the target values are totally random, then unless you can detect and remove the noisy instances you can forget it: the dataset is useless.
In general ML algorithms are based on statistical principles, to some extent their job is to avoid the noise and focus on the regular patterns. But there are two things to pay attention to:
Is the noise truly random, or does it introduce some biases in the data? The latter is a much more serious issue.
Noisy data is even more likely to cause overfitting, so extra precaution should be taken against it: depending on the data, it might be necessary to reduce the number of features and/or the complexity of the model. |
H: Creating radial basis for linear regression Python
I'm trying to do time series forecasting with linear regression like it's done in this video: Radial basis forecasting starting from 5:50.
I understand the basic idea of basis, but I don't think I understood the usage of it in time series data correctly. I have a Pandas dataframe with daily timestamps and target variables. I tried writing radial basis function
def radial_basis(x, month):
    alpha = 0.5
    coef = -1/(2*alpha)
    return np.exp(coef*(x-month)**2)
and calculating basis for every month (x is row number). This didn't work.
Any tips on how I should try to do this?
AI: I was able to solve this by myself.
First you need to create a column that contains day of year values from the timestamps.
Then apply radial_basis function for that column with month parameter being the middle day of every month. For example in January it's 15, February 45 etc.
With this method you can generate a spike for every month.
data_all['Day'] = data_all.Timestamp.dt.dayofyear
def radial_basis(x, month):
    alpha = 8
    coef = -1/(2*alpha)
    return np.exp(coef*(x-month)**2)
for i in range(12):
    col = 'RB' + str(i+1)
    data_all[col] = data_all.Day.apply(radial_basis, month=(15+30*i)) |
H: Computing a cumulative distribution function in Python
I'm trying to compute the distribution function of any of the usual distributions in Python... However, all the methods I've seen involve first drawing N samples from said distribution, and then order them somehow, and then do a cumulative sum.
In Mathematica, I can just do CDF[ChiSquaredDistribution[df],quantile]. If I want another distribution, I just substitute ChiSquaredDistribution for the name of that other distribution.
Is there a simple way, like in Mathematica, to compute a cumulative distribution function in Python?
AI: If I understand you correctly, these can be found in scipy.stats. scipy has a long list of different distributions that you can use, both continuous as well as multivariate and discrete. All distribution functions have an underlying cdf method which allows you to calculate the cumulative distribution functions of that specific distribution. Using the Chi-squared distribution from your example would look as follows:
from scipy.stats import chi2
chi2.cdf(x=30, df=50)
# 0.011164780271550276
Using other distributions is as simple as importing that distribution and using the cdf method as shown above. |
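The same pattern works for any other distribution in scipy.stats, for example the normal distribution:
from scipy.stats import norm

print(norm.cdf(x=1.96, loc=0, scale=1))  # ~0.975
print(norm.ppf(0.975))                   # the inverse CDF (quantile function), ~1.96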
H: TPOT machine learning
I trained a regression TPOT algorithm on Google Colab, where the output of the TPOT process is some boiler plate Python code as shown below.
import numpy as np
import pandas as pd
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline, make_union
from tpot.builtins import StackingEstimator
from tpot.export_utils import set_param_recursive
# NOTE: Make sure that the outcome column is labeled 'target' in the data file
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
features = tpot_data.drop('target', axis=1)
training_features, testing_features, training_target, testing_target = \
train_test_split(features, tpot_data['target'], random_state=1)
# Average CV score on the training set was: -4.881434802676966
exported_pipeline = make_pipeline(
StackingEstimator(estimator=ExtraTreesRegressor(bootstrap=False, max_features=0.9000000000000001, min_samples_leaf=1, min_samples_split=20, n_estimators=100)),
ExtraTreesRegressor(bootstrap=True, max_features=0.9000000000000001, min_samples_leaf=6, min_samples_split=13, n_estimators=100)
)
# Fix random state for all the steps in exported pipeline
set_param_recursive(exported_pipeline.steps, 'random_state', 1)
exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)
Would anyone know what the sklearn pipeline process is like, i.e. how does this work? When I flesh out the boilerplate code and run it with my data set in IPython I can see this output from the pipeline process; what is this all doing?
Pipeline(steps=[('stackingestimator-1',
StackingEstimator(estimator=ExtraTreesRegressor(max_features=0.6500000000000001,
min_samples_leaf=19,
min_samples_split=14,
random_state=1))),
('maxabsscaler', MaxAbsScaler()),
('stackingestimator-2',
StackingEstimator(estimator=ExtraTreesRegressor(max_features=0.4,
min_samples_leaf=3,
min_samples_split=7,
random_state=1))),
('adaboostregressor',
AdaBoostRegressor(learning_rate=0.001, loss='exponential',
n_estimators=100, random_state=1))])
The results look good; I'm just curious about how the pipeline processes work. Any tips or links to tutorials are greatly appreciated. I thought this machinelearningmastery tutorial is also somewhat useful for anyone interested in learning more about TPOT.
AI: The documentation for StackingEstimator is surprisingly poor, but it's relatively simple: fit the estimator on the data, and tack its predictions onto the dataset as a new feature. Source, github issue.
So, your pipeline fits an ExtraTreesRegressor on the original inputs, and appends its predictions to the dataset going forward. The data (original + 1st predictions) get scaled, then another ExtraTreesRegressor is fit (on scaled original and 1st preds, and with different hyperparameters), its predictions also getting tacked onto the dataset. Finally, an AdaBoostRegressor is fit on scaled-original + scaled-1st-preds + 2nd-preds. |
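Conceptually, what StackingEstimator does could be sketched like this (a simplification with toy data, not TPOT's actual implementation):
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def stack_predictions(estimator, X, y):
    # fit the estimator and append its predictions to X as an extra feature column
    estimator.fit(X, y)
    preds = estimator.predict(X).reshape(-1, 1)
    return np.hstack([X, preds])

X = np.random.rand(100, 5)
y = np.random.rand(100)
X_augmented = stack_predictions(ExtraTreesRegressor(n_estimators=10), X, y)
print(X_augmented.shape)  # (100, 6): the original 5 features plus the stacked prediction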
H: Applying the same changes to the test set
I'm busy working through Aurélien Géron's book. (Hands-On Machine Learning with Scikit-Learn, Keras, and Tensorflow)
The idea is to split the data into train and test set as early as possible in order to avoid data snooping bias. Afterwards changes are made to the data.
My question is that since changes were made to the training set, I assume the same changes (dropping columns, filling NA rows, converting categorical to numerical, etc.) should be made to the test set before training and evaluating? If that is the case, what is the correct way to perform this? Write everything as a function and run it on both, which seems a bit counter-intuitive when working with notebooks? Is there a built-in function that I'm not aware of?
AI: Most transformations are available as built-in classes in scikit-learn. You can assemble the classes into a scikit-learn pipeline and have your train data pass through it as part of the pipeline's fit operation. When you are ready to evaluate your test data you can simply run the pipeline's predict (or transform) operation. This ensures that the test data passes through the exact same transformation workflow as your train data. |
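A hedged sketch of such a pipeline (the column names are made up, and X_train, X_test, y_train are assumed to be the splits from train_test_split): the transformations are learned from the training set only during fit, and the same fitted transformations are applied to the test set during predict.
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric_cols = ["num_feature_1", "num_feature_2"]   # hypothetical column names
categorical_cols = ["cat_feature_1"]

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric_cols),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("onehot", OneHotEncoder(handle_unknown="ignore"))]), categorical_cols),
])

model = Pipeline([("preprocess", preprocess), ("regressor", LinearRegression())])
model.fit(X_train, y_train)           # transformations fitted on the training split only
predictions = model.predict(X_test)   # the same fitted transformations applied to the test split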
H: Handling nominal category features in decision tree
I have been reading some stackoverflow questions on how to handle nominal features for decision tree (sklearn implementation).
One of the answer states that :
Using a OneHotEncoder is the only current valid way, allowing arbitrary splits not dependent on the label ordering, but is computationally expensive.
But if we use LabelEncoder and make our tree go deep enough it will eventually isolate each category and the output remains the same.
So what is the advantage of OneHotEncoding then?
AI: I would not say that using One-Hot-Encoder to handle nominal features is an optimal way. Even it might affect adversely your Decision Tree model in terms of performance.
Theoretically, a Decision Tree does not need a one-hot-encoding for categorical features, besides it will increase the number of computations and will result in a relatively inefficient model. Unlike non-tree algorithms (e.g. regression algorithms), a Decision Tree cannot be biased by the label encoding of categorical features because it uses splits to reach the optimal result.
Experimental results show that one-hot-encoding gives rise to sparsity in the trees, which might drastically drop the performance of the model. Here is a wonderful analysis and explanation of the effect of one-hot-encoding on tree-based algorithms. Also, check this useful source. |
H: Extremely negative r^2
I use a linear regression to predict house prices (https://www.kaggle.com/c/house-prices-advanced-regression-techniques/overview). My linear regression sometimes works great with R^2 of 0.8 and sometimes really sucks with R^2 of - 20000000000000 (Yes, really that bad). My data is scaled (with Min-Max) but my target value isn't. The problem appeared before scaling as well. Apparently my predictions are only a few times off, but then they are really off.
Here is my code:
df = pd.read_csv('data/results/house_prices_advanced_regression/house_prices_advanced_regression_.csv')
target = df['SalePrice']
df = df.drop('SalePrice', axis=1)
x_train, x_test, y_train, y_test = train_test_split(df, target, test_size=0.2)
# model
lr = LinearRegression().fit(x_train, y_train)
predictions = lr.predict(x_test)
rmse = mean_squared_error(y_test, predictions, squared=False)
mae = mean_absolute_error(y_test, predictions)
print('RMSE: {}'.format(rmse))
print('MAE: {}'.format(mae))
x = range(len(y_test))
plt.plot(x, predictions, 'r')
plt.plot(x, y_test, 'b')
print("R^2: {}".format(lr.score(x_test, y_test)))
Output:
'RMSE: 4.2834860573491624e+16'
'MAE: 3483821398471256.5'
'R^2: -3.358286170039772e+23'
Difference between prediction (red) and real value(blue):
Coefficients:
Does anybody know what to do?
AI: There are a number of approaches to try and improve the model but more importantly to understand what is happening.
First of all, you've displayed a plot with over 200 'features' and despite scaling the input the coefficients are in the order of 1E17!!! Edit: house prices (in dollars) are in the range 1E5-1E7, i.e. less than $10M, yet the coefficients are at least 7 orders of magnitude greater or smaller.
Therefore, two immediate steps to take either together or in combination is conduct:
dimensionality reduction either using a formal method e.g. PCA or just dropping terms that have coefficients close to zero backed with some plots to help you decide.
regularisation - at the moment you have some extreme and opposite coefficients around +7.5E17 and -7.5E17. See the models here https://scikit-learn.org/stable/modules/linear_model.html#linear-model that implement ways to suppress extreme coefficients (a short sketch is given after this list). Edit: If your input data is between 0-1, ask yourself why some parameters are at opposite extremes, in effect cancelling each other out.
Use some personal domain knowledge and evaluate the individual columns. Is there anything that has no bearing on the sale price? Some silly examples might be the name of the first person who bought it, which has no relevance to house price, or a telephone number, etc.
As we're now entering the world of 'hyper-parameter' tuning don't forget to split your data into 3 parts instead of 2. |
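A minimal sketch of the regularisation point above, reusing the x_train/x_test/y_train/y_test splits from your code and simply swapping the estimator:
from sklearn.linear_model import Ridge, Lasso

# alpha controls how strongly extreme coefficients are penalised; tune it on a validation split
ridge = Ridge(alpha=1.0).fit(x_train, y_train)
lasso = Lasso(alpha=0.1).fit(x_train, y_train)  # Lasso can also drive useless coefficients to exactly zero

print(ridge.score(x_test, y_test), lasso.score(x_test, y_test))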
H: In U-Net, is there a non-linearity (relu) in up-convolution layer?
I am doing semantic segmentation using U-Net. I was wondering whether to include a 'relu' activation or not in the up-convolution layer?
x = Conv2DTranspose(filters, kernel_size) (x)
OR
x = Conv2DTranspose(filters, kernel_size, activation='relu') (x)
AI: I'd not use an activation function in these layers because that's how I saw it done the first time, but it's probably worth trying, since I can't come up with a reason why we wouldn't use an activation function in this layer; it probably shouldn't change the results much.
U-Net anyway has an architecture with convolutions between each upsampling, which use activation functions, so not using an activation function in the upsampling layers is not a big deal. |
H: What is the impact of changing image sources on an image recognition?
I have a fairly general question pertaining to an image recognition ML model. I’ve recently developed an image recognition model using a single camera collecting more than 5000 images and then trained/developed the model.
My question is if I change the camera source (also changing resolution) would this impact the accuracy of the model? The scripts we’ve developed would still resize the image to the same size as the training set.
There is some discussion within our team about this and I’m hoping someone with a little more expertise in image recognition can add some insight.
Thanks!
AI: In general the accuracy will change (even dramatically, as a Google research team found out) even with only slight changes in the input image quality.
References:
Understanding How Image Quality Affects Deep Neural Networks
Image quality is an important practical challenge that is often
overlooked in the design of machine vision systems. Commonly, machine
vision systems are trained and tested on high quality image datasets,
yet in practical applications the input images can not be assumed to
be of high quality. Recently, deep neural networks have obtained
state-of-the-art performance on many machine vision tasks. In this
paper we provide an evaluation of 4 state-of-the-art deep neural
network models for image classification under quality distortions. We
consider five types of quality distortions: blur, noise, contrast,
JPEG, and JPEG2000 compression. We show that the existing networks are
susceptible to these quality distortions, particularly to blur and
noise. These results enable future work in developing deep neural
networks that are more invariant to quality distortions.
Does Deep Learning Have Deep Flaws?
A recent study of neural networks found that for every correctly
classified image, one can generate an "adversarial", visually
indistinguishable image that will be misclassified. This suggests
potential deep flaws in all neural networks, including possibly a
human brain. |
H: dividing Mean by standard Deviation meaning
I have played around with logistic regression a little using movement data intervals that are prelabeled as either resting or active.
I now found that if I divide the mean movement of the individual intervals by the interval's standard deviation, the outcome is quite a good predictor of whether the interval is a resting interval or an active one, with an average AUC = 0.93 in a 20-fold cross validation.
Does someone have an idea of what I have created by dividing the mean by the standard deviation? It's like an inverted coefficient of variation, or semi-normalization?!
I want to report this in an essay, so I am asking myself whether there is a name to this statistic.
AI: You have created what can be called a normalised mean. Normalised in the sense that it corresponds to a related random variable which now has $\sigma = 1$ (since you divide by standard deviation, look it up in any statistics textbook).
The normalised mean can be a better predictor resulting from its normalisation which makes it robust to the range of fluctuations. |
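In pandas terms, the feature is simply the per-interval mean divided by the per-interval standard deviation; a small sketch with made-up column names:
import pandas as pd

df = pd.DataFrame({
    "interval_id": [1, 1, 1, 2, 2, 2],            # hypothetical interval labels
    "movement": [0.2, 0.3, 0.25, 1.5, 0.1, 2.0],
})

stats = df.groupby("interval_id")["movement"].agg(["mean", "std"])
stats["normalised_mean"] = stats["mean"] / stats["std"]  # the predictor described above
print(stats)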
H: In clustering, sequence number such as customer ID and dates such as purchase date should be dropped?
I am learning K-means clustering and found that in most datasets, there are sequence number such as customer ID and dates such as purchase date.
I don't see any use in them for clustering.
Should I include them for clustering or can simply ignore them?
Let's say other attributes are like purchase amount and number of purchases and etc.
AI: I think you should drop the customer ID since it's a unique identifier and won't give you any additional value.
Instead, I would be rather cautious with the date. The date can be split into year, month and day. Depending on the analysis you are doing, it might be interesting to have K-means running also with those features.
For instance, suppose that you want to check whether you have some blobs of data in a specific period of the year, you should use the month feature. |
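Extracting those parts from a purchase date is straightforward in pandas (the column names here are assumptions):
import pandas as pd

df = pd.DataFrame({"purchase_date": pd.to_datetime(["2021-01-15", "2021-07-03"])})
df["year"] = df["purchase_date"].dt.year
df["month"] = df["purchase_date"].dt.month
df["day"] = df["purchase_date"].dt.day
print(df)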
H: What the differences between self-supervised/semi-supervised in NLP?
GPT-1 mentions both semi-supervised learning and unsupervised pre-training, but they seem like the same thing to me. Moreover, "Semi-supervised Sequence Learning" by Dai and Le also seems more like self-supervised learning. So what are the key differences between them?
AI: Semi-supervised learning means having labels for a fraction of the data, while in self-supervised learning there are no labels available. Imagine a huge question/answer dataset. No one labels that data, but you can learn question answering, right? Because you are able to retrieve the relation between question and answer from the data.
Or, in modeling documents, you need sentences which are similar and sentences which are dissimilar in order to learn document embeddings, but these detailed labels are usually not available. In this case you count sentences from the same document as similar and sentences from two different documents as dissimilar and train your model (example idea: you can run topic modeling on the data to make the similar/dissimilar labels more accurate). It is called self-training. |
H: What if outliers still exist after variable transformation?
I have a variable with a skewed distribution.
I applied BoxCox transformation and now the variable follows a Gaussian distribution. But, as seen in the image below in the boxplot, outliers still exist.
My question is:
Although after transformation, the variable distribution is nearly Gaussian, if there are still outliers, should we still select this transformation?
Or should we decide to use other techniques such as discretization in order to capture all outliers?
AI: There is no single right way for all cases. I have dealt with outliers from both a statistical and a business-problem view.
Are the outliers in a segment the business is expanding into and expect more people in these "outlier" areas? In this case these are not outliers from the business view and probably should be kept.
Are the outliers in a segment the business is retreating from and expect fewer people in these areas. Possibly want to get rid of these records.
Are these outlier records just outlier in this feature or in others? If they are just far from the mean in 1 feature, might want to keep them.
Discretizing - I am not a fan since the model is losing information. But can add an indicator variable and try the model. Might want to try multiple bucket approaches but I still think the model should see the real numbers.
I built models where the model would always get these outliers correct or incorrect regardless of if they were in the model training or not. So including these was a moot point. Want to make sure they are in a validation set to check. Make an outlier validation set.
Is the business treating these people different regardless of the model. For example, if this is a marketing model, the business might be targeting high income people regardless of what the model says. So including these in the model may penalize lower income people. Try building the model with and without.
I am sure I am missing other techniques that I have done. Outliers do not mean wrong and they do not mean right. They can be looked at from a purely statistical view. But in building a model since we should know something about the business problem and have access to subject matter experts, we can probably have a better informed decision. And always test. Might need to build multiple models. |
H: Model accuracy when training on GPU and then inferencing on CPU
When we are concerned about speed, GPU is way better than CPU.
But if I train a model on a GPU and then deploy the same trained model (no quantization techniques used) on a CPU, will this affect the accuracy of my model? Can the accuracy of the same model degrade on a CPU?
My intuition says, GPU vs CPU should not make any difference if accuracy is concerned.
But I have one doubt: GPUs and CPUs have their own different ways of processing information internally. Both of them have different architectures. When a model is trained on a GPU, does the exact same processing happen when it is trained on a CPU, just in a much slower way? I am not concerned about the accuracy while training, but if a model was trained on a GPU, will it perform with exactly the same accuracy on a CPU?
AI: The results you get will be identical* for identical inputs. So for any practical purposes, the accuracy you get will not depend on whether you use a cpu or gpu.
*up to floating point precision. From this post on the pytorch forum:
Both are implementing the floating point number computation standard.
So they are both correct (even though [they may be] different) |
H: Dummy vectors and performance measurement for vector search Face Recognition
I have thousands of faces (from the celebrity dataset LFW), where each person is represented by a 512 x 1 vector. I stored them in a vector DB to build a face-searching system using embedded features (MTCNN for face detection and ArcFace for the embedding model). Someone suggested that I add many vectors as "dummy faces" to the database with an unknown class (the number of these vectors being larger than the number of personal classes).
It's still unclear to me why I need to add many unknown faces as an "UNKNOWN" class and put them together with the thousands of vectors from each person. As far as I know, it's pretty easy to check the performance by getting the similarity score using only the known vectors (the vectors from each person) without the unknown ones. For example, if I put k = 3 or k = 5, I will take the minimum distance as the result and get the class of the vector (ID or label).
AI: No, creating dummy unknowns is not the best way to do it.
A better approach: when a new face comes in, we calculate the distance between the vector of the new face and all of the vectors of the known faces already present. To identify the correct face, the minimum distance is considered, but this minimum distance should also be below a threshold value. If the minimum distance is above the threshold value, then the face is considered unknown. The threshold value is set manually.
Just to give you the idea, here is pseudo example -
Let's say you have 5 registered (known) faces. Their vectors are [1] [2] [3] [4] [5]. Assume that your model represents each face by a vector of shape 1 x 1.
You set the threshold value as 0.4
Your distance function is defined as abs(vector1 - vector2)
Scenario 1 -
A new face comes in, and your model generates a vector [2.2]
So you calculate the distance between the new face and the known faces as - [1.2] [0.2] [0.8] [1.8] [2.8]
Your minimum distance is [0.2] which is less than your threshold 0.4. Hence, this new face is identified as "[2] 2nd face" face.
Scenario 2 -
A new face comes in, and your model generates a vector [7.5]
So you calculate the distance between new face and known faces as - [6.5] [5.5] [4.5] [3.5] [2.5]
Your minimum distance is [2.5] which is more than your threshold 0.4. Hence, this new face is identified as an "unknown" face.
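In code, the same idea might look like this (a sketch with the 1-D toy vectors from the example; real embeddings would be 512-D and you might prefer cosine distance):
import numpy as np

known_vectors = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])  # one vector per registered face
known_labels = ["face_1", "face_2", "face_3", "face_4", "face_5"]
threshold = 0.4

def identify(new_vector):
    distances = np.linalg.norm(known_vectors - new_vector, axis=1)
    best = np.argmin(distances)
    return known_labels[best] if distances[best] < threshold else "unknown"

print(identify(np.array([2.2])))  # "face_2" (scenario 1)
print(identify(np.array([7.5])))  # "unknown" (scenario 2)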
Lastly,
As you mentioned, you can use a KNN-based approach. Although it would produce very accurate results, it might not scale well: as the number of faces in your database grows, the KNN approach will slow down. |
H: AUC higher than accuracy in multi-class problem
I stumbled upon a 3-class classification problem where all compared classifiers yield a higher AUC than accuracy (usually around 10% higher). This happens both when the dataset is balanced or slightly imbalanced.
Now, after looking at this answer: Why is AUC higher for a classifier that is less accurate than for one that is more accurate? I understand that, for binary classification, this might happen because the accuracy is typically computed at a threshold of 0.5 whereas AUC is based on all threshold values.
But what happens with multi-class classification? Specifically, those scenarios where accuracy is defined as the frequency with which the predicted labels match the true labels (tf.keras.metrics.CategoricalAccuracy) and AUC is defined as the weighted average of the AUC for each class vs the rest (One-vs-rest) (sklearn.roc_auc_score). Why might AUC be higher there?
In other words, I'm trying to understand what this result means. Does it mean that my classifiers can discern well when each class is measured against the others (AUC), but not as well when the prediction probabilities are an output of the softmax function, and therefore spread out for the three classes?
AI: I think the question can se splitted to three parts. The first part is about comparing accuracy and AUC of the same model, the second is about comparing models and the third is about multi-class problems.
First part - I think that accuracy and AUC are not comparable metrics, I don't think that there is special meaning to the case where one is higher than the other.
Second part - ROC plot and AUC are good for understanding model and choosing model types, but in the end of the day you usually want to evaluate your final model with specific threshold so accuracy, precision and recall are the relevant metrics.
Third part - I think the answers to the previous parts above are valid both for binary classification and multi-class classification. Why do you think it have different meaning here? |
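That said, to make the two metrics from your question concrete, here is a minimal sketch of how each is typically computed; the labels and softmax outputs are made up and purely illustrative:
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([0, 1, 2, 1, 0, 2])
# Softmax-style probabilities for 3 classes (each row sums to 1)
y_prob = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.3, 0.6],
    [0.4, 0.4, 0.2],
    [0.5, 0.2, 0.3],
    [0.3, 0.3, 0.4],
])

# Accuracy only looks at the argmax of the probabilities (one fixed decision rule)
acc = accuracy_score(y_true, y_prob.argmax(axis=1))

# One-vs-rest AUC uses the full per-class probability ranking
auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="weighted")

print(f"accuracy: {acc:.3f}, weighted OvR AUC: {auc:.3f}")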
H: Replace a value in pytorch tensor
t = tensor([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
Now I need to modify this existing tensor t as follows:
t = tensor([0.1, 0.2, 0.7, 0.4, 0.8, 0.6])
I tried the following:
t = tensor([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
a = tensor([0.1, 0.2, 0.7, 0.4, 0.8, 0.6])
index = range(len(a))
t.index_copy_(0, index, a)
But t is still not updated.
How can I modify the tensor in PyTorch?
AI: The index argument of index_copy_ must be a tensor of indices (a LongTensor), not a Python range, and the source values must be a tensor as well. Try this:
from torch import tensor
t = tensor([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
t.index_copy_(0, tensor([2, 4]), tensor([0.7, 0.8]))  # t is now tensor([0.1, 0.2, 0.7, 0.4, 0.8, 0.6])
Reference:
torch.Tensor.index_copy_ |
H: pytorch dataloader tensor modification
T = tensor([[101, 123, 414, 463], [234, 903, 313, 341], ...])
train = TensorDataset(T)
train_dataloader = DataLoader(train)
Now I would like to update the tensor T[0], i.e.
T[0] = tensor([101, 123, 567, 463])
For this I have tried the following:
train_dataloader.Dataset[0].index_copy_(0, tensor([2]), tensor([567]))
Is it possible to modify it this way or not?
Any kind of reference would be helpful.
AI: A PyTorch DataLoader is a generator: it produces new batches as it is iterated. Hence, to the best of my knowledge, you can only change the data on the fly. For example, if you want to replace the first element of your dataset with the second element of the dataset, you can do something like this -
T = torch.tensor([[101, 123, 414, 463], [234, 903, 313, 341]])
train = TensorDataset(T)
train_dataloader = DataLoader(train)
for idx, i in enumerate(train_dataloader):
    if idx == 0:
        i = i[0]         # Get the first and the only instance of the batch
        i = train[1][0]  # Swap it for the second element, for this iteration only
Otherwise, you will need to change your dataset (the train variable) and then create the dataloader again from the updated dataset.
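As a minimal sketch of that second option (reusing the toy tensor from above), you can edit the underlying tensor with index_copy_ and then rebuild the TensorDataset and DataLoader:
import torch
from torch.utils.data import TensorDataset, DataLoader

T = torch.tensor([[101, 123, 414, 463], [234, 903, 313, 341]])

# Modify the raw tensor first (replace 414 with 567 in the first row)...
T[0].index_copy_(0, torch.tensor([2]), torch.tensor([567]))

# ...then recreate the dataset and dataloader from the updated tensor
train = TensorDataset(T)
train_dataloader = DataLoader(train)

for batch in train_dataloader:
    print(batch[0])  # first batch prints tensor([[101, 123, 567, 463]])
    break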
H: Suitable instance counting CNN for training on polygonal masks
I have a medical dataset labeled with polygonal masks (rather than rectangular boxes). It works well for pixel annotation with a UNet to generate masks of healthy vs. damaged skin. Now I need to do instance counting. Most CNNs, like YOLOv4, consume bounding boxes, not polygons.
Which CNN should I use for instance counting given that my dataset consists of labeled polygons?
AI: Which CNN should I use for instance counting given that my dataset consists of labeled polygons?
CNNs for instance segmentation. To start with, you can try Mask R-CNN. Here are all the state-of-the-art CNNs for instance segmentation; you will also find code for most of them.
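As a rough sketch (not any particular paper's code) of how instance counting could then look with torchvision's pretrained Mask R-CNN, assuming you have already fine-tuned it on masks rasterized from your polygons; the file name and the 0.5 score threshold are placeholders:
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained on COCO; for medical data you would fine-tune on your own masks first
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = to_tensor(Image.open("skin_sample.jpg").convert("RGB"))  # hypothetical file

with torch.no_grad():
    output = model([image])[0]   # dict with 'boxes', 'labels', 'scores', 'masks'

keep = output["scores"] > 0.5    # arbitrary confidence threshold
print("Detected instances:", int(keep.sum()))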
H: ZeroR as performance baseline for binary classfication model?
It is known that the ZeroR model is used to predict the majority class in a given data set.
Having said that, is ZeroR a suitable performance baseline provided one has a balanced data set (50/50)?
If not, what would be a good baseline for a Naive Bayes classification model, used for binary classification (positive/negative)?
AI: Sure, ZeroR is a perfectly fine baseline. In this case I think it's better to call it a random baseline rather than a majority baseline, since with a perfectly balanced (50/50) data set predicting the majority class is no better than guessing at random.
To my knowledge this is the only basic baseline usable with any classification task. Other baselines would involve something more sophisticated based on the specific task. The standard way to have a more competitive baseline is to use a state-of-the-art method for the task.
By the way, the baseline and the evaluation method don't depend on the learning algorithm; it doesn't matter whether it's NB or any other classification method.
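If you want to compute such a baseline explicitly, scikit-learn's DummyClassifier is a convenient sketch (X_train, y_train, X_test, y_test stand for your own splits):
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

# "most_frequent" reproduces ZeroR; "uniform" or "stratified" give random baselines
baseline = DummyClassifier(strategy="most_frequent")
baseline.fit(X_train, y_train)
print("Baseline accuracy:", accuracy_score(y_test, baseline.predict(X_test)))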
H: What does my learning curve indicate?
I have performed logistic regression and I am getting an accuracy of 77% with my current model. I divided my training set into a cross-validation set and a train set, and I plotted a learning curve (number of training examples vs. the cost function on the train set and on the cross-validation set). My learning curve is shown below -
What does it indicate? Since the two curves differ very little (one is at 0.51 and the other at 0.52), is my model biased, or is it fine?
AI: Generally speaking, the "normal" shape of a learning curve (defined as a "plot of error vs training set size is known as a learning curve" (1)) is to observe an initially very low training error indicating that the model almost perfectly learns the small amount of training data while the test error will be high. When the amount of training data increases, the training error is expected to increase, too, as it becomes harder for the model to learn the increasingly complex data. At some point usually the training error stops increasing because data complexity, i.e. the number of distinct patterns in the data, does not increase further - even when adding more data.
In contrast, the test error is expected to be high in the beginning and then decrease when train and test data become more similar (since you're adding more training data). That is, in the beginning the model overfits (which is good news since it means that it's able to learn the data) and later train and test error ideally converge. This occurs when the training and test data become more similar.
This could look similar to this (The black horizontal line is the Bayes error) (1):
Or like this for a more complex model (1):
And that is similar to what your graph shows.
In contrast, a model which is not able to capture even simple patterns when the amount of training data is low could produce a learning curve like this (1):
A third scenario would be a very complex model showing a learning curve like this (1):
Compared to the first plot, the training error does not increase as fast since the more complex model is able to overfit data even when data complexity increases. But it also shows a very high test error in the beginning and a larger gap between train and test error for larger amounts of data.
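If you want to reproduce this kind of plot yourself, here is a minimal sketch with scikit-learn (it plots error = 1 - accuracy against the training set size; X and y stand for your own data, and logistic regression is used only because that is the model in the question):
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 10), cv=5, scoring="accuracy")

plt.plot(sizes, 1 - train_scores.mean(axis=1), label="train error")
plt.plot(sizes, 1 - val_scores.mean(axis=1), label="cross-validation error")
plt.xlabel("number of training examples")
plt.ylabel("error (1 - accuracy)")
plt.legend()
plt.show()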
This answer might also be interesting for you to read.
References:
(1) Probabilistic Machine Learning: An Introduction p. 109-110 |
H: Should I do one hot encoding before feature selection and how should I perform feature selection on a dataset with both categorical and numerical data
a newbie here. I am currently self-learning data science. I am working on a dataset that has both categorical and numerical (continuous and discrete) features (26 columns, 30244 rows). Target is numerical (1, 2, 3). I have several questions.
I have not applied any encoding or scaling techniques yet. To my knowledge, since my categorical data are unordered, I have to perform one-hot encoding, right? As it will increase the number of columns, I am hoping to do that after feature selection. Is that okay?
How can I perform feature selection for this dataset? (Because this has both numerical and categorical data) Should I first do one-hot encoding and then go for checking correlation or t-scores or something like that?
(I am currently focusing on EDA only. I don't have a model in my mind)
Any help is much appreciated. Thank you!
AI: I have to perform one hot encoding right?
Yes
As it will increase the number of columns, I am hoping to do that after feature selection. Is that okay?
No, you should do basic preprocessing, such as dealing with missing values, and then handle the categorical data before feature selection. Beware of the difference between nominal and ordinal features.
How can I perform feature selection for this dataset?
There are many ways to perform feature selection. You can use the methods you mentioned as well as many other methods, such as -
L1 and L2 regularization
Sequential feature selection
Random forests
More techniques in the blog
Should I first do one-hot encoding and then go for checking correlation or t-scores or something like that?
There is a great answer on this issue here. |
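To make the order of operations concrete (handle missing values, one-hot encode, then select), here is a hedged sketch using a random forest's feature importances; the file name and the target column name are invented:
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("your_data.csv")                 # placeholder file name
y = df["target"]                                  # placeholder target column
X = pd.get_dummies(df.drop(columns=["target"]))   # one-hot encode the nominal columns

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X, y)

# Rank one-hot encoded features by importance
importances = pd.Series(rf.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(15))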
H: Feeling Stuck on a Beginner – Intermediate level
Over the past two years, I have been working as a full-time data scientist for a government company. As the sole data science team in the organization, our job is a hybrid between data science and machine learning engineering. We need to research and develop ml solutions for the organization's business problems as well as implement them in production environments.
The problem is I'm feeling stuck knowledge-wise and I don't know what can I do about that. Let me explain.
I have a major in computer science (B.Sc). Although I took some ai/ml courses during my major, I would attribute most of my data science education to the wonderful book "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow".
During these past two years in the organization, I have gained a lot of experience in the field: I managed to deliver some fair, but far from perfect, ml solutions for a couple of the organization's business problems.
But alas, I'm still feeling like I'm missing a big piece of the puzzle that holds me from going forward. I'm feeling like I'm stuck somewhere between a beginner to an intermediate data scientist.
I know all about the basic ml models and their basic intuitions and algorithms. I know the basics of deep learning and how to implement them in keras/tensorflow/pytorch. I know about CNNs, RNNs, and other basic deep learning architectures. I'm pretty proficient with pandas, numpy, and all other common data preprocessing\wrangling\visualization libraries. And yet, despite all of that, I can't shake the feeling that I'm missing something important. Something that would make the difference on the previous ml problems I worked on and would differentiate a professional data scientist from me. Sometimes I feel like, for lack of a better term, a 'stack overflow' data scientist. I mean, with every problem it's the same: I preprocess the data a bit (nothing too fancy or advanced), I try a couple of basic ml models (usually random forests\gradient boosting works the best) and then I try to see if I can get better results with a deep learning approach. Finally, I will do some hyper-parameter optimization and will start the process of implementing this model in production.
I know the primary suspect is my not-so-great math/statistics knowledge, but is it really? Obviously, I know the basic math behind the models (not that I see it as really critical at this point) and I know the basic concepts in statistics. Will improving either one of these areas really improve me as a data scientist in my day-to-day work? Because honestly, I don't think this is the answer.
I'm not looking to do a master's in computer science. I'm looking more for some useful books, online courses, or anything else that might help.
To sum it up: how can I 'escape' this beginner area and become a next-level data scientist/ml engineer? A one that can bring something unique to the table, other than doing the basic and obvious stuff for each problem.
I would really appreciate any advice on this. Thanks in advance.
AI: There's a good chance that your question will be closed I'm afraid, but here are a few thoughts:
would differentiate a professional data scientist from me
A professional data scientist is somebody who does data science for a living, so you definitely belong to the club, congrats!
Seriously, apparently, you have at least some symptoms of the impostor syndrome: your level is appropriate, you're able to do your job, yet you feel inadequate. The usual advice on AcademiaSE (it's very common in academia) is to deal with the psychological aspect, optionally with some professional help.
Now about the myth of "the real professional data scientist": data science has become vast and specialized. There's not even a clear definition of the scope of data science, let alone a shared understanding of which knowledge/skills a data scientist should have. Additionally, the field changes very fast, so it's humanly impossible to know everything.
What people usually recommend is to gain as much experience as possible, and especially in your case since it looks like you already covered the theory fairly well. You can just pick a topic you'd like to dig deeper into and go for it.
For the record, I find browsing and answering questions on DataScienceSE a very good way to keep up, discover things that I didn't know, and progress. Why is answering useful, you ask? Because it forces me to (1) understand the problem and think about how I would address it (an intellectual ML design exercise, always good to practice), and (2) explain things in a clear way, which is always a good exercise to check how clear things are in my mind.
H: Which model is best for object detection which is trained on COCO dataset?
I want to do object detection and segmentation. I want to find out which models are trained on the COCO dataset (e.g. YOLO), and among those, which model has the highest accuracy and the lowest inference time. In short, which model trained on the COCO dataset is best for object detection and segmentation?
AI: I want to find out that which models are trained on COCO-Dataset
Here you can find state-of-the-art models for instance segmentation on the COCO dataset. And here you can find state-of-the-art models for object detection. You will also find open-sourced code and models for most of them.
But I want to find out which model has the highest accuracy and lowest time.
Here you can find state-of-the-art models for real-time object detection. You can refer to the FPS metric mentioned in the table to compare their speeds. Following is the image from the current table topper (as of 31st May 2021).
Here you can find state-of-the-art models for real-time instance segmentation. Again, you can refer to the FPS metric mentioned in the table to compare their speeds. SipMask++ is the current table topper (as of 31st May 2021).
I want to do Object detection and Segmentation.
If you have the instance segmentation mask of an object, you can easily get the bounding box out of it.
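For instance, given a binary mask as a NumPy array, a minimal sketch for extracting its bounding box is:
import numpy as np

def mask_to_bbox(mask):
    """Return (x_min, y_min, x_max, y_max) for a binary mask of shape (H, W)."""
    ys, xs = np.where(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()

mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 3:8] = True          # a small rectangular blob
print(mask_to_bbox(mask))      # (3, 2, 7, 4)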
H: Data cleaning in Pandas, where the csv file has all data of each row in 1 field
I have really messy data that looks like this:
As you can see, all the data in each row is contained in one column, separated by semicolons.
How do I arrange this data so that it is spread out over more columns? For example, category_id, category_id_lvl_0, etc. should each become a separate column, and the values underneath (the ones separated by semicolons) should fall under the corresponding columns category_id, category_id_lvl_0, and so on.
AI: That to me doesn't seem like messy data at all; it is just a csv file with a ; delimiter. Depending on the region settings, Excel can use different delimiters when saving data as a .csv file, ; being one of them. By default pandas assumes , as the delimiter, which in this case does not apply. Try reading the file in by specifying the correct delimiter using the sep argument as follows:
import pandas as pd

# sep (also available under its alias delimiter) tells pandas how columns are separated
df = pd.read_csv(filename, sep=";")
H: Why the gradient of a ReLU for X>0 is 1?
The gradient is the vector of partial derivatives of a function of several variables.
I can't understand why the gradient of a ReLU is 1 for x > 0 and 0 for x < 0.
I tried to search for a proof and examples, but didn't find any good ones.
AI: The ReLU function is defined as follows: $f(x) = max(0, x)$, meaning that the output of the function is maximum between the input value and zero. This can also be written as follows:
$
f(x) = \begin{cases}
0 & \text{if } x \leq 0, \\
x & \text{if } x \gt 0
\end{cases}
$
If we then simply take the derivative of the two pieces with respect to $x$, we get the gradient for input values less than or equal to zero and for values greater than zero.
$
f'(x) = \begin{cases}
0 & \text{if } x \leq 0, \\
1 & \text{if } x \gt 0
\end{cases}
$
Therefore the gradient of the ReLU function is zero for values up to and including zero and 1 for positive values. Strictly speaking, ReLU is not differentiable exactly at $x = 0$; in practice, deep learning frameworks simply pick a subgradient there (conventionally 0), which is what the cases above reflect.
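You can also verify this numerically with automatic differentiation; a small PyTorch sketch:
import torch

x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0], requires_grad=True)
torch.relu(x).sum().backward()
print(x.grad)  # tensor([0., 0., 0., 1., 1.]) -- 0 is used as the (sub)gradient at x = 0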
H: How to encode a sentence using an attention mechanism?
Recently, I read about one of the state-of-the-art methods, attention models. This method uses an encoder-decoder model and can find a better encoding for each word in a sentence. But how can I encode a full sentence?
For example, I have a sentence "I love reading".
After embedding, this sentence will be converted to a list of three vectors (or a matrix of dimension number-of-words times embedding-dimension).
After several attention layers, I will still have a matrix of the same shape.
How can I convert this matrix to a single vector that contains an encoded representation of the full sentence?
AI: A standard way of obtaining a sentence representation with attention models is to use BERT or any of its derivatives, like RoBERTa. In these models, the sentence tokens passed as input to the model are prefixed with a special token, [CLS]. The output of the model at that first position is the sentence representation.
To use these models, you may use the sentence-transformers library, e.g.:
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('paraphrase-distilroberta-base-v1')
sentences = ['This framework generates embeddings for each input sentence',
             'Sentences are passed as a list of string.',
             'The quick brown fox jumps over the lazy dog.']
sentence_embeddings = model.encode(sentences)

for sentence, embedding in zip(sentences, sentence_embeddings):
    print("Sentence:", sentence)
    print("Embedding:", embedding)
    print("")