H: How to do feature selection for a classification problem? Which technique will work?
I have 200 variables with 200,000 records. How do I find the best features among these variables? I have tried the correlation technique via a heatmap, but all the variables have roughly the same correlation score (< 0.4), so it didn't work.
Please suggest which feature selection techniques will work.
AI: In model building there is a sort of iterative workflow that you can use:
Select an appropriate model you want to build, e.g. for classification maybe an XGB classifier or a logistic regression, etc.
This is important because the model by itself will determine a lot about how to wrangle your data. XGB only works with numerical features so you will have to convert factors/strings to a numerical encoding e.g. via One-Hot-Encoding.
Build a full model using all features you can!
Some features will naturally fall out in the first step because the amount of feature extraction required to use them is too much to start with. All other features: simply throw them into your model!
Validate your model using classical validation methods (e.g. cross-validation, split-sample, etc.) and see how it performs!
If the performance is already great, perfect you are done! Otherwise you have a baseline against which to optimize the next steps.
Play around with feature importance and removing features
Extract the feature importance from the full model and see if removing features with low importance improves your performance.
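For example, a minimal sketch of this step with XGBoost (variable names like X_train and y_train are placeholders for your own data, assumed here to be a pandas DataFrame and its target):
import pandas as pd
from xgboost import XGBClassifier
model = XGBClassifier().fit(X_train, y_train)
# rank the 200 variables by importance in the full model
importances = pd.Series(model.feature_importances_, index=X_train.columns).sort_values(ascending=False)
print(importances.head(20))
# drop the least important features, refit, and compare validation scores
keep = importances[importances > 0.001].index
model_reduced = XGBClassifier().fit(X_train[keep], y_train)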
Add features
At some point you will hit a wall in improving the model by simply removing irrelevant features (this might even be after removing 0 or 1 features). Now it is time to add by engineering additional features. Maybe now it is time to brush up your NLP skills and get some features out of the free-text variable you removed before.
Rinse-and-repeat for other models to crown a winner in a model beauty contest |
H: Isn't one-hot encoding a waste of information?
I was just playing around with one-hot representations of features and thought of the following:
Say we have 4 categories for a given feature (e.g. fruit): {Apple, Orange, Pear, Melon}. In this case the one-hot encoding would yield:
Apple: [1 0 0 0]
Orange: [0 1 0 0]
Pear: [0 0 1 0]
Melon: [0 0 0 1]
The above means that we quadruple the feature space as we go from having one feature to having four.
This looks like it's wasting a few bits, as we can represent 4 values with $\log_{2}4=2$ bits/features:
Apple: [0 0]
Orange: [0 1]
Pear: [1 0]
Melon: [1 1]
Would there be a problem with this representation in any of the most common machine learning models?
AI: Good idea but...
You encode not just to transform from categorical to numerical features but to give that information to your model.
Let's say that you do that and feed it through a linear model to try to predict the price. Let's say Pear is really expensive(500€) and Melon cheap (1€).
Your model coefficients with one hot encoding will be:
$price = 500 * Pear[0,1] + 1 * Melon[0,1]$
If you use your compact encoding instead, the linear combination won't work: what would the coefficients be?
One could argue that with decision trees this won't happen since they can make splits... but a tree would have to make two splits before determining whether it is a melon (a greedy decision tree won't do that), so you lose computational power here too.
You could try to run the experiments and see if this is your result. In the end, this should be science and one can do experiments to prove a hypothesis.
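For instance, a toy version of that experiment (prices and fruits as in the discussion above, everything else made up) could look like this:
import numpy as np
from sklearn.linear_model import LinearRegression
fruits = ["Apple", "Orange", "Pear", "Melon"] * 25
prices = {"Apple": 2.0, "Orange": 3.0, "Pear": 500.0, "Melon": 1.0}
y = np.array([prices[f] for f in fruits])
one_hot = {"Apple": [1, 0, 0, 0], "Orange": [0, 1, 0, 0], "Pear": [0, 0, 1, 0], "Melon": [0, 0, 0, 1]}
compact = {"Apple": [0, 0], "Orange": [0, 1], "Pear": [1, 0], "Melon": [1, 1]}
X_onehot = np.array([one_hot[f] for f in fruits])
X_compact = np.array([compact[f] for f in fruits])
# One-hot: one coefficient per fruit, so the four prices are recovered exactly.
print(LinearRegression().fit(X_onehot, y).predict(X_onehot[:4]))
# Compact encoding: only two coefficients (plus intercept) for four prices, so the fit cannot be exact.
print(LinearRegression().fit(X_compact, y).predict(X_compact[:4]))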
On the other hand, yes, one-hot encoding easily increases computation time and memory, since you create many features from one; if the feature has high cardinality you can end up with a very large number of features. |
H: How to deal with multiple categorical data set
Please tell me how to handle the sex, smoker and region columns.
Should I perform one-hot encoding for all of them?
AI: Simply yes. Before that you may want to check how correlated those features are, so you can simply deselect redundant features, but in general you are right. Starting with one-hot encoding is a good choice.
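A minimal sketch with pandas (the column names come from the question, the values are made up):
import pandas as pd
df = pd.DataFrame({
    "sex": ["male", "female", "female"],
    "smoker": ["yes", "no", "no"],
    "region": ["southwest", "northeast", "southwest"],
})
encoded = pd.get_dummies(df, columns=["sex", "smoker", "region"], drop_first=True)
print(encoded)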
What may need more inspection later is the number of different regions: with many regions you end up with many sparse features, for which you may need to reduce dimensionality. If you provide more information on the whole project I can provide more insight. |
H: What would be a good loss function to penalize big differences and reward small ones, but not in a linear way?
I have an image with the differences between 2 other images. Concentrations of black pixels mean similar regions between the images, whereas, white values highlight differences.
Thus I want a function to provide rewards when the pixels are the same (or relatively similar - a term to weight this "similarity threshold" would be nice) and penalize when the differences are bigger (penalizing more as the differences grow).
A differentiable function is much appreciated.
So in the context of machine learning and this loss function being a way to help train a generator, what kind of function do you recommend or can come up with?
Remember, the idea is to reward similarities and penalize differences (such that "really different" equates to a bigger loss than "slightly off" or "different").
Thanks in advance to you all!
AI: Square loss (MSE or SSE) does this. Let $y_i$ be an actual value and $\hat{y}_i$ be its estimated value (prediction).
$$SSE = \sum (y_i -\hat{y}_i)^2$$
$$MSE=\dfrac{SSE}{n}$$
Except for numerical issues of doing math on a computer, these are optimized at the same parameter values of your neural network.
The squaring is critical. If a prediction is off by 1 unit, it incurs one unit of loss. If the prediction is off by 2, instead of incurring 2 units of loss, there are 4 units of loss—much worse than being off by 1. If the prediction is off by 3, wow—9 units of loss!
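As a minimal sketch (assuming both images are float arrays on the same scale), the loss is just the mean of the squared pixel differences:
import numpy as np
def mse_loss(generated, target):
    diff = generated - target      # small differences give small penalties
    return np.mean(diff ** 2)      # large differences are penalized quadratically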
(If you look at statistics literature or some Cross Validated posts, you may see $n-p$ in the denominator of MSE, where $p$ is the number of parameters in the regression equation. This does not change the optimal value but does have some advantages in linear regression, chiefly that it is an unbiased estimate of the error variance under common assumptions for linear regression that you are unlikely to make in a neural network problem.) |
H: Do i need to use hyperparamters from Gridsearch to train on WHOLE training set to get final model?
I just want to make sure I am on the right lines, so please correct me if wrong. I am testing which hyperparameters are best for logistic regression on my data X, y, where X is the features and y is the target. X, y are made from my training set. I also have a test set.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
# split train into target and features
y = Train['target']
X = Train.drop(['target'], axis = 1)
X = pd.get_dummies(X)
#split test data into target and features
y_test = Test['target']
X_test = Test.drop(['target'], axis = 1)
X_test = pd.get_dummies(X_test)
logistic = LogisticRegression() # initialize the model
# Create regularization penalty space
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000] }
clf=GridSearchCV(logistic,param_grid=param_grid,cv=5)
best_model = clf.fit(X, y)# View best hyperparameters
print('Best Penalty:', best_model.best_estimator_.get_params()['penalty'])
print('Best C:', best_model.best_estimator_.get_params()['C']) #
I will now use these hyperparameters and 'train' the model on my training data. Just so I'm sure: when we say train, do I then take best_model and train it on the whole X data? Then I use my X_test, which is my hold-out data, on this newly trained model:
bestLog=best_model.best_estimator_
trained_model=bestLog.fit(X,y)
predicted=trained_model.predict(X_test)
then use this output above as my final model to test?
AI: As far as I understand (disclaimer: I'm not very familiar with Python) your approach is correct: the selected hyper-parameters are tested on the hold-out test set, which is different from the training set. This way there's no data leakage and you can evaluate the true performance of your model before applying it to new, unseen data.
For analysis purposes it could be useful to compare the performance of the best model on X (training set) and X_test (hold out) in order to check for overfitting.
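For example, using the variables already defined in the question, a large gap between these two numbers would suggest overfitting:
print('train accuracy:', trained_model.score(X, y))
print('hold-out accuracy:', trained_model.score(X_test, y_test))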
Note that in a case like this, where you directly select the best hyper-parameters, I would consider it acceptable to skip the testing on the hold-out set; however, in that case you wouldn't know the true performance of your model (so for instance you wouldn't be able to check if it's overfit). To be clear: I don't think you should do this, it's just a remark to show the difference with/without a hold-out set. |
H: Does it make sense to use train_test_split and cross-validation when using GridSearchCV to play with hyperparameters?
I was wondering if my methodology makes sense. I am using GridSearchCV with cross-validation to train and tune model hyperparameters for a bunch of different model types (e.g. Regression Trees, Ridge, Elastic net, etc.). Before fitting the models I leave out 10% of the sample for model validation using train_test_split. (see Screenshot). I select the models with the best parameters to make predictions on the unseen validation set.
Am I missing something, as I haven't seen someone doing this when evaluating model accuracy while tuning for model parameters?
AI: Yes it makes sense. Your "validation" set is typically called the "test" set (at least in my experience). CV is creating train/validation sets to pick the hyperparams.
The most complete process would be to then build a model one more time with the best hyperparams on the 90%, and evaluate on the 10%. This evaluation gives you a reliable estimate of the generalization error. The result can be slightly worse than what you got from the CV process for the same parameters; the hyper-parameter fitting can itself 'overfit' a bit.
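A minimal sketch of that full process (the estimator, grid and data X, y are illustrative placeholders):
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)
search = GridSearchCV(Ridge(), {'alpha': [0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)   # CV on the 90% picks the hyperparameters
# refit=True (the default) retrains the best model on all of X_train,
# so the best estimator can be evaluated once on the held-out 10%:
print('estimated generalization score:', search.score(X_test, y_test))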
Some people skip this last step, if you don't care about estimating generalization error. Whatever it is, it's still the best you can do, according to your tuning process. The upside of skipping it is, I suppose, 10% more training data. |
H: Appending dataframe with .values
Is it necessary to append .values when selecting train X & test X? I checked and found the model works fine without appending .values, so why append it?
In other words, which of the following is better, and why?
X = df.iloc[:,4].values
OR
X = df.iloc[:,4]
AI: .values converts the dataframe into a NumPy array and removes the underlying structure introduced by pandas. A pandas DataFrame is a richer structure built on top of a NumPy array, which gives the data a tabular look and feel and helps in visualising it (with the column names). When we use .values we drop that higher-level structure and represent the data in the lower-level NumPy array format. The advantage is that the array format is computationally less expensive than the DataFrame format; in other words, it increases speed and allows faster computations. I hope this clears up your question. |
H: Meaning of 'hue" in seaborn
I know what "hue" does a little
I'm studying kaggle and the image is about bike sharing demand analysis
What I want to know is
shouldn't the second images's sum of each point's y axis be the first images's point's y axis??
AI: You would expect that to be the case, however, by default seaborn.pointplot uses the average estimator to calculate the number for each hour. So the numbers you are seeing on the y-axis is the average number of bikes shared for each hour. Since the number of bikes shared is not equal for category workingday=0 and workingday=1 the two averages for those categories do not add up. Using estimator=sum for seaborn.pointplot does give the expected results. |
H: Convert CSV from an api to a dataframe format in python
I am new to Python. I have extracted some reviews from a website and I used the API of the web scraping tool to import my data into Python; the format is CSV. I want to convert this CSV to a dataframe in Python. Can someone guide me on how to perform this, please?
Below is the code for importing the api extraction in csv format.
import requests
params = {
"api_key": "abc",
"format": "csv"
}
r = requests.get('https://www.parsehub.com/api/v2/runs/ttx8PT-EL6Rf/data', params=params)
print(r.text)
My output for the above codes are as follows:
"selection1_name","selection1_url","selection1_CommentID_name","selection1_CommentID_Date","selection1_CommentID_comment"
"A","https://www..html","137","February 02, 2020","I enjoy the daily package from the start with the welcoming up to the end.
I recommend this hotel."
"A","https://www.e a lot. Relaxing moments with birds chirping, different swings to chill. Overall, I shall visit again. Thanks Azuri & Marideal."
"A","https://www.html","17","June 12, 2019","Had an amazing stay for 2 nights.
The cleanliness of the room is faultless"
"B","https://www.html","133","April 16, 2019","Had a good time. Food is good."
etc...
Can you please help me to convert this into a dataframe in python please please.
AI: Try the following code:
import requests
import pandas as pd
import io
params = {
"api_key": "abc",
"format": "csv"
}
r = requests.get('https://www.parsehub.com/api/v2/runs/ttx8PT-EL6Rf/data', params=params)
content = r.content  # raw bytes returned by the API
rawData = pd.read_csv(io.StringIO(content.decode('utf-8')))  # decode and parse the CSV into a DataFrame |
H: ROC AUC score is much less than average cross validation score
Using the Lending Club dataset to find the probability of default. I am using the hyperopt library to fine-tune hyperparameters for an XGBClassifier, trying to maximize the ROC AUC score. I am also using random over-sampling inside the pipeline and performing the cross-validation on the whole pipeline.
The problem is that I am getting very different scores with the parameters I get from hyperopt when using cross-validation than when fitting the model on the whole training data and calculating the ROC AUC score on the validation set.
The model seems to be over-fitting despite the cross-validation. I don't know what I should do.
Cross validation score:0.74
Validation Score:0.66
Find the code below:
#creating lists for numerical,text,categorical features for preprocessing step
numerical_features =(sorted(features.select_dtypes(include=['float64']).columns))
categorical_features = (sorted(features.select_dtypes(exclude=['float64']).columns))
text_features=['emp_title','title']
ordinal_features=['grade']
categorical_features.remove('emp_title')
categorical_features.remove('title')
categorical_features.remove('grade')
numerical_features.remove('int_rate')
#%%
numerical_features.remove('total_pymnt')
#label encoding label/target variable combining different classes
#le = preprocessing.LabelEncoder()
#eh=le.fit_transform(target)
#%%
#creating training and validation sets
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.2,random_state=777)
#%%
preprocess = make_column_transformer(((make_pipeline(IterativeImputer(initial_strategy='median',add_indicator=True,verbose=2,max_iter=100),StandardScaler())),numerical_features),((make_pipeline(SimpleImputer(strategy='constant',fill_value="Not Available",add_indicator=True),OneHotEncoder(handle_unknown='ignore'))),categorical_features),(OrdinalEncoder(),ordinal_features))
from hyperopt import Trials, STATUS_OK, tpe, hp, fmin
#RandomOverSampler(sampling_strategy=sampling,random_state=777)
#%%
import numpy as np
unique, counts = np.unique(y_train, return_counts=True)
counts2=np.asarray((unique, counts)).T
#%%
#%%
from hyperopt import Trials, STATUS_OK, tpe, hp, fmin
def objective(space):
    classifier = make_pipeline(preprocess,RandomOverSampler(random_state=777),XGBClassifier(n_jobs=-1,verbosity=3,
        objective= 'binary:logistic',
        nthread=-1,
        scale_pos_weight=1,
        seed=27,tree_method='hist',n_estimators = space['n_estimators'],
        max_depth = int(space['max_depth']),
        learning_rate = space['learning_rate'],
        gamma = space['gamma'],
        min_child_weight = space['min_child_weight'],
        subsample = space['subsample'],
        colsample_bytree = space['colsample_bytree']))
    # Applying k-Fold Cross Validation
    from sklearn.model_selection import cross_val_score
    accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv =3,scoring='roc_auc')
    CrossValMean = accuracies.mean()
    print("CrossValMean:", CrossValMean)
    return {'loss':1-CrossValMean, 'status': STATUS_OK }
space = {
'max_depth' : hp.choice('max_depth', range(5, 50, 1)),
'learning_rate' : hp.quniform('learning_rate', 0.01, 0.5, 0.01),
'n_estimators' : hp.choice('n_estimators', range(20, 500, 10)),
'gamma' : hp.quniform('gamma', 0, 0.50, 0.01),
'min_child_weight' : hp.quniform('min_child_weight', 1, 10, 1),
'subsample' : hp.quniform('subsample', 0.1, 1, 0.01),
'colsample_bytree' : hp.quniform('colsample_bytree', 0.1, 1.0, 0.01)}
trials = Trials()
best = fmin(fn=objective,
space=space,
algo=tpe.suggest,
max_evals=300,
trials=trials)
print("Best: ", best)
#%%
#training model on parameters got from hyperopt
grid_clf = make_pipeline(preprocess,RandomOverSampler(random_state=777),XGBClassifier(n_jobs=-1,verbosity=3,
objective= 'binary:logistic',
nthread=-1,
scale_pos_weight=1,
seed=27,tree_method='hist',n_estimators = 370,
max_depth = 6,
learning_rate = 0.16,
gamma = 0.45,
min_child_weight =7.0,
subsample = 0.52,
colsample_bytree = 0.76))
print(grid_clf)
#%%
clf=grid_clf.fit(X_train, y_train)
#, xgbclassifier__early_stopping_rounds=20, xgbclassifier__eval_set=[(X_test, y_test)],xgbclassifier__eval_metric='refit_score')
#%%
print(classification_report(y_test, grid_clf.predict(X_test) ))
#%%
print(confusion_matrix(y_test, grid_clf.predict(X_test) ))
#%%
from sklearn.metrics import roc_auc_score
y_predicted = grid_clf.predict(X_test)
r_adj_test = roc_auc_score(y_test, y_predicted)
print(r_adj_test )
AI: Your test score is incorrect. The ROC curve needs the probability scores from the model, not the class decisions. So replace
y_predicted = grid_clf.predict(X_test)
with
y_predicted = grid_clf.predict_proba(X_test)[:,1] |
H: Keras BatchNormalization axis
I use spectrogram as input to a Convolutional Neural Network I have created with tensorflow.keras in Python.
Its shape is (time, frequency, 1).
The input's shape of the CNN is (None, time, frequency, n_channels) where n_channels=1 and the first layer is a Conv2D. In between every Convolutional layer I use a BatchNormalization layer before an Activation layer.
The default value for BatchNormalization is "axis=-1".
Should I leave it as it is or should I make it with "axis=2" which corresponds to the "frequency" axis?
The thought behind this is that the features of a spectrogram are represented in the frequency axis.
AI: Interesting question :)
Using spectrograms means you are essentially using images (of frequencies varying over time). I understand the content is interpreted like the graph, i.e. with axes time and frequency, but as far as the network knows, you are giving it black-and-white images, assuming your last dimension (=1) is the channels dimension.
You normally want to take the batch-norm over the features, so it could depend on what you see as a feature. Do you care about the shape of the peaks/troughs of the spectra, or the values encoded in the final channel? The meaning of axis when used in terms of BatchNormalization might be a little confusing. Have a look at these explanations and some of the points here.
So as far as pure images go, I would recommend keeping the default axis=-1.
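As a minimal sketch (layer sizes and the input shape are illustrative, not taken from the question), the pattern would look like:
from tensorflow.keras import layers, models
model = models.Sequential([
    layers.Conv2D(16, kernel_size=3, padding='same', input_shape=(128, 431, 1)),  # (time, frequency, channels)
    layers.BatchNormalization(axis=-1),  # normalize over the channels axis (the default)
    layers.Activation('relu'),
])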
Remember that a 2d convolution operation is looking for spatial correlations over the image itself - the kernel slides from left to right, top to bottom. So mixing that idea with then taking batch norm over your second axis (frequency) is literally an orthogonal idea.
I don't know if it will work out, but I think it might make most sense if your spectrograms are aligned in time, that is they cover the same nominal time period and so for two of them with the same size, the x-axis has the same scale. Otherwise you would be taking batch-normalization over different time-frames and therefore stirring up temporal information between the samples. |
H: order of features for model tuning vs model fitting
Assuming that the same columns (i.e., features) are used for hyperparameter tuning and model fitting, and ensemble models are used for modeling (e.g., random forest or XGBoost), does the order of the columns used during the hyperparameter tuning process need to be identical to the order of the columns used when fitting the model with the best hyperparameters?
I am using sklearn's make_column_transformer function in my CV pipeline for the hyperparameter tuning. Unfortunately, this function modifies the order of the provided columns when the remainder argument is set to 'passthrough'. Should I ensure that the same order of columns is preserved when fitting the model, or does the order not matter as long as I am using the same features?
AI: Ah, I was too quick, and misinterpreted your question! At the bottom of this post I'll leave my old answer, answering why the test set needs to have the same column order.
As for column order of the data in hyperparameter selection vs. training a final model, no, I suppose there's no real reason these need to be the same. In a tree model with column subsampling, you're right (in your comment) that the columns will be selected randomly anyway, so the original order doesn't matter.
Even if you don't use column subsampling, and even for other models: a model generally won't be using the column order as informative; if anything, it's used as a fallback tiebreaker. (Time series are an obvious exception, but in that case maybe the data isn't tabular in the same way.)
That said, it's still perhaps best practice to use the same pipeline, so that the column order will be the same anyway. sklearn's hyperparameter tuners make this easy, with refit=True by default just refitting the model pipeline on the best hyperparameters found.
Since sklearn operates on numpy arrays and not pandas dataframes (one of the first things in most sklearn steps is conversion to arrays), you need to make sure columns arrive in the same order as the training data. Otherwise the model will mistake values of some features as being different features! Hopefully this will actually break things (wrong feature type, e.g.), but perhaps it will silently make very bad predictions!
This shouldn't be hard if you use pipelines. The make_column_transformer (and all other steps) will be applied to the testing data in the same way as your training data, so the array after these steps will have columns in the right order. (Alas, if you want to dig into the results, attaching names to the columns after the preprocessing parts of the pipeline may be a hassle.) |
H: Data science career problems: interaction of technical and social difficulties
I have been trying to break into the bioinformatics space (and now, data science more generally).
Although there are numerous challenges in this field and I am constantly learning to deal with them, I have found that the most consistent and intractable challenges are interpersonal, and they are unlike anything that my previous experience prepared me for. In particular, I find it very difficult to balance between getting the information I need (or think I need) to do a good analysis and maintaining productive relationships with people.
I have often had the sense that higher-ranking people than me were using their ability to act as gatekeepers of information to assert power over me, but it is almost never possible to be sure whether this is what is going on, or if I am making up excuses for my own lack of planning and follow-through.
A few examples.
When I was in grad school (before flunking out), I rotated in a bioinformatics lab where my attempts to pick a project always went in circles. If I'd suggest an experiment to the PI, the first question that I would need to answer before continuing was "What data do you want to use to answer this question?" Which sounds perfectly reasonable, except that when I would ask for access to databases that would contain such information, the question was "What question do you want to answer with access to this data?" Thus, I could never figure out a project, because I had no way of knowing what questions I had the theoretical ability to address. What was going on there? Did the PI simply not like me? Did I need to be more clever in terms of social politics in order to discover the necessary information through informal channels? Is it that I lack some basic research skill that would have vaulted me past this predicament?
In another lab, the PI wanted me to do some work before a deadline, but to do the work I needed access to some additional datasets, which had supposedly already been prepared, and thus did not need to be factored into the estimate of how long the project would take. However, when I asked my coworker for the data, they sent me the wrong data, and I had to go through several back-and-forths before I got the file I needed, by which time I was unable to complete the main assignment by the deadline. The result was being let go from the lab and flunking out of grad school. Again, it's hard for me to determine where I failed in my responsibilities, and whether I was undermined by colleagues. How much of the job of a data scientist is simply being resilient against and planning for unhelpful coworkers? How would one explain such a situation to a boss without giving the impression of trying to blame others for one's own failings?
In a third project, not in academia and not science related, I am being asked to do some analyses on a dataset that contains a lot of numbers that I don't know the origin or meaning of. According to the person who assigned me the task, these numbers are not relevant to me and I don't need to know them. I'm not so sure that is the case, and have repeatedly asked that these numbers be explained to me, to no avail. I'm trying to work with what I was given, but it drives me insane to have to ignore data without even knowing what exactly it is. Similarly, the datasets keep getting moved to archive, and while there are always datasets of similar format that I can test my code on, I am not able to consistently work with one of them and keep using it as a reference. Instead I have to use the most current data. The result of this is that if I notice potential issues with the dataset, I need to get a response from my higher-ups before the dataset is moved to archive, otherwise I will have to find a new example in the most current data, which hopefully I will get a response about before it goes to archive. Of course, in theory, since all these datasets are formatted the same, it shouldn't matter what data I am using, my code should work equally well. But I still want to be able to stick with one example until my question is answered, rather than constantly documenting new ones.
So these are the questions I need to figure out answers to:
1) How common is this sort of behavior in the data science world?
2) Are there good reasons for it, or is it just how people assert power in this space?
3) Is the problem most likely my approach to data, or my approach to people?
Based on your experience and my telling of the situations, what do you think is most likely?
AI: 1) How common is this sort of behavior in the data science world?
Very common across the board and not necessarily related to data science but any data request. I cannot comment on academia and my experience is much broader than what nowadays is considered data science but with regards to data requests towards different departments (e.g. controlling, sales, production, supply chain, finance, HR) in different industries (e.g. chemicals, metals, consumer goods, retail, utilities) I have seen this a lot.
2) Are there good reasons for it, or is it just how people assert power in this space?
There are plenty of different reasons, e.g.
high number of incoming data requests,
high general workload,
unclear goal of data request,
bad quality of data requests,
not-ideal form of communication,
incorrect path of communication,
interpersonal issues (with you, your boss, or your department).
The important thing here is to anticipate these and request your data accordingly - which brings me to your third question.
3) Is the problem most likely my approach to data, or my approach to people?
From the list above you can directly draw some conclusions how to properly request data:
Provide your requests in a way that helps the receiver to prioritize properly and process it efficiently. [addresses number 1 and 2 in above list]
State clearly what data you need and what the purpose of your request is. [addresses number 3 and 4 in above list]
Communicate in line with your company culture, e.g. choose the right level of politeness (hugely dependent on corporate culture and countries, but very generally tech people might sometimes come across as too demanding, direct or even impolite). [addresses point 5 in above list]
Follow the correct path of communication in your organization, e.g. involve superiors before approaching someone on the working level in a different department or team, have your boss approach the other department or team if appropriate, reach out to people responsible for data provision (and not just anyone who has access to or "owns" it) and if in doubt align with your boss how to proceed. [addresses number 6 of above list]
Keep interpersonal relationships in mind and approach them accordingly [addresses number 7 in above list]
With regards to the examples you provided I would like to point out two things.
Example 1: As described above it is helpful to explain the reasons for a data request. Failing to answer this question might lead to deprioritizing or not complying with your data request.
Example 2: This can and does happen a lot. There are at least two things you can do about it: 1. include contingencies in your planning. 2. Flag potential and actual delays asap with your supervisor.
All this is not to say that it is all your fault, but do your part to make things go as smoothly as possible. |
H: LabelEncoder with a Multi-Layer Perceptron?
So we're working on a machine learning project at work and it's the first time I'm working with an actual team on this. I got pretty good results with a model that uses the following SKLearn pipeline:
Data -> LabelEncoder -> MinMaxScaler (between 0-1) -> PCA (I go from 130 columns to 50 prime components that cover the variance) -> MLPRegressor
One of my colleagues mentioned that I shouldn't normally use LabelEncoder to encode training data, as it's meant for encoding the target variable. I did some research and now I understand why
LabelEncoder only is not a good choice, since it brings in a natural
ordering for different classes.
Then, however, my colleague mentioned that in this case it shouldn't make much of a difference as I'm using a neural net (~MLPRegressor). My question is: (if he's right - is he?) why? He basically said this usually would be a bad idea, but in this case it should work.
I will try to move to one-hot encoding (currently I'm only stuck with it because I run out of memory while doing PCA on that many columns, but that's another question and I'll do some research on that separately), but for now I'd like to know if using this kind of encoding can result in inaccurately good results (I'm having an r^2 score of around 0.9 and my boss literally won't believe I achieved a result that good haha).
AI: Your colleague was correct that LabelEncoder is meant for the target variable (a.k.a. labels); more-or-less equivalent OrdinalEncoder is meant for the independent variables.
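A quick illustration of the distinction (toy data):
from sklearn.preprocessing import LabelEncoder, OrdinalEncoder, OneHotEncoder
y = ['cat', 'dog', 'cat']             # target labels
X = [['red'], ['green'], ['blue']]    # an independent categorical feature
y_enc = LabelEncoder().fit_transform(y)                 # for targets: [0, 1, 0]
X_ord = OrdinalEncoder().fit_transform(X)               # imposes an (arbitrary) order on the feature
X_ohe = OneHotEncoder().fit_transform(X).toarray()      # one column per category, no order implied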
IMO, your colleague is partially correct about an ordinal encoding "not making much of a difference" for a neural network. A neural net can contort its way into a correct understanding of your labels anyway, although I think an ordinal encoding will make its job substantially more difficult, so I still wouldn't advocate for it. The intervening steps, scaling and PCA, make it less clear exactly what the neural network will actually see, and I don't know if they alleviate or worsen the issue of an ordinal encoding. (Given your good score, perhaps alleviate.) |
H: order of features importance after make_column_transformer and pipeline
I have a data preparation and model fitting pipeline that takes a dataframe (X_trn) and uses the make_column_transformer and Pipeline functions in sklearn to prepare the data and fit an XGBRegressor.
The code looks something like this
xgb = XGBRegressor()
preprocessor = make_column_transformer(
( Fun1(),List1),
( Fun2(),List2),
remainder='passthrough',
)
model_pipeline = Pipeline([
('preprocessing', preprocessor),
('classifier', xgb )
])
model_pipeline.fit(X_trn, Y_trn)
Therefore, the training data fed into the XGBRegressor has no column labels and is re-ordered by the make_column_transformer function. Given this, how do I extract the feature importances using the XGBRegressor.get_booster().get_score() method?
Currently, the output of get_score() is a dictionary that looks like this:
{'f0': 123,
'f10': 222,
'f100': 334,
'f101': 34,
…
'f99': 12}
Can I assume that the order of the features provided by get_score() is identical to the order of the features after the make_column_transformer function (i.e., I have to account for the feature re-ordering), such that 'f0' == 1st feature after make_column_transformer, 'f1' == 2nd feature after make_column_transformer, etc.?
AI: Your assumption is correct. Usually after a column transformation, columns lose their names and get default names corresponding to their order.
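If you want readable names, one hedged sketch is the following. It assumes you can build transformed_names, a list of feature names in the order produced by the ColumnTransformer (e.g. from Fun1's output names for List1, Fun2's output names for List2 and the passthrough columns, or via preprocessor.get_feature_names_out() in newer sklearn versions):
scores = model_pipeline.named_steps['classifier'].get_booster().get_score()
named_scores = {transformed_names[int(k[1:])]: v for k, v in scores.items()}  # 'f0' -> 1st transformed column, etc.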
Additional Info:
You may try Eli5
from eli5 import show_weights,show_prediction
show_weights(model)
show_prediction(model,data_point)
The latter function shows the impact of every feature when predicting a data_point. |
H: predicting next jobtitle
I have a dataset which has 30M rows, each like [current_jobtitles, nextjobtitles].
[['junior software programmer', 'senior software programmer'],
['senior software programmer', 'lead software programmer'],
['sales associate', 'regional sales associate']]
I want to build a deep learning model to predict the nextjobtitle when a currenttitle is given. Are there any ways that I could achieve this using some deep learning model? If yes, what kind of model?
Can we use any of the text generation models for this scenario? I have seen some examples, but I cannot relate them to my problem. Any suggestions, please!
update:
I have sequences of jobtitles in each row.
[['junior programmer', 'senior programmer', 'lead programmer'],
['sales associate', 'sales regional associate', 'sales manager'],
['chairman', 'ceo'],
['associate', 'intern', 'junior', 'student']]
For the original data (in the update) I have produced the modified data above (before the update) by just taking the next job title.
Can I train a model with the original sequence data to predict the next job title?
** - all the rows will be of different lengths.
AI: I want to build a deep learning model to predict the next job title when
a current title is given. Are there any ways that I could achieve this
using some deep learning model?
I think that approaching this problem in the classification way (input:- current job embedding, output:- getting next job title as a class) can somewhat be time-consuming (not necessarily hard) to train.
The reason behind this:-
Assuming that each of the job-title pairs is unique in your dataset,
given the volume of your dataset, there can be many classes, so many that even deep learning models may take time (for processing, optimization, etc.) to train.
Also, as there can be many-to-many relationships in the dataset, using the classic approach might not be a good option.
what kind of model?
As per your given examples, the kind of model that we need here is one that can learn the hierarchy of the job titles from the text itself, except for some cases where the sample denotes that the individual of that job title has changed their domain of work (this can be treated as noise).
Can we use any of the text generation models for this scenario?
A deep learning model such as seq2seq(https://www.geeksforgeeks.org/seq2seq-model-in-machine-learning/) or noisy autoencoders(https://towardsdatascience.com/convolutional-autoencoders-for-image-noise-reduction-32fce9fc1763). I know that there is not much content of 'noisy autoencoders for nlp' on the internet, but we can always try. These models can learn the complex relationships between the words of the job title and the job title itself.
I would recommend noisy autoencoders, as I make the (strong) assumption that only some, if not all, of the corresponding words in the current and next job titles are different, and this is the purpose of noisy autoencoders:
They can accommodate noise very well (many-to-one and one-to-many relationships between current and next job titles).
They keep most of the content of the input and output the same, which I think you are going to need because of the strong assumption I mentioned above.
I hope it helps, thanks!! |
H: Help with algorithm approach for computer vision
I hope this is the right forum to ask.
I had a client approach me with a demand for a vision system for their assembly line.
The problem they are facing is that the operator sometimes forgets to put all three parts of the product together. They want a vision system that can spot if any of three main components are missing (only missing nothing else).
I have the knowledge to build a convnet in Keras, but I think that would be way too big a hammer for this nail, and I have to go to the client and produce the images myself (ergo I can sit there for a day or two and take photos of the products). So I might end up with a dataset of maybe 200 images. I will try to make some artificially bad images, taking off the different components and photographing that. I don't think I'm allowed to show what the product looks like but I made a sketch.
What algorithm can I read up on that would solve this problem in a good and time-efficient manner? Requirements on the camera and PC for any suggested algorithm would also be very much appreciated (resolution etc.).
AI: This problem is in the realm of object detection and image classification. You need a good dataset from your client or need to scrape one yourself, and since you will need to perform real-time object detection on the assembly line you will need a convnet with high accuracy. One way to get such an accurate model is to perform transfer learning from your preferred pretrained model, such as Inception, Caffe models, LeNet, etc. You will also need to delve more into learning about anchor boxes. For this kind of problem, look at the Keras implementation of the YOLO algorithm: https://towardsdatascience.com/object-detection-using-yolov3-using-keras-80bf35e61ce1
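If the task can also be framed as plain image classification (all three parts present vs. something missing) instead of full object detection, a small transfer-learning baseline is cheap to try with ~200 photos plus augmentation. A minimal sketch with tf.keras (image size, layer choices and labels are illustrative assumptions, not part of the linked tutorial):
import tensorflow as tf
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3), include_top=False, weights='imagenet')
base.trainable = False   # keep the pretrained feature extractor frozen
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # 1 = all parts present, 0 = a part missing
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(train_images, train_labels, epochs=10)  # train_images/train_labels are your labelled photos
Whether this or YOLO-style detection is appropriate depends on whether you only need a present/missing decision or also the location of the missing part. |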
H: How is the GridsearchCV Score calculated?
How is the score of GridSearchCV calculated? Is the score a percentage? Does this mean higher is better?
AI: The score is based on the scorer defined in the scoring argument. Meaning, the scorer can be any of the default metrics, such as precision, accuracy or F1-score
(e.g., this); or a custom scorer.
For a scorer (by convention), higher value is better.
The value is not necessarily a percentage, but is often normalized between 0 and 1. |
H: Data augmentation in deep training
I'm trying to understand the role of data augmentation and how it can affect the performance/accuracy of a deep model. My target application is a fire classification (fire or not, on video frames), with almost 15K positive and negative samples, and I was using the following data augmentation techniques. Does using ALL the followings always increase the performance? Or we have to choose them somehow smartly given our target application?
rotation_range=20, width_shift_range=0.2, height_shift_range=0.2,zoom_range=0.2, horizontal_flip=True
When I think a bit more, fire is always straight up, so I think rotation or shift might in fact worsen the results, given that it makes the image sides stretch like this, which is irrelevant to fires in video frames. Same with rotation. So I think maybe I should only keep zoom_range=0.2, horizontal_flip=True and remove the first three. Because I see some false positives when we have a scene transition effect in videos.
Is my argument correct? Should I keep them or remove them?
AI: Your reasoning is perfectly correct. Augmentation is just a process which helps you cover your domain better. You should only pick operators that help you; abusing augmentation can definitely mess up your model. It's always a good idea to print data at those limits, to check yourself. Also try to think about how data will be acquired in production. Albumentations is a nice repo with a lot of available methods. |
H: EfficientNet: Compound scaling method intuition
I was reading the paper EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks and couldn't get my head around this sentence:
Intuitively, the compound scaling method makes sense because if the input image is bigger, then the network needs more layers to increase the receptive field and more channels to capture more fine-grained patterns on the bigger image.
In the case of a big image, why does the network need more layers to increase the receptive field? What does increasing the receptive field mean? Increasing its width/height? If so, can't we do it directly without increasing the number of layers in the network?
Is "fine-grained patterns" referring to the noisy shapes we can see after visualizing the convolution outputs?
I feel like I am missing / misunderstanding something evident.
AI: Receptive field refers to the number of input pixels that a convolutional filter will operate on. There's a nice distill article about how to calculate receptive field size for your filters (with a nice visualization of receptive field size) and an interactive calculator here if you're only curious about how receptive field size grows with changes to depth and filter size.
Increases to receptive field size typically come from adding layers and from increasing the kernel size. A larger kernel operates on more pixels, which grows the receptive field. Increasing the depth of your network refers to adding additional convolutional layers. These downstream filters operate on the feature maps produced by the initial conv. layers of your net, which increases the receptive field for the filters in those additional layers (if this isn't clear, this is a good guide). The distill article also goes into detail about how other operations affect receptive field size.
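For intuition, the receptive field of a stack of convolutions can be computed with the standard recurrence r_out = r_in + (k - 1) * j_in, where k is the kernel size and j the cumulative stride ("jump"). A tiny sketch:
def receptive_field(layers):
    # layers: list of (kernel_size, stride) pairs; returns the receptive field in input pixels
    r, j = 1, 1
    for k, s in layers:
        r = r + (k - 1) * j
        j = j * s
    return r
print(receptive_field([(3, 1)] * 3))   # three 3x3 stride-1 convs -> 7x7 receptive field
print(receptive_field([(3, 1)] * 6))   # six 3x3 stride-1 convs -> 13x13, so depth grows the field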
As for the claim of a gain in the number of fine-grained patterns captured, it's more in line with the intuition that more filters will give the network more ways to learn specific features of your data. See articles on visualizing convolutional filters for a sense of what types of features are captured (this tutorial on object detection links to a nice visualization).
Hope this helps! |
H: How to impute right-censored data
I have a dataset of vectors representing movement with various characteristics. Some vectors represent movement that was stopped by an external factor, and therefore the observed value for the length of such a vector (v_length) is incomplete (marked as incomplete == 1). The data looks like below:
# A tibble: 10 x 9
v_length incomplete v_angle x0 y0 x1 y1 type vap
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <chr> <dbl>
1 1.70 1 0.869 66.6 0.5 67.7 1.8 A 0
2 1.82 1 -0.165 37.4 65.6 39.2 65.3 B 0
3 2.57 1 0.236 61.3 49.7 58.8 49.1 A 0
4 3.14 1 1.18 57.8 40.5 59 43.4 A 0
5 12.6 0 0.119 52.5 33.7 65 35.2 A 0
6 20.5 0 -0.847 65.3 32.3 78.9 16.9 A 0
7 33.0 0 -0.180 77.5 13.7 45 19.6 A 0
8 14.1 0 -0.780 45 19.6 35 29.5 B 0
9 2.97 0 1.00 35 29.5 33.4 27 B 0
10 6.59 0 0.732 33.4 27 38.3 31.4 A 0
I want to impute v_length for incomplete observations (incomplete == 1). My first idea was to use some parametric survival model (e.g. Weibull). But as I'm not experienced in survival analysis, I have been struggling with a good setup. My first doubt is whether it is correct to use v_length as one of the predictors as well. It doesn't make sense at first sight, but the predictions for the model without v_length as one of the predictors look very strange:
The idea behind its inclusion is to help the model know what the observed vector length was, so it can predict a value higher than that. After including v_length in the predictors the output looks like below:
However, we still have plenty of values lower than the actual vector length, while I obviously don't want the model to predict a lower value than the one observed.
So here's my question: is Weibull survival model suitable for this task? What's the correct setup if so? What are the other methods suitable for imputation of right-censored data?
AI: You can't put v_length in the regression - that'd be a form of data leakage. However, you are right to be thinking about "how to tell the model that I've already observed some length". This can be accomplished with some survival analysis math. For censored observations, what you want is
$S(l \;|\; l > \text{observed v\_length})= P(L > l \;|\; L > \text{observed v\_length})$.
Let's analyze this:
$P(L > l \;|\; L > \text{observed v\_length}) = \frac{P(L > l \;\text{and}\; L > \text{observed v\_length})} {P(L > \text{observed v\_length})}$
$\;\;\;\;\;\;=\frac{P(L > l)} {P(L > \text{observed v\_length})} = \frac{S(l)}{S(\text{observed v\_length})} $
This gives you a new survival curve that you can use to do imputation (take the median or mean of the new survival curve).
In python's lifelines package, this calculation is done behind the scenes when the conditional_after argument is used in the prediction methods, see docs here. I'm not sure if R packages have something like this.
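For example, a minimal (hypothetical) sketch with a Weibull AFT model in lifelines, where observed is an assumed event indicator equal to 1 - incomplete and df contains the covariates you actually want to regress on:
from lifelines import WeibullAFTFitter
aft = WeibullAFTFitter()
aft.fit(df, duration_col='v_length', event_col='observed')
censored = df[df['observed'] == 0]
# median length conditional on having already survived past the observed length
imputed = aft.predict_median(censored, conditional_after=censored['v_length'])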
It also doesn't make sense to plot the observed vs predicted, because some observed values are truncated (censored) - hence a difference in observed and predicted could be due to censoring rather than bad prediction. For example, using your values above, I may plot the point (1.70, 25.0). The 1.70 is censored, and the 25.0 is the predicted value. There is a big difference between these values, but we don't know if that difference is just because of the censoring or if our prediction is just really off.
My advice would be to focus on finding a good model that maximizes the out-of-sample likelihood or AIC (Why likelihood? It's a good measure of survival fit). |
H: what metrics to evaluate rank order results?
I have searched on StackExchange and found a couple of topics like this and this, but they are not quite relevant to my problem (or at least I don't know how to make them relevant to my problem).
Anyway, say I have two sets of prediction results, as shown by df1 and df2.
import pandas as pd

y_truth = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
y_predicted_rank1 = [6, 1, 7, 2, 8, 3, 9, 4, 10, 5]
y_predicted_rank2 = [4, 1, 7, 2, 8, 3, 9, 6, 10, 5]
df1 = pd.DataFrame({'tag': y_truth, 'predicted_rank': y_predicted_rank1}).sort_values('predicted_rank')
df2 = pd.DataFrame({'tag': y_truth, 'predicted_rank': y_predicted_rank2}).sort_values('predicted_rank')
print(df1)
# tag predicted_rank
#1 1 1
#3 1 2
#5 1 3
#7 1 4
#9 1 5
#0 0 6
#2 0 7
#4 0 8
#6 0 9
#8 0 10
print(df2)
# tag predicted_rank
#1 1 1
#3 1 2
#5 1 3
#0 0 4
#9 1 5
#7 1 6
#2 0 7
#4 0 8
#6 0 9
#8 0 10
By looking at them, I know df1 is better than df2, since in df2, a negative sample (zero) was predicted to have rank #4. So my question is, what metric can be used here so that I can (mathematically) tell df1 is better than df2?
AI: For comparing two rankings, Spearman's rank correlation is a good measure. It's probably worth a try, but since your gold truth appears to be binary I would think that top-N accuracy (or some variant of it) would be more appropriate (advantage: easy to interpret). You could also consider using the Area Under the Curve (AUC), using the predicted rank as a variable threshold (less intuitive, but it doesn't require choosing any top N).
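For example, AUC can be computed directly from the example in the question by treating a better (lower) rank as a higher score:
import numpy as np
from sklearn.metrics import roc_auc_score
y_truth = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
y_predicted_rank1 = [6, 1, 7, 2, 8, 3, 9, 4, 10, 5]
y_predicted_rank2 = [4, 1, 7, 2, 8, 3, 9, 6, 10, 5]
# lower rank = more likely positive, so negate the rank to use it as a score
print(roc_auc_score(y_truth, -np.array(y_predicted_rank1)))  # 1.0
print(roc_auc_score(y_truth, -np.array(y_predicted_rank2)))  # 0.92, penalizing the misplaced negative
Spearman's rank correlation is available as scipy.stats.spearmanr if you want to compare two rankings directly. |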
H: What is the name of this statistical interaction?
What is the name of the following statistical / informational interaction:
given A, I know exactly what B is.
given B, I know to some extent what A is.
I'm not looking for a probability but rather something like correlation.
Something that tells me that I don't need to include B as a feature when A
is already a feature. A correlation heatmap wouldn't give me this information, but there must be some computation that tells me "Don't include B, it's worthless."
To give some intuition: A could be an item_id and B an item_category.
Or am I wrong? Is it not worthless at all?
AI: You might want to look at conditional entropy, H(A|B) and H(B|A).
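A rough sketch of estimating H(B|A) from a dataframe with pandas (column names are placeholders):
import numpy as np
import pandas as pd
def conditional_entropy(df, a, b):
    # H(B|A) = H(A, B) - H(A), estimated from empirical frequencies (in bits)
    p_ab = df.groupby([a, b]).size() / len(df)
    p_a = df.groupby(a).size() / len(df)
    return -(p_ab * np.log2(p_ab)).sum() + (p_a * np.log2(p_a)).sum()
If conditional_entropy(df, 'item_id', 'item_category') is (close to) zero, then item_category is (almost) fully determined by item_id and adds little as a feature. |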
H: What are features in computer vision?
I'm learning how U-NET network works to do semantic segmentation.
I think I have understood everything but features. What are those image features?
I read that convolutional layers extract features from the images using their filters, but what are they? Are they corners? edges? colours?
I have read this article, "Finding Features", but I think I need more information about them.
AI: CNNs like U-Net extract lower-level features like edges in their lower layers (i.e. the first convolutional layers) and higher-level features in higher layers (i.e. convolutional layers closer to the final linear layers). This principle is loosely inspired by how visual perception is implemented in the visual cortex of humans (and other animals).
In a CNN the feature maps could for example look like this:
As you can see the lower level feature maps detect simple structures like edges while higher level feature maps recognize more complex structures like eyes or faces.
Colors, however, are processed differently by CNNs than spatial features since color images are usually fed to CNNs using three input channels (one for each color in RGB format). So colors are not detected in the same way as spatial features but instead the first convolutional layer receives a 3-dimensional input image with one dimension for each color-component (one for red, one for green, one for blue).
The article What exactly does CNN see? and the paper Visualizing and Understanding Convolutional Networks (which are also the sources of above images) provide a more detailed explanation. |
H: Class asks me to give self for Naive Bayes Model python
I try to use the following code but when I try to use fit function with my X_train and y_train,
I get the following error:
fit() missing 1 required positional argument: 'self'
I do not know much about classes, but I know it should not ask for self. I found some things about instantiation but could not figure it out.
class BernoulliNB(object):
    def __init__(self, alpha=1.0):
        self.alpha = alpha
    def fit(self, X, y):
        count_sample = X.shape[0]
        # group by class
        separated = [[x for x, t in zip(X, y) if t == c] for c in np.unique(y)]
        # class prior
        self.class_log_prior_ = [np.log(len(i) / count_sample) for i in separated]
        # count of each word
        count = np.array([np.array(i).sum(axis=0) for i in separated]) + self.alpha
        smoothing = 2 * self.alpha
        # number of documents in each class + smoothing
        n_doc = np.array([len(i) + smoothing for i in separated])
        print(n_doc)
    def predict(self, X):
        return np.argmax(self.predict_log_proba(X), axis=1)
when I try
b = BernoulliNB()
b.fit(b, X_train,y_train)
this time, I receive
fit() takes 3 positional arguments but 4 were given
Then I changed it to
BernoulliNB().fit(X_train,y_train)
but this time, this error occurs:
The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
AI: b = BernoulliNB()
b.fit(b, X_train,y_train)
This doesn't work because you're calling the fit method on the BernoulliNB instance you created with BernoulliNB(), and then passing the instance b in again as an extra argument. The error says you gave too many positional arguments because fit only accepts self (which is passed automatically), X and y, so the call should simply be b.fit(X_train, y_train).
It's hard to diagnose your second issue without seeing your data. But your y_train is probably formatted incorrectly. |
H: Plot dataframe with two columns on the x axis
How would I plot the data below with the x-axis as Year & Month? Each year-month combination has a unique monthly count (y). I am unsure how to proceed: given that they are in different columns, should I combine them first?
AI: Using matplotlib, you could create a custom tick formatter to show the right ticks. The year and month can either be fetched from the dataframe via the index (df.iloc[value]['Month']) or just be calculated.
Here is an example. You can also read the month name in the status bar when you hover over a position in the plot. The xticks (the positions where to have a tick with a label) can either be set automatically or manually.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
month_name = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
def format_func(value, tick_number):
    value = int(value)
    year = 2001 + value // 12
    month = value % 12
    # return f"{month+1}-{year}"
    return f"{month_name[month]}-{year}"
N = 229
# create a dataframe resembling the example
df = pd.DataFrame({'Year': np.repeat(range(2001, 2021), 12)[:N],
'Month': np.tile(range(1, 13), N // 12 + 1)[:N],
'monthlycount': np.random.binomial(5000, .7, N)
})
plt.plot(df['monthlycount'], marker='o', c='crimson')
plt.xlim(0, N - 1)
plt.xticks(range(0, N, 12), rotation=20) # set major ticks every year
# plt.gca().set_xticks(range(0, N, 12), rotation=30)
plt.gca().set_xticks(range(0, N), minor=True)
plt.gca().xaxis.set_major_formatter(plt.FuncFormatter(format_func))
plt.tight_layout()
plt.show() |
H: Does building a corpus make sense on a documentation project?
I have zero experience in data science or machine learning. Because of this I am not able to determine if building a corpus applies to the problem I am trying to solve.
I am trying to build a reference site for cloud technologies such as AWS Google Cloud.
I was able to build structured data and identify primary entities within a single ecosystem using standard web scraping and SQL queries.
But I want to have a mechanism that can autonomously identify entities and the related information that is relevant to each entity and to the other entities it has relationships with.
Given that a specific ecosystem's documentation follows a certain style, can I use a few entities as training docs and then have it classify the information I mentioned above?
Is the starting point for this to build a corpus? I tried out the nltk categorized corpus builder.
Is it fine to include a specific document in multiple categories? For example an instance in AWS can be in category ec2 and a general category Computing unit
Anyway, does the problem I am trying to solve fit into the general NLP/ML space?
AI: Anyways is this problem I am trying to solve fit into the general NLP ML space?
Generally speaking, feeding the source data in bulk to a ML system is unlikely to give the kind of structured output you expect. It's likely that you would have to somehow guide the process in the direction of what you want to obtain, and this might take a lot of time and effort (depending on the requirements).
That being said, there are indeed NLP methods which are meant to extract specific pieces of information from text, and it usually works quite well with domain-specific data (provided it's done correctly, caveats apply). I'm just going to list a few typical tasks which can be done:
Named Entity Recognition would be the most common and probably the most simple, since there are many existing libraries. Most libraries use a pre-trained model, but it's likely to give much better results when it's trained on the kind of data it's applied to (of course that usually means manually annotating your own training set).
Text classification can be used to automatically assign documents to a category (class) among a set of predefined categories. This is supervised so you would also need a training set containing labelled documents. Again there is a number of algorithms and libraries available.
Simple information retrieval methods based on measuring semantic similarity (see e.g here) between terms and documents can be used to search e.g. documents relevant to a term.
Topic modeling is an unsupervised approach which groups similar documents together (clustering). Since it's unsupervised it doesn't require any training data, but on the other hand what the algorithm finds as "topics" (groups) is often different than what a human would expect.
Extracting relations between concepts (typically between named entities) is a more advanced task which usually requires more work in order to capture the specifics of the job. I'm not aware of any general library for that.
Overall there are many things possible, but the first step would be to try to design the system precisely, typically using some of the existing tasks as building blocks.
Is it fine to include a specific document in multiple categories? For example, an instance in AWS can be in the category ec2 and in a general category Computing unit.
Yes, it's fine, but if you want an ML classifier to be able to predict several classes you will need to use multi-label classification (the "standard" is single label).
Is the starting point for this to build a corpus? I tried out the NLTK categorized corpus builder.
I'd recommend building a corpus only once you have a clear idea of how you're going to use it. Also it's usually an experimental process with lots of back and forth, so try to progress iteratively rather than starting with strong assumptions/decisions which might later turn out not relevant. |
H: K-Means Clustering too crowded
I have written a simple Python script that opens a CSV file and then clusters the values of one column. There are around 10k rows.
This is my code
import pandas as pd
import numpy as np
import random as rd
import matplotlib.pyplot as plt
data = pd.read_csv('file.csv', encoding='unicode_escape')
data.head()
feature_names = ['numbers_col']
X = np.asarray(data[feature_names])
from sklearn.cluster import KMeans
labels = KMeans(5, random_state=10).fit_predict(X)
plt.scatter(X[:, 0], X[:, 0], c=labels,
s=50, cmap='rainbow');
This is what the result looks like:
The result doesn't look that good: it is all linear and the clusters are shown just as dots along a line.
I am asking for some advice so my output would look more towards what a normal K-Means output looks like: Example
AI: The figure looks exactly how it should.
plt.scatter(X[:, 0], X[:, 0], c=labels, s=50, cmap='rainbow');
You are plotting a value against itself; in two dimensions this will give you a line of the form y = x.
Check what you want to plot. Having only one feature doesn't mean that K-Means won't work; you will still get a clustering, but the plot itself is correct.
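If you only have one feature, a more informative plot is to draw the value against its row index and colour the points by cluster; a minimal sketch reusing the X and labels from your code:
import matplotlib.pyplot as plt

# one point per row: x = row index, y = the single feature, colour = cluster label
plt.scatter(range(len(X)), X[:, 0], c=labels, s=50, cmap='rainbow')
plt.xlabel('row index')
plt.ylabel('numbers_col')
plt.show()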
H: How to train my model efficiently?
I am new to ML and have been reading online about training bottlenecks when there are frequent updates to data.
Let's say I have built a model based on a dataset of 10M records.
Now, in another 2 months, I might receive another 1M records which we would like to feed into our model as well.
Similarly, this goes on every 2 months. We would like to update/train our model with the latest data as and when it's available.
1) But for example, let's say the training takes 1 week for every new data update.
2) Any suggestions on how we can minimize the training time (when we train every 2 months)?
3) Should we select a representative sample from the 1 million new datapoints? Is that good enough?
4) I understand it's all about trade-offs, but I am curious to know whether I am missing any known approaches to save training time. I am thinking a representative sample can reduce the sample size and help speed up the training process.
Can you share your suggestions on this?
AI: From what I understand, your problem is a "sample selection bias" problem. Any kind of pattern used to select a subset out of large data may lead to bias. This raises two questions.
How to choose? Random or stratified random (if you have multiple classes) undersampling to obtain a smaller subset.
How big a subset to choose? We can set the percentage of undersampling.
Reducing training time: we can also preprocess the data, e.g. with dimensionality reduction techniques such as PCA to remove correlated features, or with sparse reconstruction methods (transform the data to a sparse representation and then process it).
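A minimal sketch of random stratified undersampling with scikit-learn (new_records and the "churn" label column are assumptions; adjust to your data):
from sklearn.model_selection import train_test_split

# keep e.g. 20% of the new records while preserving the class proportions
subset, _ = train_test_split(new_records, train_size=0.2,
                             stratify=new_records["churn"], random_state=42)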
H: How to preprocess data for Word2Vec?
I have text data which is crawled from websites. I am preprocessing data to train Word2Vec model. Should I remove stopwords and do lemmatization? How to preprocess data for Word2Vec?
AI: Welcome to the community,
I do not know about other libraries, but gensim has a very good API to create word2vec models. In order to preprocess the data, you first have to decide what you are going to keep in your vocabulary and what not, for example punctuation, numbers, alphanumeric words (e.g. "42nd"), etc.
In my knowledge, the most generic preprocessing pipeline is the following:-
1) Convert to lowercase
2) Remove punctuation/symbols/numbers (but it is your choice)
3) Normalize the words (lemmatize and stem the words)
Once this is done, now you can tokenize the sentence into uni/bi/tri-grams.
Have a look at this
The generic format to pass data to the sentences parameter of gensim.models.Word2Vec() is: [[tokenized sentence 1], [tokenized sentence 2], ...]
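A minimal sketch of this pipeline with gensim and NLTK (the example sentences are made up, and the lemmatizer is just one possible normalization choice):
import re
from gensim.models import Word2Vec
from nltk.stem import WordNetLemmatizer  # requires nltk.download('wordnet')

lemmatizer = WordNetLemmatizer()

def preprocess(text):
    text = text.lower()                               # 1) lowercase
    text = re.sub(r"[^a-z\s]", " ", text)             # 2) drop punctuation/symbols/numbers
    return [lemmatizer.lemmatize(tok) for tok in text.split()]  # 3) normalize

corpus = [preprocess(doc) for doc in ["The cats are running.", "A dog ran home."]]
model = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1)  # 'vector_size' is 'size' in gensim < 4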
Hope it helps, thanks!! |
H: How to calculate the final adjusted weights for a neural network model
My understanding of a neural network algorithm is that the 1st row/observation of the dataset is fed into the NN model and then backpropagation happens to adjust the weights, until some condition is met and the weights stop adjusting. This then happens again on the 2nd row/observation of the dataset, the 3rd row/observation, and so on. Say the dataset has 1000 rows; then I will have 1000 weights. How would the final weights then be calculated for the entire NN model?
AI: That's not how it works. There is a fixed number of weights, given by the architecture you chose. Those weights are then updated on each iteration of the learning process. Depending on your method of learning, your weights may be updated once per pass over the 1000 observations, or once per group of observations (even groups of 1). May I suggest you read An Introduction to Statistical Learning (http://faculty.marshall.usc.edu/gareth-james/ISL/) or join the machine learning course on Coursera (https://www.coursera.org/learn/machine-learning)?
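As a quick illustration of the fixed-weights point, here is a minimal Keras sketch (toy data, arbitrary layer sizes): the architecture defines 353 weights, and only their values change during training.
import numpy as np
from tensorflow import keras

X = np.random.rand(1000, 20)              # 1000 rows, 20 features (toy data)
y = np.random.randint(0, 2, size=1000)

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="sgd", loss="binary_crossentropy")

# 20*16 + 16 + 16*1 + 1 = 353 weights in total;
# with batch_size=32 these same weights are updated about 32 times per epoch
model.fit(X, y, batch_size=32, epochs=5)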
H: How are the channels handled in CNN? Is it independently processed or fused?
Let's assume that we are talking about 2D convolutions applied on images.
In a grayscale image, the data is a matrix of dimensions $w \times h$, where $w$ is the width of the image and $h$ is its height. In a color image, we normally have 3 channels: red, green and blue; this way, a color image can be represented as a matrix of dimensions $w \times h \times c$, where $c$ is the number of channels, that is, 3.
A convolution layer receives the image ($w \times h \times c$) as input, and generates as output an activation map of dimensions $w' \times h' \times c'$. The number of input channels in the convolution is $c$, while the number of output channels is $c'$.
My confusion is: will the CNN operate on a fused representation of the data if $c = 2$, 3, 4, etc.? Or does it operate on each channel at a time and then stack the results? Say I have 4 channels, each channel being a 2D matrix; would the CNN internally form a fusion of the 4 channels and make some sort of joint representation?
AI: Let $n$ be a convolutional layer with dimensions $w' \times h' \times c'$. Then each of its $c'$ filters is connected to all $c$ filters (or channels*) of the previous layer.
I find it helpful to look at the number of weights here: A single filter of that convolutional layer $n$ with kernel size $k'\times k'$ will have $c \times k' \times k'$ weights. And since layer $n$ has $c'$ of these filters it has a total of $c \times k' \times k' \times c'$ weights.
In a toy example with a 3 channel input layer followed by a conv. layer with 5 channels this would look as follows (not showing the bias here for simplicity):
As you can see from the drawing each feature map of the conv. layer receives all input channels as an input (and the same would apply if this was not an input layer but a conv. layer with 3 feature maps).
*Note that it does not make a difference whether the previous layer is a conv. layer too or the input layer - in the first case you call its depth "filters" and in the second you call it "channels" but that does not change how it is connected to the following conv. layer. |
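You can verify the weight count from the formula above with a quick Keras sketch (the input size is arbitrary):
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Conv2D(filters=5, kernel_size=3, input_shape=(32, 32, 3)),
])
model.summary()  # 3 (input channels) * 3 * 3 (kernel) * 5 (filters) = 135 weights, plus 5 biases = 140 parameters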
H: What is the feedforward network in a transformer trained on?
After reading the 'Attention is all you need' article, I understand the general architecture of a transformer. However, it is unclear to me how the feed forward neural network learns.
What I learned about neural nets is that they learn based on a target variable, through back propagation according to a particular loss function.
Looking at the architecture of a Transformer, it is unclear to me what the target variables are in these feed forward nets. Can someone explain this to me?
AI: Let's take the common translation task which transformers can be used for as an example: If you would like to translate English to German one example of your training data could be
("the cat is black", "die Katze ist schwarz").
In this case your target is simply the German sentence "die Katze ist schwarz" (which is of course not processed as a string but using embeddings incl. positional information). This is what you calculate your loss on, run backprop on, and derive the gradients as well as weight updates from.
Accordingly, you can think of the light blue feed forward layers of a transformer
as hidden layers in a regular feed-forward network. Just as for a regular hidden layer, their parameters are updated by running backprop on the transformer's $loss(output, target)$, with the target being the translated sentence.
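A heavily simplified PyTorch sketch of this idea, with random tensors standing in for the embedded sentences and a stand-in loss (in practice a token-level cross-entropy over the target vocabulary is used):
import torch
import torch.nn as nn

# the feed-forward sublayers inside the transformer have no separate target;
# they are trained through this single sequence-level loss
model = nn.Transformer(d_model=32, nhead=4, num_encoder_layers=1, num_decoder_layers=1)
src = torch.rand(5, 1, 32)   # "the cat is black" as (seq_len, batch, embedding)
tgt = torch.rand(6, 1, 32)   # "die Katze ist schwarz" (shifted right), same format
out = model(src, tgt)

target = torch.rand(6, 1, 32)               # stand-in for the processed target tokens
loss = nn.functional.mse_loss(out, target)  # stand-in loss
loss.backward()                             # gradients flow into every feed-forward sublayer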
H: Semantic text similarity using BERT
Given two sentences, I want to quantify the degree of similarity between the two text-based on Semantic similarity.
Semantic Textual Similarity (STS) assesses the degree to which two sentences are semantically equivalent to each other.
say my input is of order:
index line1 line2
0 the cat ate the mouse the mouse was eaten by the cat
1 the dog chased the cat the alligator is fat
2 the king ate the cake the cake was ingested by the king
after application of the algorithm , the output needs to be
index line1 line2 lbl
0 the cat ate the mouse the mouse was eaten by the cat 1
1 the dog chased the cat the alligator is fat 0
2 the king ate the cake the cake was ingested by the king 1
Here lbl= 1 means the sentences are semantically similar and lbl=0 means it isn't.
How would I implement this in Python?
I read the documentation of bert-as-a-service, but since I am an absolute noob in this regard I couldn't understand it properly.
AI: BERT is trained on a combination of the losses for masked language modeling and next sentence prediction. For this, BERT receives as input the concatenation of the special token [CLS], the first sentence tokens, the special token [SEP], the second sentence tokens and a final [SEP].
[CLS] | First sentence tokens | [SEP] | Second sentence tokens | [SEP]
Some of the tokens in the sentences are "masked out" (i.e. replaced with the special token [MASK]).
BERT generates as output a sequence of the same length as the input. The masked language loss ensures that the masked tokens are guessed correctly. The next sentence prediction loss takes the output at the first position (the one associated with the [CLS] input) and uses it as input to a small classification model to predict whether the second sentence was the one actually following the first one in the original text they come from.
Your task is neither masked language modeling nor next sentence prediction, so you need to train in your own training data. Given that your task consists of classification, you should use BERT's first token output ([CLS] output) and train a classifier to tell if your first and second sentences are semantically equivalent or not. For this, you can either:
train the small classification model that takes as input BERT's first token output (reuse BERT-generated features).
train not only the small classification model, but also the whole BERT, but using a smaller learning rate for it (fine-tuning).
In order to decide what's best in your case, you can have a look at this article.
In order to actually implement it, you could use the popular transformers python package, which is already prepared for fine-tuning BERT on custom tasks (e.g. see this tutorial). |
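A rough sketch of the fine-tuning option with the transformers package (the model name, the single-pair "batch" and the missing optimizer step are just illustrative):
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# one training pair from the example above; label 1 = semantically similar
inputs = tokenizer("the cat ate the mouse", "the mouse was eaten by the cat",
                   truncation=True, padding=True, return_tensors="pt")
labels = torch.tensor([1])

outputs = model(**inputs, labels=labels)  # outputs.loss is the classification loss on the [CLS] output
outputs.loss.backward()                   # fine-tunes both BERT and the classification head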
H: How do I know the best pruning criteria for decision trees?
Right now, I am working on decision trees in Python. How do I know what would be the best pruning criteria based on my data?
AI: Experimentally: using cross-validation on a subset of your training data, compute the performance of every option that you want to consider. Then select the best option and train the final model using this option.
// different settings for hyper-parameters,
// for instance different pruning criteria:
hpSet = { hp1, hp2, ...}
trainSet, testSet = split(data)
for each hp in hpSet:
// run cross-validation over 'train' using hyper-parameter 'hp'
// and store resulting performance
perf[hp] = runCV(k, trainSet, hp)
bestHP = pick maximum hp in 'perf'
model = train(trainSet, bestHP)
perf = test(model, testSet) |
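A concrete sketch of this procedure with scikit-learn, using cost-complexity pruning (ccp_alpha) as the hyper-parameter being tuned; the dataset and the grid values are arbitrary:
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                      param_grid={"ccp_alpha": [0.0, 0.01, 0.02, 0.05]},
                      cv=5)                       # cross-validation over the training set only
search.fit(X_train, y_train)
print(search.best_params_)                        # the selected pruning strength
print(search.score(X_test, y_test))               # final evaluation on held-out data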
H: Slightly different results between scipy.stats.spearmanr and manual calculation
I have the following dataset.
When I calculate the Spearman correlation coefficient with scipy.stats.spearmanr, it returns 0.718182.
import pandas as pd
import numpy as np
from scipy.stats import spearmanr
df = pd.DataFrame(
[
[7,3],
[6,5],
[5,4],
[3,2],
[6,4],
[8,9],
[9,7]
],
columns=['Set of A','Set of B'])
correlation, pval = spearmanr(df)
print(f'correlation={correlation:.6f}, p-value={pval:.6f}')
It returns this:
correlation=0.718182, p-value=0.069096
However, when I tried to calculate it manually:
df_rank = pd.DataFrame(
[
[5,2],
[3.5,4],
[2,4],
[1,1],
[3.5,4],
[6,7],
[7,6]
],
columns=['Rank of A','Rank of B'])
cov_rank=np.cov(df_rank.iloc[:,0],df_rank.iloc[:,1])[0][1]
cov_rank/(df_rank.std()[0]*df_rank.std()[1])
It returns a different value.
0.7105597124064275
The values differ after the second decimal place and I do not know why.
The question is whether scipy.stats.spearmanr expects the data to be ranked or not.
AI: I think you have a small error in your manual calculation. You assign rank 4 to 4, 4, and 5. The first two should have rank 3.5 and the last should be rank 5. Your calculation then gives the same answer, 0.7181818181818181 |
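To avoid manual ranking mistakes like this, scipy.stats.rankdata computes tie-averaged ranks directly; a quick check on the 'Set of B' column:
from scipy.stats import rankdata

print(rankdata([3, 5, 4, 2, 4, 9, 7]))  # [2.  5.  3.5 1.  3.5 7.  6. ]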
H: ADAM algorithm for multilayer neural network
I’m trying to implement neural networks without using off-the-shelf ("out of the box") algorithms. In doing so, I found that nowhere is it written how to calculate the square of the gradient for hidden layers in the ADAM optimizer.
I took the description from the original article, and the problem is that $v_t$ looks like some kind of normalizing coefficient, so it should be a scalar. But for hidden layers the gradient is not really a vector but a matrix. So how should we calculate $g_t^2$ in this case?
AI: It is written right in the algorithm description that $g_t^2$ is elementwise square. |
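In other words, $v_t$ has the same shape as the gradient, so for a hidden layer it is simply a matrix. A minimal NumPy sketch of a single Adam step following the update rule of the paper:
import numpy as np

def adam_update(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # works for scalars, vectors or weight matrices alike
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g**2          # elementwise square: v has the same shape as g
    m_hat = m / (1 - b1**t)               # bias correction
    v_hat = v / (1 - b2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v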
H: Correlation based Feature Selection vs Feature Engineering
I'm a bit confused about the superiority of Feature Selection over Feature Engineering or vice versa.
Let's say I just want to get the best possible performance on a couple of models like a neural network, something tree-based and a Naive Bayes Classifier.
Before starting any training I looked at my features and engineered some additional (hopefully) even more expressive features. I did this from a domain expert point of view. For instance I added a new ratio feature C = A / B because I think this will be a very expressive information for the model.
Furthermore I added several features measuring basically the same thing but in different ways. Let's say a feature D measures the length of any text including empty lines and an additional feature E measures the length of any text excluding empty lines.
So this leads to very many features in my data set, with also very high correlation / multi-collinearity (of course D and E are very highly correlated, and A, B and C are also highly correlated / multi-collinear).
So any kind of correlated based feature selection (between the features) would remove a lot of the engineered features, but could the removal provide the model any kind of better discriminative power with just less information? What helps the model more, keeping all features or removing correlated ones?
AI: What helps the model more, keeping all features or removing correlated ones?
There is some theory about it, but in the end machine learning is trial and error. You should give it a try with all features and then do a feature selection to see if you are able to improve your model. What works for some models doesn't necessarily have to work for the rest of the models.
If you want to select which features help your model you could also do it the other way around: instead of starting with all features and removing some, start with one feature and add more, only keeping them in case they boost the performance of the model. There are cases when you add a feature and the performance of the model drops.
There are a lot of ways in which we can think of feature selection, but most feature selection methods can be divided into three major buckets. From this source
Filter based: We specify some metric and based on that filter features. An
example of such a metric could be correlation/chi-square.
Wrapper-based: Wrapper methods consider the selection of a set of features as a search problem. Example: Recursive Feature Elimination
Embedded: Embedded methods use algorithms that have built-in feature selection methods. For instance, Lasso and RF have their own feature selection methods. |
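A small sketch of the wrapper-based option with scikit-learn's RFE (the estimator, the synthetic dataset and the number of features to keep are arbitrary choices):
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5, random_state=0)
selector = RFE(RandomForestClassifier(n_estimators=100, random_state=0), n_features_to_select=5)
selector.fit(X, y)
print(selector.support_)   # boolean mask of the kept features
print(selector.ranking_)   # 1 = selected, higher = eliminated earlier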
H: UniLM - Unified Language Model for summarization
UniLM claims to be the best approach for the summarization task. But there doesn't seem to be any tutorial or how-to section in the README.md or any other blog. How exactly can I use this state-of-the-art library for abstractive summary generation?
Github link
Paper
P.S. A newbie in NLP. Sorry if this is a dumb question.
AI: Here's what you should do
Prepare your dataset: Follow similar instructions as described in the paper and preprocess your dataset. This will be your major task as after this you will only have to fine-tune the model. If you don't have a dataset, you can use the dataset used in this research paper, which can be downloaded from here.
Download the pre-trained model. Or you can choose to start from the provided fine-tuned model checkpoint (from the link). You will have to check which version of the model works best for your dataset. If you select model fine-tuned for summarization task and your dataset is similar to CNN/DailyMail dataset [37] and Gigaword [36] you can skip fine-tuning.
Fine-tune model: In this step, you will be using the command mentioned in the readme of the GitHub repository. Note that there are some parameters that should be set according to the language model you have downloaded in the previous step. Based on the size of your dataset you can change the number of epochs in the following command. You should also note that this will require a GPU. The repository readme recommends 2 or 4 V100-32G GPU cards for fine-tuning the model.
DATA_DIR=/{path_of_preprocessed_data}/  # assumed: folder with the tokenized training data, referenced below
OUTPUT_DIR=/{path_of_fine-tuned_model}/
MODEL_RECOVER_PATH=/{path_of_pre-trained_model}/unilmv1-large-cased.bin
export PYTORCH_PRETRAINED_BERT_CACHE=/{tmp_folder}/bert-cased-pretrained-cache
export CUDA_VISIBLE_DEVICES=0,1,2,3
python biunilm/run_seq2seq.py --do_train --fp16 --amp --num_workers 0 \
--bert_model bert-large-cased --new_segment_ids --tokenized_input \
--data_dir ${DATA_DIR} \
--output_dir ${OUTPUT_DIR}/bert_save \
--log_dir ${OUTPUT_DIR}/bert_log \
--model_recover_path ${MODEL_RECOVER_PATH} \
--max_seq_length 192 --max_position_embeddings 192 \
--trunc_seg a --always_truncate_tail --max_len_a 0 --max_len_b 64 \
--mask_prob 0.7 --max_pred 48 \
--train_batch_size 128 --gradient_accumulation_steps 1 \
--learning_rate 0.00003 --warmup_proportion 0.1 --label_smoothing 0.1 \
--num_train_epochs 30
Evaluate your model: Use biunilm/decode_seq2seq.py to decode (predict the output of evaluation dataset) and use the provided evaluation script to evaluate the trained model.
Use the trained model: In order to use this model to make a prediction, you can simply write your own python code to:
load Pytorch pre-trained model using pytorch_pretrained_bert library as used in decode_seq2seq.py file
Tokenize your input
predict the output and detokenize the output.
Here is the logic which you can use:
model = BertForSeq2SeqDecoder.from_pretrained(long_list_of_arguments)
batch = seq2seq_loader.batch_list_to_batch_tensors(input_batch)
input_ids, token_type_ids, position_ids, input_mask, mask_qkv, task_idx = batch
traces = model(input_ids, token_type_ids,position_ids, input_mask, task_idx=task_idx, mask_qkv=mask_qkv)
Note that this is not the complete logic. This code just shows how the GitHub repository code has handled the saved model and used it to make predictions. Use traces to convert ids to tokens and detokenize the output tokens (as used in the code here). The detokenization step is necessary as the input sequence is tokenized into subword units by WordPiece.
For reference here is the code which loads the pre-trained model. You can go through the loop to understand the logic and implement it in your case. I hope this helps. |
H: How should I read the following heatmap?
I've been playing around with linear regression, and I was taught that before starting it's always good to plot a heatmap to see whether there are features that are worth testing for their significance/relationship.
Would you agree with the above?
With that being said, after having refined a tiny bit my data, I obtained the following heatmap:
As you can see I now have two features that are completely blank. At the beginning I thought it was some sort of rendering glitch, which is not the case.
In reviewing the dataset, I found out that the features always have the same value across the dataset.
I'd assume this is not being printed out due to the fact that this feature has become statistically significant for every feature.
Would you say this is a correct way of referring to this? Or is there any other way to express this concept?
This is however not the case for the other feature, which contains a mix of numerical values, hence I'm not able to explain it.
Can you please advise?
AI: Since the variables isPWKinText and H1_2_Len have the same value for all examples in your dataset they have zero variance and contain no information. There is simply just no inference you can make based on them. That is why they are not relevant and shown blank.
Please note that this depends on your dataset, of course. The two variables might just be constant for the examples you are currently looking at. For example, if this is an analysis of your training data, maybe your data split is just really bad and the validation or test set contains examples for which these variables do show other values. That is something I suggest you double check.
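A quick way to spot such zero-variance columns in pandas (df is assumed to be the DataFrame behind your heatmap):
# columns with a single unique value carry no information for the correlation heatmap
constant_cols = [c for c in df.columns if df[c].nunique() <= 1]
print(constant_cols)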
H: Multi-class classification
Just getting my toes wet with running some models on the Iris dataset and was wondering if using One-vs-Rest is required or not? Because I can fit a linear model without it, but using OVR yields different results.
Any explanations would be great, thank you!
AI: Linear regression is not a classification technique, so most of this is meaningless as a result. Did you mean logistic regression? You do not need OneVsRestClassifier explicitly if you set LogisticRegression to handle multi-class classification internally with 'ovr' (one-vs-rest again) or 'multinomial' (softmax loss).
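A minimal sketch of both options on the Iris dataset (scikit-learn assumed; the results can legitimately differ because one-vs-rest trains one binary model per class while 'multinomial' optimizes a single softmax loss):
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X, y = load_iris(return_X_y=True)

# built-in multi-class handling, no wrapper needed
clf_multi = LogisticRegression(multi_class="multinomial", max_iter=1000).fit(X, y)
# explicit one-vs-rest wrapper: trains one binary classifier per class
clf_ovr = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)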
H: How to predict an outcome within a specific time window?
I have a dataset which has around 10K records.
My objective is to predict whether the customer will churn or not. Binary classification problem with each class representing around 55:45 proportion and 20 features.
I understand when it's just about prediction, I can apply some binary classification algorithm and find out whether the customer churns or not
But how do I incorporate the objective of finding whether the customer will churn in 30 days or not?
Another example is finding whether a patient will be dead within 30 days from the date of discharge. I have the date of discharge along with other features like blood pressure, cholesterol, etc.
Rather than just predicting whether he will be dead or not at any time in the future, I would like to restrict it to 30 days from the date of discharge.
Hope I gave enough details to help you understand the question better.
AI: It depends how deep you want to go technically. You can apply a slight modification of survival analysis methods / Cox models, which relate the time that passes before some event occurs to one or more covariates that may be associated with that quantity of time.
Also, if you engineer the target accordingly you can make the problem look like a classical binary classification problem, but you should do a bit of data engineering in order to get the labels right.
Probably the easiest is to modify your data and make it look like a classification problem where the target is whether the person churned (or died) within the next 30 days.
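A small pandas sketch of that label engineering (the column names are hypothetical):
import pandas as pd

# hypothetical columns: 'discharge_date' and 'event_date' (NaT if the event never happened)
df["days_to_event"] = (df["event_date"] - df["discharge_date"]).dt.days
df["label_30d"] = ((df["days_to_event"] >= 0) & (df["days_to_event"] <= 30)).astype(int)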
H: Classification model using RNN(action detection)
1) Could it be useful to use an RNN for a classification problem (e.g. to distinguish which action is taking place: a car driving, walking, digging, nothing)?
2) If the answer to 1) is positive, what should the RNN structure look like?
I have a dataset of 4 actions, many examples for each action; each example includes 124 samples.
So my X_train, X_test are float arrays of shape (400000, 124, 1); y_train, y_test are int arrays (0, 1, 2 or 3 depending on the action).
My data preprocessing:
X_train, X_test, y_train, y_test = np.array(X_train), np.array(X_test), np.array(y_train), np.array(y_test)
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
My structure:
regressor = Sequential()
regressor.add(LSTM(units=55, return_sequences=True, input_shape=(X_train.shape[1], 1)))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units=50))
regressor.add(Dropout(0.2))
regressor.add(Dense(4,activation='softmax'))
regressor.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
regressor.fit(X_train, y_train, epochs=5, batch_size=32)
y_pred = regressor.predict(X_test)
AI: @CapJS,
Instead of going ahead with an RNN, which helps you model the dependencies and relationships within your sequences, I would suggest you take a look at 1D convolutional networks to achieve the classification of activity.
There is a nice post talking about something similar: https://machinelearningmastery.com/cnn-models-for-human-activity-recognition-time-series-classification/
With 1D convolutions you gain better processing performance by design, and given that you have just 124 samples per activity, you do not need a deep layered network as the data size is still quite small.
Given the data size, I would also suggest you try something as simple as a logistic regression or random forest approach. Hope this helps.
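A minimal 1D-CNN sketch in Keras matching your input shape of (124, 1) (the filter sizes are arbitrary starting points):
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Conv1D(64, kernel_size=3, activation="relu", input_shape=(124, 1)),
    keras.layers.MaxPooling1D(2),
    keras.layers.Flatten(),
    keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=5, batch_size=32)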
H: Papers presenting results that are worse than random chance
Is it me, or has there been an increasingly large number of object detection papers describing models that perform worse than chance? Here is an example (an extract, so as not to name names):
AP represents the average precision. No mention of recall.
The paper goes on to say that YOLOv3-Lite reaches state-of-the-art performance in detecting the specified object. This table would suggest to me that the models are worse than just flipping a coin. What am I missing here?
AI: Not all probabilities are 50/50.
I'm assuming that the paper you are looking at is YOLOv3-Lite: A Lightweight Crack Detection Network.
Under the evaluation metrics section, I see
For each image, the intersection over union ($I_{oU}$) between the bounding box of the detected crack and the ground truth can be calculated as $I_{oU} = \frac{A_o}{A_u}$, where $A_o$ is the area of overlap and $A_u$ is the area of union.
When the $I_{oU}$ of the predicted bounding box and ground truth is greater than a certain threshold value (e.g., 0.5), it is considered to be a true positive; otherwise, it is a false positive.
It looks like the problem is to assign a bounding box to each image such that the bounding box contains the crack (or the ground truth box associated with the crack). It's not an easy probability to calculate exactly, but it's quite clear that assigning a bounding box completely at random will yield a success much lower than 50%. |
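For concreteness, here is a small sketch of the IoU criterion described in the quote (boxes given as corner coordinates; the example boxes are made up). A poorly placed box easily falls below the 0.5 threshold, which is why random guessing scores far below 50% AP:
def iou(box_a, box_b):
    # boxes as (x1, y1, x2, y2); returns intersection over union
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, xb - xa) * max(0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, roughly 0.14, below the 0.5 threshold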
H: Using Tensorflow object detection API vs Keras
I am new to machine learning. I am curious to know what the difference is between using Keras and using the TensorFlow Object Detection API. We need to manually configure the hidden layers and the input layer in Keras, so what is the advantage of using Keras, and how do we know how many layers to configure to achieve object detection using Keras?
Please check the two different types of implementation: 1) using Keras, 2) using the TensorFlow Object Detection API without Keras.
Thanks !!!
AI: Keras provides you high level api or can say wrapper written on top of multiple backends.
These back ends have the core implementation of DNN.
List of Keras supported backends are:
Tensorflow
Theano
CNTK
Source: Keras documentation for supported backends
Keras hides a bit of the complexity of the DNN implementation, but in turn restricts your freedom. If you write code in TensorFlow, you have to explicitly specify and calculate the optimizer, cost function and other things, but it gives you flexibility.
So for me, writing in Keras is just a convenience.
As far as my knowledge is concerned, so far we don't have any fixed formula to identify the number of layers sufficient for object detection :).
H: Reinforcement Learning: Policy Gradient derivation question
I have been reading this excellent post: https://medium.com/@jonathan_hui/rl-policy-gradients-explained-9b13b688b146 and following the RL-videos by David Silver, and I did not get this thing:
For $\pi_\theta(\tau) = \pi_\theta(s_1, a_1, ..., s_T, a_T) = p(s_1) \prod_{t=1}^T \pi_\theta(a_t | s_t)p(s_{t+1}|a_t, s_t)$ being the likelihood of a given trajectory in a cycle, the derivative of the value function becomes
$$\nabla_{\theta}J = E[\nabla_{\theta}log\pi_{\theta} \cdot r ]$$
which then immediately becomes
$$={{1} \over {N}} \sum_{i=1}^{N}\left(\sum_t^T \nabla_\theta \log \pi_\theta(a_{i,t} | s_{i,t})\right) r$$
i.e. summed over all N paths for $\tau$, while I expected
$$=\sum_\tau \pi_\theta(\tau) \sum_t^T \nabla_\theta log \pi_\theta(\tau) r$$
What I do not get: Where does the probability for the trajectories $\pi_\theta(\tau)$ (left-most sum) go or why did it get replaced by the mean over all paths? Is it assumed that all trajectories are equally likely, given that you start from a known starting position?
(You can find the equations in the blog-post linked above, at the end of the chapter "Optimization", right before the chapter "Intuition".)
AI: Actually in the article there is a $\approx$ rather than an $=$. This is because you can approximate expectation values by sampling the respective distribution.
Assume you want to compute
$$ E \left[ f(x) \right] = \int p(x) f(x) dx$$
The integral might not be tractable, you might not even know the distribution $p(x)$. But as long as you can sample from $p(x)$, you can approximate quite well using the Monte Carlo estimator
$$ E \left[ f(x) \right] \approx \frac{1}{N} \sum_{i=1}^N f(x_i)$$
with $x_i \sim p(x)$ for all $i$. This approximation would get better for larger $N$. The distribution $p(x)$ is, in some sense, represented by the samples $x_i$ and their respective frequency.
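A quick numerical illustration of this Monte Carlo approximation (the distribution and the function are chosen arbitrarily):
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=100_000)  # samples from p(x) = N(1, 2^2)
estimate = np.mean(x**2)                          # Monte Carlo estimate of E[x^2]
print(estimate)                                   # close to mu^2 + sigma^2 = 5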
This is what is going on in the article. You want to compute the expectation value over all possible trajectories, but that is infeasible. Luckily though, you can sample the distribution by running simulations of the environment. The expectation is then approximated using samples of trajectories. The $\pi_\theta(\tau)$ is represented in the sampled $a_{i,t}$ and $s_{i,t}$.
In short, the expectation over all possible paths is approximated by the mean over samples of paths. |
H: Random Forest workflow?
I have a data-set comprised of a fairly large number of columns (over 1000) relative to the number of rows (370) that I am currently running a random forest regression on. I am a little confused with respect to the best way to go around various tasks such as feature selection and cross-validation.
At present, I am doing the following:
Splitting the data-set 90/10, and running the RF on all features using default hyper-parameters
Using Grid Search to fine tune hyper-parameters and then running an 'optimized' model with these tunings
Using 5-fold cross validation to evaluate the model
Obtaining feature importance scores to indicate which features are performing best
Given the large number of features I have, and the fairly low performance of my models, I would like to find a smarter workflow.
It seems sensible to me to use the feature importance scores to filter out poor performers, but would I do this before or after cross validation? For instance, would I, after completing the above steps and removing features below a certain threshold, then run steps 1-3 again? Or would I use feature selection before parameter tuning? Do I only cross validate once? Are there any other obvious steps I should integrate?
I appreciate this isn't an exact science, but if there are some industry standards that I'm either getting wrong, or have missed entirely, then I would much appreciate the feedback.
AI: 370 rows is quite few; RF does bootstrap, but that is still little information.
Having too many columns will lead to a more complex model (since the algorithm will work in over 1,000 dimensions).
Consider doing a pipeline with all the steps and search for hyperparameters and feature selection there. https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html
You can check a simple tutorial here
In summary, try to build a pipeline with all the steps, but maybe the problem is in your data: 370 rows is a small sample.
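A sketch of such a pipeline with scikit-learn, combining embedded feature selection with the hyper-parameter search (X_train/y_train and the grid values are assumptions):
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

pipe = Pipeline([
    ("select", SelectFromModel(RandomForestRegressor(n_estimators=100, random_state=0))),
    ("rf", RandomForestRegressor(random_state=0)),
])
param_grid = {
    "select__threshold": ["mean", "median"],
    "rf__n_estimators": [200, 500],
    "rf__max_depth": [None, 10],
}
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X_train, y_train)  # feature selection is re-fitted inside each CV fold, avoiding leakage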
H: why we need data labelling tool for computer vision?
Before we start training on images with the TensorFlow Object Detection API we need to use a labelling tool to annotate our images and convert the annotations to XML format.
What happens when we convert our annotated images to XML files? Why do we need that?
AI: You need to label data in order to train a classifier. Labelled data is at the basis of all supervised ML. XML files are just one of the many formats you can use to store this information; people usually use them since they can be easily read by many tools.
H: What is the “learning” step in Gaussian Naive Bayes classification?
For conditionally independent features $f_i$, Naive Bayes Classification gives me the classifier
$Classifier(f) := \arg \max_{k} P(C=k) \cdot \prod^n_{i=1} P(f_i|C=k)$
for classes $k$. I understand that for Gaussian Naive Bayes, I can assume normally distributed features, yielding
$Classifier(f) := \arg \max_k P(C=k) \cdot \prod^n_{i=1} \frac{1}{\sqrt{2\pi\sigma_{k,i}}} e^{-\frac{(f_i - \mu_{k,i})^2}{2\sigma_{k,i}}}$
where $\mu_{k,i}$ is the mean of class $k$ and feature $f_i$ (and similarly $\sigma_{k,i}$ denotes the variance).
But where is the "learning step" in this whole procedure?
AI: I'm assuming you're asking about the intuition behind Naive Bayes (NB). For the sake of clarity I'm considering only categorical features. Gaussian NB is simply an application of NB to numerical features (assumed to be normally distributed).
During training every $p(f_i|C_k)$ is calculated by counting how often the feature value $f_i$ is associated with class $C_k$ among all the other possible feature values associated with $C_k$:
This is done by counting how often $f_i$ and $C_k$ appear together across all the instances. That's how NB generalizes: the fact that a feature appears with a particular class is just an example, but the fact it appears proportionally more often with class A than class B forms a pattern.
The probability $p(f_i|C_k)$ represents how important $f_i$ is within class $C_k$.
When predicting the class for a new instance:
NB "weighs all the pros and cons" for this instance to be predicted as $C_k$ by combining all the $p(f_i|C_k)$ corresponding to this instance (in the sense that some of the probabilities $p(f_i|C_k)$ are low and some are high, so their product reflects the combination of "pros" and "cons" indications).
But even if $p(f_i|A) > p(f_i|B)$, it doesn't imply that $f_i$ is a strong indication of class $A$, because maybe this is due to class $A$ being less frequent than class $B$. This is taken into account in the prior $p(C_k)$, which gives more importance to a frequent class than a rare one (this is the basis of the Bayes theorem).
These last two points show how NB uses the "knowledge" of the trained model to make a prediction about any unknown instance. |
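The "learning step" is therefore just estimating the class priors and the per-class means/variances from the training data. A minimal scikit-learn sketch on toy data (the variance attribute is named var_ in recent versions and sigma_ in older ones):
import numpy as np
from sklearn.naive_bayes import GaussianNB

X = np.array([[1.0], [1.2], [0.9], [3.0], [3.1], [2.9]])
y = np.array([0, 0, 0, 1, 1, 1])

gnb = GaussianNB().fit(X, y)  # "learning" = estimating per-class means, variances and priors
print(gnb.theta_)             # per-class feature means
print(gnb.var_)               # per-class feature variances
print(gnb.class_prior_)       # P(C=k)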
H: Performance gain of GPU when learning DNNs
Currently, I learn deep neural networks on my CPU (i7-6700K) using TensorFlow without AVX2 enabled. The networks need about 3 weeks to be learned. Therefore, I am searching for a (cheap) way to speed up this process. Is it better to compile TensorFlow enabling AVX2 or to buy a cheap[1] GPU like the GeForce GTX 1650 Super (about 180€ and 1408 CUDA cores)? What is the estimated performance gain of using a cheap[1] GPU?
[1] Cheap compared to current top edge GPUs.
AI: 3 years ago the rule of thumb was: About 15 times faster
Your CPU does 113 GFlops on float operations (source) and your GPU does 3 Tflops (source).
My bet: somewhere between 15 and 30 times faster |
H: How to begin understanding of audio and music analysis
I recently have been assigned to do some work with the python libROSA library. I don't have extensive experience with audio and music analysis and the apis and docs seem to assume a higher level of understanding. For example, the hello world example says things like:
The example is encoded in OGG Vorbis format,
The variable sr contains the sampling rate of y, that is, the number
of samples per second of audio.
By default, all audio is mixed to mono and resampled to 22050 Hz at
load time.
And I am left wondering "why do you need to encode audio?", "why do you even need to sample? (analog vs digital, I guess)", "why do you need to mix to mono?" and "what does that even mean, 'mix'?"
Are there good books or sites that can help me get a basic understanding of audio and music processing?
Thanks in advance
AI: To learn about the basics and a bunch of advanced topics, check out Meinard Müller's "Fundamentals of Music Processing" (FMP) (Amazon/accompanying website). There's also a website with many Jupyter Notebooks demonstrating the book's content very well. FMP does not use librosa, but teaches you all the concepts you need to understand librosa. Most demonstrated approaches are signal-processing oriented. For work that relies on machine learning, you probably need to go through the recent research literature—ISMIR papers are a great place to start.
About the points you've raised:
Just like images (think JPEG, PNG, etc.) audio is stored in some format. OGG is just another format, like WAVE or MP3.
After audio has been decoded from some format like OGG, you get the raw samples, pretty much like points in a bitmap for images. One usually uses PCM for this (librosa does).
Most of the time we don't care about stereo, which is why librosa simply mixes stereo channels into one, mono channel by default. Also, most of the time we don't require CD quality, i.e., a sampling frequency of 44.1 kHz, so librosa downsamples the audio to 22.05 kHz by default. In a way, it's similar to reducing the size of an image by reducing its resolution.
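A small librosa sketch of these defaults ("example.ogg" is a placeholder file name):
import librosa

# librosa defaults: mono mix-down and resampling to 22050 Hz
y, sr = librosa.load("example.ogg")
print(y.shape, sr)  # 1-D array of samples, 22050

# keep the original sampling rate and the stereo channels instead
y_raw, sr_raw = librosa.load("example.ogg", sr=None, mono=False)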
Good luck! |
H: Weka: Implementation of Random Forest
I am wondering how random forests are exactly implemented in Weka. This paper is very specific about RFs in Weka, but the description of its learning process in chapter 2 seems strange to me.
They say:
Bootstrap samples $B_i$ for every tree $t_i$
A random subset of features is selected for each $t_i$
Information gain is used to grow unpruned trees $t_i$
My questions:
Shouldn't step 2 be repeated at all levels of the decision tree? Otherwise each tree will never see some of the features.
What's the default when setting numFeatures=0? I think this is the number of features that is available for each split. Is it the square root of the number of all features?
Is information gain really used for determining the best split attribute?
I am using Weka 3.8.3 - not sure if this matters.
Thanks for all hints :)
AI: Your linked paper appears to be wrong about feature subsetting. I couldn't find it in the documentation for randomForest, but the source for randomForest uses randomTree for the base models, and in that documentation it says
a tree that considers K randomly chosen attributes at each node.
So the selection seems to happen at each split.
(Note that xgboost has feature subsetting at each of the tree, the level (depth), and the node. I don't see any obvious reason that one or more of these options should always be preferable...)
For default number of features, sqrt(m) is the most common, but it looks like Weka uses lg(m). See option -K at
https://weka.sourceforge.io/doc.dev/weka/classifiers/trees/RandomForest.html
Yes, Weka uses the Quinlan family of decision trees, which split using information gain (as opposed to CART, which uses gini). |
H: What is "Laplacian image space"?
I have been working through Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis where the authors make the following claim (subjection 2.2.1):
...the Laplacian space is more robust to illumination changes and more indicative for face structure.
what is Laplacian space in this context? How is it robust to illumination changes?
AI: They should really be more clear about what they mean, but I expect they're using a Laplacian pyramid. As more evidence, they cite: "Denton et al "Deep generative image models using a laplacian pyramid of adversarial networks."
The idea is, store a very low resolution copy of your image, and a series of "difference" images. Each difference image tells you what to add to the lower resolution copy to get the next higher resolution version of the image. You can imagine that lots of values of the difference will be close to zero.
The "mean intensity" and therefore "illumination" is only really stored at that lowest resolution copy, and doesn't really (usually, they hope) affect the gradient of the image, which is what the laplacian pyramid stores. That's why those authors say it's not sensitive to illumination changes. Does that make sense? |
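A rough OpenCV sketch of one level of such a pyramid ("face.png" is a placeholder, and even image dimensions are assumed so the upsampled copy matches the original size):
import cv2

img = cv2.imread("face.png")
down = cv2.pyrDown(img)                                       # low-resolution copy
up = cv2.pyrUp(down, dstsize=(img.shape[1], img.shape[0]))    # back to the original size
laplacian_level = cv2.subtract(img, up)                       # "difference" image holding the fine detail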
H: Which data set to use to find correlation between Predictor and response variables? Test data set? Training data set? or the entire data set?
I have a dataset with 4 predictor variables X1, X2, X3, X4, and one response variable Y. I have been asked to check the correlation between these variables, see how they are related, and then use a linear model to fit them.
No split of training set:test set is given. I have one data set with 10000 samples. I was planning on splitting this data set in the ratio 80:20 for training and testing respectively.
Now I have a doubt on whether the correlation should be found after the data has been split, or whether it is better to check the correlation on the entire dataset. What is the standard way of doing this?
P.S. I am going to use the R programming language to do the same.
AI: This depends on your research question. Do you want to make predictions? Then you need to split your data set into training and test samples.
However, if you are more interested in answering the following question: What impact do X1, X2, X3 and X4 have on Y, respectively? Then you are interested in estimation. For this, you do not need to split your sample, but you have to test your data set for underlying model assumptions (e.g. heteroscedasticity, autocorrelation of the residuals etc.) to get unbiased/accurate estimations.
For OLS (linear regression) these model assumptions follow the Gauss-Markov-Theorem.
Most statistical tests for model assumptions are already implemented in R:
Test for autocorrelation - Breusch-Godfrey - bgtest
Test for heteroscedasticity - White's Test - het.test
Test for normality - Jarque Bera Test - jarque.bera.test
However, even the tests make some assumptions about your data - but these tests are the most common ones.
UPDATE - Example for clarification
According to your comment. Imagine this real-life problem. You want to build a model that can predict the chance of cancer (y), using variables like age, blood pressure, weight, cholesterol value etc.. (X1, X2, X3, X4)
To assess the nature of the variables and their relationships you do descriptive analysis (Mean, Variance etc.) + Correlation analysis.
Let's have a look at the following data. (Yellow records would be your test samples - the data you DON'T have in real life, as those patients haven't seen the doctor yet).
As you can see, the mean and variance can already differ a lot depending on whether you include the test samples in your calculations or not.
Now we are coming to the correlation part - the relationship between the independent variables or also called predictors
I highlighted the relationship between weight and cholesterol. Without the test sample, it seems like weight and cholesterol are uncorrelated or have only a slightly positive relationship. If we add the test data it turns out that the correlation turns negative.
Question: If the correlation between your variables would have an impact on your choice of variable selection for your model, would it make sense to include the test-sample in your correlation analysis? Especially, knowing that this data is not available in real life yet.
Remember Model Estimation: if you build a multivariate regression model mean, variance and covariance are used to find the best parameters to estimate your dependent variable (Cancer). So a model that was already trained with all available data is likely to make better predictions as it has already seen the data it should predict.
Summary
Whenever you plan to make predictions with a model you should work as closely to real-life assumptions as possible. So you split your data into train and test sets. You only use the training data to perform all your tests and checks, and you will not touch the test set before making a prediction.
H: Assign a unique cluster based on a dataframe column with KMeans Algorithm
I have the following df
x1 x2 x3 x4
1000 5000 0.8 restaurant1
2000 7000 0.75 restaurant1
500 1000 0.5 restaurant2
700 1400 0.6 restaurant2
1000 5000 0.8 restaurant2
100 600 0.9 restaurant3
200 1200 0.9 restaurant3
50 1000 0.9 restaurant3
applying a Kmeans Algorithm for 2 clusters what happens is that y:
x1 x2 x3 x4 Y
1000 5000 0.8 restaurant1 1
2000 7000 0.75 restaurant1 1
500 1000 0.5 restaurant2 2
700 1400 0.6 restaurant2 2
1000 5000 0.8 restaurant2 1
100 600 0.9 restaurant3 2
200 1200 0.9 restaurant3 2
50 1000 0.9 restaurant3 2
Possible Desired Outputs:
x1 x2 x3 x4 Y
1000 5000 0.8 restaurant1 1
2000 7000 0.75 restaurant1 1
500 1000 0.5 restaurant2 2
700 1400 0.6 restaurant2 2
1000 5000 0.8 restaurant2 2
100 600 0.9 restaurant3 2
200 1200 0.9 restaurant3 2
50 1000 0.9 restaurant3 2
or
x1 x2 x3 x4 Y
1000 5000 0.8 restaurant1 1
2000 7000 0.75 restaurant1 1
500 1000 0.5 restaurant2 1
700 1400 0.6 restaurant2 1
1000 5000 0.8 restaurant2 1
100 600 0.9 restaurant3 2
200 1200 0.9 restaurant3 2
50 1000 0.9 restaurant3 2
I would like to set this boundary: a restaurant must belong to 1 and only 1 cluster.
I understand why there is this output, but how could I avoid and fix it?
Below the code that I used in my notebook:
#Converting float64 to numpy array
x1=df['x1'].to_numpy()
x2=df['x2'].to_numpy()
x3=(df['x5']/df['x2']).to_numpy()
x4=df_joint_raw['x4'].cat.codes.to_numpy()
X=np.stack((x1,x2,x3,x4),axis=1)
#Getting clusters
y_pred=KMeans(n_clusters=2, random_state=0).fit_predict(X)
AI: Very interesting question! I try my best:
It depends a bit on the number of clusters and the number of restaurants, but let me explain in general.
If the number of restaurants and clusters is the same, then, theoretically, your question has just one trivial answer: "each restaurant is a cluster". You don't even need any algorithm. Let me go a bit deeper into it.
Most of ML algorithms solve an optimization problem to find the answer. Sometimes optimization problems are subject to some constraints.
Example:
Cluster restaurants such that all similar restaurants are necessarily assigned to the same cluster.
Cluster restaurants such that the density of same restaurants in same clusters is maximum.
The first one has the trivial answer I gave before, but the second one can be solved. You run several clustering methods (or just k-means but with several initial conditions) and accept the one in which the highest number of similar restaurants end up in identical clusters. For this you need to convert "density of same restaurants in same clusters" into a mathematical formulation and use it as the criterion of choice. If you need help with it just drop a comment so I can update the answer.
In any case, you change the output of the clustering and you don't let it "naturally" find the clusters, as you push a criterion which is not normally considered in the algorithm. But don't worry! The good thing is that at least you have a criterion for the "goodness" of your clustering, which does not normally exist in a clustering problem.
UPDATE
Let's try a $\chi^2$ test first. It is pretty simplified, but try it and if it doesn't work we can think of something else. For the know-how, I prepared it for you in a simple way so you don't get confused by different tutorials on the net.
Imagine you have 4 restaurants and you want 4 clusters. You will end up with such a frequency table which says how many restaurants of which type fall in which cluster:
Then in Python, you simply calculate $\chi^2$ statistic which tells you if clusters and restaurants "are correlated or not".
import numpy as np
from scipy.stats import chi2_contingency
obs = np.array([[10,1,2,1], [1,11,0,1], [1,2,8,1], [0,2,2,12]])
chi, p, _,_ = chi2_contingency(obs)
print('The chi-square statistic of {} with p-value of {}'.format(chi,p))
The p-value, as you know, tells you if the statistic is significant. There are theoretical considerations behind this solution but I am not going to confuse you with them. I apologize as I did not go through your proposal in the comment; I will answer accordingly as soon as I find time to have a look at it.
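Tying this back to the earlier suggestion of running k-means with several initial conditions, a rough sketch of the selection loop could look like this (X and df are assumed to be built as in your code):
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
from sklearn.cluster import KMeans

best_chi, best_labels = -np.inf, None
for seed in range(20):                            # several initial conditions
    labels = KMeans(n_clusters=2, random_state=seed).fit_predict(X)
    table = pd.crosstab(df["x4"], labels)         # restaurants vs. assigned clusters
    chi, p, _, _ = chi2_contingency(table)
    if chi > best_chi:                            # keep the clustering with the strongest association
        best_chi, best_labels = chi, labels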
Good Luck! |
H: Comparing one small dataset with a big dataset for similar records
I create a varying small dataset (dataset X) with 500 records in each query. Every time, I need to compare this dataset with a bigger one (dataset A, 15 million records) to find similar (or semi-similar) values across three different columns. The values are either one word or a sentence. My algorithm is like this:
make a vector of words in each record in both datasets
with a for loop, search for similarities over the big dataset (e.g. with tfidf). That means each record from the small dataset should look for possible similarities over the big dataset.
However, the problem is that searching over the big dataset is very slow. Is there any efficient way to solve this problem?
Thanks
AI: A way to speed up this process is to preprocess the large dataset, the goal being to store the documents from A in a way which avoids a lot of useless comparisons.
Store each document from A in an inverted index $m$, so that for any word $w$ $m[w]$ is the list of all documents in A which contain word $w$ (note that a document can appear several times in this data structure).
When comparing a new query against $A$, instead of iterating through all documents in $A$ just compare against the subsets which have at least one word in common, i.e. $m[w]$ for every word $w$ in the query.
Couple remarks:
Normally stop words would be excluded from the keys since they appear everywhere and they are not relevant for matching.
The key doesn't have to be a single word, it could also be an n-gram (or several n-grams) or even several words in case it fits in memory.
This kind of problem is frequent in the task of record linkage. |
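A minimal plain-Python sketch of the inverted-index idea (A is assumed to be a list of document strings):
from collections import defaultdict

# build the inverted index over the big dataset A once
index = defaultdict(set)
for doc_id, doc in enumerate(A):
    for word in set(doc.lower().split()):
        index[word].add(doc_id)

# for each query, only compare against documents sharing at least one word
def candidates(query):
    ids = set()
    for word in set(query.lower().split()):
        ids |= index.get(word, set())
    return ids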
H: How to comptute principal component from three points in two dimensional space?
I have the following question:
Given 3 points (-1, 1), (0, 0), (1, 1). What's the first principal component and what are the coordinates of the projected data points? What would be the variance of the projected data? How to reconstruct the original data points and what would be the reconstruction error (squared)?
Am I right that the first PC would just be a horizontal line at y = 0.5 because the PC is the best fit line where the variance of the data is the maximum? And the variance would be 2 in that case?
Any help much appreciated.
AI: I think you may have forgotten to subtract the mean. As far as I know you have to center the data, otherwise you will compute variance with respect to the origin, rather than the variance within the data.
Your data points have a mean vector
$$\mu = \left[0, \frac{2}{3}\right] $$
Let's subtract the mean from the data and put it into a matrix
$$ \tilde{X} = \begin{bmatrix} -1 & \frac{1}{3} \\ 0 & \frac{-2}{3} \\ 1 & \frac{1}{3} \end{bmatrix}$$
The covariance matrix $\Sigma$ is now given by
$$ \Sigma = \frac{1}{3} \tilde{X}^T \cdot \tilde{X} = \begin{bmatrix} \frac{2}{3} & 0 \\ 0 & \frac{2}{9} \end{bmatrix}$$
The $\frac{1}{3}$ is because the covariance matrix involves an expectation value. This is neat, $\Sigma$ is alread diagonal, so the first primary component is the x-axis with a variance of $\frac{2}{3}$. That is because the direction of largest variance is the eigenvector with the largest eigenvalue of $\Sigma$ (for why that is, see here). |
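You can verify this numerically, e.g. with scikit-learn (note that sklearn divides by $n-1$ instead of $n$, so it reports a variance of $1$ rather than $\frac{2}{3}$, and the sign of the component may flip):
import numpy as np
from sklearn.decomposition import PCA

X = np.array([[-1, 1], [0, 0], [1, 1]])
pca = PCA(n_components=1)
projected = pca.fit_transform(X)            # coordinates of the points along the first PC
print(pca.components_)                      # [[1., 0.]] up to sign: the x-axis
print(pca.explained_variance_)              # [1.] because of the n-1 denominator
reconstructed = pca.inverse_transform(projected)
print(((X - reconstructed) ** 2).sum())     # squared reconstruction error, about 0.67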
H: How can you include information not present in an image for neural networks?
I am training a CNN to identify objects in images (one label per image). However, I have additional information about these images that cannot be retrieved by looking at the image itself. In more detail, I'm talking about the physical location of this object. This information proved to be important when classifying these objects.
However, I can't think of a good solution to include this information in a image recognition model, since the CNN is classifying the object based on the pixel values and not on ordered feature data.
One possible solution I was thinking of was to have an additional simple ML model on tabular data (including mainly the location data), such as an SVM, to give give a certain additional weight to the output of the CNN. Would this be a good strategy? I can't seem to find anything in the literature about this.
Thanks in advance!
edit: Someone asked what I meant by 'location'. With the location I meant the physical location of where the image was taken, in context of a large 2d space. I don't want to go too deep into the domain, but it's basically an (x,y) vector on a surface area, and obviously this meta-data cannot be extracted by looking at the pixel values.
edit2: I want to propose an additional way I found that was useful, but was not mentioned in any answer.
Instead of using the neural network to predict the classes, I use the neural network to produce features.
I removed the final layer, which resulted in an output of shape 1024x1. This obviously depends on the design of your network. Then, I can use these features together with the meta-data (in my case location data) in an additional model to make predictions, such an SVM or another NN.
AI: Other answers suggest to put an additional channel, I disagree. I think it's a very computationally intensive, time consuming process. Moreover, it forces non-pixel data to be processed by Conv filters, which doesn't make much sense IMHO.
I suggest you to establish a multi-input model. It would be composed by three parts:
A Convolutional part, to process pixel data,
A Feed-forward part to process non-image data,
Another Feed-forward part that elaborates the prediction based on the concatenation of the two outputs above.
You will need to instantiate them separately, then combine together in a Keras Model(). You will also need Concatenate() layers to combine the two different sources of data.
You can read more about the implementation of multi-input Neural Networks here. |
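A minimal sketch of such a multi-input model with the Keras functional API (input shapes and layer sizes are arbitrary):
from tensorflow import keras
from tensorflow.keras import layers

# convolutional branch for the image
img_in = keras.Input(shape=(64, 64, 3))
x = layers.Conv2D(32, 3, activation="relu")(img_in)
x = layers.GlobalAveragePooling2D()(x)

# feed-forward branch for the (x, y) location
loc_in = keras.Input(shape=(2,))
y = layers.Dense(16, activation="relu")(loc_in)

# concatenate both branches and classify
z = layers.Concatenate()([x, y])
z = layers.Dense(32, activation="relu")(z)
out = layers.Dense(10, activation="softmax")(z)

model = keras.Model(inputs=[img_in, loc_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit([images, locations], labels, ...)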
H: Is it possible to deploy a python trained machine learning model (e.g. a .pkl file) in C language?
I would like to train my machine learning model using Python and libraries such as TensorFlow, Keras, and scikit-learn. After training, I would like to export this trained model to a file; so far I have been using the pickle library. I feel that this is pretty standard in any ML project.
However, and the point of this question, is it possible to use this trained model (e.g. a .pkl or .sav file) in C language? When I say "use", I mean passing values to the model and getting a prediction back.
This is somewhat similar to TensorFlow Lite for Microcontrollers, however, I am not sure if this is the most appropriate and easier approach to this problem.
AI: You can use the PicklingTools library: http://www.picklingtools.com/
#include "chooseser.h"
// load the pickled data from "my_pkl.pkl" into a Val object named loaded_pkl
int main()
{
Val loaded_pkl;
LoadValFromFile(loaded_pkl, "my_pkl.pkl", SERIALIZE_P0);
cout << loaded_pkl << endl;
} |
H: No statistical significance but observable trends
I have a general inference question regarding scenarios when results from data are not statistically significant but there appears to be an observable trend.
For example, treatment A and treatment B are applied to 2 independent populations. Using a t-test to analyze the resulting data (let's say the data is total revenue), the p-value is 0.2, so the effect of treatment on revenue was not statistically significant. However, the total revenue from treatment A was observably higher than in treatment B. What can I say in this regard?
I've had academic advisors recommend saying 'While the effect of treatment was not significant, a trend was observed', and then one would go on describing the trend. Is this an adequate viewpoint, or a statistical folly? What conclusions would a data scientist in industry draw and present to stakeholders from a scenario like this?
AI: There is high variance within each group. Even though there is a mean difference between the groups, there is a high amount of spread within just treatment A or just treatment B.
From a statistical point of view, the difference between the groups could be due to chance because of the large spread relative to the small mean difference.
Due to the amount of randomness observed, if we ran the experiment again, the treatment B mean could be higher than the treatment A mean.
H: How to select the best model from validation/training/holdout accuracy score
I have made my own function to log all the attempts at hyperparameter tuning; the following information is gathered from a 10-fold cross validation.
But I am struggling to work out which model is best.
+----+-------------------+-------------------+-------------------+-------------------+
| | Validation | Val StDev | Train Err | Holdout |
+----+-------------------+-------------------+-------------------+-------------------+
| 0 | 0.899393281242345 | 0.02848162454242 | 1 | 0.903572035647507 |
+----+-------------------+-------------------+-------------------+-------------------+
| 1 | 0.902138248122673 | 0.029520127182082 | 1 | 0.924233269690891 |
+----+-------------------+-------------------+-------------------+-------------------+
| 2 | 0.899394809813502 | 0.025173568322695 | 0.918663669124935 | 0.909288194444444 |
+----+-------------------+-------------------+-------------------+-------------------+
| 3 | 0.897228682965021 | 0.030714865334356 | 1 | 0.908866279069768 |
+----+-------------------+-------------------+-------------------+-------------------+
| 4 | 0.909270070641525 | 0.027056183719667 | 1 | 0.924575965355467 |
+----+-------------------+-------------------+-------------------+-------------------+
| 5 | 0.891001537843704 | 0.032846295144796 | 1 | 0.903091978426922 |
+----+-------------------+-------------------+-------------------+-------------------+
| 6 | 0.895784577401649 | 0.032132196798841 | 1 | 0.884848484848485 |
+----+-------------------+-------------------+-------------------+-------------------+
| 7 | 0.88188105768332 | 0.033952319596936 | 0.910316431235621 | 0.903091978426922 |
+----+-------------------+-------------------+-------------------+-------------------+
| 8 | 0.920584479640099 | 0.027520133628529 | 0.936918085663145 | 0.924575965355467 |
+----+-------------------+-------------------+-------------------+-------------------+
| 9 | 0.887347364032516 | 0.030797370441503 | 0.900531622363285 | 0.903091978426922 |
+----+-------------------+-------------------+-------------------+-------------------+
| 10 | 0.876942399956479 | 0.043318142049256 | 1 | 0.868971836419753 |
+----+-------------------+-------------------+-------------------+-------------------+
| 11 | 0.900452899973248 | 0.033120692366442 | 0.924324942576064 | 0.886904761904762 |
+----+-------------------+-------------------+-------------------+-------------------+
| 12 | 0.889635597754135 | 0.023388005619559 | 1 | 0.91413103898301 |
+----+-------------------+-------------------+-------------------+-------------------+
| 13 | 0.899270803213788 | 0.026083641971929 | 1 | 0.91413103898301 |
+----+-------------------+-------------------+-------------------+-------------------+
| 14 | 0.886812435037192 | 0.033456079535065 | 0.904149376999805 | 0.898247322297955 |
+----+-------------------+-------------------+-------------------+-------------------+
| 15 | 0.884982783710439 | 0.029572747271092 | 0.901110070564378 | 0.903091978426922 |
+----+-------------------+-------------------+-------------------+-------------------+
| 16 | 0.896948178627522 | 0.026483462928863 | 0.919101870462519 | 0.91413103898301 |
+----+-------------------+-------------------+-------------------+-------------------+
| 17 | 0.891278476391404 | 0.030334266939915 | 1 | 0.888614341085271 |
+----+-------------------+-------------------+-------------------+-------------------+
| 18 | 0.909092365260866 | 0.031848098756772 | 1 | 0.898247322297955 |
+----+-------------------+-------------------+-------------------+-------------------+
| 19 | 0.889032812279866 | 0.028580171007027 | 1 | 0.903572035647507 |
+----+-------------------+-------------------+-------------------+-------------------+
| 20 | 0.890821503056501 | 0.03250894068403 | 0.935473681065598 | 0.889619742654119 |
+----+-------------------+-------------------+-------------------+-------------------+
| 21 | 0.908662067002155 | 0.02515678884091 | 1 | 0.91413103898301 |
+----+-------------------+-------------------+-------------------+-------------------+
| 22 | 0.894626358857844 | 0.038437447908009 | 0.921035110317957 | 0.907956547269524 |
+----+-------------------+-------------------+-------------------+-------------------+
| 23 | 0.904503845292009 | 0.03112232540355 | 0.922738180662704 | 0.903091978426922 |
+----+-------------------+-------------------+-------------------+-------------------+
| 24 | 0.893363641701947 | 0.032000273114453 | 1 | 0.897729496966138 |
+----+-------------------+-------------------+-------------------+-------------------+
| 25 | 0.891379560352061 | 0.032525437349441 | 0.916932513590361 | 0.892891134050809 |
+----+-------------------+-------------------+-------------------+-------------------+
| 26 | 0.905999138236311 | 0.027801373368529 | 1 | 0.913722347684612 |
+----+-------------------+-------------------+-------------------+-------------------+
| 27 | 0.892155761699392 | 0.028942257106962 | 1 | 0.898740310077519 |
+----+-------------------+-------------------+-------------------+-------------------+
| 28 | 0.90547992240768 | 0.034616407726398 | 1 | 0.888072054527751 |
+----+-------------------+-------------------+-------------------+-------------------+
| 29 | 0.854504389713449 | 0.045352425614533 | 1 | 0.816845180136319 |
+----+-------------------+-------------------+-------------------+-------------------+
| 30 | 0.854329853134155 | 0.047548722046064 | 0.912076317693916 | 0.828930724012203 |
+----+-------------------+-------------------+-------------------+-------------------+
| 31 | 0.919465770658208 | 0.029180215562696 | 0.931409372636061 | 0.938637698179683 |
+----+-------------------+-------------------+-------------------+-------------------+
| 32 | 0.90057318637927 | 0.026049221971696 | 1 | 0.919367283950617 |
+----+-------------------+-------------------+-------------------+-------------------+
| 33 | 0.895446367880531 | 0.033041859254943 | 1 | 0.913722347684612 |
+----+-------------------+-------------------+-------------------+-------------------+
| 34 | 0.884125762352326 | 0.03697539365293 | 0.897337858027344 | 0.897729496966138 |
+----+-------------------+-------------------+-------------------+-------------------+
| 35 | 0.883233039971663 | 0.034111835608482 | 0.893720586886425 | 0.913722347684612 |
+----+-------------------+-------------------+-------------------+-------------------+
| 36 | 0.898831686569679 | 0.042604871968562 | 1 | 0.863619885443604 |
+----+-------------------+-------------------+-------------------+-------------------+
| 37 | 0.909826578207203 | 0.029949272123959 | 1 | 0.898247322297955 |
+----+-------------------+-------------------+-------------------+-------------------+
| 38 | 0.881913627468284 | 0.036432833984006 | 0.893183972214204 | 0.918992248062016 |
+----+-------------------+-------------------+-------------------+-------------------+
| 39 | 0.893848891847337 | 0.02592119349599 | 1 | 0.90842259006816 |
+----+-------------------+-------------------+-------------------+-------------------+
| 40 | 0.855511803338288 | 0.045600097937187 | 1 | 0.816845180136319 |
+----+-------------------+-------------------+-------------------+-------------------+
| 41 | 0.856261861628324 | 0.046589739419035 | 1 | 0.811284379041902 |
+----+-------------------+-------------------+-------------------+-------------------+
| 42 | 0.892329922600147 | 0.027143842917071 | 1 | 0.91413103898301 |
+----+-------------------+-------------------+-------------------+-------------------+
| 43 | 0.883558035554916 | 0.031100578488573 | 0.903223129534581 | 0.898740310077519 |
+----+-------------------+-------------------+-------------------+-------------------+
| 44 | 0.853943064191891 | 0.045073764302981 | 1 | 0.816845180136319 |
+----+-------------------+-------------------+-------------------+-------------------+
| 45 | 0.888853496341911 | 0.027961101762991 | 1 | 0.913722347684612 |
+----+-------------------+-------------------+-------------------+-------------------+
| 46 | 0.897113661181162 | 0.036022289370872 | 0.921272431864044 | 0.897729496966138 |
+----+-------------------+-------------------+-------------------+-------------------+
| 47 | 0.891468488082473 | 0.028579769797201 | 1 | 0.91413103898301 |
+----+-------------------+-------------------+-------------------+-------------------+
| 48 | 0.898831686569679 | 0.042604871968562 | 1 | 0.863619885443604 |
+----+-------------------+-------------------+-------------------+-------------------+
| 49 | 0.902910285816319 | 0.025031947763032 | 1 | 0.908866279069768 |
+----+-------------------+-------------------+-------------------+-------------------+
| 50 | 0.876865550417646 | 0.04283766065777 | 0.941524627838129 | 0.868971836419753 |
+----+-------------------+-------------------+-------------------+-------------------+
| 51 | 0.88091084510941 | 0.030430567351611 | 0.902847063239986 | 0.893421723610403 |
+----+-------------------+-------------------+-------------------+-------------------+
| 52 | 0.881680572698977 | 0.031246268640075 | 0.902192589536189 | 0.892891134050809 |
+----+-------------------+-------------------+-------------------+-------------------+
| 53 | 0.855040900830318 | 0.046858153140763 | 1 | 0.816845180136319 |
+----+-------------------+-------------------+-------------------+-------------------+
| 54 | 0.894438862703006 | 0.040891854948011 | 0.920792225979698 | 0.907956547269524 |
+----+-------------------+-------------------+-------------------+-------------------+
| 55 | 0.891904027574454 | 0.031645714271463 | 1 | 0.898740310077519 |
+----+-------------------+-------------------+-------------------+-------------------+
| 56 | 0.886050671635633 | 0.032472146105505 | 0.899909848784576 | 0.908866279069768 |
+----+-------------------+-------------------+-------------------+-------------------+
| 57 | 0.881980625144301 | 0.03563700286777 | 0.892637674766258 | 0.918992248062016 |
+----+-------------------+-------------------+-------------------+-------------------+
| 58 | 0.891270767582537 | 0.03142082216981 | 1 | 0.898740310077519 |
+----+-------------------+-------------------+-------------------+-------------------+
| 59 | 0.910664399342078 | 0.025582535272637 | 0.927959603815499 | 0.893421723610403 |
+----+-------------------+-------------------+-------------------+-------------------+
| 60 | 0.888100544552359 | 0.026544114270193 | 1 | 0.918597857838364 |
+----+-------------------+-------------------+-------------------+-------------------+
| 61 | 0.896690074843623 | 0.034624649065343 | 1 | 0.913292822803036 |
+----+-------------------+-------------------+-------------------+-------------------+
| 62 | 0.887338053809583 | 0.030621509271507 | 1 | 0.918992248062016 |
+----+-------------------+-------------------+-------------------+-------------------+
| 63 | 0.897753456871316 | 0.025062963044505 | 0.916156729004089 | 0.91413103898301 |
+----+-------------------+-------------------+-------------------+-------------------+
| 64 | 0.896456912702229 | 0.030808038532288 | 0.907756656209691 | 0.909288194444444 |
+----+-------------------+-------------------+-------------------+-------------------+
| 65 | 0.883460118896986 | 0.037339407800874 | 0.89706586511388 | 0.897729496966138 |
+----+-------------------+-------------------+-------------------+-------------------+
| 66 | 0.878395246090765 | 0.034089155345539 | 0.896519056428997 | 0.913722347684612 |
+----+-------------------+-------------------+-------------------+-------------------+
| 67 | 0.910664399342078 | 0.025582535272637 | 0.927959603815499 | 0.893421723610403 |
+----+-------------------+-------------------+-------------------+-------------------+
| 68 | 0.895947305636744 | 0.027392857919265 | 1 | 0.91413103898301 |
+----+-------------------+-------------------+-------------------+-------------------+
| 69 | 0.891138031159789 | 0.032237486772592 | 1 | 0.903572035647507 |
+----+-------------------+-------------------+-------------------+-------------------+
| 70 | 0.885255027194677 | 0.031984990817382 | 0.90200648514247 | 0.903572035647507 |
+----+-------------------+-------------------+-------------------+-------------------+
| 71 | 0.908735508831696 | 0.027210512830753 | 1 | 0.913722347684612 |
+----+-------------------+-------------------+-------------------+-------------------+
| 72 | 0.88713294586216 | 0.029978466529712 | 1 | 0.903572035647507 |
+----+-------------------+-------------------+-------------------+-------------------+
| 73 | 0.901055027327092 | 0.033359546159533 | 0.923523486623107 | 0.887502446662752 |
+----+-------------------+-------------------+-------------------+-------------------+
Train Err - the average accuracy score of the estimator fit and predicted on the training data
Validation - the average accuracy score of the estimator fit on the training set and predicted on the testing data set.
Val StDev - the standard deviation of the accuracy score based on the estimator fit on the training set and predicted on the testing data set.
Holdout (testing) Error - the holdout accuracy score (completely unseen data)
With this information in this format - how would one determine the best choice of estimator? There is so much information out there on model selection it is hard to take it all in.
Is my goal to produce the smallest variance between training error and validation error? I.e., bringing the lines of the learning curve as close together as possible.
Is a 100% accuracy on the training error a bad thing (it has basically memorised the data), and should I ignore all these models?
Should I use the 1 SD rule to select the estimators that are within 1 SD of the max mean validation score... and then choose the model with the lowest variance between holdout and validation?
Is there a better way to decide which model is best? What model would you select?
AI: Is my goal to produce the smallest variance between training error and validation error?
Normally your goal is to have a model that performs best in production; for that, you split train/holdout in a way that best reproduces the production setting and then pick the model that does best on the holdout set. Have a look at this answer from here What would I prefer - an over-fitted model or a less accurate model? They talk a bit about model selection.
Is a 100% accuracy on the training error a bad thing?
When you see this you probably have "overfitting": your model achieves unrealistically good results on the training set that it most probably won't be able to achieve on the test set.
It is not necessarily a bad thing, but it is suspicious.
With this information in this format - how would one determine the best choice of estimator?
For your case, I would recommend choosing the one that achieves the best holdout performance without overfitting.
Note: your dataset is way too small. 830 rows of data is almost nothing. In order to validate your model properly I would suggest Leave One Out Cross Validation (LOOCV): "In LOOCV we divide the data set into two parts. In one part we have a single observation, which is our test data and in the other part, we have all the other observations from the dataset forming our training data." (from here)
What is happening in your test is that 2 or 3 rows of your data make a big impact on the result; that is why you have a big variance between the different folds and parameter settings. It is hard to say which model is better when validating this way.
In summary, my suggestion is that you do LOOCV and then select the model that has the best performance on the held-out observations.
(I am assuming that you can do a random split since there is no pattern such as in a time series.)
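A minimal sketch of LOOCV with scikit-learn (the estimator below is just a placeholder; X and y stand for your full ~830-row feature matrix and labels):

from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(random_state=0)  # placeholder estimator
# X, y: your features and labels
scores = cross_val_score(clf, X, y, cv=LeaveOneOut(), scoring="accuracy")
print("LOOCV accuracy:", scores.mean())  # one left-out prediction per row, averaged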
H: Recall score for each sample in multilabel classification
Does it make sense to calculate the recall for each sample in a multilabel classification problem?
Suppose I have 3 data samples, each having its own true set of labels and predicted set of labels.
I want to see the match between the true set of labels and the predicted set of labels. I do not care for the true negatives or false positives in each prediction, so this translates to recall score for me. Programmatically, I would do an AND operation between y_predicted and y_true to get the number of true positives and divide it by the total number of true labels for each sample. (in other words, true positives/(true positives+false negatives))
My question is -
Is calculating recall per sample (not per label), usually done?
Is my thought process correct?
I've seen articles where a single recall is calculated for the whole matrix of y_true and y_predicted or recall is calculated for a single label.
AI: This metric is usually referred to as a sample-based or example-based score and can be applied in multi-label cases (to recall and other scores too). You can find a brief explanation here.
Scikit learn has an implementation for it (see here for recall) when setting average='samples' for recall_score:
'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from accuracy_score). |
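For example (a minimal sketch with made-up 3-sample, 3-label indicator matrices):

import numpy as np
from sklearn.metrics import recall_score

# rows = samples, columns = labels
y_true = np.array([[1, 0, 1],
                   [0, 1, 1],
                   [1, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [0, 1, 0]])

# recall is computed per sample (row) and then averaged over samples
print(recall_score(y_true, y_pred, average="samples"))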
H: SVM SVC: Metric for parameter optimization on imbalanced data
I trained a multiclass SVC with RBF kernel on a down-sampled (and therefore balanced) dataset.
Now I want to perform grid search to find best cost and gamma.
What performance metric should I optimize for?
I have a highly imbalanced test set. There might be a factor of over 100 between the number of instances of different classes.
I am classifying 3D points (car, facade, human) - so I think one could assign equal weight to all classes.
AI: Using resampling methods to fix imbalanced datasets is a good approach. This can be done by oversampling, downsampling or creating synthetic instances.
In this kind of problem, precision is not necessarily a good evaluation metric. Instead, evaluating True Positives, True Negatives, False Positives and False Negatives is a better way to assess the model.
For this evaluation, the following metrics from the sklearn library can be used:
Recall score
Accuracy score
F1 score
AUC Score
In sklearn the method called "classification_report" gives a summary of those evaluations (except the AUC score from the list). That can be a good way to check the performance of the model.
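A minimal sketch with scikit-learn (the parameter grid and the choice of scoring="f1_macro" for the grid search are just one reasonable option, not the only one; X_train, y_train, X_test, y_test are placeholders for your data):

from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
grid = GridSearchCV(SVC(kernel="rbf"), param_grid, scoring="f1_macro", cv=5)
grid.fit(X_train, y_train)      # balanced (down-sampled) training set

y_pred = grid.predict(X_test)   # imbalanced test set
print(classification_report(y_test, y_pred))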
H: Python how to add a condition to the groupby while calculating the median of a field?
I need to add a new condition to the following line:
train["Age"].fillna(train.groupby("Ticket")["Age"].transform("median"), inplace=True)
Currently, it's taking the median of the age based on the ticket.
I need to add to this groupby("Ticket") another condition, where train["PrSb"]==0.
AI: This should work:
train["Age"].fillna(train.loc[train["PrSb"] == 0].groupby("Ticket")["Age"].transform("median"), inplace=True)
This adds an extra rowwise filter to the dataframe that is used in the groupby operation, which only selects rows where the PrSb colum is equal to 0. |
H: Cross Validation - Why does more folds increase variation?
Can someone explain why increasing the number of folds in a cross validation increases the variation (or the standard deviation) of the scores in each fold?
I've logged the data below. I'm working on the Titanic dataset and there is around 800 instances. I'm using a StratifiedKFold and accuracy scoring metric.
I thought that adding more data decreased variance - so if my understanding is correct, adding more folds would increase the amount of training data supplied to each fit? But it appears that the more folds (and the less validation data per fold), the higher the standard deviation, while the mean accuracy for each CV remains around the same.
{5: {'Mean': 0.8136965664427847, 'Std': 0.015594305964595902},
15: {'Mean': 0.8239359698681732, 'Std': 0.0394725492730379},
25: {'Mean': 0.823968253968254, 'Std': 0.07380525674642965},
35: {'Mean': 0.8284835164835165, 'Std': 0.08302266965043076},
45: {'Mean': 0.8207602339181288, 'Std': 0.09361950295425485},
55: {'Mean': 0.8243315508021392, 'Std': 0.08561359961087428},
65: {'Mean': 0.8273034657650041, 'Std': 0.10483277787806128},
75: {'Mean': 0.8274747474747474, 'Std': 0.11745811393744522},
85: {'Mean': 0.8240641711229945, 'Std': 0.12444299530668741},
95: {'Mean': 0.8305263157894738, 'Std': 0.12484655607120225},
105: {'Mean': 0.8243386243386243, 'Std': 0.1399822172135676},
115: {'Mean': 0.8240683229813665, 'Std': 0.12916193497823075},
125: {'Mean': 0.8249999999999998, 'Std': 0.13334396216138908},
135: {'Mean': 0.8306878306878307, 'Std': 0.15391278842405914},
145: {'Mean': 0.8272577996715927, 'Std': 0.1552827992878498},
155: {'Mean': 0.8240860215053764, 'Std': 0.16756897617377703},
165: {'Mean': 0.8270707070707071, 'Std': 0.16212344628562209},
175: {'Mean': 0.824, 'Std': 0.16293498557341674},
185: {'Mean': 0.8278378378378377, 'Std': 0.1664272446370702},
195: {'Mean': 0.8284615384615385, 'Std': 0.17533175091718106},
205: {'Mean': 0.8265853658536585, 'Std': 0.185808841661263},
215: {'Mean': 0.8265116279069767, 'Std': 0.188431515175417},
225: {'Mean': 0.8288888888888889, 'Std': 0.17685175489623095},
235: {'Mean': 0.8294326241134752, 'Std': 0.19467536066874633},
245: {'Mean': 0.8231292517006802, 'Std': 0.2009280149561644},
255: {'Mean': 0.823202614379085, 'Std': 0.20790684270535614},
265: {'Mean': 0.8254716981132075, 'Std': 0.2109826210610222},
275: {'Mean': 0.8254545454545454, 'Std': 0.2144726806895627},
285: {'Mean': 0.8242690058479532, 'Std': 0.2182928219064767},
295: {'Mean': 0.823728813559322, 'Std': 0.22096355056065273}}
AI: This has been a discussion for quite a while. For a more theoretical point of view you can find a good summary here.
From a practical point of view I'd look at it as follows. With increasing $k$ two things happen:
Your $k-1$ training folds increase in size
Your validation folds decrease in size
From the first point you can draw the conclusion that your $k$ models become more similar, since the training data becomes more similar as you split off less data for the validation set in each fold. This might lead to less between-model variance.
Moreover, since you are training on more data, the relative model complexity (i.e. compared to your data) decreases. In the bias-variance-trade-off graph (taken from [1])
this means we are moving towards the left, i.e. we trade less model variance for more model bias (please note that model variance here has a more general and conceptual meaning than the calculated variance or standard deviation between folds). The reason is that we are fitting a model with constant complexity to more data as $k$ increases, i.e. it becomes harder to learn the training data.
However, we are not only increasing the size of our training set with increasing $k$. We are also decreasing the validation set in size (see point two from above). Therefore, there might be higher between-fold variance with regards to our validation sets. This might be more relevant if the overall dataset carries higher variance or more outliers.
[1] "The Elements of Statistical Learning" by Hastie et al |
H: What is the bleu score of professional human translators?
Machine translation models are usually evaluated using the BLEU score. I want to get some intuition for this score. What is the BLEU score of a professional human translator?
I know it depends on the languages, the translator etc. I just want to get the scale.
edit: I want to make it clear - I am talking about the expected BLEU. It's not a theoretical question, it is an experimental one.
AI: The original paper "BLEU: a Method for Automatic Evaluation of Machine Translation" contains a couple of numbers on this:
The BLEU metric ranges from 0 to 1. Few translations will attain a score of 1 unless they are identical to a reference translation. For this reason, even a human translator will not necessarily score 1. It is important to note that the more reference translations per sentence there are, the higher the score is. Thus, one must be cautious making even "rough" comparisons on evaluations with different numbers of reference translations: on a test corpus of about 500 sentences (40 general news stories), a human translator scored 0.3468 against four references and scored 0.2571 against two references.
But as their Table 1 (which provides the numbers compared to two references; H2 is the translator mentioned in the text above) shows, there is variance among human BLEU scores.
Unfortunately, the paper does not qualify the skill level of the translators. |
H: getting error while scrapping Amazon using Selenium and bs4
I'm working on a class project using BeautifulSoup and webdriver to scrape disposable diapers on Amazon for the name of the item, price, reviews and rating.
My goal is to have something like this, where I will split this info into different columns:
Diapers Size 4, 150 Count - Pampers Swaddlers Disposable Baby Diapers, One Month Supply
4.0 out of 5 stars
1,982
$43.98
($0.29/Count)
Unfortunately, after about 50 items are scraped I get this message: no such element: unable to locate element: {"method":"css selector","selector":".a-last"}
Here is my code:
URL = "https://www.amazon.com/s?
k=baby+disposable&rh=n%3A166772011&ref=nb_sb_noss"
driver = ('C:/Users/Desktop/chromedriver_win32/chromedriver.exe')
driver.get(URL) html = driver.page_source soup = BeautifulSoup(html, "html.parser")
df = pd.DataFrame(columns = ["Product Name","Rating","Number of
Reviews","Price","Price Count"])
while True:
for i in soup.find_all(class_= "sg-col-4-of-24 sg-col-4-of-12 sg-col-4-of-36
s-result-item sg-col-
4-of-28 sg-col-4-of-16 sg-col sg-col-4-of-20 sg-col-4-of-32"):
ProductName = i.find(class_= "a-size-base-plus a-color-base a-text- normal").text#.span.get_text
print(ProductName)
try:
Rating = i.find(class_= "a-icon-alt").text#.span.get_text()
except:
Rating = "Null"
print(Rating)
try:
NumberOfReviews = i.find(class_= "a-size-base").text#.span.get_text()
except:
NumberOfReviews = "Null"
print(NumberOfReviews)
try:
Price = i.find(class_= "a-offscreen").text#.span.get_text()
except:
Price = "Null"
print(Price)
try:
PriceCount = i.find(class_= "a-size-base a-color-secondary").text#.span.get_text()
except:
PriceCount = "Null"
print(PriceCount)
df = df.append({"Product Name":ProductName, "Rating":Rating, "Number of
Reviews":NumberOfReviews,
"Price":Price, "Price Count":PriceCount}, ignore_index = True)
nextlink = soup.find(class_= "a-disabled a-last")
if nextlink:
print ("This is the last page. ")
break
else:
progress = driver.find_element_by_class_name('a-last').click()
subhtml = driver.page_source
soup = BeautifulSoup(subhtml, "html.parser")
Unfortunately, I hit a roadblock trying to figure out why it cannot find a-last.
AI: Most likely the element you are trying to find (I suppose it is a link or a button with class a-last) is not on the page. When the error appears, you should look at your chrome window and see what it is showing you. Check if the element is on the page.
It could be that amazon.com just showed a captcha and therefore all the usual elements disappeared from the screen. For example, if you try to do while True without waiting time on Google it will show you a captcha after several requests. |
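One way to make the pagination more robust is to wait explicitly for the button and to stop gracefully when it is missing (a sketch; it assumes the "next page" button really carries the class a-last and reuses the driver from the question):

from selenium.common.exceptions import NoSuchElementException, TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

try:
    next_button = WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.CLASS_NAME, "a-last"))
    )
    next_button.click()
except (TimeoutException, NoSuchElementException):
    print("No 'next' button found - last page, a captcha or a blocked page.")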
H: Machine Learning Out of test data forecast (XGBoost, ANN)
I see a lot of applications of machine learning techniques applied to time series. Unfortunately, almost all kernels with XGBoost or ANN stop short of creating an actual forecast. They achieve a great fit as they have the test data excluded. Are there any kernels for XGBoost or ANN where an actual forecast is created? I do not understand how this is possible, as the predict function always needs future values which are not given. When trying it out with the test set you have the values, but after this you don't have the forecasted values of all the features, so what is the point of making such models with a lot of features which have no future values? Are all the models useless?
Please prove me wrong but i do not understand.
Thank you.
AI: It depends on how you built your model. But yes, most of the time you will only be able to predict $y_{t+1}$ from $X_t$. If you want to predict $y_{t+2}$ from $X_t$, you have to proceed either by:
Changing your target to $y_{t+2}$ during the calibration process; this needs some work as you will have to rebuild your target.
Building a model that will also predict $X_{t+1}$, which would allow you to apply the same model iteratively (a rough sketch is given at the end of this answer). One such approach would rely on some assumptions about some sort of stationarity over time.
Those are the more general approaches. However, I feel like this doesn't really answer your question. The real answer is that we don't necessarily care about the level of the time series. Most applied ML techniques only care about a target y that you can define how you want. You can choose a more appropriate target you care about, like: does my time series cross a given level over a given time horizon?
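As an illustration of the second option listed above, here is a rough sketch of iterating a one-step model forward, under the simplifying assumption that the features are just lagged values of the target (so each prediction can be fed back in as the next input):

import numpy as np

def recursive_forecast(model, last_window, horizon):
    """Feed one-step predictions back in as inputs to forecast several steps ahead."""
    window = list(last_window)                 # the last k observed values of y
    preds = []
    for _ in range(horizon):
        x = np.array(window[-len(last_window):]).reshape(1, -1)
        y_next = model.predict(x)[0]           # one-step-ahead prediction
        preds.append(y_next)
        window.append(y_next)                  # pretend the prediction was observed
    return preds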
H: pivoting two column in pandas
How can I transfer two column features into pivot table on the following dataset
I have tried the aggfunc function but this fill the value either 0 or 1. I want to transfer the row as cell value.
Here is the dataset
content Users
22 1196
23 1196
23 1216
16 880
20 880
20 1224
22 1245
23 1122
22 872
I want to transfer this dataset into this
Users 1 2
1196 22 23
1216 23 NaN
880 16 20
1224 20 NaN
1245 22 NaN
1122 23 NaN
872 22 NaN
I have tried it by using
df.pivot_table(index=["city"], columns="cuisine", aggfunc=lambda x: 1, fill_value=0)
df.pivot_table(index=["users"], columns="content", aggfunc=lambda x: 1, fill_value=0)
But this fills the values either 0 or 1
AI: You can accomplish what you want by doing:
df.groupby('Users')['content'].unique().apply(pd.Series)
First, what you want is to groupby your values by users, to get all content values. Since you want unique ones, you can apply unique() built-in pandas function.
Then, since you will have a columns with a list of values, you can apply pd.Series to each row, to expand it into columns, by default, they are named from 0 to n, where n is the maximum length of a list, in this case, two. |
H: How many times is backprop used in epoch?
As I understand for the algorithms that use gradient descent we have to pass data to the algorithms multiple times so that the optimum is found.
So one epoch means that the forward-backprop (and updating weights with gradient descent) is done only once and in order to find the minimum we need to run the algorithm more times?
Or is forward-backprop combination done multiple times (like for every input of data that is passed to the input layer) in one epoch until the minimum for that epoch is found?
AI: It depends on the type of gradient descent or respectively your batch size: One epoch means that your neural net (NN) has applied the forward pass on all examples of your training data, i.e. it has "seen" all training data. Now to do so you have at least two options (let $n$ be the number of samples in your training data):
You can either run backprop after each forward pass of an example and update your weights accordingly. This is usually implemented as stochastic gradient descent (SGD), has a batch size of 1 and means you run backprop $n$ times per epoch.
Alternatively, you can first run the forward pass for all your training data and then run backprop and update your weights accordingly. This is called batch gradient descent, has a batch size of $n$ and means you run backprop once per epoch.
Usually a mix of these two is applied so that $1 < \text{batch size} < n$, which is called Minibatch Gradient Descent. Following the above explanations, it means you'll run backprop $\frac{n}{\text{batch size}}$ times per epoch.
This blog post provides a good explanation as well. |
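As a tiny numerical illustration of that last point (the numbers are made up):

import math

n = 10_000        # training examples
batch_size = 32   # minibatch gradient descent
updates_per_epoch = math.ceil(n / batch_size)
print(updates_per_epoch)  # 313 forward/backprop/update steps per epoch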
H: Need an advice on research topic
I am about to choose an ML research topic for my master thesis, but I am at a dead end. The problem is that while reading research papers, I find solutions, but not open problems.
For now, I came up with these ideas:
Research neural network quantization algorithms - I started to research this topic 3 months ago and it seems to me that current papers already provide high-accuracy methods. I have no idea what I could do in this field.
Implement an online video-processing system using quantized neural networks. It also sounds good, but it could be done with a few lines of TensorFlow Lite code.
Intelligent image database - an image database that has image classifiers and detectors for automatic annotation. It seems to be a good idea, but I need to find a research part here. To me this sounds like a pure engineering task.
Do some signal processing research. It could be connected with the first idea as an optimal hardware implementation. But I don't know where to search for a suitable specific task for ML, because a lot of DSP problems are solved with classical analytical mathematical methods.
As you can see, I always see some disadvantages in these ideas. It is stopping me from starting to work. Can you give me advice on how to develop one of these ideas to make it suitable both for engineering and research? Generally, how should I search for research topics?
AI: You're probably aiming too high: a research topic doesn't have to lead to a major breakthrough, and very often it's impossible to know what it leads to before doing it. A Master thesis is not very long so you need to find a research topic which is feasible within the time frame. A common mistake by Master students is to spend too much time on implementation, then botching the analysis and/or writing a poor quality dissertation because there's not enough time left.
Discuss with your advisor, they have the experience and they know what is a good topic.
Read a few random papers in your domain. Why? Because if you read only the top quality papers (yes, of course these are important to read), you can get the impression that you need to achieve the same level of quality. But that's not the case: these top quality papers are 1 in a million and are often done by people who have years of experience behind them. A "regular" research contribution is often just a small improvement or a decent analysis on a specific problem.
Typical process: just pick a general topic/problem you're interested in, then study the state of the art thoroughly (write your state of the art section at this stage), reproduce a few experiments and analyze the results: what can be observed? what are the limitations? what would be a reasonable idea to overcome one of these limitations? Then you implement and evaluate this idea.
Keep in mind that in research what matters is not the complexity of the topic or even the performance, it's the rationale and the method (and of course originality of the work, but even that doesn't matter that much at Master level). |
H: Extracting amount from free text
I want to extract various amounts and tenure of contracts from different contract documents that we have.
For example: Mr xyz, this contract is valid for 3 Months and you have to pay $3000 as agreement fee.
Expected output : 3 Months, $3000
Please note that this is just an example; the sequence, format, currency and tenure are not fixed in the actual problem.
AI: Named Entity Recognition (NER) models should be able to identify money amounts.
Other NLP techniques such as Dependency Parsing or Constituency Parsing can be used to identify the subject of the sentence - or the person the amount is referred to.
For the months amount, I think once you have the other information that's something you could extract with a "hard coded" script.
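For instance, with a pretrained spaCy model the money and duration mentions in the example sentence are typically tagged as MONEY and DATE entities (a sketch; entity coverage depends on the model and on your real documents):

import spacy

nlp = spacy.load("en_core_web_sm")  # pretrained English pipeline
text = ("Mr xyz, this contract is valid for 3 Months "
        "and you have to pay $3000 as agreement fee.")

for ent in nlp(text).ents:
    if ent.label_ in ("MONEY", "DATE"):
        print(ent.text, ent.label_)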
H: Categorical feature as output and perform a classification
I have a database in which the output feature Y is categorical, for example (oversimplification)
A B C Y
1.0 0.2 5.1 Car
3.0 1.1 0.1 Car
7.6 6.9 2.7 Bike
2.5 3.8 0.3 Train
6.1 9.5 8.4 Car
8.4 0.7 5.6 Train
and so on.......
I would like to run a classification algorithm like kNN, Logistic Regression or Random Forest using as the output feature the column Y, i.e., to predict which transport was used.
How could I implement that in python since the output is not numerical?
AI: Assuming that your data is in a pandas dataframe, you could use label encoding:
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df['Y'] = le.fit_transform(df['Y'])
# check labels:
# le.inverse_transform(df['Y'])
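After encoding, any of the classifiers you mention can be fit on the numeric columns, e.g. (a minimal sketch using the column names from the question):

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X = df[["A", "B", "C"]]
y = df["Y"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(clf.score(X_test, y_test))
# le.inverse_transform(clf.predict(X_test)) recovers the original class names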
H: Closed form of Weighted Ordinary Least Squares calculation of the trend line
I would like to know if there is a closed form version of this equation:
$\beta = \frac{n\sum{xy}-\sum{x}\sum{y}}{n\sum{x^2}-(\sum{x})^2}$
But for weighted data, where the weight $w_i$ is the value of the importance I want the individual $i$ in my database to have.
AI: We found the answer:
The formula is really close to the original OLS closed formulation:
$\beta = \frac{n\sum{(wx)(wy)}-\sum{xw}\sum{wy}}{n\sum{(wx)^2}-(\sum{wx})^2}$
There is something that should be warned:
When the sample is small, the method will simply not work at all, resulting in absurd estimates of $\beta$.
Make sure your sample is big enough! |
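A direct numpy transcription of the expression above (just a sketch of the formula as written):

import numpy as np

def weighted_slope(x, y, w):
    # beta = (n*sum(wx*wy) - sum(wx)*sum(wy)) / (n*sum((wx)^2) - (sum(wx))^2)
    x, y, w = map(np.asarray, (x, y, w))
    n = len(x)
    wx, wy = w * x, w * y
    return (n * np.sum(wx * wy) - np.sum(wx) * np.sum(wy)) / \
           (n * np.sum(wx ** 2) - np.sum(wx) ** 2)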
H: Processing data in the right manner in data science
From what I have learned, people say that it is more correct to preprocess the data after splitting it into train/test datasets. My questions are:
1. Does it mean we detect flaws in the data and preprocess it after the splitting? If yes/no, why?
2. Is it okay if I detect flaws in the data before splitting it and then preprocess the flaws after splitting it? If it is / is not, why?
AI: The goals of all these methodological guidelines is to avoid data leakage.
Example: let's imagine we want to classify short messages (e.g. tweets). When inspecting the data we find various kinds of smileys: :-), :|, :-/... At preprocessing stage we replace all smileys found in the data with a special token like <smiley> (or something more specific).
If the detection/replacement is done on the whole data, every occurrence of a smiley in the test set is replaced with <smiley>.
If the detection/replacement is done on the training set, even after preprocessing there might be a few smileys left in the test, because some uncommon ones didn't appear in the training set.
In the first case there is data leakage: we fixed some issues in the test set even though this wouldn't have been possible with actual fresh data (here the variants of smiley that were not seen in the training set). In the second case the test set is "imperfect", i.e. it's exactly as if it was made of "fresh" unseen data, therefore the evaluation will be more realistic.
This example shows why it's always safer to separate the data first, design the preprocessing steps on the training data, then apply exactly the exact same preprocessing steps to the test data.
In practice there can be cases where it's more convenient to apply some general preprocessing to the whole data. The decision depends on the task and the data: sometimes the risk of data leakage is so small that it can be neglected. However it's crucial to keep in mind that even the design of the preprocessing can be a source of data leakage. |
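As a small sketch of the safe order of operations for the smiley example (texts stands for your raw list of messages; the regex is just one possible rule, designed by inspecting the training texts only):

import re
from sklearn.model_selection import train_test_split

train_texts, test_texts = train_test_split(texts, test_size=0.2, random_state=0)

# rule designed by looking at the *training* texts only
smiley_pattern = re.compile(r"[:;]-?[)(|/]")

def preprocess(text):
    return smiley_pattern.sub("<smiley>", text)

train_clean = [preprocess(t) for t in train_texts]
test_clean = [preprocess(t) for t in test_texts]  # same fixed rule, no peeking at the test set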
H: Bayesian regularization vs dropout for basic ann
Does it make sense conceptually to apply dropout to an artificial neural network while also applying Bayesian regularization?
On one hand I would think that technically this should work just fine, but on the other hand, if Bayesian regularization is like finding the MAP estimate given some prior, shouldn't the effect just be that it creates random noise in your outcomes?
AI: It actually makes perfect sense to use both. Gal et al. provided a nice theory on how to interpret dropout through a Bayesian lense. In a nutshell, if you use dropout + regularization you are implicitly minimizing the same loss as for a Bayesian Neural Network (BNN), where you learn the posterior distribution over the network weights given the training data.
You can think of the fully Bayesian approach as taking it one step further on the road from MLE over MAP to Bayesian regression, as the former two provide only point estimates, while the latter gives access to the entire posterior distribution.
If you approximate a BNN with dropout, the introduced noise will take the role of drawing samples from the posterior distribution. The regularization will take the role of a prior, which is the same interpretation you would give it in the MAP-context. I don't think, however, that the represented priors are identical.
Either way, the prior helps to regulate the "spread" of the posterior. If you choose a large regularization parameter, which corresponds to a narrow prior distribution, the model will reduce variance in the posterior distribution as well.
If you only need a simple ANN with dropout and regularization, you probably don't care too much about using the approximate posterior after training, but I think this is still a nice perspective on things. |
H: Reshaping Pandas DataFrame
I am trying to do the following as shown below.
Input:
Letter  Number
A       1
A       2
B       1
B       2
B       3
C       1
C       2
C       3
C       4
Output:
A  B  C
1  1  1
2  2  2
   3  3
      4
I have wrote the following code which works just fine.
import pandas as pd
df = pd.read_excel('Test.xlsx')
df = df.pivot(columns='Letter', values='Number')
list = []
for col in df.columns:
col = df[col].sort_values()
col.index = range(len(col))
list.append(col)
ndf = pd.concat(list, axis=1, sort=False)
ndf = ndf.dropna(axis=0, how='all')
print(ndf)
Is there any other alternative way of doing this without using loops? Any help would be very much appreciated.
AI: Here is the simplistic way.
>>> df = pd.DataFrame(data={'Letter': list('AABBBCCCC'),
>>> 'Number': [1,2,1,2,3,1,2,3,4]})
>>> dfx = df.groupby('Letter').agg({'Number':list})
>>> dfx
Number
Letter
A [1, 2]
B [1, 2, 3]
C [1, 2, 3, 4]
>>> dfx = dfx['Number'].apply(pd.Series)
>>> dfx
0 1 2 3
Letter
A 1.0 2.0 NaN NaN
B 1.0 2.0 3.0 NaN
C 1.0 2.0 3.0 4.0
>>> dfx.T.fillna(0).astype(int)
Letter A B C
0 1 1 1
1 2 2 2
2 0 3 3
3 0 0 4
So basically the sequence is:
aggregate by letter and put all numbers into a single cell by using df.groupby('Letter').agg({'Number':list}).
apply(pd.Series): turn "column with lists" into 2-dimensional array
T to transpose, and cleanup with fillna and type casting. |
H: I have a pandas dataframe and i need to clear all the special characters other than space
Input:
import pandas as pd
df=pd.read_excel("OCRFinal.xlsx")
df['OCR_Text']=df['OCR_Text'].str.replace(r'\W+'," ")
print(df['OCR_Text'])
Output:
The output removes all the special characters along with the spaces, but I don't want the space characters to be removed.
AI: import pandas as pd
df=pd.read_excel("OCRFinal.xlsx")
whitespace = "\r\n\t"
df['OCR_Text']=df['OCR_Text'].apply(lambda x: x.strip(whitespace))
print(df['OCR_Text']) |
H: Interpretable xgboost - Calculate cover feature importance
When trying to interpret the results of a gradient boosting (or any decision tree) one can plot the feature importance.
There are some parameters for this in the xgb API, such as: weight, gain, cover, total_gain and total_cover. I am not quite getting cover.
”cover” is the average coverage of splits which use the feature where coverage is defined as the number of samples affected by the split
I am looking for a better definition of cover and perhaps some pseudocode to understand it better.
AI: A more detailed explanation of cover can be found in the code
cover: the sum of second order gradient of training data classified to the leaf, if it is square loss, this simply corresponds to the number of instances in that branch. Deeper in the tree a node is, lower this metric will be
You can find this here: cover definition in the code
This basically means that for each split, the second order gradient of the specified loss is computed (per sample). Then this number is scaled by the number of samples.
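If you just want to read these numbers off a trained model, the booster exposes them directly (bst below is a placeholder for your trained xgboost.Booster, e.g. from xgboost.train or model.get_booster()):

cover_importance = bst.get_score(importance_type="cover")   # average cover per feature
total_cover = bst.get_score(importance_type="total_cover")  # summed over all splits using the feature
print(cover_importance)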
H: XGBOOST - different result between train_test_split and manually splitting
I am trying to train XGBOOST model.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=43, stratify=y)
When I'm using train_test_split and pass the model X_train, y_train and, for eval_set, X_test, y_test, the model seems to be a very good one.
CM example:
But when I manually split the Dataset :
splitValidationIndex = round(dataset.shape[0]*0.6)
splitTestIndex = round(dataset.shape[0]*0.8)
X_train = X[:splitValidationIndex]
y_train = y[:splitValidationIndex]
Pass it to fit
X_val = X[splitValidationIndex:splitTestIndex]
y_val = y[splitValidationIndex:splitTestIndex]
Pass it to eval_set
X_test = X[splitTestIndex:]
y_test = y[splitTestIndex:]
Check the model prediction on that
that produced a much worse model
example:
What am I missing/doing wrong?
AI: All looks correct, but you have to be aware of some details.
train_test_split splits arrays or matrices into random train and test subsets.
If you do not specify random_state, you will get a different result each time you run your code, because the train and test datasets will contain different rows each run. If you fix a specific value like random_state=42, then the data in the train and test sets stays the same across runs.
So with the first way of splitting, the model is learning on a specific (always the same, randomly drawn) chunk of data, which can give good results; with the second way the model is learning on a different chunk of data, so the results can be worse (by the way, the first result looks like it could be overfitting to that chunk).
In general, no matter which way you choose, you should use cross-validation to create a well-performing model.
H: AUC ROC metric on a Kaggle competition
I am trying to learn data modeling by working on a dataset from Kaggle competition. As the competition was closed 2 years back, I am asking my question here. The competition uses AUC-ROC as the evaluation metric. This is a classification problem with 5 labels. I am modeling it as 5 independent binary classification problems. Interestingly, the data is highly imbalanced across labels. In one case, there is an imbalance of 333:1. I did some research into interpreting the AUC-ROC metric. During my research, I found this and this. Both these articles basically say that AUC-ROC is not a good metric for an imbalanced data set. So, I am wondering why would they be using this metric to evaluate models in the competition? Is it even a reasonable metric in such a context? If yes, why?
AI: As you would have seen in the research, AUC ROC prioritizes getting the order of the predictions correct, rather than approximating the true frequencies.
Usually, like in the credit card fraud problem you link to, the impact of one or two false negatives is more devastating than many false positives. If those classes are imbalanced, like they are in the fraud case, AUC ROC is a bad idea.
It appears that in the competition you are referring to, the hosts are more interested in labeling which comments are more toxic than others rather than rating how toxic they each are. This makes sense since in reality the labels are subjective. |
H: Simple Imputer cannot impute by column
I have X_train with shape (14599, 13). I'm trying to impute NaN with each column's median, but somehow it seems to impute by row, resulting in an error because a row contains a date and other non-numeric values. I already looked up whether SimpleImputer has an axis parameter, but it does not exist. How do I solve this?
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
plt.close('all')
avo_sales = pd.read_csv('avocados.csv')
avo_sales.rename(columns = {'4046':'small PLU sold',
'4225':'large PLU sold',
'4770':'xlarge PLU sold'},
inplace= True)
avo_sales.columns = avo_sales.columns.str.replace(' ','')
plt.scatter(avo_sales.Date,avo_sales.TotalBags)
x = np.array(avo_sales.drop(['TotalBags'],1))
y = np.array(avo_sales.TotalBags)
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
imp = SimpleImputer(strategy='median')
X_train = imp.fit_transform(X_train)
the output
ValueError: Cannot use median strategy with non-numeric data:
could not convert string to float: '12/31/2017'
AI: I don't think it's trying to impute across rows; rather, it's trying to impute in the dates column. You may want to use ColumnTransformer to select which columns to impute. |
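A sketch of what that could look like (the column positions are hypothetical; pick the numeric columns of your own data):

from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer

numeric_cols = [1, 2, 3, 4]  # hypothetical positions of the numeric columns in x
ct = ColumnTransformer(
    transformers=[("median_imputer", SimpleImputer(strategy="median"), numeric_cols)],
    remainder="passthrough",  # leave the date/string columns untouched
)
X_train = ct.fit_transform(X_train)
X_test = ct.transform(X_test)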
H: How to use Keras predict_generator() for segmentation output?
Below is the code I'm using for segmentation mask prediction after using fit_generator(...) on model named m :-
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_directory('result/test', class_mode=None, seed=1, color_mode="grayscale", target_size=(256,256), batch_size=1)
test_generator.reset()
results = m.predict_generator(test_generator,steps=17, verbose=1)
It runs without any errors but how do I visualise the predicted segmentation masks from results?
AI: Okay, I checked that m.predict_generator returns a numpy array, so I just plotted it using plt.imshow(). Obviously I had to do some slicing and squeezing on it.
Sorry if it was too trivial |
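For reference, something along these lines (the 0.5 threshold is an assumption for turning probabilities into a binary mask):

import numpy as np
import matplotlib.pyplot as plt

mask = np.squeeze(results[0])        # drop batch/channel axes -> (256, 256)
plt.imshow(mask > 0.5, cmap="gray")  # binarise the predicted probabilities
plt.show()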
H: Purpose of validation data NN
Aside from using validation data to tune the hyperparameters is there any other benefit to including validation data to the model?
All I ever read about is it being used to tune hyperparameters and check for overfitting. Is the checking for overfitting separate from tuning the hyperparameters?
Training: Tune the parameters (weights and biases)
Validation: Tune hyperparameters
Test: Evaluate the model
So, if we are NOT tuning the hyperparameters, the validation set is pointless?
AI: The whole idea of a validation set is that the model does not know about this data, so you can get an unbiased estimation of the model's performance. Then based on this unbiased estimation you find the best hyperparameters for the model. The problem is that finding the hyperparameters is in itself a way of training your model. So with optimized hyperparameters the model starts to overfit on the validation set. That is why, to check the real accuracy of the model, you actually need a different part of the data that your model never saw. This is the test portion of the data.
Therefore in your case, when you do not have any hyperparameters to tune, you can just use the division into train and test.
If you want a more reliable estimate of your model's accuracy, then it is better to use cross validation instead of just dividing into train and test. Neural networks usually do not do full cross validation because it means increasing the computation time several times (5 times for 5-fold cross validation).
With hyperparameters the ideal way is to do a double cross validation. One for validation set and one for test set. This is too expensive computationally so it is only used on very simple models, like a ridge regression.
Also very few models, that I know of, do not have hyperparameters. And usually those that do not have hyperparameters perform poorly compared to the ones that have. Ridge regression is often better than linear regression. Neural networks with variable number of layers perform better than fully automated neural networks. |
H: How can I provide an answer to Neural Network skeptics?
After giving several talks on NNs, I always have a skeptic who wants a real measure of how good the model is. How do you know the model is truly accurate?
I explain the use of test data etc. to evaluate the total error; however, there is always someone who wants to know about the error associated with each weight.
Can anyone enlighten me as to how I can satisfy these types of questions?
It has become a real issue.
AI: Neural networks are essentially a black box, especially big ones. You could know even how it is designed and how it is training, but you really do not know how it is working in the end. In my work lots of people want to understand the model instead of using "black box" models. This is the reason why companies choose to use linear regressions and polynomial models instead of using stronger machine learning algorithms, like LightGBM and Neural Networks.
I never found a true answer to this question. Some engineers are taught that you cannot use models that you cannot understand. Therefore every model that is a "black box" is not usable for them. This means that most of machine learning models are magic and heresy for them. Though take this with a grain of salt, this is my subjective experience. Sometimes as the time passes these people are more willing to use data science methods because it becomes mainstream. They start to trust the methods, because others use them.
The situation is different on the higher level. For high-tier managers it matters less how to interpret the model, but more what results it could give you. They are more willing to try, especially if there is a hype of something, like "artificial intelligence", "data science".
As a result, I could only give you an advice to find some good support higher in the hierarchy of the company. Someone who believes in data science more and who has more power in the company.
In data science community the performance of the model on the test dataset is one of the most important things people look at. Just look at the competitions on kaggle.com. They are extremely focused on test dataset and the performance of these models is really good.
The only problem with performance on the test dataset is that it depends on the data in the test dataset. If in real life you will have completely different data that will be outside of the bounds of the test dataset, then the test dataset will not be able to give a good approximation of the performance of the model in real life. |
H: searching for clouds (in the sky) images big dataset
I'm looking for a big dataset of ground-based images of clouds (in the sky). I need tens of thousands of images.
It is important that the images will be ground based and not from satellite/ flights.
I've tried to search and so far found datasets of hundreds/thousands of images, but not the amount I need.
I'll appreciate your help.
Thanks!
AI: You could scrape Google, Bing, Yandex, Baidu for images and then do manual/semi-manual filtering of the data (semi with machine learning). Google gives approximately 700 images, Yandex gives 1500. You could probably get about 10000 from the search systems.
Include search on Pinterest and Twitter. The feed of the last one was endless for me (probably more than 5000 photos, but you should check).
You could also make a crawler for Instagram and Facebook. Most likely Instagram could cover all your needs. But you need to check their license agreement. And the crawler is not so easy to program. |
H: Different results obtained for OneVsOneClassifier (or OneVsRestClassifier) when using ordinary KFold and StratifiedKFold cross validation
When I fitted a OneVsOneClassifier (or OneVsRestClassifier), I noticed I obtained different results when I used ordinary KFold and StratifiedKFold cross validation. The testing set performance is much lower when ordinary KFold is used compared to when StratifiedKFold is used.
Questions:
1) Am I right to say for OneVsOneClassifier (or OneVsRestClassifier) strategy to work, the proportion of each class needs to be preserved in training and testing sets?
2) Is it better to use ordinary KFold or StratifiedKFold when I want to use OneVsOneClassifier (or OneVsRestClassifier)?
Many thanks.
AI: TL;DR
Use StratifiedKFold and the best performing model (either OneVsOneClassifier or OneVsRestClassifier).
Splitting your training and test set.
Below are the two approaches you could use for splitting your train and test set: KFold and StratifiedKFold.
The main difference between the two is that StratifiedKFold will ensure that the class distributions are always the same in train and test. If you would use simple KFold, it might be possible that a class will appear in the test set, but never in the training set. If that happens, all the samples from that class will get wrongly predicted. So, StratifiedKFold at least gives your model a chance to see one sample from each class.
Regardless of the model you pick, use StratifiedKFold to evaluate your model.
OneVsOne or OneVsRest
It is difficult to say which model would perform best because it will depend on your data's distribution. So, I would suggest that you evaluate both and pick the best performing one.
What is interesting is that you get to choose what "best performing" means. It could be the mean precision/recall/F1 score in a cross-validation setting, but it could also be the model that uses less memory, or the fastest one. In practice it is a mix of all these things, with trade-offs between them. In general, the OneVsOneClassifier performs better in F1 score but is slower than the OneVsRestClassifier; please note the words "in general". Perform your own tests and see which one works for you.
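A minimal sketch of such a comparison on a toy dataset, with logistic regression as the base estimator (swap in your own data, estimator and scoring):
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

X, y = load_iris(return_X_y=True)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

for name, clf in [("OneVsOne", OneVsOneClassifier(LogisticRegression(max_iter=1000))),
                  ("OneVsRest", OneVsRestClassifier(LogisticRegression(max_iter=1000)))]:
    scores = cross_val_score(clf, X, y, cv=cv, scoring="f1_macro")
    print(name, scores.mean())  # pick the winner by whatever criterion matters to you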
I usually prefer the OneVsOneClassifier because it models every class against each other class. This way, classes are self-contained and finding boundaries around that space is more convenient. With a OneVsRestClassifier you need to model the "rest" class, which can be really complex since it is made up of several classes, and that can become problematic.
H: Understanding Terminology in Goodfellow's paper on GANS
I am trying to understand Ian Goodfellow et al's paper Generative Adversarial Nets here.
In section 3 the author's write:
The adversarial modeling framework is most straightforward to apply
when the models are both multilayer perceptrons. To learn the
generator’s distribution $p_g$ over data $x$, we define a prior on input
noise variables $p_z(z)$, then represent a mapping to data space as $G(z; \theta_g)$, where $G$ is a differentiable function represented by a multilayer perceptron with parameters $\theta_g$.
I have a few questions.
1) From what I understand, the generator is a multi-layer perceptron input with a random vector sampled from a predefined latent space, (where wikipedia gives the example of a multivariate normal distribution). What does it mean to learn the generator's distribution $p_g$ over data $x$?
2) What does "we define a prior on input noise variables $p_z(z)$" mean? I understand that in Bayesian statistical inference, that a prior probability distribution, called a prior for short, gives a probability distribution on outcomes before some evidence is collected.
Any insights appreciated.
AI: Question 1
1) From what I understand, the generator is a multi-layer perceptron input with a random vector sampled from a predefined latent space, (where wikipedia gives the example of a multivariate normal distribution). What does it mean to learn the generator's distribution $p_g$ over data x?
Generally, supervised learning means to learn a distribution $p_{model}(y\mid x)$ where $x$ are your examples and $y$ your labels. However, in unsupervised learning it is not about labels but instead you are interested in unconditional probabilities and train a model to learn $p_{model}(x)$ (e.g. see p.485/486 in "The Elements of Statistical Learning" by Hastie et al). When applying GANs this $model$ is the generator $g$, i.e. you are learning $p_{g}(x)$. Accordingly, in the original paper the authors state:
The generator $G$ implicitly defines a probability distribution $p_g$ as the distribution of the samples $G(z)$ obtained when $z ∼ p_z$.
Question 2
2) What does "we define a prior on input noise variables $p_z(z)$" mean? I understand that in Bayesian statistical inference, that a prior probability distribution, called a prior for short, gives a probability distribution on outcomes before some evidence is collected.
Prior means we are talking unconditional probabilities, i.e. $p(z)$ is not conditioned on anything related to the task and examples on hand. While for example, $p_d(y\mid x)$ (the distribution learned by the Discriminator) is conditioned on $x$. Often $z$ is sampled from the standard normal distribution $\mathcal{N}(0,1)$.
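To make this concrete, here is a minimal PyTorch sketch (the latent dimension, architecture and output size are arbitrary assumptions) in which $z$ is drawn from the standard normal prior and mapped to data space by a small MLP generator:
import torch
import torch.nn as nn

latent_dim, data_dim = 100, 784   # e.g. flattened 28x28 images
G = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

z = torch.randn(16, latent_dim)   # z ~ p_z(z) = N(0, I), the prior
x_fake = G(z)                     # samples implicitly drawn from p_g(x)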
In "NIPS 2016 Tutorial: Generative Adversarial Networks" the authors provide a good summary of the overall concept of $G$:
The generator is simply a differentiable function $G$. When
$z$ is sampled from some simple prior distribution, $G(z)$ yields a sample of $x$
drawn from $p_{model}$. Typically, a deep neural network is used to represent $G$. |
H: Does k fold cross validation become less useful when number of observations is very large?
As seen in the accepted answer for "variance of k-fold cross validation", the simulation shows that k-fold CV has the same test error rate for different values of k when n=200.
Does this mean that k-fold validation is likely to be only as good as a single hold-out validation set? (assuming I have abundant data to make up for the high bias of the hold-out approach)
Apart from the bias, the problem with the hold-out validation approach as described in the ISL book is that the test error rate is sensitive to the random splitting of data between train and validation sets. My intuition is that, with very high n (and well-spread-out data), this sensitivity to the random split is much less likely to be a problem.
AI: Yes, you are right that when the number of observations is very large, k-fold cross validation (CV) is less useful. Let's look at why this is so:
1) A very high number of observations implies a long training and validation time. The number of observations is already large for training and validating the model once, and now we are demanding that this be done k times. This is a huge burden on resources, which is why in the deep learning regime we generally do not use k-fold CV: the amount of data needed to train good neural networks is very high compared to traditional ML algorithms.
2) The higher the number of observations, the more data is available for the validation set. This makes it more likely that the sampled data points represent the original distribution: as you know, the more data we sample, the better we approximate the original distribution.
For these reasons, k-fold CV is inefficient when the number of observations is very high, so a single hold-out validation set will do the job.
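A minimal sketch of such a hold-out evaluation on a large synthetic dataset (the model, dataset and split fraction are arbitrary choices):
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.1, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.score(X_val, y_val))   # with 20000 validation points the estimate is already quite stable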
H: Scikit-learn OneHotEncoder effect on feature selection
If I need to run feature selection on my dataset, isn't it problematic to use OneHotEncoder? Couldn't it then decide to remove one of the encoded columns? How should I deal with this? Thank you.
AI: Yes, that can happen. It means that the corresponding category has no importance for the target.
Imagine a categorical feature with a lot of categories (high cardinality). Maybe one of the categories has no influence on the target, so if you do feature selection that one-hot column has a high chance of getting dropped. This is a normal thing; just make sure the dropped column really does not contain any information at any point of the dataset.
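A minimal sketch of this behaviour (the toy data and target are made up; the chi-squared selector is just one of many possible selection methods):
import pandas as pd
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({"fruit": ["Apple", "Orange", "Pear", "Melon"] * 25})
y = (df["fruit"] == "Pear").astype(int)          # hypothetical target

enc = OneHotEncoder()
X = enc.fit_transform(df[["fruit"]]).toarray()

selector = SelectKBest(chi2, k=2).fit(X, y)
print(enc.get_feature_names_out())               # all four one-hot columns (scikit-learn >= 1.0)
print(selector.get_support())                    # some of the encoded columns get dropped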
H: How to adaptively sample n-dimensional data and build an optimum training set
My input space is at least 10 dimensional (after reducing it by various component analysis such as PCA) and the output space is 4 dimensional. I am building a neural network that works something like a function approximator which takes the above 10D data as input (10 neurons in input) and spits out 4D data as output (4 neurons at the output). In between there are hidden layers.
I need to build a good training set that covers all the possible values of input and output. Although it may seem like there is a very large number of combinations possible for 10D input + 4D output, in reality the input/output is restricted by real-world constraints. The problem I'm facing is this: in some parts of the training set I need to sample the data with higher resolution, in other parts I can get away with sampling at lower resolution. I could obviously sample the entire dataset at high resolution, but then the number of samples is more than 10 trillion, and I know that most of the dataset varies slowly, so sampling those parts at lower resolution is enough. My guess is that, if I could sample it properly, I could get away with 4-5 orders of magnitude fewer samples.
My question is: what should I do/build in order to have something that will adaptively sample a high-dimensional dataset (change resolution whenever necessary) and give me an optimum training set?
NB: one of my colleagues advised me to use the Markov Chain Monte Carlo method. I am not sure whether it is the best way to go. Please share your opinions.
AI: Take a look at active learning.
https://en.m.wikipedia.org/wiki/Active_learning_(machine_learning)
In this approach you can train with a subset of samples and add more depending on what samples are most informative for the model.
Alternatively you could try reducing the number of samples by clustering and choosing a small number of samples from each cluster. |
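For the clustering idea, here is a minimal sketch; the number of clusters, the samples kept per cluster and the random stand-in inputs are arbitrary assumptions to replace with your own data:
import numpy as np
from sklearn.cluster import MiniBatchKMeans

X = np.random.rand(100000, 10)                     # stand-in for your 10D inputs

kmeans = MiniBatchKMeans(n_clusters=1000, random_state=0).fit(X)

rng = np.random.default_rng(0)
selected = []
for c in range(kmeans.n_clusters):
    idx = np.flatnonzero(kmeans.labels_ == c)
    if len(idx):
        # keep a handful of samples per cluster so every region of the space is represented
        selected.extend(rng.choice(idx, size=min(5, len(idx)), replace=False))

X_subset = X[selected]                             # a much smaller but diverse training set
print(X_subset.shape)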
H: Error in numpy array assignment
I'm trying to upload 17 images into a 4d numpy array, each image size is (256,256,1), so basically I'm using the 0th dimension for collection of different images. Following is my code:
import numpy as np
test_ip=np.zeros(shape=(17, 256, 256, 1))
count=0
for img in image_generator1:
test_ip[count,:,:,:]=img
count+=1
But it outputs an error:
IndexError: index 17 is out of bounds for axis 0 with size 17
Also, I printed the shape of img inside the for loop and instead of (256,256,1) it is (1,256,256,1).
Any help is highly appreciated.
AI: The index of your test_ip array goes from 0 to 16, so 17 is indeed out of bounds. The error appears because the generator yields more than 17 images (many image generators loop endlessly), so count eventually reaches 17.
You could add a try/except:
for counter, img in enumerate(image_generator1):
    try:
        test_ip[counter] = img  # (1, 256, 256, 1) broadcasts into the (256, 256, 1) slot
    except IndexError:
        print(f"Filled the array with {counter} images")  # f-string requires python >= 3.6
        break  # stop once the array is full, otherwise an endless generator keeps looping
Or, if you know that the generator is finite, you can stack the images directly; np.concatenate merges the leading batch dimension of each (1, 256, 256, 1) array into the first axis:
test_ip = np.concatenate([im for im in image_generator1], axis=0)
H: What is done first, cross validation or grid search?
When I have the data set to train a model with SVM, which procedure is performed first, cross validation or grid search? I have read this in a couple of books but I don't know in what order all this should be done. If cross-validation is first performed, what hyperparameters do I use there if I have not found the optimal values provided by the grid search? In addition, throughout this procedure, where should the confusion matrix be calculated?
Thanks in advance
AI: Well, grid search involves finding the best hyperparameters. Best according to what data set? A held-out validation set. If that's what you mean by cross validation, then they necessarily happen simultaneously.
It doesn't really make sense to do something called cross validation before testing hyperparams - indeed, what would you be evaluating?
CV as in k-fold cross validation can also happen within each model fitting process in the search, to produce a better estimate of the loss (and its variance, which is useful in more sophisticated tuning procedures). I think this is less usual but valid.
It's possible to use CV when fitting the final model after hyperparameter search. It might give you a better estimate of the loss, or confusion matrix, as you compute many of them. But each model you fit isn't using all available data. I think it's probably more conventional to take the best model's parameters and loss / confusion matrix, from the fitting process, as an estimate of generalization, and then refit the final model on all data. This means no CV at that stage. |
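A minimal sketch of the usual order (the dataset and parameter grid are just placeholders): grid search with an inner 5-fold CV on the training data, then a confusion matrix for the refitted best model on a held-out test set.
from sklearn.datasets import load_iris
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# every hyperparameter candidate is scored with 5-fold CV on the training data
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]}, cv=5)
search.fit(X_train, y_train)

print(search.best_params_)
print(confusion_matrix(y_test, search.predict(X_test)))  # generalization estimate on held-out data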
H: Rendered Image Denoising
I am learning about "Image Denoising using Autoencoders". So, now I want to build and train a model. Hence, when I read into how Nvidia generated the dataset, I came across: "We used about 1000 different scenes and created a series of 16 progressive images for each scene. To train the denoiser, images were rendered from the scene data at 1 sample per pixel, then 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, and 131072 samples per pixel."
I was trying to understand-
1) what is meant by rendering images at n samples per pixel?
2) How to do this in python to generate the dataset?
I have read some articles regarding this but could not form a confident opinion from a Data Science perspective.
https://area.autodesk.com/tutorials/what-is-sampling/
Any leads would be much appreciated! Thanks
AI: Your link is to a paid course :) In ray tracing, "n samples per pixel" means that n rays are traced through every pixel and their results averaged; with too few samples the Monte Carlo estimate has not converged and the render looks noisy and grainy. This page, with example pictures, answers your question: https://chunky.llbit.se/path_tracing.html
2) Ray tracing in Python is hard... but not impossible; google for "python ray tracing module". Something that looks close can be produced much more easily by adding synthetic noise to clean images, e.g. https://stackoverflow.com/questions/22937589/how-to-add-noise-gaussian-salt-and-pepper-etc-to-image-in-python-with-opencv. Keep in mind, though, that in real ray-traced images the noise level varies across the image depending on surface slope and environment.
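A minimal sketch of that second approach, adding Gaussian noise to a clean render with OpenCV (the file name and noise level are arbitrary assumptions):
import cv2
import numpy as np

img = cv2.imread("render.png").astype(np.float32) / 255.0         # hypothetical clean render
noise = np.random.normal(0.0, 0.1, img.shape).astype(np.float32)  # sigma=0.1 chosen arbitrarily
noisy = np.clip(img + noise, 0.0, 1.0)
cv2.imwrite("render_noisy.png", (noisy * 255).astype(np.uint8))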
If you still want genuinely ray-traced noisy images, it is better to look for tutorials for 3D modelling programs, e.g. "ray tracing in 3D Studio MAX tutorial".
H: How can I find the starting point of skewed data in python?
I have a list like this,
import random
import seaborn as sns
years = []
for i in range(1000):
if i % 100 == 0:
val = random.randint(1900, 2000)
else:
val = random.randint(2000, 2021)
years.append(val)
sns.distplot(years);
Here is output graph,
As you can see, the density is concentrated after 2000; there is not much data before that point. My question is: how can I find this point in skewed data? Is there a formula that gives this? Any ideas? Thanks in advance.
AI: Depending on what exactly you need, I would suggest starting by removing the data with low counts:
Bin your data (equivalent to what you did by plotting the histogram)
Count the value in each bin
Look at the distribution of such values.
Remove the lowest counts
Get the cut off as the min of what is remaining
Try different bin size
That should cover getting the value.
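A minimal numpy sketch of those steps (the bin count and the mean-count threshold are arbitrary choices you should tune):
import numpy as np

years = np.asarray(years)                 # the list built in the question
counts, edges = np.histogram(years, bins=30)

# keep only bins that are clearly dense; the mean count is a crude threshold
dense = counts > counts.mean()
cutoff = edges[:-1][dense].min()          # left edge of the first dense bin
print(cutoff)                             # roughly 2000 for the example above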
Then you may want to make some assumptions about the underlying process and run a statistical test on the data before/after the cutoff to see whether the difference is significant.
H: how to create a searchable tree on Persian text?
I want to clean my huge text data of stop words. I already have the stop-word data that is provided at the link below. It seems to me that if I have a pre-built tree of stop words, I could save a lot of time: I search each word of the text in this pre-built tree; if the word is in the tree I delete it from the text, if not I keep it.
This should reduce the complexity from O(n * l) to O(n * log(l)).
This is my stop-words
If you have better suggestions than the pre-built tree search, I would be grateful if you shared them with me.
AI: Finally, I've found this answer using a trie tree, but I wonder if you have a better option:
Reading the data:
import pandas as pd

# reading stop-word data
stopwords = pd.read_csv('STOPWORDS',header=None)
Trie tree:
# creating the trie tree
class TrieNode:
# Trie node class
def __init__(self):
self.children = [None]*15000
# isEndOfWord is True if node represent the end of the word
self.isEndOfWord = False
class Trie:
# Trie data structure class
def __init__(self):
self.root = self.getNode()
def getNode(self):
# Returns new trie node (initialized to NULLs)
return TrieNode()
def _charToIndex(self,ch):
# private helper function
# Converts key current character into index
        # offsets from '!' so that Persian unicode characters fit in the children table
return ord(ch)-ord('!')
def insert(self,key):
# If not present, inserts key into trie
# If the key is prefix of trie node,
# just marks leaf node
pCrawl = self.root
length = len(key)
for level in range(length):
index = self._charToIndex(key[level])
# if current character is not present
if not pCrawl.children[index]:
pCrawl.children[index] = self.getNode()
pCrawl = pCrawl.children[index]
# mark last node as leaf
pCrawl.isEndOfWord = True
def search(self, key):
# Search key in the trie
# Returns true if key presents
# in trie, else false
pCrawl = self.root
length = len(key)
for level in range(length):
index = self._charToIndex(key[level])
if not pCrawl.children[index]:
return False
pCrawl = pCrawl.children[index]
return pCrawl != None and pCrawl.isEndOfWord
Example of use:
# Input keys (the Persian stop words loaded above)
keys = list(stopwords.loc[:,0])
output = ["Not present in trie",
"Present in trie"]
# Trie object
t = Trie()
# Construct trie
for key in keys:
t.insert(key)
print("{} ---- {}".format("از",output[t.search("از")]))
Output:
از ---- Present in trie |
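To actually strip the stop words from a document, split it on whitespace and keep only the words the trie does not contain. A minimal sketch, where the sample sentence is just an illustration:
text = "او از خانه رفت"   # hypothetical Persian sentence
cleaned = " ".join(w for w in text.split() if not t.search(w))
print(cleaned)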
H: What does the number after a machine learning model name mean?
I'm not sure if this is off-topic, but I'm posting here anyway.
So I saw that lots of machine learning models have an ID after their names, for example resnet101, resnet152, densenet201, etc. What exactly do those numbers 101, 152 and 201 mean? And how are they determined?
AI: As @Icrmorin said, the naming conventions may vary, but for the examples you gave, ResNet and DenseNet, the numbers in the name correspond to the number of layers:
DenseNet
Table 1 in the DenseNet paper provides an overview of the architectures.
For example, the DenseNet-121 column shows that this network has $1+6*2+1+12*2+1+24*2+1+16*2+1 = 121$ layers, and that is where the name is derived from.
ResNet
The ResNet paper provides a similar overview in its Table 1.
Again, you can see how the names are derived: for example, ResNet-18 has $1+2*2+2*2+2*2+2*2+1=18$ layers.
Note that in both papers only conv. and dense layers are counted but not the pooling layers. |
H: (Basic) statistics
I got a misunderstanding regarding basic statistics (I think) but I can't get my head around that:
I did an online survey regarding the usage of some application. The user could answer 1 (I don't know that application), 2 (I know that) or 3 (I use that).
Now I want to know, how many applications the average user uses:
df['iuse'].mean()
where df['iuse'] was calculated, per respondent, as the number of questions answered with 3.
The result is something like 2.2.
Now, I want to know, how many applications a user of application X uses:
f = df['q1_1'] == 3 # For application 1, filter all answers where the user uses that
df.loc[f,'iuse'].mean()
That returns a number above 2.2 for every single app. In short (and ugly):
[(4.235294117647059, 85),
(4.966666666666667, 60),
(2.7495274102079397, 1058),
(4.609195402298851, 87),
(6.391304347826087, 23),
(4.122950819672131, 122),
(4.850746268656716, 67),
(3.1860068259385668, 586),
(2.72192513368984, 1122),
(3.520231213872832, 346),
(4.276595744680851, 94)]
(left is the mean, and right is the count of answers for that application)
Now: Why is the usage overall less than the usage when seen from a specific application? I would expect to have at least some numbers below the total mean, but they are all above that. I'm confused :(
Thanks for any pointers and help
Klaus
AI: This is not necessarily unexpected (or broken). Imagine that users tend to use none or all of the applications. For a specific example, suppose 90 users use no apps at all, and 10 use all (say) 11 of them.
Then the average apps used by a user is $(90\cdot0+10\cdot11)/100=1.1$, but for each app, the average app-usage of a user who uses that app is $(10\cdot11)/10=11$. (In your case, for a sanity check maybe compute the number of users who use no apps.) |
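A small pandas sketch of this extreme case (the column names mirror the question; the data is made up) reproduces both numbers:
import numpy as np
import pandas as pd

apps = [f"q1_{i}" for i in range(1, 12)]
# 90 users who use no app (answer 1) and 10 users who use all 11 apps (answer 3)
df = pd.DataFrame(np.r_[np.full((90, 11), 1), np.full((10, 11), 3)], columns=apps)
df["iuse"] = (df[apps] == 3).sum(axis=1)

print(df["iuse"].mean())                       # 1.1 over all users
print(df.loc[df["q1_1"] == 3, "iuse"].mean())  # 11.0 among users of application 1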
H: Does it make sense to use a tfidf matrix for a model which expects to see new text?
I'm training a model to classify tweets right now. Most of the text classification examples I have seen convert the tweets into tf-idf document term matrices as input for the model. However, this model should be able to identify newly collected tweets without retraining. Does it make sense to use tf-idf in this context? What is the correct way to turn tweets into feature vectors in this task?
AI: The problem is not really "new text", since by definition any classification model for text is meant to be applied to some new text. The problem is out of vocabulary words (OOV): the model will not be able to represent words that it didn't see in the training data.
The most simple way (and probably the most standard way) to deal with OOV in the test data is to completely remove them before representing the text as features.
Naturally OOV words can be a serious problem, especially in data such as Twitter where vocabulary evolves fast. But this issue is not related to using TF-IDF or not: any model trained at a certain point in time can only take into account the vocabulary in the training data, it cannot guess how future words are going to behave with respect to the class. The only solution for that is to use some form of re-training, for instance semi-supervised learning. |
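This is also what scikit-learn's TfidfVectorizer does out of the box: fit it on the training tweets, transform new tweets with the same fitted vectorizer, and OOV terms are silently dropped. A minimal sketch (the tweets are made up):
from sklearn.feature_extraction.text import TfidfVectorizer

train_tweets = ["great game tonight", "terrible service again"]
new_tweets = ["great newword tonight"]        # "newword" is out of vocabulary

vec = TfidfVectorizer()
X_train = vec.fit_transform(train_tweets)
X_new = vec.transform(new_tweets)             # OOV terms are simply ignored
print(vec.get_feature_names_out())            # requires scikit-learn >= 1.0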
H: Why are the weights of my first layer and last layer in the CNN change while the middle layers don't?
The weights of my first and last convolution layers do change in a noticeable way. However, the rest of my convolution layers, in the middle, do not.
I should add that the biases of all convolution layers change noticeably; however, the kernel weights of the middle layers (as opposed to the first and last layers) do not change significantly.
Has anyone else encountered this?
Can anyone suggest why this is might be happening?
Can I rule out vanishing gradients...?
Any ideas for resolution (if this is at all a problem)?
Thanks in advance.
I'm happy to add details based on comments.
AI: It's of course data-specific, but this behaviour is most likely not harmful. What would indicate dangerous behaviour is a wildly fluctuating CV metric, learning rate, etc.
You can think of each CNN layer's task as learning specific features/peculiarities of an image. The first layers catch the most general patterns, edges, etc. The middle layers are more specific and look at finer details, but nothing too specific yet. The final layers are there for the small details that really discriminate between the classes and drive the right prediction.
So in your case the weights are not updated significantly because the information in the middle layers is not that significant for the task.
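If you want to rule out vanishing gradients, you can compare per-layer gradient norms after a backward pass. A minimal PyTorch sketch on a toy CNN (the architecture and random data are placeholders for your own model and batch):
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, 3), nn.ReLU(),
    nn.Conv2d(8, 8, 3), nn.ReLU(),
    nn.Conv2d(8, 8, 3), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 22 * 22, 10),
)

x = torch.randn(4, 1, 28, 28)
loss = nn.CrossEntropyLoss()(model(x), torch.randint(0, 10, (4,)))
loss.backward()

for name, p in model.named_parameters():
    # consistently tiny norms in the middle conv layers would hint at vanishing gradients
    print(name, p.grad.norm().item())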