H: LSTM returns the same results for different inputs
Hey everyone,
I am working on a LSTM network in TensorFlow that predicts the values of the price-index of different product-categories in a month, based on those same values of the 12 months before. Unfortunately I ran into some issues while trying to predict. My dataset (data) essentially looks like this:
              Product 1     Product 2     ...   Product 128   Product 129
              Price-index   Price-index         Price-index   Price-index
Month 1       99.1          99.5          ...   100           100.2
Month 2       100.6         101           ...   100.3         100.6
...           ...           ...           ...   ...           ...
Month 305     150           124           ...   135           136
Month 306     151           126           ...   137           136.2
I did some preprocessing but did not do any rescaling/standardization, as the input/output values are all indices starting from 100 and are therefore on the same scale:
# Split into training and test set
train_size = int(len(data) * 0.9)
test_size = len(data) - train_size
train, test = data.iloc[0:train_size], data.iloc[train_size:len(data)]
print(len(train), len(test))

# Create dataset of sliding 12-month windows
def create_dataset(X, time_steps=1):
    Xs, ys = [], []
    for i in range(len(X) - time_steps):
        v = X.iloc[i:(i + time_steps)].values
        Xs.append(v)
        ys.append(X.iloc[i + time_steps])
    return np.array(Xs), np.array(ys)

time_steps = 12
X_train, y_train = create_dataset(train, time_steps)
X_test, y_test = create_dataset(test, time_steps)
print(X_train.shape, y_train.shape)
Which produces the following output:
(263, 12, 129) (263, 129)
My model looks like this:
tf.keras.backend.clear_session()
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(50, input_shape=(X_train.shape[1], X_train.shape[2])),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(129)
])
model.compile(loss='mae', optimizer='adam')
history = model.fit(X_train, y_train, epochs=500, batch_size=270,
                    validation_data=(X_test, y_test), verbose=1, shuffle=False)
It gives no error and trains. However, when it is finished training and I try to use predict to get the predicted values for X_train (to plot them against the actual values) I get the following output:
array([[3.1375432, 2.9852755, 3.4141188, ..., 3.3334997, 3.0094063,
2.5739233],
[3.1375432, 2.9852755, 3.4141188, ..., 3.3334997, 3.0094063,
2.5739233],
[3.1375432, 2.9852755, 3.4141188, ..., 3.3334997, 3.0094063,
2.5739233],
...,
[3.1375432, 2.9852755, 3.4141188, ..., 3.3334997, 3.0094063,
2.5739233],
[3.1375432, 2.9852755, 3.4141188, ..., 3.3334997, 3.0094063,
2.5739233],
[3.1375434, 2.9852755, 3.4141188, ..., 3.3334997, 3.0094063,
2.5739233]], dtype=float32)
Which essentially gives me the exact same output for the different months of prediction. Furthermore, if I try to predict for one month, for example X_train[1] I get the following error from the network:
ValueError: Input 0 of layer sequential_11 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 129)
However, the shape of X_train[1] is (12, 129), the same shape that the network should expect. Can anyone point me in the right direction and show me what I am doing wrong?
Thanks!
AI: It seems like scaling my data helped (a minimal sketch of the scaling step is included after the link below). I refer to the following thread on GitHub:
https://github.com/keras-team/keras/issues/1727 |
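A minimal sketch of that scaling step, assuming scikit-learn's MinMaxScaler fitted on the training split only and pandas imported as pd; the variable names follow the question's code:

from sklearn.preprocessing import MinMaxScaler

# Fit the scaler on the training data only, to avoid leaking test statistics
scaler = MinMaxScaler()
train_scaled = pd.DataFrame(scaler.fit_transform(train),
                            columns=train.columns, index=train.index)
test_scaled = pd.DataFrame(scaler.transform(test),
                           columns=test.columns, index=test.index)

# Build the windows from the scaled frames, then train as before
X_train, y_train = create_dataset(train_scaled, time_steps)
X_test, y_test = create_dataset(test_scaled, time_steps)

# Predictions come back in the scaled space; invert them before plotting
preds = scaler.inverse_transform(model.predict(X_train))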
H: In scikit-learn's LDA implementation, how can I sort the topics by frequency over the entire corpus?
I've used scikit-learn to perform LDA topic modeling, and I'd ultimately like to sort the topics by saliency/frequency over the entire corpus, but I'm unsure how to do so.
I've used pyLDAvis, though there appears to be no way of extracting the frequencies and order, which I need to create certain visualisations.
This is basically what I've done so far in terms of topic modeling.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Vectorize text data
vectorizer = CountVectorizer()
tf = vectorizer.fit_transform(df)

# Fit LDA model
LDA = LatentDirichletAllocation(n_components=k)
LDA.fit(tf)
From what I can gather, the best solution might be to use the document-topic distribution generated from LDA.transform(x), perhaps by manipulating the probabilities, but I don't know how. Any suggestions?
# Create document-topic distribution
doc_topics = LDA.transform(tf)
Thank you!
AI: You should indeed use the distribution across topics by document $p(t|d)$. There are two options:
The "classification" option: for every document select the topic which has the highest probability, i.e. label every document with the most likely topic. Then simply count the number of documents for every topic.
The "probabilistic" option: for every topic $t$ sum the probability $p(t|d)$ across all the documents $d$. This means counting the proportion of each document which is considered as topic $t$.
Assuming there is a quite large number of documents, the two options usually produce similar results. |
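A hedged sketch of both options, assuming numpy imported as np and doc_topics = LDA.transform(tf) as in the question:

import numpy as np

doc_topics = LDA.transform(tf)                    # shape: (n_documents, n_topics)

# "Classification" option: label each document with its most likely topic, then count
top_topic = doc_topics.argmax(axis=1)
counts = np.bincount(top_topic, minlength=doc_topics.shape[1])

# "Probabilistic" option: sum p(t|d) over all documents for each topic
mass = doc_topics.sum(axis=0)

# Topics ordered from most to least frequent under each option
order_by_counts = np.argsort(-counts)
order_by_mass = np.argsort(-mass)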
H: How to compute performance of a detection-classification system?
I use a yolo (y) to detect only one object and a multiclassifier (mc) that classifies that object.
Now, the problem is: what do I have to do with YOLO's false positives and false negatives if I want to compute the whole system's accuracy, precision and recall?
Now I'm computing overall accuracy like this:
acc = (tp_mc + tn_mc) / (tp_mc + tn_mc + fp_mc + fn_mc + fn_y + fp_y)
To compute precision and recall I'm doing that for each class of mc:
precision_i = tp_mc_i / (tp_mc_i + fp_mc_i + fp_y_i)
recall_i = tp_mc_i / (tp_mc_i + fn_mc_i + fn_y_i)
Where fp_y_i and fn_y_i are the yolo's false positive and false negative that belongs to the class i of the multiclassifier.
Do you think that this is the correct way to compute accuracy, precision and recall?
AI: What matters is whether the instance is classified correctly or not at the end, so it's only about mapping correctly the cases from the y classifier:
tp_y is easy: the final status of the instance is simply whatever the mc classifier predicts for it.
tn_y is a TN for every class $i$.
fp_y is a FP for the class $i$ predicted by mc and a TN for every other class.
fn_y is a FN for the true class $i$ and a TN for every other class.
I'm not sure what you count as tn_mc here? Normally in a multiclass setting accuracy is the sum of $TP_i$ for every class $i$, because there's no TN. That would give us:
$$acc=\frac{\sum_{i}TP_i}{n}$$
With $n$ the total number of instances (from the start, not only the ones fed to mc)
Your precision and recall formulas look correct to me, assuming that fp_y_i are the FP instances where $i$ is the predicted class whereas fn_y_i are the FN instances where $i$ is the true class. |
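A hedged code sketch of these formulas, with per-class counts stored in illustrative dictionaries (tp_mc, fp_mc, fn_mc from the classifier, fp_y, fn_y from the detector):

# Per-class precision/recall combining detector and classifier errors
def precision_recall(cls, tp_mc, fp_mc, fn_mc, fp_y, fn_y):
    tp = tp_mc[cls]
    precision = tp / (tp + fp_mc[cls] + fp_y[cls])
    recall = tp / (tp + fn_mc[cls] + fn_y[cls])
    return precision, recall

# Overall accuracy: correctly classified instances over all instances from the start
def accuracy(tp_mc, n_total):
    return sum(tp_mc.values()) / n_total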
H: 2 Most probable labels with Gaussian Mixture Model Clustering
I want to get the two most probable labels for each sample in my X.
A little context: I am working on a clustering project where I have 1.6M samples that have to be clustered into 12 clusters. First, I did KMeans and it works fine, except that KMeans is a hard clusterer model so it only gives one exclusive label per sample. For my project, I need to get not one but the top 2 most probable labels, which is why I changed to Gaussian Mixture Model.
I was thinking of doing this by looking through the array of probabilities for each component per sample, finding the 2 highest values, and assigning the labels this way.
So two questions:
How can I find the two highest values in an array and their location?
or.. if there is an easier way:
How can I assign the top two labels to each sample?
AI: You can use the predict_proba method along with numpy argsort
predict_proba(X)
Predict posterior probability of each component given the data.
import numpy as np, pandas as pd
from sklearn.mixture import GaussianMixture
X = pd.read_csv("/content/sample_data/california_housing_test.csv")
# Fit on all but the last sample
gm = GaussianMixture(n_components=12, random_state=0).fit(X.iloc[:-1,:])
# Predict for the last sample
prob = gm.predict_proba(X.iloc[-1:,:])
# Top 2
np.argsort(prob[0])[-2:]
Output - array([7, 3]) |
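To get the top two components for every sample at once, a hedged extension of the same idea (the fitted gm from above is assumed):

# Posterior probabilities for all samples, shape (n_samples, n_components)
probs = gm.predict_proba(X)

# Indices of the two most probable components per sample, most probable first
top2 = np.argsort(probs, axis=1)[:, -2:][:, ::-1]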
H: In general, what are precision, recall, F1 that are reported in papers?
I used classification_report in sklearn library
And, the picture below shows evaluation on my model (anomaly detector)
In general, which precision, recall and F1 values are reported in papers?
I think it's reasonable to use precision and recall with the macro average (in my case, 0.5001 and 0.7000).
So, when writing a paper, can I report these values?
Otherwise, which precision, recall and F1 values should be reported in papers?
AI: There's no standard range of values because evaluation scores are never good or bad in absolute, they are relevant with respect to a reference. The standard way to report evaluation scores in a paper is to present them in the context of other methods for the same task:
If there are other results about the same task (or a similar task) in the literature, compare to these. If the data is different, then you should ideally apply the state of the art methods to your data as well for an accurate comparison (either directly using the software if it's available or reproducing the method following the description in the paper).
If there's really nothing comparable, then a minimal comparison is to show the performance of a baseline classifier. A basic example is a majority class baseline, but depending on the task there can be more relevant heuristic methods.
In a binary classification problem you should only report the F1-score for the positive class, which is usually the minority class (in this case, anomaly). |
H: What is few-shots extrapolation?
I'm reading the paper "Learning how to ask" by Qin & Eisner and in the abstract, they mention that using prompts, language models can perform tasks other than text generation. Examples include fill-in-the-blanks (BERT) and few-shots extrapolation (GPT-3).
I am not sure I understood correctly what the authors mean by few-shots extrapolation. Do they mean extracting factual and other types of knowledge by using a few examples? Thanks.
AI: Here is the GPT-3 paper.
The "shot" are the number of example question/answer pairs provided to the ML model, before it is asked to answer a question by itself.
For each task, we evaluate GPT-3 under 3 conditions: (a) “few-shot learning”, or in-context learning where we allow as many demonstrations as will fit into the model’s context window (typically 10 to 100), (b) “one-shot learning”, where we allow only one demonstration, and (c) “zero-shot” learning, where no demonstrations are allowed and only an instruction in natural language is given to the model.
Figure 2.1, on page 7, provides a good example.
In the zero-shot case, the model is asked to complete the following text:
Translate English to French:
cheese =>
In the few-shot case, the model is asked to complete the following text: ("girafe" is spelled this way in the paper)
Translate English to French:
sea otter => loutre de mer
peppermint => menthe poivrée
plush girafe => girafe peluche
cheese =>
This is a 3-shot example. The text (quoted above) says the few-shot case allows 10 to 100 demonstrations; perhaps they omitted most of them to demonstrate the point more concisely.
They contrast this with "fine-tuning", where the model is explicitly trained on the task at hand. With "fine-tuning" the model is trained for a while with training data like sea otter => loutre de mer and then asked to complete cheese =>. With few-shot extrapolation, the model is not trained on these examples - the examples are only part of the text it is meant to complete. |
H: Image autoencoder w/o thousands of dense neurons? prevent large model
I am trying to get around producing large models. If my desired output is a 120x100 image, then do I need a 120*100=12,000 neuron dense layer preceding it?
...
model.add(Dense(120*100))  # <--- mandatory?
model.add(Reshape((120, 100)))
return model
The model size comes out to 5GB+ if I do.
AI: You only need such a projection if you are using only dense layers for your model (i.e. a multilayer perception (MLP)). You can simply have a convolutional autoencoder, where the layers are convolutions and max pooling, and therefore the number of parameters is drastically reduced with respect to an MLP.
You can check Keras' tutorial on autoencoders, specifically the section "Convolutional autoencoders" for an example. |
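As a complement, a hedged sketch of a small convolutional decoder that reaches a 120x100 output without a 12,000-unit dense layer (all layer sizes are illustrative):

import tensorflow as tf
from tensorflow.keras import layers

# Upsample a small latent feature map with transposed convolutions instead of
# projecting through a huge dense layer
decoder = tf.keras.Sequential([
    layers.Input(shape=(15, 25, 8)),                                                  # small latent map
    layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu"),      # -> 30 x 50
    layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu"),      # -> 60 x 100
    layers.Conv2DTranspose(8, 3, strides=(2, 1), padding="same", activation="relu"),  # -> 120 x 100
    layers.Conv2D(1, 3, padding="same", activation="sigmoid"),                        # 120 x 100 x 1 output
])
decoder.summary()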
H: Xgboost fit won't recognize my custom eval_metric. Why?
Do you know why my custom_eval_metric doesn't work? I get the error:
XGBoostError: [07:56:32] C:\Users\Administrator\workspace\xgboost-win64_release_1.4.0\src\metric\metric.cc:49: Unknown metric function custom_eval_metric
def custom_eval_metric(preds, dtrain):
    labels = dtrain.get_label()
    preds = preds.reshape(-1, 3)
    preds_binary = []
    for element in range(0, len(preds)):
        tmp = []
        tmp = preds[element][2]
        preds_binary.append(tmp)
    labels_adj = [0 if x == 1 else x for x in labels]
    labels_adj = [1 if x == 2 else x for x in labels_adj]
    preds_binary = np.asarray([preds_binary])
    labels_adj = np.asarray([labels_adj])
    return 'ndcg score', metrics.ndcg_score(new_items, preds)
clf_xgb_out_of_sample = xgb.XGBClassifier(objective="multi:softmax",
                                          num_class=3,
                                          seed=42,
                                          n_estimators=2,
                                          max_depth=8,
                                          learning_rate=0.1,
                                          gamma=0.25,
                                          colsample_bytree=0.8,
                                          use_label_encoder=False)

clf_xgb_out_of_sample.fit(X_train,
                          y_train,
                          sample_weight=weights,
                          verbose=True,
                          early_stopping_rounds=10,
                          eval_metric='custom_eval_metric',
                          eval_set=[(X_test, y_test['Target'])])
AI: From the documentation:
If a str, should be a built-in evaluation metric to use. See
doc/parameter.rst.
...
If callable, a custom evaluation metric. The call signature is
func(y_predicted, y_true) where y_true will be a DMatrix object such
that you may need to call the get_label method. It must return a str,
value pair where the str is a name for the evaluation and value is the
value of the evaluation function. The callable custom objective is
always minimized.
So you need to pass the function itself instead of a string of the function name |
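In the question's code that means passing the function object in the fit call, e.g. (everything else unchanged):

clf_xgb_out_of_sample.fit(X_train,
                          y_train,
                          sample_weight=weights,
                          verbose=True,
                          early_stopping_rounds=10,
                          eval_metric=custom_eval_metric,   # the function itself, not a string
                          eval_set=[(X_test, y_test['Target'])])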
H: How to move column in rows direction in Pandas DataFrame?
For example, I want to get the right DataFrame from the left one, as in the picture (move column B down by 2 steps):
P.S. The "NA" is not required; it can be any controlled value such as null, zero or an empty string.
AI: You can use the .shift method for this:
df["B"] = df["B"].shift(2)
The value used for the first two rows (NA or something else) can be controlled using the fill_value argument. |
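For example, to fill the first two rows with zero instead of NA (a minimal sketch):

df["B"] = df["B"].shift(2, fill_value=0)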
H: What is the logic behind recommended normalization parameters in PyTorch?
On the PyTorch documentation for torchvision.models, it states that images have to be loaded in a range of [0,1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. What is the logic behind these specific values?
AI: You can find the answer in stackoverflow here and here, and also in the pytorch discussion forums
As described in the links, these values were computed over the Imagenet training data. From the last link, which is the explanation by Soumith Chintala:
On Imagenet, we’ve done a pass on the dataset and calculated per-channel mean/std. |
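For reference, a minimal sketch of how these values are typically applied with torchvision transforms:

import torchvision.transforms as T

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),                      # scales pixel values to [0, 1]
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])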
H: Anomaly Detection and Removal/Interpolate
I am performing a machine learning regression task on time series data. I have a data frame filled with the close prices of various assets and economic data. I am looking to perform outlier detection on the entire dataframe. Here is my question:
What are some good methods to perform this?
Here is a subsection of my dataframe (df):
audjpy audnzd audusd usdcad cadchf cadjpy
2008-01-01 98.050003 1.140000 0.877116 0.99320 1.13960 112.599998
2008-01-02 96.559998 1.139500 0.884017 0.99440 1.12560 109.820000
2008-01-03 96.550003 1.140900 0.881601 0.98880 1.12370 110.693001
2008-01-04 95.168999 1.138300 0.876578 0.99880 1.10470 108.459999
2008-01-07 95.220001 1.134500 0.871916 1.00550 1.11130 108.559998
... ... ... ... ... ... ....
AI: In my opinion, I would treat each signal on its own. The approach also depends on the signals and on your definition of anomalies/outliers (for example unexpected long peaks?). But I can point some methods that you can try if they work on your signals:
If your signal is normally distributed (or very close to a normal distribution), you can remove (or replace by NaN) all points lying outside $[\mu - n\sigma, \mu + n\sigma]$, where $\mu$ is the mean of the signal, $\sigma$ its standard deviation and $n$ is to be examined (you can try plotting the histogram of the points and examining the distribution closely).
A more sophisticated approach is to pass a rolling-mean filter over the signal to capture the trend, compute the residual signal = original - trend, and then clean the residual. (If you plot the residual distribution, you will see it is most probably unimodal and sometimes symmetric, so cleaning such a signal can be done nicely.) Compute the IQR (interquartile range) of the residual and remove points lying outside $[q_1 - n \cdot IQR, q_3 + n \cdot IQR]$, where $q_1$ and $q_3$ denote the first and third quartiles of the residual signal, and $n$ is typically 2 or 3.
Of course when you detect an outlier, you can replace the value by NaN and interpolate later. That should be totally fine ...
Finally, many other methods exist and a lot has been done, I hope that my answer will help you and even motivate other approaches. |
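A hedged pandas sketch of the second approach, applied column by column (the window size and n are illustrative):

import numpy as np
import pandas as pd

def clean_outliers(series, window=21, n=3.0):
    # Capture the trend with a centred rolling mean, then work on the residual
    trend = series.rolling(window, center=True, min_periods=1).mean()
    residual = series - trend

    # IQR-based bounds on the residual
    q1, q3 = residual.quantile(0.25), residual.quantile(0.75)
    iqr = q3 - q1
    mask = (residual < q1 - n * iqr) | (residual > q3 + n * iqr)

    # Replace outliers with NaN and interpolate, as described above
    return series.mask(mask).interpolate()

cleaned = df.apply(clean_outliers)   # one column (signal) at a time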
H: 'str' object has no attribute 'predict'
I am trying to deploy my ML model using flask. My model contains both categorical and numerical variables. Below is my model.py code:-
# PIPELINE FOR PREPROCESSING
dtr_pipe = Pipeline(steps=[('preproc', preproc), ('model', dtr_model)])

train_x, test_x, train_y, test_y = train_test_split(data2, y, test_size=0.2, random_state=69)

# OPTUNA
def objective(trial):
    model__max_depth = trial.suggest_int('model__max_depth', 2, 32)
    model__max_leaf_nodes = trial.suggest_int('model__max_leaf_nodes', 50, 500)
    model__max_features = trial.suggest_float('model__max_features', 0.0, 1.0)
    model__min_samples_leaf = trial.suggest_int('model__min_samples_leaf', 1, 50)

    params = {'model__max_depth': model__max_depth,
              'model__max_leaf_nodes': model__max_leaf_nodes,
              'model__max_features': model__max_features,
              'model__min_samples_leaf': model__min_samples_leaf}

    dtr_pipe.set_params(**params)
    return np.mean(-1 * cross_val_score(dtr_pipe, train_x, train_y, cv=5, n_jobs=-1,
                                        scoring='neg_mean_squared_error'))

dtr_study = optuna.create_study(direction='minimize')
dtr_study.optimize(objective, n_trials=10)

dtr_pipe.set_params(**dtr_study.best_params)
dtr_pipe.fit(train_x, train_y)

pickle.dump(dtr_pipe, open('model.pkl', 'wb'))
model = pickle.load(open('model.pkl', 'rb'))
Below is my app.py code:-
@app.route('/', methods=['GET', 'POST'])
def main():
    if request.method == 'GET':
        return render_template('index.html')
    if request.method == 'POST':
        powerPS = request.form['powerPS']
        model = request.form['model']
        kilometer = request.form['kilometer']
        fuelType = request.form['fuelType']
        vehicleType = request.form['vehicleType']
        gearbox = request.form['gearbox']
        notRepairedDamage = request.form['notRepairedDamage']
        brand = request.form['brand']
        age = request.form['age']

        data = [[powerPS, model, kilometer, fuelType, vehicleType, gearbox,
                 notRepairedDamage, brand, age]]
        input_variables = pd.DataFrame(data,
                                       columns=['powerPS', 'model', 'kilometer', 'fuelType',
                                                'vehicleType', 'gearbox', 'notRepairedDamage',
                                                'brand', 'age'],
                                       dtype='float',
                                       index=['input'])

        predictions = model.predict(input_variables)[0]
        print(predictions)
        return render_template('index.html',
                               original_input={'powerPS': powerPS, 'model': model,
                                               'kilometer': kilometer,
                                               'fuelType': fuelType,
                                               'vehicleType': vehicleType,
                                               'gearbox': gearbox,
                                               'notRepairedDamage': notRepairedDamage,
                                               'brand': brand,
                                               'age': age},
                               result=predictions)
If I run only my ML model, it runs perfectly without error. But when I deploy it using flask (above code), and enter the values in the respective fields and press submit, I get following error:-
AttributeError: 'str' object has no attribute 'predict'
Why am I getting this error and what's the solution?
AI: You are overwriting the model variable in the following statement:
model = request.form['model']
You should not use the same variable name. |
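A minimal sketch of how app.py could avoid the clash, assuming the pickled pipeline is loaded once at startup (names like ml_model and FEATURES are illustrative):

import pickle
import pandas as pd
from flask import Flask, render_template, request

app = Flask(__name__)

# Load the trained pipeline once, under a name that is never reused
ml_model = pickle.load(open('model.pkl', 'rb'))

FEATURES = ['powerPS', 'model', 'kilometer', 'fuelType', 'vehicleType',
            'gearbox', 'notRepairedDamage', 'brand', 'age']

@app.route('/', methods=['GET', 'POST'])
def main():
    if request.method == 'GET':
        return render_template('index.html')

    # 'model' from the form goes into a dictionary entry, not over the pipeline variable
    form_values = {name: request.form[name] for name in FEATURES}
    input_variables = pd.DataFrame([form_values], columns=FEATURES, index=['input'])

    predictions = ml_model.predict(input_variables)[0]
    return render_template('index.html', original_input=form_values, result=predictions)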
H: How to impute missing text data?
Let's say I have a dataframe consisting of two text columns. By text, I mean the values in those columns are sentences or paragraphs. In such a case, how do I handle missing 'NaN' values?
If it were a numeric data, I would use frequent/mean/median/knn imputation. But, what to do about text data? Any ideas?
AI: First most of the time there's no "missing text", there's an empty string (0 sentences, 0 words) and this is a valid text value. The distinction is important, because the former usually means that the information was not captured whereas the latter means that the information was intentionally left blank. For example a user not entering a review is not missing information: the user chose not to enter any text and it cannot be assumed that this choice is equivalent to whatever text is the most common.
To the best of my knowledge there's no imputing in NLP. Imputing can make sense in some cases with a numerical value (even then it should be used cautiously), but in general text is too diverse (unstructured data) for the concept of "most frequent text" to make any sense. In general substituting real text (or absence of text) with artificially generated data is frowned upon from the point of view of evaluation.
Thus in my opinion the main design options are the following:
Leave the text empty. Most of the time an empty text can be represented like any other text value, e.g. as a TFIDF vector made of zeros.
Discard instances which have no text. For example in text classification no text means no input data at all, so there's no point performing the task for such cases.
Treating instances with no text as special cases based on the specifics of the task. For example such instances could be systematically assigned the majority class, if that makes sense for the task. |
H: Train-test split and augmentation strategy for small dataset for video classification problem
I have a small data set of videos of approximately 100 videos for each class for a binary classification problem. This results in a total of 200 videos. I am applying two types of augmentations on the videos.
Transformative augmentations which rotate and flip the videos. These are of 5 types so the resultant data set increases 5 times in size.
Increasing and decreasing brightness, saturation, contrast, blur, shake, noise etc.
All the videos capture the same phenomenon of a drop of some liquid spreading on paper. I have attached two frames of a video below, one near the start and another when the drop has expanded. On a 3D CNN I am getting a decent accuracy of around 85%.
My concern is:
As I have a small data set, is it fine to apply transformative augmentations first and then do a train-test split, so that my data set expands to 1000 videos and the split results in 800:200 videos? Or should I do the split before augmentation, which results in 80x5=400 train videos and 20 test videos?
Also, my CNN has three pretty standard convolutional layers and two batch normalization layers. On every run where I get a good accuracy, the test accuracy is 1-2% higher than the train accuracy. Should this be concerning?
Also, when I include the second type of augmentation (brightness etc.) I see a difference of 10-15% between train and test accuracy. I think this can be caused by the training set becoming very hard in comparison to the test set. Am I thinking right?
AI: Your three questions are tightly related:
You should not augment the data before splitting. This leads to data leakage, as there is an overlap between the training and the test data, because you are testing your model on some images that have been already seen (although in a transformed version) during training. Therefore, you should first split, then augment.
It is possible that the training accuracy is lower than the test accuracy when you have train-only regularization elements like dropout in your network. From your description of the network architecture, this is not your case. Your performance difference is most probably caused by the data leakage derived from your augmentation.
Same as 2. |
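A minimal sketch of the recommended order (split first, then augment only the training videos; videos, labels and the list of transforms are assumptions standing in for your own data and augmentation functions):

from sklearn.model_selection import train_test_split

train_videos, test_videos, train_labels, test_labels = train_test_split(
    videos, labels, test_size=0.2, stratify=labels, random_state=0)

# Augment only the training split; the test split stays untouched
aug_videos, aug_labels = [], []
for video, label in zip(train_videos, train_labels):
    aug_videos.append(video)
    aug_labels.append(label)
    for transform in transforms:          # e.g. rotations and flips
        aug_videos.append(transform(video))
        aug_labels.append(label)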
H: How Are Kernel Weights Trained in 1-D CNN's with Multi-dimensional Input?
I have far from a perfect understanding of how 1-D convolution neural networks learn, but I think I understand how the kernel operates on 1-D input data. How does 1-D convolution work with multi-dimensional input data? An image from this article:
has led me to think that the "kernel_size" argument in tensorflow.keras Conv1D layers actually controls the width of the kernel (which seems obvious with 1-D input data), but that the kernels themselves are as "deep" as the number of rows in the input data. Is this understanding correct? If so, what is the relationship between the weights in the "rows" of the kernel? Are they all the same, different but related in some way, or entirely different? Please let me know if my understanding seems off-base, and thanks in advance for your help!
AI: Yes, your understanding is correct: kernels are as deep as the input data.
The weights in the rows depend entirely on the patterns in the input data and the target data. CNNs are feature detectors so, after training, the weights in the kernel will have values that lead to activation maps that are useful for the task the network was optimized for. They would probably not have the same values, because normally the input data does not have the same values in all rows and, even in that case, the random weight initialization would lead to different final values. |
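A quick way to confirm the kernel shape in Keras (a hedged sketch with illustrative sizes): the weights have shape (kernel_size, input_channels, filters), i.e. the kernel is as deep as the input.

import tensorflow as tf

layer = tf.keras.layers.Conv1D(filters=8, kernel_size=3)
layer.build(input_shape=(None, 100, 12))   # 100 time steps, 12 input channels (rows)

kernel, bias = layer.get_weights()
print(kernel.shape)   # (3, 12, 8): width 3, as deep as the 12 input channels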
H: How can I convert text data to CoNLL format?
This is the same question that I posted on Stack Overflow, but I wondered whether Stack Exchange would be more appropriate for it.
I would like to convert text data to CoNLL format.
words.txt
I was born in 1981.
From 12 to 24.
tags.txt
O O O O B-DateTime O
O B-DateTime I-DateTime B-DateTime O
CoNLL(.conll file)
I O
was O
born O
in O
1981 B-DateTime
. O
However, I only found a library for CoNLL-U format(conllu) and a library looked like for CoNLL(pyconll) but no sample code, so I have no idea to apply for it to text-CoNLL conversion.
I'm stuck on how to convert the data to CoNLL and how to write a Python 3 script to do it.
AI: I think there's a bit of confusion here: the sample you're showing is not a full "conll" format, at least not any recent one. It's simply a BIO format for NER.
As far as I know conllu has been the standard "conll" format for probably at least 10 years, so if you're using some old data it's possible that it used the name "conll" for something different. Or maybe somebody just used the name "conll" because the data was related to conll even though the format is not a standard conll format. Btw the library you link is for parsing the conllu format, not for generating it.
Normally the conllu format includes several columns for every token: at least token, lemma, POS and usually some dependency tree information (index of the head of the dependency). It is used in particular by the Universal Dependencies project.
So as far as I know there's no particular conll standard for this format. But it's a pretty simple conversion that can be implemented manually: you can simply iterate over the words and the tags in parallel and print the token and BIO tag as columns. The only possible issue is whether the words are already tokenized or not. |
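A minimal sketch of that conversion, assuming the text is already tokenized so that the whitespace-separated tokens in words.txt line up one-to-one with the tags in tags.txt:

# Convert parallel words/tags files into a two-column BIO file,
# with a blank line between sentences
with open("words.txt") as wf, open("tags.txt") as tagf, open("output.conll", "w") as out:
    for word_line, tag_line in zip(wf, tagf):
        words = word_line.split()
        tags = tag_line.split()
        assert len(words) == len(tags), "words and tags must be aligned"
        for word, tag in zip(words, tags):
            out.write(f"{word}\t{tag}\n")
        out.write("\n")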
H: How can I weight each point in one-class SVM?
I want to give weights to some data points.
Specifically, these are points related to anomalies
(I'm implementing a one-class SVM for anomaly detection).
Exactly, I want to treat some data points that are likely to be anomalies as more important data points.
Is this possible in a one-class SVM?
AI: If I understood correctly, you are trying to apply more weight in advance to certain points which you consider (based on domain knowledge?) are likely to be anomalies, correct?
Your one-class support vector machine is meant to give you that insight, instead of specifying it in advance, so you could check if those points are actually far from the "normality" decision surface found by the algorithm itself, to confirm that those are novelties, also quantitative via the decision_function method:
(source)
Here you can find a more detailed answer on how it works, in case you want to have a look. |
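A hedged sketch of that check with scikit-learn's OneClassSVM (X and the indices of the suspect points are assumptions; the hyperparameters are illustrative):

from sklearn.svm import OneClassSVM

oc_svm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X)

# Signed distance to the learned decision surface: large negative values mean
# far from "normality", i.e. likely anomalies
scores = oc_svm.decision_function(X[suspect_idx])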
H: Bert to extract local features
BERT is a pre-trained model which can be fine-tuned for text classification. How can I extract local features using BERT?
AI: First, it is different to fine-tune BERT than extracting features from it.
In feature extraction, you normally take BERT's output together with the internal representation of all or some of BERT's layers, and then train some other separate model on those features.
In fine-tuning, you re-train the whole BERT model on the specific task you are interested in (e.g. classification). Normally, you choose a very low learning rate to avoid catastrophic forgetting.
To choose which approach is appropriate in your case, you can check this article (basically you will probably get better performance with fine-tuning).
BERT can give you both sentence-level representations and token-level representations. For the sentence-level representation, you just take the BERT representation at the first token position, which is the special token [CLS] (see this). For token-level representations, you just take the representation of the token at the position you need. Take into account that BERT uses subword-level tokens, so it won't give you word-level representations (see this).
Regardless of the approach you choose, you can use the Huggingface Transformers python library, which has a lot of examples on the internet (e.g. fine-tuning and feature extraction). |
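A hedged feature-extraction sketch with the Huggingface Transformers library (the model name and the example sentence are illustrative):

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("This is my first sentence.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

token_features = outputs.last_hidden_state   # (1, num_subword_tokens, hidden_size)
sentence_feature = token_features[:, 0, :]   # representation at the [CLS] position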
H: How to interepret BERT Attention
Can we say that BERT extracts local features?
For example, consider the sentence "This is my first sentence. This is my second sentence".
Now, how does BERT extract the features?
Is attention computed for each sentence separately or over the input as a whole?
AI: BERT's self-attention will be computed for each pair of tokens. If the input sentence has $N$ tokens, then the attention weights will be computed over the $N^2$ pairs of tokens.
The attention, nevertheless, will be computed in each one of the attention heads of each of the layers of BERT.
If you want to understand the self-attention patterns that are normally found in BERT, you can check this article, where you will find analyses like this one: |
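A hedged sketch of inspecting those attention weights with the Huggingface Transformers library (the example sentence is from the question; the model name is illustrative):

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("This is my first sentence. This is my second sentence",
                   return_tensors="pt")
outputs = model(**inputs)

# One tensor per layer, each of shape (batch, num_heads, N, N):
# attention over all N x N token pairs of the whole input, not per sentence
print(len(outputs.attentions), outputs.attentions[0].shape)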
H: Data type when registering Twitter developer account?
When registering the Twitter developer account, I saw the response as below
"Thank you for your request. In order for us to review, we need a few additional details about your plans for the academic access to our API that you’re requesting. The information we need is listed below:
Will your research present Twitter data individually or in aggregate?
Please reply to this email to provide us this information. Please keep in mind, we need to receive the information listed above within 21 days, or Twitter will close the case without approving access. "
I am wondering what I should reply to the Twitter team, because I did not fully understand what "individually" and "aggregate" mean well enough to answer properly.
AI: It means: will your paper analyse each tweet individually and present those results, or will it collect thousands of tweets, calculate some statistics based on them, and present the aggregated (mean, median, std) result. |
H: Stop words list to use for CountVectorization
The scikit-learn library by default provides two options: either no stop words,
or one can specify stop_words='english' to include a list of predefined English words.
I am using Naive Bayes for SMS spam detection. Is there any other list of stop words
I can experiment with?
AI: import nltk
from nltk.corpus import stopwords

# Download the stop-word corpus once, if it is not already available
nltk.download('stopwords')
print(stopwords.words('english')) |
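To use this list with scikit-learn's vectorizer for the spam detector, a minimal sketch:

from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(stop_words=stopwords.words('english'))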
H: How do I split a data set into train and test sets using scikit-learn?
I have the following matrices for training a model:
INPUT FEATURES MATRIX
$$ X =
\begin{bmatrix}
| & | & & |\\
X_1 & X_2 & ... & X_m\\
| & | & & |
\end{bmatrix};
~ X.shape = (n_x, m)
$$
OUTPUT MATRIX
$$
Y =
\begin{bmatrix}
y_1 & y_2 & y_3 & ... & y_m
\end{bmatrix}
~ ; ~ Y.shape=(1,m)
$$
And I want to split them into train and test data sets using scikit-learn.
This is what I have tried:
>>> print(X.shape)
>>> print(X.dtype)
>>> print(type(X))
>>> print(Y.shape)
>>> print(Y.dtype)
>>> print(type(Y))
(16, 504)
float64
<class 'numpy.ndarray'>
(1, 504)
float64
<class 'numpy.ndarray'>
>>> from sklearn.model_selection import train_test_split
>>> X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)
But I get the following error:
ValueError Traceback (most recent call last)
<timed exec> in <module>
/usr/local/lib/python3.8/site-packages/sklearn/model_selection/_split.py in train_test_split(*arrays, **options)
2114 raise TypeError("Invalid parameters passed: %s" % str(options))
2115
-> 2116 arrays = indexable(*arrays)
2117
2118 n_samples = _num_samples(arrays[0])
/usr/local/lib/python3.8/site-packages/sklearn/utils/validation.py in indexable(*iterables)
235 else:
236 result.append(np.array(X))
--> 237 check_consistent_length(*result)
238 return result
239
/usr/local/lib/python3.8/site-packages/sklearn/utils/validation.py in check_consistent_length(*arrays)
209 uniques = np.unique(lengths)
210 if len(uniques) > 1:
--> 211 raise ValueError("Found input variables with inconsistent numbers of"
212 " samples: %r" % [int(l) for l in lengths])
213
ValueError: Found input variables with inconsistent numbers of samples: [16, 1]
AI: The sample size should be on the first axis. i.e.
$X.shape = (m, n_x) $
$Y.shape = (m, 1) $
You can simply transpose the data. |
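Concretely, a minimal sketch using the arrays from the question:

from sklearn.model_selection import train_test_split

# Put the samples on the first axis: X becomes (504, 16), Y becomes (504, 1)
X_t = X.T
Y_t = Y.T

X_train, X_test, Y_train, Y_test = train_test_split(X_t, Y_t, test_size=0.2)
print(X_train.shape, Y_train.shape)   # (403, 16) (403, 1)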
H: How to program conditional statements for this problem in r
Situation: I'm trying to program the following in r.
Task: I am trying to select for words that appear as nouns in my dataset more than they do as adjectives, verbs, or adverbs etc. I have all these counts and below is an example of one instance of what I am trying to do. Imagine the information below is in a dataframe. I do not want to select for this lemma (ability), because it appears most times as a VERB; i.e., its appearance as a noun is not greater than VERB or ADJ:
id <- c(4, 4, 4)
lemma <- c("ability", "ability", "ability")
count_lemma_pos <- c(21, 66, 89332)
pos <- c("ADJ", "NOUN", "VERB")
Action: I tried to start programming the fail below to get to the following logic:
group the data by id
for every row in id, check if pos == "NOUN"
If not, then delete the row in id
check id for max value
return pos
pos != "NOUN", then delete id
# This is my failed attempt at the first step in R:
noun_count_all <- ddply(noun_count, .(lemma), function(noun_count) {
  filter1 <- filter(noun_count, pos == "NOUN")
  # filter2 <-
  return(filter1)
})
Result: Not getting anywhere. If I've written this question incorrectly, sorry about that. Not a programmer or data scientist, I'm just trying to use R to do this thing I can't do in excel.
AI: Using dplyr, the following code selects only the rows where the pos column has the value "NOUN" and where count_lemma_pos is the highest within the group.
library(dplyr)

df %>%
  # group by id
  group_by(id) %>%
  # keep rows where pos == "NOUN" and count_lemma_pos is the max value within the group
  filter(pos == "NOUN" & count_lemma_pos == max(count_lemma_pos)) |
H: MNIST dataset for digit recognition
I noticed that the MNIST dataset for digit recognition is just a CSV file.
They don't provide the images.
https://www.kaggle.com/c/digit-recognizer/data
Is it possible to get the corresponding images for the dataset?
AI: Those csv files contain the actual data that would normally in image format, with each column being the value of a single pixel in the image (28 x 28 image gives 784 pixels). See also the description of the linked dataset:
Each image is 28 pixels in height and 28 pixels in width, for a total
of 784 pixels in total. Each pixel has a single pixel-value associated
with it, indicating the lightness or darkness of that pixel, with
higher numbers meaning darker. This pixel-value is an integer between
0 and 255, inclusive.
If you need the data in image format, you can simply reshape each row to a shape of (28, 28) using np.reshape. |
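A minimal sketch, assuming the Kaggle train.csv with a 'label' column followed by the 784 pixel columns:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("train.csv")
pixels = df.drop(columns="label").values   # shape (n_samples, 784)
images = pixels.reshape(-1, 28, 28)        # back to 28 x 28 images

plt.imshow(images[0], cmap="gray")
plt.title(f"label: {df['label'].iloc[0]}")
plt.show()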
H: Does validation data has any effect on training or it acts solely without affecting the training?
When using Keras library of Python, we use validation data with training data while training our model. In every epoch, we get a validation accuracy. Does this validation accuracy have any effect on training on the next epoch?
AI: To make it short - no.
When you train a model (e.g. a neural network) you pass some data X into the model, the model predicts something, $\hat{y}$, and you look at the correct target, $y$, corresponding to X. Then you ask "to reduce the difference between $\hat{y}$ and $y$, what should my weights be?". Then the weights get updated, and you pass in another round of X and $y$ until you are happy.
The validation set is used, often after each epoch (but you could use it whenever you want), to ask "If I stop training my network now, how would it act on unseen data?". If there is too big a difference between the training error and the validation error, then your model is overfitting and you need to correct that, e.g. with dropout.
One of the core ideas behind the validation set is to not let the model learn from it, since it should act as "unseen/new" data. |
H: Managing Multiple Observation at the same time stamp timeseries forecasting deep learning
I have a dataset timeseries forecasting that includes the categorical columns and numeric as well.
here is a sample of it
Date     | categorical_feature_1 | categorical_feature_2 | feature_1_numeric | feature_2_numeric | price
1-1-2020 | USA                   | A                     | 5.5               | 7.6               | 100
1-1-2020 | USA                   | B                     | 8.3               | 1.7               | 20
1-1-2020 | USA                   | C                     | 3.6               | 2.1               | 17
1-2-2020 | USA                   | D                     | 5.5               | 7.6               | 40
1-2-2020 | USA                   | E                     | 77.5              | 35                | 22
1-2-2020 | USA                   | F                     | 69.5              | 2                 | 22
As you can see in the sample, if we pick the date 1-1-2020, we have multiple observations at the same date.
I want to predict the price column as the y_label, taking categorical_feature_1, categorical_feature_2, feature_1_numeric and feature_2_numeric as the X_features.
So, from my understanding, as I'm using multiple features for time series forecasting to predict the price column, this is called multivariate time-series forecasting.
My questions are:
1. How can I manage the multiple observations at the same time from the different features? As we saw, for example, on 1-1-2020 we have three different observations.
2. I believe that if we have multiple observations at the same time/date then we have a new kind of time-series forecasting. What is it called: multi-timestep multivariate time-series forecasting, or something else?
thanks
AI: It is multivariate time series forecasting indeed, and you need to train your model on the past values of the features as well as the interdependencies between them.
One common solution is Vector Auto Regression (VAR).
They are basically models that calculate both time and value interdependencies using a matrix:
Source: Wikipedia
Here is an overview for 3 features and second order:
Source: https://www.machinelearningplus.com/time-series/vector-autoregression-examples-python/
In your case, you will consider Y1 as feature_1, Y2 as feature_2 and Y3 as price.
This will calculate the interactions between Y1, Y2 and Y3, and you will get the prediction you want for the 3 of them, including Y3.
There is a library in Python, Statsmodel, that has a VAR function here.
VAR is a good start, but it is a linear algorithm, i.e. you may not have good results if your features are complex to predict.
If you have complex features, there are non linear algorithms that would perform better, like Multivariate LSTM and Random Forest. |
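A hedged sketch with statsmodels' VAR, assuming the data has first been pivoted so that each date is a single row of numeric columns (how you pivot is exactly question 1, e.g. one column per categorical_feature_2 value, if the same categories recur at every date):

import pandas as pd
from statsmodels.tsa.api import VAR

# One row per date, numeric columns only (illustrative pivot on categorical_feature_2)
wide = df.pivot_table(index="Date",
                      columns="categorical_feature_2",
                      values=["feature_1_numeric", "feature_2_numeric", "price"])
wide.columns = ["_".join(map(str, c)) for c in wide.columns]   # flatten the column index

model = VAR(wide)
results = model.fit(maxlags=2)                             # second-order VAR, as in the overview
forecast = results.forecast(wide.values[-2:], steps=3)     # predict the next 3 dates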
H: Transformers (BERT) vs LSTM on Sentiment Analysis/NER - dataset sizes comparison
I am aware (continuously learning) of the advantages of Transformers over LSTMs.
At the same time, I was wondering from the viewpoint of size of the data needed, contrast of those two techniques, supposing I want to train for a downstream task, (classification or NER for instance), in which case would I need more data to achieve a specific result (although I am fully aware we never know in advance for a task how much data we need).
Presuming a result of N% (supposing that threshold is achievable for both LSTM and BERT), which architecture (LSTM or BERT) would require a bigger dataset (regardless of the size, I am aware dataset size is task-dependent and subject to change) to reach that point.
Does BERT need a bigger dataset to achieve "good results" (an empirical observation would help me) or a, say, bidirectional LSTM?
AI: You can't know a priori how much data you need to achieve a specific accuracy. However, if you just want to know how much people have achieved on other datasets, you can check BERT's original paper:
As you can see, there are some datasets of 3.5k and 2.5k examples.
You should also know that comparing BERT and LSTMs is not a fair comparison, as with BERT you are doing transfer learning, so it would profit from the pre-trained data, not only from your training data. |
H: When should you use Ensemble?
I have been working on a project as a part of my studies(computer/data science).
I tried to make the best classifier I can with what I learned, and recently I have tried to upgrade this classifier using new things I learned.
I have tried using several ensemble methods such as bagging, pasting and voting, and the results I get are similar to using a single classifier, or even a bit worse, while taking more time to run.
My question is: in which cases should you use ensemble learning, with regard to data size and data kind (text/images)?
AI: When your model has better performance in training data than in validation data, it makes sense to ensemble in order to reduce the variance of the final model.
If the model doesn't overfit at all, I don't think it makes sense. |
H: When is it meaningful to use ML on data without statistical significance
I have a dataset with three columns, the number of days behind or ahead of the deadline a project was completed, the name of the department whose project it was and a user number for the employee.
The company are interested in predicting when work is likely to be completed, given the department and the employee.
Looking at these box graphs you can see that there is a slight difference in project delivery delay between departments and for individuals but that there is a vastly bigger difference in variance between departments and individuals.
Largely because of this, if I use statistical tests I find that there is no significant difference between the means of the classes.
However I can build a complex decision tree or similar which attempts to predict the delay on a project based on the user/ department.
Is this meaningful?
Under what circumstances would it be meaningful to use ML tools on a data set where statistical tools would reject the idea of a pattern?
If you use ML tools without statistics, when would you reject the idea that there is any kind of significant pattern in your data?
Delay for projects from each department^
Delay for projects for each employee^
AI: It depends on what kind of difference you examine statistically.
If you have two groups drawn from the same distribution, you should not be able to distinguish between them reliably. This is a feature, not a bug, of machine learning.
If you have two groups drawn from different distributions that have the same mean, you should be able to distinguish between them. Your performance might not be great, but you can do better than randomly guessing. For example, if you include a quadratic term in a logistic regression, you can distinguish between $N(0,1)$ and $N(0, 2)$. Your performance will not be amazing, but it beats randomly guessing.
# install.packages('pROC') # You might need to run this.
library(pROC)
set.seed(2021)
N <- 100
x0 <- rnorm(N, 0, 1)
y0 <- rep(0, N)
x1 <- rnorm(N, 0, 2)
y1 <- rep(1, N)
x <- c(x0, x1)
y <- c(y0, y1)
L1 <- glm(y ~ x, family = binomial)
L2 <- glm(y ~ poly(x, 2), family = binomial) # I think poly is not the best way to do this, but it's what I remember right now.
my_roc_1 <- pROC::roc(y, 1/(1+exp(-predict(L1))))
my_roc_2 <- pROC::roc(y, 1/(1+exp(-predict(L2))))
par(mfrow = c(2, 1))
plot(my_roc_1, main = "No Quadratic Term")
plot(my_roc_2, main = "Quadratic Term")
par(mfrow = c(1, 1))
Without the quadratic term, the performance is terrible, and the AUC is basically chance, at $0.5426$. With the quadratic term, the performance is far from amazing, but we do better than chance, with an AUC of $0.6847$. (Even better could be to compare the log loss or Brier score, though those metrics seem less popular in data science than in statistics.)
If you plot the densities of the two groups, it is clear why they can be distinguished better than chance.
library(ggplot2)
d0 <- data.frame(x = x0, group = "0")
d1 <- data.frame(x = x1, group = "1")
d <- rbind(d0, d1)
ggplot(d, aes(x = x, fill = group)) + geom_density(alpha = 0.3) + theme_bw()
If you get a point near $0$, it is more likely to be from group $0$ than group $1$. If you get a point far from $0$, it is more likely to be from group $1$ than group $0$. Around $\pm2$, the group is more ambiguous, but you can do decent when you are in the middle or way out far.
The lesson to learn here is that some model should be able to make some progress in distinguishing between the two distributions, unless the distributions are identical. Linear methods like vanilla logistic regression (with just linear terms, so no $x^2$ like my L2 model) work best when the means are different. A logistic regression with nonlinear terms, such as my L2 model that has a quadratic term or a model with splines, can fit (but also overfit) other differences. In the extreme, a model that finds its own nonlinear terms, such as a neural network, can find all kinds of differences (while also risking overfitting).
Getting back to your example, you know that if you make an extreme observation, it is more likely to belong to some groups that have extreme observations than other groups that are clustered near the mean. |
H: When should I use 'rbf' and 'polynomial' kernel trick in machine learning algo?
I have a problem about hate-speech classification using support-vector machine algorithm.
The task is to identify the sentence that contains 'positive' or 'negative' sentiment.
Which is the best Kernel Trick? ('rbf' or 'polynomial')
AI: This looks like a duplicate, though not a complete one, of the following Stack Overflow question:
https://stackoverflow.com/questions/27103456/linear-kernel-vs-rbf-kernel
In there is a link to a guide that states, starting on page 12, three scenarios to consider when choosing between a kernel or a linear method:
Number of instances < number of features
Both numbers of instances and features are large
Number of instances > number of features
To summarize, linear is suggested when the number of features is large, or at least larger than the number of instances. In any other case, a kernel would be suggested.
I also found a link to a good primer on deciding between linear and kernel methods here.
You can condense the advice to the fact that when using SVM decide on the simplest approach first (linear) and if that does not work use RBF as polynomial does not tend to offer any performance improvements above using RBF. |
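If in doubt, a hedged sketch of simply cross-validating the kernels on your own data (X_train and y_train are assumed to be your vectorized sentences, e.g. TF-IDF features, and the sentiment labels):

from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {"kernel": ["linear", "rbf", "poly"], "C": [0.1, 1, 10]}
search = GridSearchCV(SVC(), param_grid, cv=5, scoring="f1")
search.fit(X_train, y_train)
print(search.best_params_)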
H: An issue for sub-word tokenization preprocessing transformer
I'm stuck on executing the sub-word tokenization preprocessing to use a transformer.
According to the tutorial on the article, I have executed the sample code.
However, one function was not defined properly and no hint to fix it on the article.
If you have any ideas to fix the code, could you help me?
Error
22 return X, y
23
---> 24 X_train, y_train = preprocess(train_samples)
25 X_val, y_val = preprocess(val_samples)
12 def preprocess(samples):
13 tag_index = {tag: i for i, tag in enumerate(schema)}
---> 14 tokenized_samples = list(tqdm(map(tokenize_sample, samples)))
15 max_len = max(map(len, tokenized_samples))
16 X = np.zeros((len(samples), max_len), dtype=np.int32)
TypeError: 'module' object is not callable
Code
This code is from the article to build a model for named entity recognition using transformer.
import numpy as np
import tqdm

def tokenize_sample(sample):
    seq = [
        (subtoken, tag)
        for token, tag in sample
        for subtoken in tokenizer(token)['input_ids'][1:-1]
    ]
    return [(3, 'O')] + seq + [(4, 'O')]

def preprocess(samples):
    tag_index = {tag: i for i, tag in enumerate(schema)}
    tokenized_samples = list(tqdm(map(tokenize_sample, samples)))
    max_len = max(map(len, tokenized_samples))
    X = np.zeros((len(samples), max_len), dtype=np.int32)
    y = np.zeros((len(samples), max_len), dtype=np.int32)
    for i, sentence in enumerate(tokenized_samples):
        for j, (subtoken_id, tag) in enumerate(sentence):
            X[i, j] = subtoken_id
            y[i, j] = tag_index[tag]
    return X, y

X_train, y_train = preprocess(train_samples)
X_val, y_val = preprocess(val_samples)
What I tried
I checked that the function, tokenize_sample is executable with the below code.
However, I'm not sure how to insert it into the original code.
for sample in samples:
print(tokenize_sample(sample))
AI: Your import of tqdm is incorrect. In your code you are trying to use tqdm as a function (note the parentheses), whereas you only imported the top-level tqdm module, resulting in the error saying that the 'module' object is not callable (i.e. cannot be used as a function). Changing your import to from tqdm import tqdm imports the tqdm function and should solve the error. |
H: Sub-word tokenization preprocessing to use transformer
I'm stuck on executing the sub-word tokenization preprocessing to use a transformer.
According to the tutorial on the article, I have executed the sample code.
However, one function was not defined properly and no hint to fix it on the article.
If you have any ideas to fix the code, could you help me?
Error
22 return X, y
23
---> 24 X_train, y_train = preprocess(train_samples)
25 X_val, y_val = preprocess(val_samples)
12 def preprocess(samples):
13 tag_index = {tag: i for i, tag in enumerate(schema)}
---> 14 tokenized_samples = list(tqdm(map(tokenize_sample, samples)))
15 max_len = max(map(len, tokenized_samples))
16 X = np.zeros((len(samples), max_len), dtype=np.int32)
TypeError: 'module' object is not callable
Code
This code is from the article to build a model for named entity recognition using transformer.
import numpy as np
import tqdm

def tokenize_sample(sample):
    seq = [
        (subtoken, tag)
        for token, tag in sample
        for subtoken in tokenizer(token)['input_ids'][1:-1]
    ]
    return [(3, 'O')] + seq + [(4, 'O')]

def preprocess(samples):
    tag_index = {tag: i for i, tag in enumerate(schema)}
    tokenized_samples = list(tqdm(map(tokenize_sample, samples)))
    max_len = max(map(len, tokenized_samples))
    X = np.zeros((len(samples), max_len), dtype=np.int32)
    y = np.zeros((len(samples), max_len), dtype=np.int32)
    for i, sentence in enumerate(tokenized_samples):
        for j, (subtoken_id, tag) in enumerate(sentence):
            X[i, j] = subtoken_id
            y[i, j] = tag_index[tag]
    return X, y

X_train, y_train = preprocess(train_samples)
X_val, y_val = preprocess(val_samples)
What I tried
I checked that the function, tokenize_sample is executable with the below code.
However, I'm not sure how to insert it into the original code.
for sample in samples:
print(tokenize_sample(sample))
AI: This looks like a tqdm problem. Both the module tqdm and the main function tqdm have the same name. This often creates problems, as people will just write:
import tqdm
When the right import is:
from tqdm import tqdm
See here: https://stackoverflow.com/questions/39323182/tqdm-module-object-is-not-callable
(The first step might be to just remove tqdm from the line you want to execute to test if this is the origin of the problem) |
H: Model performance worsens after Cross Validation
I am training a logistic regression model on a dataset with only numerical features. I performed the following steps:-
1.) heatmap to remove collinearity between variables
2.) scaling using StandarScaler
3.) cross validation after splitting, for my baseline model
4.) fitting and predicting
Below is my code:-
# SPLITTING
train_x, test_x, train_y, test_y = train_test_split(data2, y, test_size=0.2, random_state=69)

# MODEL INSTANCE
model = LogisticRegression(random_state=69)

# SCALING
train_x2 = train_x.copy(deep=True)
test_x2 = test_x.copy(deep=True)
s_scaler = StandardScaler()
s_scaler.fit(train_x2)
s_scaled_train = s_scaler.transform(train_x2)
s_scaled_test = s_scaler.transform(test_x2)

# BASELINE MODEL
cross_val_model2 = -1 * cross_val_score(model, s_scaled_train, train_y, cv=5,
                                        n_jobs=-1, scoring='neg_mean_squared_error')
s_score = cross_val_model2.mean()

# FITTING AND PREDICTING
model.fit(s_scaled_train, train_y)
pred = model.predict(s_scaled_test)
mse = mean_squared_error(test_y, pred)
The CV score is 0.06 and the score after fitting and predicting is 0.23. I find this weird, as CV is a measure of how well your model performs. So I should at least get a score roughly equal to the CV score, right?
AI: A couple additional points:
With MSE or any error measure, the value is relative to the data. This implies that one cannot know what the difference between 0.06 and 0.23 represents: this difference might be very large or very small, it all depends on the distribution of the target.
As Sammy mentions, the small size of the data is likely to cause high variations depending on the split. In order to understand the difference it would be useful to observe the performance variation across CV folds, for example by calculating the standard deviation: if the variation is high then the model is not stable, and that could explain the high difference. |
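A minimal sketch of checking that fold-to-fold variation, reusing the arrays from the question:

from sklearn.model_selection import cross_val_score

fold_mse = -1 * cross_val_score(model, s_scaled_train, train_y, cv=5,
                                n_jobs=-1, scoring='neg_mean_squared_error')
print(fold_mse)                            # MSE of each individual fold
print(fold_mse.mean(), fold_mse.std())     # average and spread across folds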
H: Select current column when iterating dataframe
I have a dataframe with 1k+ columns like this (I cut it).
I want to iterate over the dataframe and check row by row: if the value is True, then assign it the Incident Type.
How do I get the current column it is iterating over? I thought of something like this:
for index, rows in df3.iterrows():
    if df3.loc[index, current_column] == True:   # current_column is what I don't know how to get
        df3.loc[index, 'Incident Type'] = '#something'
AI: Iterating over rows in pandas should always be used as a last resort, as it is quite inefficient. Always try to find a way to vectorize your operations over the dataframe, which in this case is quite easy. If you want to set the Incident Type column based on the malware-type column that has the value True, you can use the following code:
for col in [col for col in df.columns if col.startswith("actions.malware")]:
df.loc[df[col] == True, "Incident Type"] = col.split(".")[-1]
This code loops over all columns in the dataframe where the column name starts with actions.malware and sets the value of the Incident Type column to the last word after the '.' in the column name only for those rows where the value in that column is True. |
H: Labeling and aggregating features issue
I am trying build a simple binary classifier (some tree based algorithm for now) and my training data will have features aggregated at the user level. So I'll have a unique records of each user. These aggregated features are like "number of logged in sessions", "number of times profile button was clicked" etc - essentially these are website browse behavior features.
What I am trying to predict is if someone would be interested in subscribing or not. Some users might subscribe immediately after opening an account, some might do after a few days and some may not at all. My labels will be 1 (subscribed) and 0 (not subscribed).
Customers can only subscribe after logging in. So in my dataset I'll have users whose login counts range from 1 to N. Hence my aggregated features will also have a wide range of values, because users that have logged in, say, only once will have smaller feature values than users who have logged in multiple times.
My problem is twofold:
Label generation - Should I only select users who have, say, at least 3 logged-in sessions to assign labels of subscribed or not? I ask this because users who have only one session and have not subscribed will get the label 0 (not subscribed). I don't think I should assign them label 0, as I don't think I have enough data to correctly conclude that label 0 is apt for them.
Say I select users who have at least 5 logged-in sessions and generate aggregate features. I feel my model won't be trained accurately if I have variation in features because of the varying number of logged-in sessions (e.g. user A has 3 sessions, hence aggregate feature values will be small compared to user B, who has, say, 10 logged-in sessions). Maybe I should level the field by aggregating data from only the first 3 logged-in sessions for each user and see if they subscribed or not in the future.
Am I thinking about this correctly ?
AI: The goal is to predict if a user will subscribe in the future. By definition you cannot have labelled data now about what people will do in the future. However you can phrase the problem like this: given a set of users at time $t$, predict whether they will subscribe by time $t+u$, where $u$ is for instance 1 year, 1 month or whatever duration fits your data.
Under this definition you can use any past data you have: for every unsubscribed user at time $t$ (e.g. one year ago), label them as "will subscribe" or "won't subscribe" using the data that you have now about them. You can even collect data at different points $t$ in time and calculate for each point $t$ who among the users will be subscribed within a year (i.e. the same user could potentially be used several times as an instance).
Naturally for every time $t$ you should only use users who were not subscribed at this time $t$. |
H: Transforming time series into static features?
I'm working on a side project where I have a mixture of static data and time series, and the goal would be to perform clustering on the data. There's a bunch of data sources, but basically the main thing would be some static information about users (like age, sex, location etc.) and some time series data (user 123 did xyz at 2pm, then yxz at 3pm, then yyy at 4pm).
The goal would be to perform a clustering/segmentation via unsupervised learning to create user segments. The most data I have is from the time series kind, but I'd like to incorporate both time series and the static data into my model.
The question is, would it be viable to transform the time series data into static? If yes, what would be a method for this? Or, what would be some methods to perform clustering on time series data?
I'm currently thinking maybe an autoencoder could help me somehow, but I'm not entirely sure how. What are some common methods for this (if any)? Can you maybe give me some pointers in where to start looking? Thank you!
AI: So the first thing that comes to mind for me is to ask: "What is the end goal"? Are you trying to classify users by how active they are and at what times? If so, I would refer you to this paper here. The relevant section is 2.3, where they explain that there are two main approaches to dealing with this issue in the literature.
"The first approach is to feed time-series features to RNN and then concatenate with static features."
"The second approach for combining the two types of features is to include the time invariant features as part of the temporal features and feed them together to RNN units."
In short, you can either train it all in one model, or first use the time-series model and then feed its output together with the time-invariant features to another model. Where the paper mentions these solutions, there are further citations to other sources that discuss this further.
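As a rough illustration of the first approach, here is a minimal Keras sketch (the input sizes, layer sizes and names are made-up placeholders, not taken from the paper or from your data):
from tensorflow.keras import layers, Model

n_timesteps, n_series_feats, n_static_feats = 24, 3, 5   # assumed dimensions

ts_input = layers.Input(shape=(n_timesteps, n_series_feats), name='time_series')
static_input = layers.Input(shape=(n_static_feats,), name='static')

ts_summary = layers.LSTM(32)(ts_input)                     # fixed-size summary of the sequence
combined = layers.Concatenate()([ts_summary, static_input])
embedding = layers.Dense(16, activation='relu')(combined)  # could be fed to a clustering step

model = Model(inputs=[ts_input, static_input], outputs=embedding)
model.summary()
The resulting embedding (possibly trained as part of an autoencoder, as you suggested) can then be clustered with any standard algorithm.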
H: An example of explainable, but not interpretable ML model
This post attempts to explain the difference between explainability and interpretability of ML models. However, the explanation is somewhat unclear. Can somebody provide specific examples of models that are explainable but not interpretable (or the other way round)?
AI: I am following the definitions by Cynthia Rudin (and the article by Keith O’Rourke which is based on it) here:
Explainable ML – using a black box and explaining it afterwards.
Interpretable ML – using a model that is not black box.
Accordingly, a decision tree, for example, is interpretable since it inherently makes its decision explicit through the nodes/split points. And according to above definitions it is not explainable since it is not a black box model (interpretable models are not a subset of explainable models according to the definitions).
In contrast, a CNN, for example, is a black box model which implicitly encodes its decision-making procedure. However, ex-post analysis is an approach to make such a model explainable. You can, for example, assess the feature map activations per layer to do so, as done in this article:
This analysis reveals, for example, that layer 2 gets activated by patterns such as edges, while layer 3 detects more complex patterns.
(Somewhat contradicting the given definitions you could say that explainable models require a larger degree of interpretation while interpretable models explain their decisions inherently - but that is only my wording and not how the authors of above articles phrase it.) |
H: Whats the minimum size sample required for a LSTM RNN model?
I have a data set of 100 rows x 100 to 300 columns. Will an LSTM RNN model work for my data or do I need more data?
If my sample size is a problem are there other RNN architectures capable of modeling using small dataset sizes?
AI: This totally depends on the actual data and on the chosen hyperparameters (e.g. hidden size, regularization). Only experimentation will tell you if you can get good performance.
There is nothing inherently wrong with using an LSTM with small data but, given how small your dataset is, you will probably need a small hidden size and a lot of regularization to avoid overfitting. |
H: Perfect scores for multiclass classification
I am working on a multiclass classification problem with 3 (1, 2, 3) classes being perfectly distributed. (70 instances of each class resulting in (210, 8) dataframe). Now my data has all the 3 classes distributed in order i.e first 70 instances are class1, next 70 instances are class 2 and last 70 instances are class 3. I know that this kind of distribution will lead to good score on train set but poor score on test set as the test set has classes that the model has not seen. So I used stratify parameter in train_test_split. My code:-
# SPLITTING
train_x, test_x, train_y, test_y = train_test_split(data2, y, test_size = 0.2, random_state = 69, stratify = y)
cross_val_model = cross_val_score(pipe, train_x, train_y, cv = 5, n_jobs = -1, scoring = 'f1_macro')
s_score = cross_val_model.mean()
def objective(trial):
    model__n_neighbors = trial.suggest_int('model__n_neighbors', 1, 20)
    model__metric = trial.suggest_categorical('model__metric', ['euclidean', 'manhattan', 'minkowski'])
    model__weights = trial.suggest_categorical('model__weights', ['uniform', 'distance'])
    params = {'model__n_neighbors' : model__n_neighbors,
              'model__metric' : model__metric,
              'model__weights' : model__weights}
    pipe.set_params(**params)
    return np.mean(cross_val_score(pipe, train_x, train_y, cv = 5, n_jobs = -1, scoring = 'f1_macro'))
knn_study = optuna.create_study(direction = 'maximize')
knn_study.optimize(objective, n_trials = 10)
knn_study.best_params
optuna_gave_score = knn_study.best_value
pipe.set_params(**knn_study.best_params)
pipe.fit(train_x, train_y)
pred = pipe.predict(test_x)
c_matrix = confusion_matrix(test_y, pred)
c_report = classification_report(test_y, pred)
Now the problem is that I am getting perfect scores on everything. The f1 macro score from performing cv is 0.898. Below are my confusion matrix and classification report:-
14 0 0
0 14 0
0 0 14
Classification Report:-
precision recall f1-score support
1 1.00 1.00 1.00 14
2 1.00 1.00 1.00 14
3 1.00 1.00 1.00 14
accuracy 1.00 42
macro avg 1.00 1.00 1.00 42
weighted avg 1.00 1.00 1.00 42
Am I overfitting or what?
AI: Sorry for the delay.
Having 100% accuracy means that the task is easy enough and the model has no trouble performing it.
Just a quick reminder of how the K-NN algorithm works: during training, you put your data in it and it simply memorises the data. Then, when using it on new values (testing here), it just looks for the nearest neighbours and looks at the classes of these points. After finding that the K nearest neighbours are from class X, it predicts that the input is from class X as well.
Here for example, the algorithm finds out that most points (the 'most' depends on the K you choose) around your input are from class 2, so the input should belong to class 2 as well.
Now here is how I would try to analyse the data you feed to the KNN:
import matplotlib.pyplot as plt
import numpy as np

data2 = np.random.randn(40, 2)   # Replace with your data
y = np.random.randint(0, 3, 40)  # Replace with your data
data2[:, 1] = data2[:, 1] + 4*y  # Replace with your data

for feature in range(data2.shape[1]):
    plt.figure()
    plt.title('feature ' + str(feature))
    # one scatter per class: x-axis is the feature value, y-axis is the class label
    plt.scatter(data2[np.where(y == 0), feature].squeeze(), y[np.where(y == 0)], color='red')
    plt.scatter(data2[np.where(y == 1), feature].squeeze(), y[np.where(y == 1)], color='blue')
    plt.scatter(data2[np.where(y == 2), feature].squeeze(), y[np.where(y == 2)], color='green')
plt.show()
In my example, I have 40 samples with 2 features each and 3 classes and here are the 2 different plots I get :
On these plots, you get each class values on a different line. So you can see if your classes are clustered or not. A feature which output something similar to feature 0 is nearly useless for the KNN classification as classes seem to have random values and are not forming clusters. A feature like feature 1 is a feature that provide much information to the classifier as we can clearly see the difference between classes.
Hope this helps, if you have any questions, feel free to ask |
H: Which is meant by +/-9.2e18 years in timespan?
I was able to convert 9.2e18 AD to a date, but I am confused about the exact date. Which date is 9.2e18 AD and which is 9.2e18 BC?
Time span (absolute) - [9.2e18 BC, 9.2e18 AD] i.e +/- 9.2e18 years
NumPy documentation, section "Datetime Units" under "Datetimes and Timedeltas"
Code | Meaning | Time span (relative) | Time span (absolute)
Y | year | +/- 9.2e18 years | [9.2e18 BC, 9.2e18 AD]
M | month | +/- 7.6e17 years | [7.6e17 BC, 7.6e17 AD]
W | week | +/- 1.7e17 years | [1.7e17 BC, 1.7e17 AD]
D | day | +/- 2.5e16 years | [2.5e16 BC, 2.5e16 AD]
I have converted 9.2e18 (an epoch value - I believe it is represented in epochs) to a date. It gave me a very big date which I did not expect. Are my assumptions accurate?
How many years are covered from 9.2e18 BC relative to the epoch date 1970-01-01?
What are some examples of using this time span, so I can judge my assumptions about obtaining the dates 9.2e18 BC and 9.2e18 AD with NumPy units?
AI: From the documentation you referred to: "The length of the span is the range of a 64-bit integer times the length of the date or unit."
A 64-bit integer has values from -2^63 to 2^63-1, which is the same as from -9.2e18 to 9.2e18. So, the time span column shows you which dates you could cover if you used only the corresponding unit. Note, for example, that the time span for years is 12 times bigger than the time span for months and about 52 times bigger than the time span for weeks.
So, the date 9.2e18 BC is literally 9.2 quintillion years before Christ.
UPD with clarification to comment
First of all, there are two different concepts: the date (like 10th of August 2021) and the time duration (like two years). The latter is referred to as a timedelta in Python. You can't add or subtract years from a date in numpy because different years contain different amounts of time (365 or 366 days). However, you can subtract basically any number of days like this:
start_date = np.datetime64('0000-01-01')
days_to_substract = np.timedelta64(100, 'D')
print(start_date - days_to_substract) # initial date minus 100 days
>>> -001-09-23
Note that you can in fact manipulate dates in vanilla Python with datetime, but as mentioned in the other answer, the dates cannot be earlier than 0001-01-01 in basic Python without numpy.
H: In keras seq2seq model, what is the difference between `model.predict()` and the inference model?
I am looking into seq2seq model in keras, for example, this blog post from keras or this. All the examples I have seen have some inference model, that depicts the original model. That inference model is then used to make the predictions.
My question is: why can't we just call model.predict()? I mean, we can, because I have used it and it works, but what is the difference between these two approaches? Is it wrong to use model.predict() and apply the reverse word tokenizer to the argmax?
AI: I understand that with "All the examples I have seen have some inference model, that depicts the original model" you mean that there is a function that performs complex operations with the model instead of just invoking model.predict(). Such a function is called decode_sequence in the linked examples.
Note that you can't just invoke model.predict() once because you don't have any inputs to feed to the decoder.
The thing with this type of seq2seq models is that they are autoregressive. This means that they predict the next token based on its previous predictions. Therefore, you need to predict one token at a time: first, you predict the first token, then you invoke again the model with such a prediction to get the next token, and so on. This is precisely what function decode_sequence does: it just invokes model.predict() to get the next token, until the stop condition is met, that is either predicting the \n token or having predicted the maximum number of tokens. |
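To make the loop concrete, here is a self-contained toy sketch of the autoregressive idea; toy_predict is a made-up stand-in for the repeated model.predict() calls inside decode_sequence (the real function also passes the encoder/decoder states around):
import numpy as np

def toy_predict(prefix):
    # stand-in for model.predict(): return a distribution over a 5-token vocabulary
    rng = np.random.default_rng(len(prefix))
    p = rng.random(5)
    return p / p.sum()

STOP_TOKEN, MAX_LEN = 4, 10
decoded = []
while len(decoded) < MAX_LEN:
    next_token = int(np.argmax(toy_predict(decoded)))  # one predict call per output token
    if next_token == STOP_TOKEN:                       # stop condition
        break
    decoded.append(next_token)
print(decoded)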
H: Problem with CNN
I am using the BreakHis database. More specifically, I am trying to classify the 400X images. The sizes of the images are $700x460x3$.
Here are the details of the dataset. Also, here is the code for the classification:
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator()
train_it = datagen.flow_from_directory( 'C:/Users/ahmed.zaalouk/Downloads/train' , class_mode= 'categorical' , batch_size=32,color_mode='rgb')
val_it = datagen.flow_from_directory( 'C:/Users/ahmed.zaalouk/Downloads/validation' , class_mode= 'categorical' , batch_size=32,color_mode='rgb')
test_it = datagen.flow_from_directory( 'C:/Users/ahmed.zaalouk/Downloads/test' , class_mode= 'categorical' , batch_size=32,color_mode='rgb')
from keras.regularizers import l2
from keras.models import Sequential
from keras.layers import Add, Conv2D, MaxPooling2D, Dropout, Flatten, Dense, BatchNormalization, Activation
from tensorflow.keras import activations
# Creating the model
CNN_model = Sequential()
# The First Block
CNN_model.add(Conv2D(32, kernel_size=3,kernel_initializer='he_uniform', kernel_regularizer=l2(0.0005), padding='same', input_shape=(700, 460,3)))
CNN_model.add(Activation(activations.relu))
CNN_model.add(BatchNormalization())
CNN_model.add(MaxPooling2D(2, 2))
# The Second Block
CNN_model.add(Conv2D(32, kernel_size=3, kernel_initializer='he_uniform', kernel_regularizer=l2(0.0005), padding='same'))
CNN_model.add(Activation(activations.relu))
CNN_model.add(BatchNormalization())
CNN_model.add(MaxPooling2D(2, 2))
from keras.optimizers import Adam, SGD
from keras.engine.training import Model
from keras import backend as K, regularizers
from keras import losses
CNN_model.add(Flatten())
# Layer 1
CNN_model.add(Dense(512)) # 512 units
# Layer 2
CNN_model.add(Dense(512, activation='relu')) # 512 units
CNN_model.add(Dropout(0.5))
# Layer 3
CNN_model.add(Dense(8, activation='softmax'))
CNN_model.compile(optimizer="Adam", loss = 'categorical_crossentropy', metrics = ['acc'])
CNN_model.fit_generator(train_it, steps_per_epoch=19, validation_data=val_it, validation_steps=5)
Here is the model summary :
Model: "sequential_11"
_________________________________________________________________
Layer (type) Output Shape Param
=================================================================
conv2d_29 (Conv2D) (None, 700, 460, 32) 896
_________________________________________________________________
activation_27 (Activation) (None, 700, 460, 32) 0
_________________________________________________________________
batch_normalization_27 (Batc (None, 700, 460, 32) 128
_________________________________________________________________
max_pooling2d_27 (MaxPooling (None, 350, 230, 32) 0
_________________________________________________________________
conv2d_30 (Conv2D) (None, 350, 230, 32) 9248
_________________________________________________________________
activation_28 (Activation) (None, 350, 230, 32) 0
_________________________________________________________________
batch_normalization_28 (Batc (None, 350, 230, 32) 128
_________________________________________________________________
max_pooling2d_28 (MaxPooling (None, 175, 115, 32) 0
_________________________________________________________________
flatten_8 (Flatten) (None, 644000) 0
_________________________________________________________________
dense_24 (Dense) (None, 512) 329728512
_________________________________________________________________
dense_25 (Dense) (None, 512) 262656
_________________________________________________________________
dropout_8 (Dropout) (None, 512) 0
_________________________________________________________________
dense_26 (Dense) (None, 8) 4104
=================================================================
Total params: 330,005,672
Trainable params: 330,005,544
Non-trainable params: 128
_________________________________________________________________
None
I am getting this error and I don't know how to fix it :
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-80-7fdd4a4a32e1> in <module>
----> 1 CNN_model.fit_generator(train_it, steps_per_epoch=19, validation_data=val_it, validation_steps=5)
~\Anaconda3\lib\site-packages\keras\engine\training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
1916 'will be removed in a future version. '
1917 'Please use `Model.fit`, which supports generators.')
-> 1918 return self.fit(
1919 generator,
1920 steps_per_epoch=steps_per_epoch,
~\Anaconda3\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1156 _r=1):
1157 callbacks.on_train_batch_begin(step)
-> 1158 tmp_logs = self.train_function(iterator)
1159 if data_handler.should_sync:
1160 context.async_wait()
~\Anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py in __call__(self, *args, **kwds)
887
888 with OptionalXlaContext(self._jit_compile):
--> 889 result = self._call(*args, **kwds)
890
891 new_tracing_count = self.experimental_get_tracing_count()
~\Anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py in _call(self, *args, **kwds)
948 # Lifting succeeded, so variables are initialized and we can run the
949 # stateless function.
--> 950 return self._stateless_fn(*args, **kwds)
951 else:
952 _, _, _, filtered_flat_args = \
~\Anaconda3\lib\site-packages\tensorflow\python\eager\function.py in __call__(self, *args, **kwargs)
3021 (graph_function,
3022 filtered_flat_args) = self._maybe_define_function(args, kwargs)
-> 3023 return graph_function._call_flat(
3024 filtered_flat_args, captured_inputs=graph_function.captured_inputs) # pylint: disable=protected-access
3025
~\Anaconda3\lib\site-packages\tensorflow\python\eager\function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
1958 and executing_eagerly):
1959 # No tape is watching; skip to running the function.
-> 1960 return self._build_call_outputs(self._inference_function.call(
1961 ctx, args, cancellation_manager=cancellation_manager))
1962 forward_backward = self._select_forward_and_backward_functions(
~\Anaconda3\lib\site-packages\tensorflow\python\eager\function.py in call(self, ctx, args, cancellation_manager)
589 with _InterpolateFunctionError(self):
590 if cancellation_manager is None:
--> 591 outputs = execute.execute(
592 str(self.signature.name),
593 num_outputs=self._num_outputs,
~\Anaconda3\lib\site-packages\tensorflow\python\eager\execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
57 try:
58 ctx.ensure_initialized()
---> 59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
60 inputs, attrs, num_outputs)
61 except core._NotOkStatusException as e:
InvalidArgumentError: Input to reshape is a tensor with 4194304 values, but the requested shape requires a multiple of 644000
[[node sequential_11/flatten_8/Reshape (defined at C:\Users\ahmed.zaalouk\Anaconda3\lib\site-packages\keras\layers\core.py:672) ]] [Op:__inference_train_function_12909]
Errors may have originated from an input operation.
Input Source operations connected to node sequential_11/flatten_8/Reshape:
sequential_11/max_pooling2d_28/MaxPool (defined at C:\Users\ahmed.zaalouk\Anaconda3\lib\site-packages\keras\layers\pooling.py:355)
sequential_11/flatten_8/Const (defined at C:\Users\ahmed.zaalouk\Anaconda3\lib\site-packages\keras\layers\core.py:667)
Function call stack:
train_function
Edit : The number of images in the training set is 1275
The number of images in the validation set is 365
AI: The default target size in flow_from_directory is 256 x 256 (height x width). So your data is resized to 256 x 256 while reading, while you specified input_shape=(700, 460, 3) in the first Conv2D layer. With 256 x 256 inputs, the tensor reaching Flatten has 64 x 64 x 32 = 131072 values per image (4194304 for a batch of 32), whereas the layer was built expecting 175 x 115 x 32 = 644000 values per image - hence the shape error.
ImageDataGenerator.flow_from_directory(
directory,
target_size=(256, 256),
color_mode="rgb",
classes=None,
class_mode="categorical",
batch_size=32,
shuffle=True,
seed=None,
save_to_dir=None,
save_prefix="",
save_format="png",
follow_links=False,
subset=None,
interpolation="nearest",
) |
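One way to make the generator consistent with the model, keeping the 700x460 input and reusing the path from your question, is to pass target_size explicitly (a sketch, adapt to your setup):
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator()
train_it = datagen.flow_from_directory(
    'C:/Users/ahmed.zaalouk/Downloads/train',
    target_size=(700, 460),   # (height, width), must match input_shape=(700, 460, 3)
    class_mode='categorical',
    batch_size=32,
    color_mode='rgb')
Alternatively, keep the default target_size and change the first layer to input_shape=(256, 256, 3).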
H: Invalid shape (4, 460, 513) for image data
I am using read_image to read the image.
from torchvision.io import read_image
image = read_image("/content/train/000001-11.jpg")
Now, when I try to find the shape of the image, I get $(4, 460, 513)$ as the image shape.
But, when I use opencv to read the image, I get $(460, 513, 3)$ as the image shape.
img=cv2.imread("/content/train/000001-11.jpg")
Could anyone explain to me why this happens? Why are there 4 channels instead of three?
I tried to print the 4 channels for a particular case of the former case. I found the last channel has the value 255 for all cells.
I need to plot the image in both cases. I am unable to plot it in the case of read_image.
How to plot in case of read_image?
AI: read_image from torchvision.io by default reads the image using the reading mode ImageReadMode.UNCHANGED, meaning that it reads the image as it is stored on disk (see the documentation). In this case, that means you get an image with four channels: (R, G, B, A), with A being the alpha channel. imread, however, by default uses the flag cv.IMREAD_COLOR to read the image (see the documentation), meaning that you only get three colour channels (OpenCV returns them in BGR order). The difference in the number of channels is simply caused by how the images are read; depending on what you want, you can change the default mode in read_image. Also note that torchvision returns tensors in channels-first order (C, H, W), while OpenCV returns arrays in channels-last order (H, W, C), which is why the channel dimension appears in a different position.
There are several ways to plot an image with four channels, using matplotlib is one of them. The imshow function of matplotlib allows you to plot RGBA images as per the documentation. |
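For example, a minimal sketch that forces an RGB read and moves the channel axis last so that imshow accepts it (the path is the one from your question):
import matplotlib.pyplot as plt
from torchvision.io import read_image, ImageReadMode

img = read_image("/content/train/000001-11.jpg", mode=ImageReadMode.RGB)  # tensor of shape (3, H, W)
plt.imshow(img.permute(1, 2, 0).numpy())  # imshow expects (H, W, C)
plt.show()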
H: Why does the test set class imbalance influences my model's performance?
Considering a balanced training set, I noticed that the results of a classification primarily depend on the class imbalance of the test set.
As shown in this article, unless the classes are perfectly separable, the performance (precision & recall) of a model for a given class will always decrease based on the imbalance of the class. Ie: the more the test set is imbalanced, the less the model is capable of classifying the minority class.
This means that for any given model, the classification performance will always primarily depend on the balance of the data you are testing it with.
How can the imbalance of a test set define the predictive capabilities of my model once it is already trained? Does the performance of a classifier always depend on the class balance of the target population? What is the mathematical reasoning behind this?
AI: There is a confusion between the "true" performance of the classifier, which is indeed fixed once the classifier is trained, and the observed performance on a particular test set.
The "true" performance can only be estimated, and it should be estimated using a random sample which follows the "true" distribution of the data. Supervised learning always assumes a "true population" and both the training set and test set are supposed to be subsets of this true population.
If one uses a test set with a different distribution then there's no guarantee that the performance will be the same as the true performance. This can be relevant in some experiments, but it's not a proper evaluation of the classifier itself.
Intuitively this can be compared to a test given to some students after they have studied some exercises:
If the test questions are similar to the questions they studied in the exercises then their mark reflects their true performance, i.e. how well they learned from the exercises.
If the professor makes the test with questions which are not seen or rarely seen in the exercises, then the mark is likely to be lower even for the good students.
Edit: study of the specific case of balanced training set vs. imbalanced test set (asked by OP in comments).
Re-edited after fixing mistake found by OP
This is an interesting case to study, thanks for asking :)
Using your code as a basis I tested the following code:
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
#from scikitplot.metrics import plot_roc
#from scikitplot.metrics import plot_precision_recall
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import f1_score
from sklearn.metrics import classification_report
import numpy as np
from collections import Counter
from imblearn.under_sampling import RandomUnderSampler
import statistics as s
from collections import defaultdict
import random
from sklearn import tree
N_RUNS = 20
OPT_KNN = True
def fit_and_apply(X_train, y_train, X_test, y_test):
#training and testing on balanced data
if OPT_KNN:
clf = KNeighborsClassifier(n_neighbors=5)
clf = clf.fit(X_train, y_train)
else:
clf = tree.DecisionTreeClassifier().fit(X_train, y_train)
y_pred = clf.predict(X_test)
y_pred_tr = clf.predict(X_train)
# print('train acc. : ',accuracy_score(y_train, y_pred_tr))
# print('test acc. : ',accuracy_score(y_test, y_pred))
# print('confusion matrix: \n',confusion_matrix(y_test, y_pred))
# print(classification_report(y_test, y_pred))
conf_mat_train = confusion_matrix(y_train, y_pred_tr)
conf_mat_test = confusion_matrix(y_test, y_pred)
report_train = classification_report(y_train, y_pred_tr,output_dict=True)
report_test = classification_report(y_test, y_pred,output_dict=True)
return conf_mat_train, conf_mat_test, report_train, report_test
def print_results(proportions, perf,summary=True):
print("")
for k,v in proportions.items():
print("Prop. ",k,"=",v)
for t,d0 in perf.items():
for c,d1 in d0.items():
if summary:
print(t,"class",c,"P,R,F:\t",end='')
for m,values in d1.items():
if m != "support":
if summary:
print("%.3f" % (s.mean(values)),end="\t")
else:
print(t,"class",c,m,":"," ".join([ "%.3f" % (p) for p in values ]), ". MEAN:",s.mean(values))
if summary:
print("")
def accu(conf_mat):
correct = conf_mat[0][0]+conf_mat[1][1]
incorrect= conf_mat[0][1]+conf_mat[1][0]
return correct/(correct+incorrect)
# perf[train|test][class][measure] = list of values
perf = defaultdict(lambda: defaultdict(lambda: defaultdict(list)))
proportions = {}
avg_conf_mat = defaultdict(lambda: [[0,0],[0,0]])
print("*** BALANCED -",end='')
for i in range(N_RUNS):
print(i,end=' ',flush=True)
#creating balanced dataset
X, y = make_classification(n_samples=10000, n_features=5, n_informative=5, n_redundant=0, n_classes=2, n_clusters_per_class=2, weights=None, flip_y=0, class_sep=0.5, hypercube=True, shift=0.0, scale=1.0, shuffle=True, random_state=None)
proportions["data"] = Counter(y)
#splitting data
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3, random_state=None, stratify=y)
proportions["train"] = Counter(y_train)
proportions["test"] = Counter(y_test)
conf_mat_train,conf_mat_test,report_train, report_test = fit_and_apply(X_train, y_train, X_test, y_test)
for c in range(2):
for m,v in report_train[str(c)].items():
perf["train"][c][m].append(report_train[str(c)][m])
perf["test"][c][m].append(report_test[str(c)][m])
for i in range(2):
for j in range(2):
avg_conf_mat["train"][i][j] += conf_mat_train[i][j] / N_RUNS
avg_conf_mat["test"][i][j] += conf_mat_test[i][j] / N_RUNS
print_results(proportions, perf)
print("avg confusion matrix train: ",avg_conf_mat["train"]," avg accuracy=",accu(avg_conf_mat["train"]))
print("avg confusion matrix test: ",avg_conf_mat["test"]," avg accuracy=",accu(avg_conf_mat["test"]))
print("")
perf = defaultdict(lambda: defaultdict(lambda: defaultdict(list)))
proportions = {}
avg_conf_mat = defaultdict(lambda: [[0,0],[0,0]])
print("*** IMBALANCED A -",end='')
for i in range(N_RUNS):
print(i,end=' ',flush=True)
#making imbalanced data set (80%-20%)
imbalance = (0.8,0.2)
X, y = make_classification(n_samples=10000, weights=imbalance, n_features=5, n_informative=5, n_redundant=0, n_classes=2, n_clusters_per_class=2, flip_y=0, class_sep=0.5, hypercube=True, shift=0.0, scale=1.0, shuffle=True, random_state=None)
# print(Counter(y))
proportions["data"] = Counter(y)
#splitting data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=None, stratify=y)
#undersampling majority class to obtain balanced training set
res = RandomUnderSampler()
X_train_res, y_train_res = res.fit_resample(X_train, y_train)
# print("y_train_res:",Counter(y_train_res))
# print("y_test:",Counter(y_test))
proportions["train"] = Counter(y_train_res)
proportions["test"] = Counter(y_test)
conf_mat_train,conf_mat_test,report_train, report_test = fit_and_apply(X_train, y_train, X_test, y_test)
for c in range(2):
for m,v in report_train[str(c)].items():
perf["train"][c][m].append(report_train[str(c)][m])
perf["test"][c][m].append(report_test[str(c)][m])
for i in range(2):
for j in range(2):
avg_conf_mat["train"][i][j] += conf_mat_train[i][j] / N_RUNS
avg_conf_mat["test"][i][j] += conf_mat_test[i][j] / N_RUNS
print_results(proportions, perf)
print("avg confusion matrix train: ",avg_conf_mat["train"]," avg accuracy=",accu(avg_conf_mat["train"]))
print("avg confusion matrix test: ",avg_conf_mat["test"]," avg accuracy=",accu(avg_conf_mat["test"]))
print("")
perf = defaultdict(lambda: defaultdict(lambda: defaultdict(list)))
proportions = {}
avg_conf_mat = defaultdict(lambda: [[0,0],[0,0]])
print("*** IMBALANCED B -",end='')
for i in range(N_RUNS):
print(i,end=' ',flush=True)
#creating balanced dataset
X, y = make_classification(n_samples=10000, n_features=5, n_informative=5, n_redundant=0, n_classes=2, n_clusters_per_class=2, weights=None, flip_y=0, class_sep=0.5, hypercube=True, shift=0.0, scale=1.0, shuffle=True, random_state=None)
# print(Counter(y))
proportions["data"] = Counter(y)
#splitting data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=None, stratify=y)
#undersampling class 1 to obtain imbalanced test set
X_test_res = []
y_test_res = []
for i,c in enumerate(y_test):
# pick value in [0,1]
p = random.uniform(0,1)
if c == 0 or p<0.1:
X_test_res.append(X_test[i])
y_test_res.append(y_test[i])
proportions["train"] = Counter(y_train_res)
proportions["test"] = Counter(y_test_res)
conf_mat_train,conf_mat_test,report_train, report_test = fit_and_apply(X_train, y_train, X_test_res, y_test_res)
for c in range(2):
for m,v in report_train[str(c)].items():
perf["train"][c][m].append(report_train[str(c)][m])
perf["test"][c][m].append(report_test[str(c)][m])
for i in range(2):
for j in range(2):
avg_conf_mat["train"][i][j] += conf_mat_train[i][j] / N_RUNS
avg_conf_mat["test"][i][j] += conf_mat_test[i][j] / N_RUNS
print_results(proportions, perf)
print("avg confusion matrix train: ",avg_conf_mat["train"]," avg accuracy=",accu(avg_conf_mat["train"]))
print("avg confusion matrix test: ",avg_conf_mat["test"]," avg accuracy=",accu(avg_conf_mat["test"]))
print("")
The two main modifications are:
Running every experiment N_RUNS times in order to get a good estimate of the performance in every case. This is the same principle as cross-validation except that the generation of the data is included. Also I set random_state to None everywhere to avoid any bias.
Your version of the imbalanced experiment is included as "imbalanced A". I added another version "B" where the imbalance is produced directly in the test set. In fact the results with version A show that the method of undersampling a class from an imbalanced dataset is not equivalent to training on a balanced training set, certainly because of differences in the features as generated by make_classification (I don't know the details). This is visible in the fact that the 2 classes perform differently on the training set, something which is not supposed to happen if the training data is balanced.
I think your version (called A in my code) is an interesting illustration of the point that I was making above: the "true performance" can only be found if both the training set and the test set follow the "true distribution" of the data. Btw there's an ambiguity when we talk about "distribution of the data", people often assume that it's only the distribution of the classes, but in general it's about the distribution of the full instances (features+class) because otherwise the statistical relation between features and classes is potentially lost. In the case of version A the training set doesn't follow the "true distribution" of the data whereas the test set does.
[edited]
Now if we compare the performance obtained with the imbalanced test set in option B to the performance obtained with the balanced test set, the performance still differs in F1-score. Let's see what happens in detail:
For both classes, the recall values are practically the same as for the balanced test set. This is because the proportion of correctly identified instances among true instances of this class stays the same, even though the number of instances in class 1 is lower.
Importantly this implies that the accuracy is (nearly) identical, since the global proportion of correctly identified instances is directly based on these two proportions.
However the precision values for the two classes differ, thus causing the F1-scores to differ as well. This is worth a detailed explanation by looking at the confusion matrices:
From the point of view of class 0, the precision increases because there are less possibilities of FP errors since there are less instances of class 1.
From the point of view of class 1, the precision decreases because there are proportionally more possibilities of FP errors: the number of TP instances for class 1 has decreased, so the precision value is weighted down by the comparatively high number of class 0 instances, mechanically causing a higher proportion of FPs for class 1.
What this means is that the difference in precision (and F1-score) compared to the balanced case is an artifact of the new distribution of classes: while the model has exactly the same chance to correctly recognize an instance of either class, its F1-score performance is lower for class 1 and higher for class 0. Btw this is a good example of the difficulty to choose a global performance metric: accuracy (or equivalently micro f1-score) is identical as in the balanced case, but macro f1-score differs. In this case I would consider that the performance is the same in reality, but strictly speaking it can be seen as different indeed. |
H: How to properly present data set with categorical, numeric and time series data?
I have a following toy dataset example with target variable repair_type
id | car | mileage | repair_type | sex | age
1 | Honda | 12000 | engine | 1 | 50
1 | Honda | 12000 | suspension | 1 | 50
1 | Honda | 15000 | brakes | 1 | 50
Basically the dataset represents that some customer with id 1, at mileage 12000, repaired the engine and suspension. After a while he returned and at mileage 15000 repaired the brakes. I clearly understand that mileage and repair_type should be treated as time series data. I also have categorical and numeric variables. Should I recombine the dataset? Should every customer's records be transposed into a single record? In that case, with mixed types of data, what ML algorithm should I try to predict repair_type?
AI: If I got it right, the application of your model would be predicting the most likely repair type given characteristics like: car brand, mileage...
The format of your dataset should still be valid for your classification model, but as you mentioned, informing the model about the evolution of that car (belonging to a customer) is also necessary - not by changing it to time series data (which sampling frequency? how many time samples?...) but by informing it, instead, about the past repairs already made to the car.
What I would try, keeping the dataset format, is:
adding an additional categorical attribute recording the past repair types already made to the car when a new one is made, and another attribute recording the mileage passed since the last repair (checking whether it is highly correlated or not with other variables etc.), so you could have something like:
id | car | mileage | past_repair_types | mileage_since_last_repair | sex | age | repair_type
1 | Honda | 12000 | nothing | 0 | 1 | 50 | engine
1 | Honda | 12000 | nothing | 0 | 1 | 50 | suspension
1 | Honda | 15000 | engine&suspension | 3000 | 1 | 50 | brakes
For classification tasks with mixed data types, decision-tree based algorithms should work fine, you can try XGBoost, and you can have a look at this worked out example. |
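As a rough sketch of how these two attributes could be derived from the original toy data with pandas (the column names come from the question; the string encoding of past_repair_types is just one possible choice):
import pandas as pd

df = pd.DataFrame({
    'id':          [1, 1, 1],
    'car':         ['Honda', 'Honda', 'Honda'],
    'mileage':     [12000, 12000, 15000],
    'repair_type': ['engine', 'suspension', 'brakes'],
    'sex':         [1, 1, 1],
    'age':         [50, 50, 50],
})

df = df.sort_values(['id', 'mileage'])
df['mileage_since_last_repair'] = df.groupby('id')['mileage'].diff().fillna(0).astype(int)

def past_repairs(group):
    # for each row, concatenate the repair types done at strictly lower mileage
    out = []
    for m in group['mileage']:
        prev = group.loc[group['mileage'] < m, 'repair_type']
        out.append('&'.join(prev) if len(prev) else 'nothing')
    return pd.Series(out, index=group.index)

df['past_repair_types'] = df.groupby('id', group_keys=False).apply(past_repairs)
print(df)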
H: 'Collision' resolution for precision in object detection
For object detection we often use metrics based on precision/recall. My question is: what is generally the process of matching predicted and ground-truth bounding boxes when there are multiple intersecting boxes?
I.e. consider the image below for single-class detection. Red marks the two ground-truth boxes; blue and green are predictions. Apparently the blue prediction has a higher IoU with both boxes, but since it is matched to the left one, can the green one be considered a correct prediction, given a low enough IoU threshold?
AI: Ok, I'm going to answer my own question.
The basic procedure is as follows:
Calculate the IoU for every prediction/ground-truth pair. Keep only the pairs with IoU greater than the threshold.
If a prediction matches several ground-truth boxes, keep only the single pair with the maximum IoU.
If a ground-truth box still has several matching predictions after step 2, the first one (typically the one with the highest confidence) counts as a True Positive and the rest are False Positives.
Another thing is that IoU thresholds are canonically bigger than 0.5, so it is very rare that we have to do anything at step 2.
Given the picture in the question, the green box would probably have an IoU below a threshold, but otherwise, it's a valid prediction. |
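One common way to implement this greedy matching (a sketch processing predictions in descending confidence order; conventions differ between benchmarks) is:
import numpy as np

def match(iou, thresh=0.5):
    # iou has shape (n_predictions, n_ground_truth), rows sorted by confidence
    matched_gt = set()
    tp = np.zeros(iou.shape[0], dtype=bool)
    for p in range(iou.shape[0]):
        candidates = [g for g in range(iou.shape[1])
                      if iou[p, g] >= thresh and g not in matched_gt]
        if candidates:
            best = max(candidates, key=lambda g: iou[p, g])  # keep the ground box with max IoU
            matched_gt.add(best)
            tp[p] = True
    return tp  # False entries are false positives

print(match(np.array([[0.7, 0.1], [0.6, 0.2]])))  # the second prediction on the same box becomes a FP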
H: Shouldn't a test be repeated X times and average the results to determine the best machine learning model?
I have searched in several web pages how to choose the best machine learning model for a dataset and they all seem to agree that they should be compared using the same seed. However, they only run the test once and choose the best result of all.
Shouldn't you run a lot of tests and average the results before choosing one? It may happen that for a given seed a model has obtained very good results when, in general, it does not give good results.
AI: Indeed! If you were to re-train a given model repeatedly using different conditions you would generally find that the accuracy (or metric of your choice) forms a (probably) Gaussian. A single model is like taking one sample out of this distribution.
And the model's initial conditions are not the only source of variance tied to the random seed. Consider the train/test split of your data: exactly which 80% (or whatever) went to training vs test? The smaller the dataset, the more this matters.
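A quick sketch of what that looks like in practice (toy dataset and simple classifier; only the split and model seeds vary):
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
scores = []
for seed in range(30):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
    clf = DecisionTreeClassifier(random_state=seed).fit(X_tr, y_tr)
    scores.append(clf.score(X_te, y_te))
print(f'accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}')
Reporting the mean and spread over many such runs is much more informative than a single number from one seed.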
H: Activation Function
I am very new to machine learning and made an experiment myself. I have a few questions:
Can I use $Y = sin(x)$ or $Y = 2x$ as an activation function for a neural network?
Is it necessary to increase the size of a convolutional kernel to improve the effect of a convolutional neural network (CNN)?
AI: 2 questions, 2 answers:
About the activation functions
The main idea behind activation functions is that they are non-linear, in order to break the linearity of the network. They typically have the following properties:
Non-linear
Injective
Differentiable thus Continuous (not sure if this is mathematically correct)
Just a quick reminder of how a neuron is constructed:
So the output is basically $y = ActivationF(\sum_{i=1}^{m}x_i\omega_i + b)$. Where $\omega_n$ and $b$ are parameters of the neural network.
If you use $y = 2x$ as an activation function, it does not break the linearity and thus is useless because it is equivalent to multiplying all the weights $\omega_n$ and $b$ by 2, and the network is going to adjust these parameters anyway.
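A tiny numeric check of this point (made-up weights, nothing to do with any particular network): stacking two layers whose "activation" is $y = 2x$ collapses into a single affine layer.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
W1, b1 = rng.normal(size=(3, 5)), rng.normal(size=5)
W2, b2 = rng.normal(size=(5, 2)), rng.normal(size=2)

act = lambda z: 2 * z                                  # the "activation" y = 2x
two_layer = act(x @ W1 + b1) @ W2 + b2                 # two layers with the linear activation
collapsed = x @ (2 * W1 @ W2) + (2 * b1 @ W2 + b2)     # one single affine layer
print(np.allclose(two_layer, collapsed))               # True: no extra expressive power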
If you use $y = \sin(x)$, then you do break the linearity, which is a good thing, but the problem is that two different inputs may now have the same output, which is kind of weird and will make learning chaotic for your network.
Do not forget that your network will have to adjust the weights to get the right output, so it is much easier for it to adjust when it has a very smooth function like tanh rather than with a function that has waves with a random periodicity.
Now I will tell you how most data scientists actually use activation functions (myself included): we just brain-deadly use the ReLU function everywhere as the activation because it gives good results everywhere and we can focus on other parts of the architecture rather than activation functions.
About convolutional kernels in CNN
It is absolutely not necessary to increase the kernel size to get good results, and most networks used in Computer Vision (CV) like VGG or ResNet use 3x3 kernels in most layers (sometimes 5x5 or 7x7 in the first layers).
According to my understanding of CV networks, the most important point in CNNs is that the resolution of your image after the convolutions is super low (the input image may be 256x256, but it should be something like 8x8 or 4x4 in the lowest-resolution layers of your network), so the 3x3 convolutions get a grasp of the whole image.
This is just my opinion and I have nothing to back it up except intuition, so do not take these last lines for granted.
Hope it is clearer now, if you have any question feel free to ask. |
H: Row embedding as output of a transformer - how are they defined?
I am reading the paper Tabular transformers for modeling multivariate time series and am having issues understanding the structure in Fig. 2. In Sec. 2.2, the authors say that the field transformer processes rows individually, creating row embeddings. What exactly is a row embedding?
AI: The algorithm in Fig. 2 can be rephrased as:
Take $T$ sequential rows of data.
Omit random parts of the rows (Random Mask).
Put each masked row separately into a Field Transformer. (It doesn't care about where the row was in the document.)
Put the resulting strings together as a sentence into BERT in the same order they were in the document.
Receive rows' embeddings that account for where they were in the document.
Regarding your question about the row embedding, it is a sequence of characters that were allowed in BERT.
Likely, it's a mix of Latin characters and numbers that would not look like an English word. |
H: How to convert post-padded sequence to pre-padded sequence?
I have numpy array as follows:
train_x = [[1,2,3,0,0], [2,5,0,0,0], [2,3,0,0,0], [0,0,0,0,0], [0,0,0,0,0,0]]
Now, I would like to transform it to as shown below:
new_train_x = [[0,0,0,0,0],[0,0,0,0,0,0],[0,0,1,2,3],[0,0,0,2,5],[0,0,0,2,3]]
I tried writing it manually, but the list is huge and it is very time consuming.
I would like to know an efficient and short way to do this (doing it manually takes too long).
AI: So assuming that the padding length and the shape of the array are consistent, you can do something like the code below:
import numpy as np
train_x = np.array([[1,2,3,0,0], [2,5,0,0,0], [2,3,0,0,0], [0,0,0,0,0], [0,0,0,0,0,0]], dtype=object)  # the last row has 6 elements, so dtype=object is needed to build this ragged array in recent numpy versions
stringX = ''
for i in train_x:
for s in i:
stringX += str(s)
subStrX = stringX[0:12]
prePadStr ='00000000000000' + subStrX
counter = 0
internalCounter = 0
newStr = ''
newLst = np.empty(shape=([5,5]),dtype=str)
while counter < 25:
newStr = prePadStr[counter:counter+5]
newStrLst = [char for char in newStr]
newLst[internalCounter] = newStrLst
counter+=5
internalCounter+=1
newLst
If you need something that can infer the padding and shape, I can provide that as well. This is likely not the most efficient approach, but that may not matter depending on how much data you have to process.
H: How to predict only those values that our model is 95% sure of?
I have 5 classes. I made a XGBoost Classification model and used model.predict(test) to predict the classes of test dataset. Out of all those values predicted by my model, I would like to know only those values that my model is more than 95% sure that the predicted value is correct. I mean, I would only like those predictions that my model is very confident of predicting. How do I find those predictions?
AI: Have a look at the predict_proba method of the XGBClassifier class, which will give you the probabilities for each class instead of just the predicted class. You can then keep only the predictions whose highest class probability is above the threshold you want to set (in this case 0.95).
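A minimal sketch on synthetic data (your own fitted model and test features from the question would take the place of the ones built here):
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_classes=5, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBClassifier().fit(X_train, y_train)
proba = model.predict_proba(X_test)                 # shape: (n_samples, n_classes)

confident = proba.max(axis=1) >= 0.95               # boolean mask of "sure" predictions
confident_classes = model.classes_[proba.argmax(axis=1)[confident]]
print(f'{confident.sum()} of {len(X_test)} predictions are at least 95% confident')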
H: Explain TFX Metadata Store data model definitions
GOAL
Explain the following definitions in plain, simple English?
Give many practical examples of what they can contain?
What does each of them do?
ORIGINAL
This is the original https://www.tensorflow.org/tfx/guide/mlmd#data_model that I have issues with. In my description I will point back to this. “The Metadata Store uses the following data model to record and retrieve metadata from the storage backend.”:
ArtifactType describes an artifact's type and its properties that are stored in the metadata store. You can register these types on-the-fly with the metadata store in code, or you can load them in the store from a serialized format. Once you register a type, its definition is available throughout the lifetime of the store.
An Artifact describes a specific instance of an ArtifactType, and its properties that are written to the metadata store.
An ExecutionType describes a type of component or step in a workflow, and its runtime parameters.
An Execution is a record of a component run or a step in an ML workflow and the runtime parameters. An execution can be thought of as an instance of an ExecutionType. Executions are recorded when you run an ML pipeline or step.
An Event is a record of the relationship between artifacts and executions. When an execution happens, events record every artifact that was used by the execution, and every artifact that was produced. These records allow for lineage tracking throughout a workflow. By looking at all events, MLMD knows what executions happened and what artifacts were created as a result. MLMD can then recurse back from any artifact to all of its upstream inputs.
A ContextType describes a type of conceptual group of artifacts and executions in a workflow, and its structural properties. For example: projects, pipeline runs, experiments, owners etc.
A Context is an instance of a ContextType. It captures the shared information within the group. For example: project name, changelist commit id, experiment annotations etc. It has a user-defined unique name within its ContextType.
An Attribution is a record of the relationship between artifacts and contexts.
An Association is a record of the relationship between executions and contexts.
How I see them
(Please correct my descriptions or let me know if they are right)
Def ArtifactType (what I can deduce based on the documentation):
Contains the base data
Contains many iterations and multiple modified versions of the base data
Defines the data types
Has properties
Stores data as metadata in metadata storage, e.g. a database or in RAM.
1 Artifact (what I can deduce based on the documentation):
1 version of the modified data
1 version of the modified data's properties
1 version of the modified data's data types
1 version of a specific instance of an ArtifactType, and its properties that are written to the metadata store.
!! BUT THAN "List all Artifacts of a specific type. Example: all Models that have been trained." https://www.tensorflow.org/tfx/guide/mlmd#mlmd_functionality -> So saved down model can also be Artifacts. This documentation is just terrible their ArtifactType and Artifact is pointing on each other whiteout explain any of them what it is. It doesn't makes any sense.
ExecutionType:
What is a "component in a workflow"?
What is a "step in a workflow, and its runtime parameters."
Because this TEXT area has NO workflow chart to point at while there is one to the previous section https://www.tensorflow.org/tfx/guide/mlmd#metadata_store and there is one after https://www.tensorflow.org/tfx/guide/mlmd#integrate_ml_metadata_into_your_ml_workflows .
Execution:
What is a record here?
What is a component?
What is a component run?
What runtime parameters are we talking about?
What is Execution overall?
1 version of a specific instance of ExecutionType.
Executions are saved to metadata storage (e.g. RAM or a database) when you run an ML pipeline or step.
Event:
Is a record of the relationship between artifacts and executions.
to me it is not clear why this step is even necessary, because Event and Execution sound like they fulfill the exact same purpose.
to me it seems like an Execution records itself, so why do we need an Event to record it again?
This is the only understandable statement in this definition "By looking at all events, MLMD knows what executions happened and what artifacts were created as a result."
ContextType:
What is "conceptual group of artifacts and executions" ? Especially what is "conceptual" about them?
- perfect examples; this is what all the other descriptions should look like.
Context:
1 version of the ContextType.
Again what is this "conceptual group of artifacts and executions" ? Especially what is "conceptual" about them?
Again GREAT examples.
Attribution:
simple and understandable description
If all the elements already describe themselves, why is this necessary?
Association:
simple and understandable description
If all the elements already describe themselves, why is this necessary?
Previous recommendations:
Just not helpful; the colab also does not answer the basic definitions - https://www.tensorflow.org/tfx/tutorials/mlmd/mlmd_tutorial
AI: Is there a specific portion that you are having an issue understanding or is it more general? If it is more general, there is an excellent tutorial notebook that is linked to in that website you linked.
We can also gain a more intuitive understanding of the different parts of the ML metadata from the below image, which was taken from the same link you provided.
In the image, we see that the data and model are considered artifacts.
The Execution is that of the Trainer of the model.
Events in this flow were the input of the data and the model output while the Context annotates the entire experiment.
From here, it is easier to understand where an Attribution and Association would fit in to this scheme.
EDIT
To add to prior post it is best to use the following image as well from the documentation:
I am also new to TensorFlow so I am definitely with you on the documentation's lack of clarity, but I think we can look at "Artifacts" as "objects" and "Executions" as a process, sort of like instructions on how to act on the artifacts. From there, the "Events" can be seen merely as when data is inputted into the process and predictions are fed out, but it's important to look at "Events" as just the process by which the data is read in and the predictions are read out, and not the data/predictions themselves. It is rather abstract, which is why it took me a while to get a grasp on it.
As for your questions on components, the TFX User guide explains that:
A TFX pipeline is a sequence of components...
So basically, "components" in TensorFlow are merely the constituent parts of the pipeline. The standard components of a TensorFlow pipeline are listed in the TFX User guide.
When it comes to workflow, I gather that its really just a generic term used to describe the end-to-end machine learning process. I believe that the term "conceptual grouping" is also generic as a "ContextType" is user-defined as per the API documentation. Also runtime parameters are explained in the "Understanding TFX Pipelines" documentation as:
...inputs to pipelines that are known before your pipeline is executed.
I believe this edit as addressed the issues you have outlined. Hopefully this helps. |
H: How can I extract an optimized matrix of correlations from a larger data set?
Consider an Excel sheet containing a matrix of correlations between individual stocks and the combined portfolio as a whole:
How can I extract an optimized matrix such that most stocks have a low correlation to each other? If I simply go by the combined column, when I remove the ones that don't fit the criteria, I have a feeling it might not be optimal, as the combined column is affected by the ones that were removed.
AI: Have not worked with stocks, but I have worked with feature selection. You could try to remove unwanted features (stocks in your case) one by one.
What I did for my tasks was to successively choose one feature, say $X_k$, out of the feature set $\{X_j\}_{j=1\dots m}$ and then build a linear regression model to predict k-th feature using remaining features:
$\hat{X}_k = \alpha + \sum_{j \neq k} \beta_j X_j$
You can then compute, e.g. Fraction of Variance Unexplained:
$$
FVU_k=\frac{\left|X_k-\hat{X}_k\right|^2}{\left|X_k\right|^2}
$$
High value tells you that current feature ($X_k$) cannot be predicted using other features, at least using a linear model - i.e. it is uncorrelated with them. Low value tells you that current feature is redundant at can be removed.
You can select the feature with lowest FVU, remove that, then recompute everything and remove next feature, and then keep going. This is a greedy algorithm, but, by virtue of one-by-one removal it will at least take into account multi-collinear relationship. In the past I would stop at $FVU\ge 0.1$.
At the end you will have a set of features (stocks) that are not strongly correlated. You could then rebuild the correlation matrix and recompute the correlation of individual stocks with the combined portfolio (I am not actually sure how your "combined" column is computed, so it may need to be recomputed from the remaining stocks).
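A rough sketch of this greedy procedure (assuming your data is in a DataFrame with one column per stock; the 0.1 threshold is the one mentioned above and should be tuned for your data):
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

def greedy_decorrelate(returns: pd.DataFrame, fvu_threshold: float = 0.1) -> list:
    kept = list(returns.columns)
    while len(kept) > 2:
        fvu = {}
        for col in kept:
            X = returns[[c for c in kept if c != col]].values
            y = returns[col].values
            pred = LinearRegression().fit(X, y).predict(X)
            fvu[col] = np.sum((y - pred) ** 2) / np.sum(y ** 2)
        most_redundant = min(fvu, key=fvu.get)
        if fvu[most_redundant] >= fvu_threshold:
            break                       # everything left is sufficiently "unexplained"
        kept.remove(most_redundant)
    return kept

# toy usage with random data; column F is built to be nearly redundant with A
rng = np.random.default_rng(0)
toy = pd.DataFrame(rng.normal(size=(200, 6)), columns=list('ABCDEF'))
toy['F'] = toy['A'] * 0.9 + rng.normal(scale=0.1, size=200)
print(greedy_decorrelate(toy))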
H: Tensorflow for Deeplearning and Machine learrning
We can use TensorFlow for both machine learning and deep learning. So why do we use scikit-learn more in machine learning and not TensorFlow?
Are they both alternatives of each other?
AI: Tensorflow is usually used for deep learning (neural networks), while scikit-learn is used for classical machine learning tasks. You can solve ML problems using Tensorflow, of course, but it is much easier to use scikit-learn, as all you need there is a ready-to-use algorithm.
H: Can't use The SGD optimizer
I am using the following code:
from tensorflow.keras.regularizers import l2
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Add, Conv2D, MaxPooling2D, Dropout, Flatten, Dense, BatchNormalization, Activation
from tensorflow.keras import activations
CNN_model = Sequential()
# The First Block
CNN_model.add(Conv2D(128, kernel_size=2,kernel_initializer='he_uniform', kernel_regularizer=l2(0.0005), padding='same', input_shape=(700, 460, 3)))
CNN_model.add(Activation(activations.relu))
CNN_model.add(BatchNormalization())
CNN_model.add(MaxPooling2D(2, 2))
# The Second Block
CNN_model.add(Conv2D(128, kernel_size=2, kernel_initializer='he_uniform', kernel_regularizer=l2(0.0005), padding='same'))
CNN_model.add(Activation(activations.relu))
CNN_model.add(BatchNormalization())
CNN_model.add(MaxPooling2D(2, 2))
# The Third Block
CNN_model.add(Conv2D(128, kernel_size=2, kernel_initializer='he_uniform', kernel_regularizer=l2(0.0005), padding='same'))
CNN_model.add(Activation(activations.relu))
CNN_model.add(BatchNormalization())
CNN_model.add(MaxPooling2D(2, 2))
# The fourth Block
CNN_model.add(Conv2D(128, kernel_size=2, kernel_initializer='he_uniform', kernel_regularizer=l2(0.0005), padding='same'))
CNN_model.add(Activation(activations.relu))
CNN_model.add(BatchNormalization())
CNN_model.add(MaxPooling2D(2, 2))
# The fifth Block
CNN_model.add(Conv2D(128, kernel_size=2, kernel_initializer='he_uniform', kernel_regularizer=l2(0.0005), padding='same'))
CNN_model.add(Activation(activations.relu))
CNN_model.add(BatchNormalization())
CNN_model.add(MaxPooling2D(2, 2))
from tensorflow.python.keras.engine.training import Model
from tensorflow.keras import backend as K, regularizers
from tensorflow.keras import losses
CNN_model.add(Flatten())
# Layer 1
CNN_model.add(Dense(800,activation='relu',kernel_regularizer=l2(0.0005)))
CNN_model.add(Dropout(0.5))
# Layer 2
#CNN_model.add(Dense(25, activation='relu',kernel_regularizer=l2(0.0005)))
#CNN_model.add(Dropout(0.5))
# Layer 5
CNN_model.add(Dense(8, activation='softmax'))
from tensorflow.keras.optimizers import SGD
opt=SGD(learning_rate=0.1, momentum=0.2, nesterov=True)
CNN_model.compile(SGD, loss = 'categorical_crossentropy', metrics = ['acc'])
However, I get the following error:
ValueError Traceback (most recent call last)
<ipython-input-9-a5e98777f528> in <module>
1 from tensorflow.keras.optimizers import SGD
2 opt=SGD(learning_rate=0.1, momentum=0.2, nesterov=True)
----> 3 CNN_model.compile(SGD, loss = 'categorical_crossentropy', metrics = ['acc'])
~\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py in compile(self, optimizer, loss, metrics, loss_weights, weighted_metrics, run_eagerly, steps_per_execution, **kwargs)
566 self._run_eagerly = run_eagerly
567
--> 568 self.optimizer = self._get_optimizer(optimizer)
569 self.compiled_loss = compile_utils.LossesContainer(
570 loss, loss_weights, output_names=self.output_names)
~\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py in _get_optimizer(self, optimizer)
604 return opt
605
--> 606 return nest.map_structure(_get_single_optimizer, optimizer)
607
608 @trackable.no_automatic_dependency_tracking
~\anaconda3\lib\site-packages\tensorflow\python\util\nest.py in map_structure(func, *structure, **kwargs)
865
866 return pack_sequence_as(
--> 867 structure[0], [func(*x) for x in entries],
868 expand_composites=expand_composites)
869
~\anaconda3\lib\site-packages\tensorflow\python\util\nest.py in <listcomp>(.0)
865
866 return pack_sequence_as(
--> 867 structure[0], [func(*x) for x in entries],
868 expand_composites=expand_composites)
869
~\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py in _get_single_optimizer(opt)
595
596 def _get_single_optimizer(opt):
--> 597 opt = optimizers.get(opt)
598 if (loss_scale is not None and
599 not isinstance(opt, lso.LossScaleOptimizer)):
~\anaconda3\lib\site-packages\tensorflow\python\keras\optimizers.py in get(identifier)
129 return deserialize(config)
130 else:
--> 131 raise ValueError(
132 'Could not interpret optimizer identifier: {}'.format(identifier))
ValueError: Could not interpret optimizer identifier: <class 'tensorflow.python.keras.optimizer_v2.gradient_descent.SGD'>
I am not mixing keras with tensorflow.keras, so why am I getting this error?
AI: You are passing the SGD class itself to compile() instead of the opt instance you configured (that is what the "Could not interpret optimizer identifier" error is telling you). Change this line from
CNN_model.compile(SGD, loss = 'categorical_crossentropy', metrics = ['acc'])
to
CNN_model.compile(opt, loss = 'categorical_crossentropy', metrics = ['acc']) |
H: What are some application of Google's PageRank Algorithm in Data Science
I came across a topic on computational linear algebra that talks about iterative algorithms to compute eigenvalues. I've worked with power method which is an iterative algorithm that converges a sequence of vectors to the largest eigenvalue.
One application of power method is the famous PageRank algorithm developed by Larry Page and Sergey Brin. The whole concept of this algorithm is an eigenvector problem corresponding to the largest eigenvalue of a system $Gv=v$ where $G$ is the Google matrix. This eigenvector can be found using the Power method.
Interestingly, I was wondering if PageRank has any application other than web surfing because it combines the concept of random walk and some computational graph theory and linear algebra which I suspect could have some applications in data science. Any idea is welcomed.
AI: Hubáček et al apply PageRank to football match prediction:
PageRank was originally developed for assessing the importance of a website by examining the importance of other websites referring to it. Similarly, our assumption was that a strong team would be determined by having better results against other strong teams. The PageRank of a team can be computed out of a matrix with columns as well as rows corresponding to teams. Each cell holds a number expressing the relative dominance of one team over the other in terms of previous match outcomes. In particular, the $i, j$ cell contains $$\frac{3w_{ij}+d_{ij}}{g_{ij}}$$ where $w_{ij} (d_{ij})$ is the number of wins (draws) of team $i$ over (with) team $j$, and the normalizer $g_{ij}$ is the number of games played involving the two teams. These numbers are extracted from the current and the two preceding seasons. The coefficient $3$ reflects the standard soccer
point assignment.
The idea is based on PageRank Approach to Ranking National Football Teams by Lazova and Basnarkov. |
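For completeness, here is a minimal sketch of the power method applied to such a dominance matrix (the 3×3 matrix below is made up purely for illustration; its columns are normalised to sum to 1):

import numpy as np

def pagerank(M, damping=0.85, tol=1e-9, max_iter=1000):
    # power iteration on a column-stochastic matrix M (column j = outgoing weights of node j)
    n = M.shape[0]
    G = damping * M + (1 - damping) / n * np.ones((n, n))  # the "Google matrix"
    v = np.ones(n) / n
    for _ in range(max_iter):
        v_next = G @ v
        v_next /= v_next.sum()
        if np.abs(v_next - v).sum() < tol:
            break
        v = v_next
    return v

# toy dominance matrix for 3 teams
M = np.array([[0.0, 0.6, 0.3],
              [0.7, 0.0, 0.7],
              [0.3, 0.4, 0.0]])
print(pagerank(M))  # stationary "strength" scores, one per team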
H: Remedy for small batch size?
I am trying to reproduce results of other people's research, but we cannot afford to do it with the same batch size as theirs, due to limited computing resources.
The method they use is a simple sequence classification using BERT.
They do it with batch size 48, learning rate 4e-5, the Adam optimizer, and 1 epoch.
Their result is an F2 score of 0.65 and we are nowhere near that despite doing everything the same except the batch size (we use batch size 16).
What are some ways to compensate for small batch size? I would greatly appreciate any suggestions.
AI: Try looking into gradient accumulation: instead of applying a gradient update after every batch, you add up the gradients over several batches and apply them together. This gives you the same effective gradient update you would get with a larger batch without having to fit that batch in memory, which is useful when a model benefits from large batch sizes but you don't have the computational resources to support them.
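A rough sketch of what that looks like in a PyTorch-style training loop (model, optimizer, criterion and train_loader are assumed to exist already; the accumulation factor of 3 is chosen so that 3 × 16 approximates their batch size of 48):

accumulation_steps = 3
optimizer.zero_grad()
for step, (inputs, labels) in enumerate(train_loader):
    loss = criterion(model(inputs), labels)
    (loss / accumulation_steps).backward()  # scale so the summed gradient matches one big batch
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()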
H: What is the point of generating new features (linear or non linear) out of existing features in a dataset?
During feature engineering, we can create new features out of existing ones by using arithmetic operations albeit linear or not.
Let's say we have two features x and z. We can then create (engineer) a new feature f by summing x & z, assuming this makes sense in the context of the use case, to therefore become, f = x + z.
Or if a non linear feature is to be created then something like the following can be implemented, f = x*x + z.
My question is, given that we have x & z and knowing the strength of their correlation to the target variable, what is the point of creating a third feature which is just a combination of the original two?
What can the third feature point to that the first two can't?
Maybe if the combination is non-linear, I can understand, but what if the combination is linear? Why should it help?
P.S I have stumbled on a post on Cross-Validated addressing this issue, but given the nature of the answer, it still left me wondering and unsatisfied.
AI: Feature engineering has at least two important benefits:
You can simplify the task for your model by including interactions between features which the model otherwise would have to learn
Beyond simplification, you can inject prior knowledge (e.g. expert knowledge) into the data and eventually the model
Here are two examples:
The non-linear case - Suppose you're developing a model to predict the risk of severe Covid19 (e.g. defined as patients being hospitalized or dying). If your dataset includes features such as weight and height, your model could learn the association between these features and the outcome. However, given that empirical evidence demonstrated an increased risk of severe Covid19 for obese people, you could feature engineer an independent variable BMI or obesity. If you do so, your model does not need to learn the, most likely, non-linear relationship between height and weight with regard to the target variable.
The linear case - Suppose you're developing a model to predict whether a household will purchase a luxury car, and your dataset includes income from salary, income from investments and other income. Then feature engineering a linear combination of these three, total income, can make it easier for your model to learn the relationship between total income and the target variable. To make it more concrete: take, for example, a decision tree. Without total income it might need several split points across the different income variables to derive a prediction. In contrast, splitting on total income might result in a prediction with far fewer nodes required.
The non-linear case is usually more relevant though. |
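As a concrete sketch of both cases with pandas (all column names and values here are made up for illustration):

import pandas as pd

df = pd.DataFrame({
    "weight_kg": [70, 95, 60],
    "height_m": [1.75, 1.68, 1.80],
    "salary_income": [40000, 55000, 30000],
    "investment_income": [5000, 0, 12000],
    "other_income": [0, 2000, 1000],
})

# non-linear combination: BMI = weight / height^2
df["bmi"] = df["weight_kg"] / df["height_m"] ** 2

# linear combination: total income
df["total_income"] = df[["salary_income", "investment_income", "other_income"]].sum(axis=1)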
H: Preprocessing , EDA , and Feature Engineering
What is the difference between EDA, Feature Engineering, and Preprocessing?
The main purpose is to make the raw data suitable for modeling. In EDA we are cleaning the data, and so is preprocessing, whereas in FE we are scaling and imputing.
AI: EDA(Exploratory Data Analysis) as suggested by the name is an initial analysis of the data. Understanding the distributions, getting an idea of the kind of values and their range. It's getting a feel of the data before further analysis and understanding the nature of it.
This would ideally give you an idea of the kind of preprocessing it would require which comes after EDA.
Preprocessing is the next step which then includes its steps to make the data fit for your models and further analysis. EDA and preprocessing might overlap in some cases.
Feature engineering is identifying and extracting features from the data, understanding the factors the decisions and predictions would be based on. |
H: Encoding when different number of records for each month-One hot or different type?
I am working on a dataset which is pretty small: 1169 records.
There is a column called month and it takes the values 'Jan','Feb', or 'March'.
The number of records for each month is different. I have 542 for 'Jan', 443 for 'Feb', and 212 for 'March'.
Given this information, should I use one hot encoding or encode the months as 1, 2, 3 for Jan, Feb, March respectively? Will the higher value account for the fact that there are fewer records for that month?
AI: Whether you one-hot encode is unrelated to the number of records for each month.
Some algorithms are happy to treat categoricals as categoricals directly, like most tree-based methods. They may want them encoded as numbers, but the numbers themselves do not (and should not) have meaning. They're just indices; 0-11 or 1-12 should be fine.
One-hot encoding is necessary when using a method that can't deal with categoricals, like linear methods. You would not want to encode months as numbers in that case. You have to one-hot encode, really. Otherwise "Dec" is "12 times larger" than "Jan", which almost always doesn't make sense semantically.
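A minimal pandas sketch of both options (the month column is taken from your description; which one to use depends on the model as explained above):

import pandas as pd

df = pd.DataFrame({"month": ["Jan", "Feb", "March", "Jan"]})

# option 1: plain integer codes (fine for tree-based models)
df["month_code"] = df["month"].map({"Jan": 1, "Feb": 2, "March": 3})

# option 2: one-hot encoding (needed for linear models, neural nets, ...)
df = pd.get_dummies(df, columns=["month"])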
H: New to keras neural network and k-fold cross validation
I'm new to learning neural networks and I found an example online to test accuracy with k-fold cross validation.
The example is for binary data but I want to test MAE or RMSE (I guess?) for my regression prediction. I'm a bit lost now on how I can test it because currently the accuracy is mostly 0.00%, sometimes 1.94%, I assume because my data is not binary.
Here is my code:
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
# split into input (X) and output (Y) variables
# define 10-fold cross validation test harness
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
cvscores = []
for train, test in kfold.split(X, Y):
model = Sequential()
model.add(Dense(32, input_dim=12, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='relu'))
# Compile model
model.compile(loss='mean_absolute_error', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X.iloc[train], Y.iloc[train], epochs=150, batch_size=10, verbose=0)
# evaluate the model
scores = model.evaluate(X.iloc[test], Y.iloc[test], verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
cvscores.append(scores[1] * 100)
print("%.2f%% (+/- %.2f%%)" % (numpy.mean(cvscores), numpy.std(cvscores)))
So my question is: how do I test MAE for regression data with k-fold cross validation?
Thanks for the help.
AI: I don't think you should use accuracy as a metric for a regression problem; you can track your MAE (Mean Absolute Error) by changing the metric in the compile line.
model.compile(..., metrics=['mae'])
If you want to use RMSE you can define it yourself (it's not built-in in keras):
from keras import backend
def rmse(y_true, y_pred):
return backend.sqrt(backend.mean(backend.square(y_pred - y_true), axis=-1))
model.compile(loss='mse', optimizer='adam', metrics=[rmse])
Good luck :) |
H: How to convert mixed datetime formats into single one in pandas?
I am working with DataFrame which contains multiple datetime formats in one column. For example:
2020-11-09 00:00:48
2020-11-09 00:00:48
2020-11-09 00:00:48
2020-11-09 00:00:48
2020-11-09 00:00:48
2020-08-25
2020-08-25
2017-08-25
2018-08-25
2020-08-25
25.08.2020
25.08.2020
25.08.2017
25.08.2018
25.08.2020
I want to convert it into "dd.mm.yyyy" format.
pd.to_datetime(columnName, format = "dd.mm.yyyy") does not help.
AI: import pandas as pd
date_list = ["2020-11-09 00:00:48",
"2020-11-09 00:00:48",
"2020-11-09 00:00:48",
"2020-11-09 00:00:48",
"2020-11-09 00:00:48",
"2020-08-25",
"2020-08-25",
"2017-08-25",
"2018-08-25",
"2020-08-25",
"25.08.2020",
"25.08.2020",
"25.08.2017",
"25.08.2018",
"25.08.2020"]
df = pd.DataFrame(date_list,columns=['date'])
df['date'] = df['date'].apply(lambda x: pd.to_datetime(x).strftime('%d/%m/%Y'))  # use '%d.%m.%Y' if you prefer dot-separated output
output will be
date
0 09/11/2020
1 09/11/2020
2 09/11/2020
3 09/11/2020
4 09/11/2020
5 25/08/2020
6 25/08/2020
7 25/08/2017
8 25/08/2018
9 25/08/2020
10 25/08/2020
11 25/08/2020
12 25/08/2017
13 25/08/2018
14 25/08/2020 |
H: Ongoing feature selection
If you have a set of n features you have 2^n-1 non-empty feature subsets. As a result, if you pick one of them you are unlikely to have found the best one.
To me, it seems intuitive that as you build your model, you would want to look at the things it does badly and try to find features that would help it to improve, or take out features that don't seem to be helping.
Although I've seen this done in practice and muddled through this way, I've never seen any formal theory behind it. How do you know which features to add to the set you're training on? And which to remove?
AI: There are various features selections techniques. The most common techniques rank individual features by how much information they bring with respect to the target, for example with information gain or conditional entropy.
Techniques based on individual features are efficient (i.e. fast) and usually help to reduce dimensionality and improve performance. But they are not necessarily optimal, because they cannot take into account the contribution of a subset of features together. For example they might select several features which are highly correlated between each other, even though selecting only one of them would be enough.
In order to take into account how features interact, ideally one would train and test a model with every possible subset of features, and then select the best one. However the full exploration of $2^N$ subsets is rarely feasible, but some optimization methods can be used, for example feature selection with genetic learning.
Note that there are also feature extraction techniques. In this case the original semantics of the features are not preserved, since the whole set of features is transformed into a new representation.
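As a small illustration of the individual-ranking approach with scikit-learn (the synthetic data and the choice of mutual information as the score are assumptions for the example):

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

selector = SelectKBest(score_func=mutual_info_classif, k=5)  # keep the 5 most informative features
X_reduced = selector.fit_transform(X, y)
print(selector.get_support(indices=True))  # indices of the selected features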
H: why i got error when using SimpleImputer for impute Nan values?
I have the following code, where sp_col is a sliced column of my dataframe df_1:
from sklearn.impute import SimpleImputer
import numpy as np
imputer = SimpleImputer(missing_values=np.NaN,strategy='mean')
sp_col =df_1.iloc[:-1,9:28].values.reshape(-1,1)
# print(sp_col.isnull().sum())
for items,j in sp_col.items():
im2 = sp_col
im2 = imputer.fit(im2)
im3 = imputer.transform(sp_col)
I then get the following error:
AttributeError Traceback (most recent call last)
<ipython-input-207-28b34f875d63> in <module>
8 # print(sp_col.isnull().sum())
9
---> 10 for items,j in sp_col.items():
11 im2 = sp_col
12 im2 = imputer.fit(im2)
AttributeError: 'numpy.ndarray' object has no attribute 'items'
AI: Try this:
from sklearn.impute import SimpleImputer
import numpy as np
imputer = SimpleImputer(missing_values=np.NaN,strategy='mean')
df_1.iloc[:, 9:28] = imputer.fit_transform(df_1.iloc[:, 9:28])
H: Error when using simple imputer to impute nan values?
df_2 is data frame
My code:
from sklearn.impute import SimpleImputer
impute = SimpleImputer(missing_values=np.NaN,strategy='mean')
df_2.iloc[:,2:9] = impute.fit(df_2.iloc[:,2:9])
I got error:
TypeError: float() argument must be a string or a number, not 'SimpleImputer'
I do not know why can anyone help me?
AI: fit() only learns the imputation values; you also need to call transform() to apply them (your code assigns the SimpleImputer object itself to the dataframe slice, hence the error):
impute = impute.fit(df_2.iloc[:,2:9])
df_2.iloc[:,2:9] = impute.transform(df_2.iloc[:,2:9])
Or you could use this directly:
df_2.iloc[:,2:9] = impute.fit_transform(df_2.iloc[:,2:9])
H: Looping when predicting with keras
I created a prediction with keras and I want to see what the results are if I repeat the prediction 10 times, for example. If I loop I get a warning:
WARNING:tensorflow:6 out of the last 6 calls to <function Model.make_predict_function.<locals>.predict_function at 0x1507503a0> triggered tf.function retracing.
My question: Is it the same if I loop like this
for a in range(1,11):
model = Sequential()
model.add(Dense(128, input_dim=10, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(1, activation='linear'))
model.compile(loss='mse', optimizer='adam')
model.fit(X, Y, epochs=1000, verbose=0)
# new instance where we do not know the answer
Xnew = np.array([[96.52, 114, 111.02, 0.538, 0.559, 0.5082, 0.4992, 96.92, 104.94, 107.38]])
# make a prediction
ynew = model.predict(Xnew)
# show the inputs and predicted outputs
print("Predicted=%s" % (ynew[0]))
or if I output 10 times like this:
model = Sequential()
model.add(Dense(128, input_dim=10, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(10, activation='linear'))  # changed: 10 output neurons instead of 1
model.compile(loss='mse', optimizer='adam')
model.fit(X, Y, epochs=1000, verbose=0)
# new instance where we do not know the answer
Xnew = np.array([[96.52, 114, 111.02, 0.538, 0.559, 0.5082, 0.4992, 96.92, 104.94, 107.38]])
# make a prediction
ynew = model.predict(Xnew)
# show the inputs and predicted outputs
print("Predicted=%s" % (ynew[0]))
I would say it's not the same but I'm new to this :)
AI: Yes, it is totally different:
In the first case: the model output is only one neuron, and in each iteration you rebuild the model from scratch. Note that this is not cumulative training, i.e. it is not equivalent to training a single model for 10,000 epochs (1,000 epochs × 10 for-loop iterations).
In the second case: the model output is 10 neurons and the model is trained once for 1,000 epochs.
The warning you mention most likely appears because the model and its predict function are recreated inside a loop, which triggers tf.function retracing in TensorFlow. You can read more about this issue here.
H: Splitting before tfidf or after?
When should I perform preprocessing and matrix creation of text data in NLP, before or after train_test_split? Below is my sample code where I have done preprocessing and matrix creation (tfidf) before train_test_split. I want to know will there be data leakage?
corpus = []
for i in range(0 ,len(data1)):
review = re.sub('[^a-zA-Z]', ' ', data1['features'][i])
review = review.lower()
review = review.split()
review = [stemmer.stem(j) for j in review if not j in set(stopwords.words('english'))]
review = ' '.join(review)
corpus.append(review)
from sklearn.feature_extraction.text import TfidfVectorizer
cv = TfidfVectorizer(max_features = 6000)
x = cv.fit_transform(corpus).toarray()
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
y = le.fit_transform(data1['label'])
from sklearn.model_selection import train_test_split
train_x, test_x, train_y, test_y = train_test_split(x, y, test_size = 0.2, random_state = 69,
stratify = y)
spam_model = MultinomialNB().fit(train_x, train_y)
pred = spam_model.predict(test_x)
c_matrix = confusion_matrix(test_y, pred)
acc_score = accuracy_score(test_y, pred)
AI: You should split before tf-idf.
If you learn the tf-idf vocabulary and weights on the test set as well, you will have data leakage. For example, you would never encounter out-of-vocabulary words at inference time on the test set, which can happen in the "real world".
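Concretely, with your variable names this would look roughly like this (the split happens on the raw corpus before vectorizing):

from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer

corpus_train, corpus_test, train_y, test_y = train_test_split(corpus, y, test_size=0.2, random_state=69, stratify=y)

cv = TfidfVectorizer(max_features=6000)
train_x = cv.fit_transform(corpus_train)  # learn the vocabulary/idf on the training split only
test_x = cv.transform(corpus_test)        # reuse it on the test split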
H: How to load any particular folder files from a zip dataset
Twitter is a great source of information. The Health-News-Tweets.zip dataset contains tweets by different agencies like BBC Health, CBC Health, etc. I will perform clustering on these tweets and will use the elbow method to identify the ideal number of clusters.
My concern is how to load only the train folder files present in the dataset.
Each file contains tweets with '|' as a separator, for example ID | Timestamp | Tweet.
My further steps will be extracting the tweets column and performing some preprocessing, tokenizing, vectorizing, clustering and so on, but I am stuck at the very beginning.
Any advice the Data Science community could give would be much appreciated.
Dataset
I tried this but getting error
import pyprind
import pandas as pd
import os
progBar = pyprind.ProgBar(3000) # initialize progress bar
df=pd.DataFrame() # empty dataframe
for s in ('test','train'):
path="./Health-News-Tweets/train/ %s/%s" % (s) #file path
for file in os.listdir(path):
with open(os.path.join(path,file),'r') as infile:
txt = infile.read()
df= df.append([txt],ignore_index=True) #will append my content of particular text to the dataframe
progBar.update() # updating progress bar
df.columns = ['ID','Timestamp','Tweet']
Error :
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-7-2cabdcb33d40> in <module>
7 df=pd.DataFrame() # empty dataframe
8 for s in ('test','train'):
----> 9 path="./Health-News-Tweets/train %s/%s" % (s) #file path
10 for file in os.listdir(path):
11 with open(os.path.join(path,file),'r') as infile:
TypeError: not enough arguments for format string
PATH : C:\DATA SCIENCE 2021\LECTURES\FUNDAMENTAL OF LINUX & SERVER CONFIG - MODULE 1\5. MODULE 5 - Machine learning Using Python\Assignment\Health-News-Tweets
AI: The TypeError in the original attempt comes from the format string: "./Health-News-Tweets/train/ %s/%s" contains two %s placeholders but only one value (s) is supplied, so Python complains about "not enough arguments". I tried another method instead and successfully loaded the required files:
import os
import pandas as pd
#list the files
path = r"C:\DATA SCIENCE 2021\LECTURES\FUNDAMENTAL OF LINUX & SERVER CONFIG - MODULE 1\5. MODULE 5 - Machine learning Using Python\Assignment\Health-News-Tweets\train"
filelist = os.listdir(path)
print(filelist)
#read them into pandas
df_list=[]
for file in filelist:
#print(file)
df_list.append(pd.read_table(os.path.join(path,file), header=None, encoding='unicode_escape'))  # pass sep='|' if you want ID/Timestamp/Tweet as separate columns
H: Fastest way to replace a value in a pandas DataFrame?
I am loading in 1.5m images with 80,000 classes (or I will have to when I eventually train) into a Keras generator and am using a pandas dataframe to do so. The problem is, with so many images, my code takes a long time to run. I have an issue with the specific task of replacing a value in the dataframe; it takes too long:
df = ...  # a pandas dataframe with all the names of the files in it
# Code to change the names into absolute paths so keras can load in the entire df
for index, row in df.iterrows():
j = row.values[0] # 720 nano seconds
path = "my path" + "specific values" + .jpg" # 1290 nano seconds
df['id'].replace(to_replace=[j], value=path, inplace=True) # 281000000 nano seconds
My issue is clearly with the last line and hence the title of the question. I managed to improve this by a magnitude of x4 with the code below, but is still to long:
df.loc[index, 'id'] = path # 69100000 nano seconds
For your interest, with 1.5m entries in the dataframe, this will take:
# FORMULA
(time, ns * no. of rows)/1000000000 = no. of seconds
no. of seconds/3600 = no. of hours
(281000000 * 1500000)/1000000000 = 421,500 seconds
421500/3600 = 117 hours
(69100000 * 1500000)/1000000000 = 103,650 seconds
103650/3600 = 28 hours
As you can see, a great improvement, but still too long. And I haven't even begun training yet. Does anyone know a faster way to do this?
Additionally, since this is my first project concerning big data, can anyone offer me tips about how to deal with so many images?
Many thanks,
Finn Williams
AI: It seems I have immediately found an answer, so I shall post it here for the sake of others in the same situation.
I found on a website that the following code achieves the same result:
df.at[index, 'id'] = path # 26300 nanoseconds
Calculations:
# FORMULA
(time, ns * no. of rows)/1000000000 = no. of seconds
no. of seconds/3600 = no. of hours
(26300 * 1500000)/1000000000 = 39.45 seconds
39.45/3600 = 0.011 hours
I went off to clean my teeth, and by the time I got back, sure enough my code had terminated.
Lessons learnt:
For others in a situation where their code is taking a long time to run, I recommend browsing for similar functions and work out how long they take using the timeit library. It saved me 117 hours of compute time. |
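For what it's worth, a fully vectorized string operation avoids the Python-level loop altogether and should be faster still — a sketch, assuming the id column only needs a prefix and a suffix:

df['id'] = "my path/" + df['id'].astype(str) + ".jpg"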
H: Why are words represented by frequency counts before embedding?
Before getting vector representations of words by embedding, the words are mapped to numbers. These numbers are chosen to be the frequency of that word in the dataset. Why does this convention exist? Does it have any effects, or is it arbitrary?
AI: There is a general principle in linguistics and consequently in NLP: the meaning of a word is represented by the context of the word, i.e. the words around it. [edit] In NLP this principle is the basis of distributional semantics, which is used in every NLP application involving semantics (almost all of them).
This means that statistically the meaning of a word can be represented by a distribution of frequencies/probabilities over the vocabulary of all its possible context words. This principle is generalized to a full text: the meaning of the text is represented as the frequency distribution of the words it contains.
Thus it's very meaningful to use the word frequency: it represents the "importance" of the word in the text and taking together the "importance" of all the words gives a representation of the meaning of the text. |
H: Failed to convert a NumPy array to a Tensor
i use this model
model = Sequential([
Dense(units=10, input_shape=(1,), activation='relu'),
Dense(units=32, activation='relu'),
Dense(units=10, activation='softmax')
])
model.compile(optimizer=Adam(learning_rate=0.0001), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=10, epochs=30)
but model.fit always return this error
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported
object type float).
however i converted my data as following
x_train = np.array(x_train)
y_train = np.array(y_train)
x_test= np.array(x_test)
y_test = np.array(y_test)
y_train, x_train = shuffle(y_train, x_train)
y_test, x_test = shuffle(y_test, x_test)
this is my model summary
and this is the shape of my data: x_train is a 1-D array for each input sample, and y_train holds a label for each sample with values from 1 to 10.
Can anyone help me? Regards in advance!
I define my data as follows:
from google.colab import files
uploaded = files.upload()
import io
dset = pd.read_csv(io.BytesIO(uploaded['1-210.csv']))
y= dset.Readername
x=dset.drop('Readername',axis=1)
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.2)
AI: It looks like the model is expecting float input. Try converting to float using astype:
X = np.asarray(X).astype(np.float32) |
H: Train set and Test set For result and conclusion
I have performed a Logistic regression on a binary classification dataset.
The results are as follows:
The training-set accuracy score is 0.8523 while the test-set accuracy is 0.8442.
For Model evaluation and improvement using Kfold and GridSearch cv :
kfold validation
Applying 5-Fold Cross Validation
from sklearn.model_selection import cross_val_score
scores = cross_val_score(model, X_test, y_test, cv = 5, scoring='accuracy')
print('Cross-validation scores:{}'.format(scores))
Cross-validation scores:[0.83913352 0.84428267 0.84872159 0.8460309 0.84123601]
We can summarize the cross-validation accuracy by calculating its mean.
Compute Average cross-validation score
print('Average cross-validation score: {:.4f}'.format(scores.mean()))
Average cross-validation score: 0.8439
The original model score is found to be 0.8523. The average cross-validation score is 0.8439. So, we can conclude that cross-validation does not result in a performance improvement.
Hyperparameter Optimization using GridSearch CV
from sklearn.model_selection import GridSearchCV
parameters = [{'penalty':['l1','l2']},
{'C':[1, 10, 100, 1000]}]
grid_search = GridSearchCV(estimator = model,param_grid = parameters,scoring = 'accuracy',cv = 5,verbose=0)
grid_search.fit(X_train, y_train)
GridSearchCV(cv=5,
estimator=LogisticRegression(random_state=0, solver='liblinear'),
param_grid=[{'penalty': ['l1', 'l2']}, {'C': [1, 10, 100, 1000]}],
scoring='accuracy')
Examine the best model
Best score achieved during the GridSearchCV
print('GridSearch CV best score : {:.4f}\n\n'.format(grid_search.best_score_))
GridSearch CV best score : 0.8520
Print parameters that give the best results
print('Parameters that give the best results :','\n\n', (grid_search.best_params_))
Parameters that give the best results :
{'C': 10}
Print estimator that was chosen by the GridSearch
print('\n\nEstimator that was chosen by the search :','\n\n', (grid_search.best_estimator_))
Estimator that was chosen by the search :
LogisticRegression(C=10, random_state=0, solver='liblinear')
Calculate GridSearch CV score on test set
print('GridSearch CV score on test set: {0:0.4f}'.format(grid_search.score(X_test, y_test)))
GridSearch CV score on test set: 0.8446
GridSearch CV score on test set: 0.8525
I have used the train set for k-fold and grid search.
My concern is: which set should be used to report the result, train or test?
AI: Having close values for training-set accuracy and test-set accuracy is good. It means your model is not overfitting. But maybe you can still improve it.
If you do Hyperparameter Optimization using GridSearch CV, you should have:
train set / validation set with kfold to use during the parameters optimisation only
test set is for final performance evaluation
To evaluate the final performance, use the values obtained with the test set. It is as if you evaluated your model on new, never-seen data.
Moreover in your question:
Cross-validation is not used to improve the model but just to obtain a mean accuracy value, which should be more reliable than a single accuracy calculation.
And you should compare the final test set score values of both methods: non-optimized score 0.8442 or 0.8439, against optimized score 0.8525.
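A minimal sketch of that workflow with the objects from your code (the point is simply that the test set is touched exactly once, at the end):

from sklearn.model_selection import GridSearchCV

# 1) tune on the training data only; the internal 5-fold CV plays the role of the validation set
grid_search = GridSearchCV(estimator=model, param_grid=parameters, scoring='accuracy', cv=5)
grid_search.fit(X_train, y_train)

# 2) report the final, "never seen" performance once, on the held-out test set
print('Final test accuracy: {:.4f}'.format(grid_search.score(X_test, y_test)))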
H: Compare 2 Pandas Column, over different rows and Output the resultant in a 3rd column
I have a data frame like this.
| | EMPLOYEE_ID | NAME | MANAGER_EMPLOYEE_ID |
| --- | --- | --- | --- |
| 0 | 42 | S | 40 |
| 1 | 40 | G | NaN |
| 2 | T | M | 40 |
| 3 | 0c | H | 42 |
I want a separate column that outputs the Name of the Manager of every person.
So for S manager would be G
for G it would be empty (Non NaN, if possible)
for M it would be G
for H it would be S
Here is my attempt:
name = list(df3[df3["MANAGER_EMPLOYEE_ID"].isin(list(df3["EMPLOYEE_ID"]))]["MANAGER_EMPLOYEE_ID"].unique())
df3.loc[:, "MANAGER NAME"] = False
b_match_idx = df3[df3["MANAGER_EMPLOYEE_ID"].isin(name)].index
df3.at[np.array(b_match_idx),"MANAGER NAME"] = df3["NAME"]
df3
But the result is not what I want. Can someone point out the flaw in the code?
AI: You can just do a join with the dataframe on itself by using the MANAGER_EMPLOYEE_ID and EMPLOYEE_ID columns like this:
(
df3
.merge(df3[["EMPLOYEE_ID", "NAME"]], how="left", left_on="MANAGER_EMPLOYEE_ID", right_on="EMPLOYEE_ID", suffixes=(None, "_y"))
.rename({"NAME_y": "MANAGER_NAME"})
.drop("EMPLOYEE_ID_y", axis=1)
)
Which will give the following resulting dataframe:
| EMPLOYEE_ID | NAME | MANAGER_EMPLOYEE_ID | MANAGER_NAME |
| --- | --- | --- | --- |
| 42 | S | 40 | G |
| 40 | G | nan | nan |
| T | M | 40 | G |
| 0c | H | 42 | S |
H: NLP techniques for converting from a direct speech to a reported speech
Any idea of some NLP techniques to transform a direct speech to a reported speech ?
Example converting : "I'm learning NLP" said a user to : a user said he's learning NLP.
I thought about paraphrasing but not sure..
Thank you!
AI: What do you mean by reported speech? It might be easier to help out if you could elaborate on the end goal. What are you trying to do?
EDIT
I see so what you are looking to do is to translate between active and passive voice. When it comes to techniques to do this I found several options:
You can train a model using a long short-term memory (LSTM) recurrent neural networks (RNNs) model to detect whether it is an active or passive model. From there you can then work on the translation part. Example for model here and translation here.
It looks like you will need to familiarize yourself with the Spacy python library. An example of how this can be used in your case can be found here.
Another, though more unorthodox approach is through the use of the program language Prolog. A paper on this subject can be found here with the accompanying code here.
Here is a short primer on using Prolog for your specific task. |
H: Speculation regarding rising trend in softwares with a potential in diminishing the role of a data scientist
I have recently come across a demo of a piece of software built on natural language processing whose job is to take text the user types in a Python environment such as Jupyter and automatically convert it to code. A detailed explanation of this software can be found here.
I was wondering: if such software is developed to handle cleaning and manipulating data, then what would be left for a data scientist to do? At the moment there is a trend towards the democratization of machine learning, or the citizen data scientist. As more and more software integrates machine learning, should this be considered a dangerous trend, and how should we adapt to the rising popularity of similar software?
AI: I think commitstrip got it spot on. :-)
For myself, I would invest into soft skills, i.e. working with people, decoding what is needed from what is requested, planning large projects, and into hard skills like statistics, information theory and related areas of maths. Coding is just means to get things done IMHO |
H: I am getting (loss: nan - accuracy: 0.0000e+00) for all epochs after training the model
I made a simple model to train my data set, which consists of 210 samples where each sample is a numpy array of 22 values,
and x_train and y_train look like:
and this is my simple code:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import categorical_crossentropy
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
from google.colab import files
uploaded = files.upload()
import io
dset = pd.read_csv(io.BytesIO(uploaded['1-210.csv']))
y= dset.Readernumber
x=dset.drop('Readername',axis=1)
#the split ratio of 80:20. The 20% testing data set is represented by the 0.2 at the end.
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.2)
x_train= np.asarray(x_train).astype('float32')
y_train = np.asarray(y_train).astype('float32')
y_train, x_train = shuffle(y_train, x_train)
#create the model #input_shape=(23,)
model = Sequential([
Dense(units=4,input_shape=(22,), activation='relu'),
Dense(units=16, activation='relu'),
Dense(units=10, activation='softmax')
])
#get the model ready for training is call the compile() function on it.
model.compile(optimizer=Adam(learning_rate=0.0001), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
#train it using the fit() function.
model.fit(x_train, y_train, epochs=5)
And this is what I'm getting for all the epochs :
I will be grateful to anyone who can help me!
AI: I can't comment -- where this would be more applicable -- but your y_train is class encoded (e.g., this sample's label is class 1), which is a single output. When your data are fed into the model w/ 10 output nodes, the model doesn't know what to do considering your y_train has 1 output for each sample.
A solution would be to one-hot encode your outputs (e.g., if your sample's label is class 1, it would be represented as [0,1,0,0,0,0,0,0,0,0]). Sklearn has a convenient OneHotEncoder to make the preprocessing simple. Now you have 10 outputs for each sample and the model can understand what's going on. Hopefully this helps.
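A rough sketch of that encoding (note that with one-hot targets the loss should be categorical_crossentropy rather than sparse_categorical_crossentropy):

from sklearn.preprocessing import OneHotEncoder

y_train_onehot = OneHotEncoder().fit_transform(y_train.reshape(-1, 1)).toarray()

model.compile(optimizer=Adam(learning_rate=0.0001),
              loss='categorical_crossentropy',  # matches one-hot targets
              metrics=['accuracy'])
model.fit(x_train, y_train_onehot, epochs=5)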
H: How to Inference With Keras Sequential Models (Text Classification)
I have the following LSTM model and I can't make inference with it:
print("Define LSTM model")
rnnmodel=Sequential()
rnnmodel.add(embedding_layer)
rnnmodel.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
rnnmodel.add(Dense(2, activation="sigmoid"))
rnnmodel.compile(loss="binary_crossentropy",
optimizer="adam",
metrics=["accuracy"])
rnnmodel.fit(X_train, y_train,
batch_size=256,
epochs=1,
validation_data=(x_val, y_val))
score, acc=rnnmodel.evaluate(test_data, test_labels, batch_size=128)
print(f"Test accuracy with RNN: {acc}")
(epoch is 1 to test) I want to make an inference with the text, let's say
text=["the product was horrible"]
I check the documentation of tf.keras.Sequential and it states I should use the predict function and the input should be "A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs)."
So what I did is:
text=["the product was horrible"]
inference_sequence=tokenizer.texts_to_sequences(text)
inference_data=pad_sequences(inference_sequence, maxlen=MAX_SEQUENCE_LENGTH)
predictions=rnnmodel.predict(inference_data)
print(predictions)
and it gives me the result [[0.63219154 0.33410403]]
However I've given only one sentence. Why does it give me two results? I checked the sigmoid documentation here and confirmed it should return only one result. So what's the problem here?
I also tried other approaches to make inference like mentioned https://stackoverflow.com/questions/61443543/how-to-make-prediction-on-keras-text-classification
So I did:
text=["the product was horrible"]
rnnmodel.predict(text)
and it gives me the warning: WARNING:tensorflow:Model was constructed with shape (None, 1000) for input Tensor("embedding_input:0", shape=(None, 1000), dtype=float32), but it was called on an input with incompatible shape (None, 1). and then gets stuck forever.
What should I do? I just can't make an inference.
AI: I checked the sigmoid documentation from here and confirmed it should return only one result. So what's the problem here?
You have used 2 neurons in the output layer, so each one is responsible for one output.
Either change the neuron count to 1 and make y_true a 1-D array,
or change the activation function to softmax.
text=["the product was horrible"]
rnnmodel.predict(text)
This will not work.
The embedding layer expects Label encoded data. So the first approach is correct. |
H: LSTM model, poor performance
I have been working on a project on the demand for a product.
I am using data from 2016 to train the LSTM model.
The architecture is as follows:
model_2016 = Sequential()
model_2016.add(LSTM(units = 256, input_shape=(1, look_back), return_sequences = True))
model_2016.add(Dropout(0.2))
model_2016.add(LSTM(units = 128))
model_2016.add(Dropout(0.2))
model_2016.add(Dense(units = 1))
trainY = np.reshape(trainY, (len(trainY), ))
valY = np.reshape(valY, (len(valY), ))
opt = Adam(learning_rate=0.0005, decay=1e-6)
model_2016.compile(loss='mean_squared_error', optimizer=opt, metrics=['accuracy'])
hist = model_2016.fit(trainX, trainY, validation_split = 0.2, epochs=100, batch_size=10)
The train_validation img is as follows
And the prediction of 2016 data is as follows.
The prediction on the 2020 demand looks like the 2016 result: the LSTM model can't produce really precise values. I've checked some articles suggesting modifying the batch size, the number of neurons, and the number of epochs, but the results don't improve. Do you have any ideas? Appreciate it a lot!!
Updated Section
The following one is the model trained by data from 2016~2019
The train_validation img is as follows
And finally, it's the result of the prediction of demand in 2020
If you have any further suggestions, please tell me.
AI: It could be due to a lack of data. Your data seem to cover a single year, and many data dynamics are seasonal. End-of-year data will not have the same shape as the rest of the year. Consequently, it would be better to train on data covering at least a year (preferably 2 or 3 years to let it learn recurring patterns), and then check the model with validation data over several months.
If it is already the case, change the dropout value to 0.1, and the batch size to cover a year.
You can also change absolute values to relative ones, because high absolute values may alter the LSTMs predictions quality. Data standardization could also help. |
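For the scaling suggestion, a minimal sketch with scikit-learn (the variable names are assumptions; the key point is to fit the scaler on the training period only and to invert the transform on the predictions):

from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
train_scaled = scaler.fit_transform(train_demand.reshape(-1, 1))  # learn min/max on training years only
test_scaled = scaler.transform(test_demand.reshape(-1, 1))

# ... build the look_back windows and train the LSTM on the scaled series ...
# predictions = model_2016.predict(testX)
# predictions_original_scale = scaler.inverse_transform(predictions)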
H: Text mining match in Python
I have one column called A and one column called B where column B is a more detailed description of column A, for example:
| A | B | want |
| --- | --- | --- |
| red apple | I love RED apple | 100% |
| organic orange | I love citric fruit | 0% |
| BANANA lover | I have 2 banana | 50% |
Basically the algorithm is if we have all values of A in B then give 100% match, if similar then give 80%, and if no value of A is found in B then 0%.
AI: As far as I understand from your question, you are trying to compare sentences at the word level, and it seems like you are interested in the fraction of words in sentence A that are contained in sentence B (not the intersection itself).
So you could use something very simple (as a first approach)
Try:
def simmilar(s1,s2):
    l1 = s1.split()
    l2 = s2.split()
    # lower-case both sides so the match is case-insensitive (RED vs red)
    l1 = [s.lower() for s in l1]
    l2 = [s.lower() for s in l2]
    n = len(set(l1))
    # fraction of the distinct words of A that also appear in B
    return len(set(l1) & set(l2))/n
df.assign(result = df.apply(lambda x: simmilar(x["A"], x["B"]), axis = 1))
Result: |
H: how to choose between data normalization or standadization?
I have been studying data scaling. Two common methods for it are StandardScaler and MinMaxScaler. As I understood it, StandardScaler expects the data to be normally distributed, but I have seen some examples online where this method is used for skewed distributions, so is it really important for the data to be normal in order to use standardization? And, if the distribution is important, can I use in the same dataset the MinMaxScaler for those features with skewed data and the StandardScaler for the features with normally distributed data? Would that be better than just choosing one of the two methods and applying it to every feature?
AI: Both are linear transformations. In general you should try both and see which performs better.
MinMaxScaler: Has the problem that your features will not have the same range of values after scaling. The advantage is that you might have intrinsic boundaries for your features. Also the interpretation of your variables ist still pretty straight forward.
StandardScaler: Your features will be on the same scale, which leads to better comparability between your variables. The problem is that you do not know the mean and the standard deviation of the population but only of your training data set. Hence, you have to assume that these statistics are good approximations for the population.
H: Plot two categorical variables against two numeric variable in ggplot
In my dataset, I have two numeric revenue features, one for each month, and two categorical features one for region and other for value segment. what I want to do is compare these two revenues col by col for each region and facet wrap by value segment. Is there any way to do that in ggplot2?
sample data :
the image in my mind:
AI: Two things before the answer:
Please always produce a sample dataframe with dput()
As this is totally coding based, this might be more suitable for StackOverflow
Proceeding with the answer
#Load the libraries
library(tidyverse)
library(reshape2)
#Create the data frame
JUL_REV <- c(100,2000,3000,340)
AUG_REV <- c(2032,2103,3002,300)
REGION <- c('REGION 1','REGION 1','REGION 2','REGION 2')
VALUE_SEGMENT <- c('VALUE SEGMENT 1','VALUE SEGMENT 2','VALUE SEGMENT 1','VALUE SEGMENT 2' )
df <- data.frame(JUL_REV, AUG_REV, REGION, VALUE_SEGMENT)
#Melt the dataframe
melt(df, id=c('REGION','VALUE_SEGMENT')) -> melt_df
#Plot with ggplot
ggplot()+
geom_bar(data=melt_df,aes(x=REGION, y=value, fill=variable), stat='identity')+
facet_grid(~VALUE_SEGMENT)
I put the legends at the top. Let me know if you need any help! |
H: How to get column names after One Hot Encoding when using Pipelines?
I am using Pipeline and ColumnTransformer to preprocess the data. Basically I am using them to impute null values, scale the numerical data and finally perform OneHotEncoding. When I fit the ColumnTransformer object to my train and test data the resulting output I get is an Array where the column names are 1, 2, 3, 4,5 and so on. Below is my code:-
cat_cols = [cname for cname in train_data1.columns if train_data1[cname].dtype == 'object']
num_cols = [cname for cname in train_data1.columns if train_data1[cname].dtype in ['int64',
'float64']]
from sklearn.preprocessing import OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
num_trans = Pipeline(steps = [('impute', SimpleImputer(strategy = 'mean')),
('scale', StandardScaler())])
cat_trans = Pipeline(steps = [('impute', SimpleImputer(strategy = 'most_frequent')),
('encode', OneHotEncoder(handle_unknown = 'ignore'))])
from sklearn.compose import ColumnTransformer
preproc = ColumnTransformer(transformers = [('cat', cat_trans, cat_cols), ('num', num_trans,
num_cols)])
X = preproc.fit_transform(train_data1)
X_final = preproc.transform(test_data1)
Here both X and X_final are Arrays where the column names are 1, 2, 3 and so on.
What I want is a DataFrame where the column names are present. I know I can convert an Array to a DataFrame using pd.DataFrame, but how do I get the column names? I tried the following but it doesn't work:
X_df = pd.DataFrame(X)
X_df.columns = preproc.get_feature_names()
AI: Try this function to get the feature names.
def get_feature_names():
    column_transformer = preproc
    new_feature_names = []
    for i, transformer_item in enumerate(column_transformer.transformers_):
        transformer_name, transformer, orig_feature_names = transformer_item
        orig_feature_names = list(orig_feature_names)
        if transformer in ('drop', 'passthrough'):
            # skip the 'remainder' entry of the ColumnTransformer
            continue
        if isinstance(transformer, Pipeline):
            # if pipeline, get the last transformer in the Pipeline
            transformer = transformer.steps[-1][1]
        if hasattr(transformer, 'get_feature_names'):
            if 'input_features' in transformer.get_feature_names.__code__.co_varnames:
                names = list(transformer.get_feature_names(orig_feature_names))
            else:
                names = list(transformer.get_feature_names())
            new_feature_names.extend(names)
        else:
            # transformers without get_feature_names (e.g. StandardScaler) keep the original column names
            new_feature_names.extend(orig_feature_names)
    return new_feature_names
H: Why is a MinMax Scaler scaling every coloumn independently?
Why is a MinMax Scaler scaling each column independently? Isn't it losing information if the values are somehow connected?
If the value in column B is always an upper limit for the value in column C, after scaling the value in column B will be smaller than the value in column C because the range is much smaller.
I can't get my head around why that makes any sense. Shouldn't it scale all columns on the same scale to keep the relation between them?
AI: When we say two values are 'connected', we are typically talking about correlation (or covariance).
The correlation between variables A and B is conserved across linear scalings (which MinMax Scalers perform).
For example, if A = [1,5,10], and B = [10,50,100], their Pearson correlation coefficient (i.e. how 'connected' they are) would be 1.
If we rescaled both using a MinMax scaler with bounds [0,1], they would now have the same values (i.e. A=[0.1, 0.5, 1.0] and B=[0.1, 0.5, 1.0]), but more importantly, their Pearson correlation coefficient would still be equal to 1.
This is why a MinMax Scaler can be applied to each column/feature independently.
Again, this is only the case for linear scalings. Non-linear scalers are not guaranteed to conserve the covariance structure of your data. |
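You can verify this numerically with a quick sketch:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

A = np.array([1.0, 5.0, 10.0])
B = np.array([10.0, 50.0, 100.0])
print(np.corrcoef(A, B)[0, 1])  # 1.0

scaled = MinMaxScaler().fit_transform(np.column_stack([A, B]))
print(np.corrcoef(scaled[:, 0], scaled[:, 1])[0, 1])  # still 1.0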
H: 1D target tensor expected, multi-target not supported
I am trying to train my model.
My model outputs a [4,2] tensor where 4 is the batch size and 2 because of binary classification.
After receiving the outputs I find the index of the maximum element for each row, so now the shape is [4,1] and the shape of my label is [4,1] as well. I could not understand why I am still getting this error. Could someone please help me solve it? Also, the optimizer I am using is SGD and the loss criterion is cross-entropy.
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(dataloader, 0):
# get the inputs; data is a list of [inputs, labels]
#inputs, labels = data
inputs, labels = \
data['image'], data['Status']
# zero the parameter gradients
optimizer.zero_grad()
outputs = net(inputs.float())
a=torch.max(outputs,1).indices
a=a.reshape(4,1)
a=a.float()
labels=labels.float()
print(a.shape,labels.shape)
loss = criterion(a, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
This is the error I am getting.
torch.Size([4, 1]) torch.Size([4, 1])
RuntimeError Traceback (most recent call last)
<ipython-input-83-72f63a4db63e> in <module>()
22 labels=labels.float()
23 print(a.shape,labels.shape)
---> 24 loss = criterion(a, labels)
25 loss.backward()
26 optimizer.step()
2 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
2822 if size_average is not None or reduce is not None:
2823 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2824 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
2825
2826
RuntimeError: 1D target tensor expected, multi-target not supported
Also,model is:
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1 input image channel, 16 output channels, 5x5 square convolution
# kernel
self.conv1 = nn.Conv2d(1, 16, 5)
self.conv2 = nn.Conv2d(16, 32, 7)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
self.fc1 = nn.Linear(4608,128)
self.fc2 = nn.Linear(128,16)
self.fc3 = nn.Linear(16, 2)
def forward(self, x):
# Max pooling over a (2, 2) window
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
# If the size is a square, you can specify with a single number
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = self.dropout1(x)
x = torch.flatten(x, 1) # flatten all dimensions except the batch dimension
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.dropout2(x)
x =self.fc3(x)
return x
net = Net()
net = net.float()
print(net)
AI: If this is a binary classification problem then your model should only need to predict one output - a value between 0 and 1. A predicted value close to 0 would indicate the input likely belongs to your first class, and a predicted value close to 1 would indicate the input likely belongs to the second class.
Then you can optimise your model using a loss function such as nn.BCELoss(prediction, target) or nn.BCEWithLogitsLoss(prediction, target). This should avoid the error that you currently get as you won't be dealing with multiple output values for your predictions. |
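A minimal, self-contained sketch of that setup (the random logits below stand in for net(inputs.float()) with the last layer changed to nn.Linear(16, 1); note that the loss must be computed on the raw model output, not on torch.max indices, which would also cut the gradient flow):

import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()

logits = torch.randn(4, 1, requires_grad=True)   # stand-in for the model output, shape [batch, 1]
labels = torch.tensor([[0.], [1.], [1.], [0.]])  # binary targets as floats, same shape

loss = criterion(logits, labels)
loss.backward()
print(loss.item())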
H: How to expand lists?
In lists the main noun is often only mentioned at the end. However, for e.g. NER-tasks, I would like to "expand" them:
Outward and return flight -> outward flight and return flight
Proboscis, Vervet, and golden snub-nosed monkey -> Proboscis monkey, Vervet monkey, and golden snub-nosed monkey
education in mathematics or physics -> education in mathematics or education in physics
Are there already tools out there (bonus points for support of German language)? Google only led me to expanding contractions ("I've" -> "I have")...
AI: In general this is related to syntactic analysis: one needs to obtain a parse tree of the noun phrase, then it's possible to expand by mapping the head of the phrase with the different parts of the conjunction.
I think you can find dependency parsers for German, for instance in the NLTK library or Spacy.
I don't know if you would find a library which provides precisely the expansion though, I would expect that there is a bit of programming to do from the parse tree. |
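A very rough sketch of that idea with spaCy (English model here, but de_core_news_sm exists for German; how well the conjunction is parsed depends heavily on the phrase, so treat this as a starting point only):

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Proboscis, Vervet, and golden snub-nosed monkey")

# inspect the dependency tree: conjuncts are linked via the 'conj' relation
for token in doc:
    print(token.text, token.dep_, "->", token.head.text)

# from the conj chain you can identify the bare conjuncts ("Proboscis", "Vervet")
# and copy the shared head noun ("monkey") onto each of them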
H: When calculating lexical richness (e.g. TTR) do you lemmatize first?
When calculating Type-Token Ration (TTR) and Hapax richness (along with similar measures), do you lemmatize the corpus first?
AI: To my knowledge there's no standard way to use lexical density measures such as Type-Token Ratio (TTR).
It's common to apply TTR to the raw tokenized text, in particular if there is no suitable lemmatizer available or if the lemmatization risks introducing errors. Otherwise there's no strong reason one way or the other, as far as I'm aware. However it's important to proceed consistently of course: if one wants to compare TTR between different texts, the TTR should have been calculated the same way. |
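For reference, the measures themselves are trivial to compute either way — a tiny sketch on raw tokens (you would lemmatize the token list first if you go that route, and do the same for every text you compare):

tokens = "the cats sat on the mats and the cat sat on the mat".split()
ttr = len(set(tokens)) / len(tokens)                        # type/token ratio
hapaxes = [t for t in set(tokens) if tokens.count(t) == 1]  # words occurring exactly once
print(ttr, hapaxes)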
H: Why cant we further tune/change the model after evaluating on the test set?
Every thread on stackexchange that I've found says that you can only use the test set once and thats it. So for instance, if you used a linear regression model and got poor results on the test set, you cannot change the the model to say a random forest and evaluate this model again on the test set.
This doesn't make sense to me. For example, we all know the MNIST dataset well. Let's say I download the data and split it into train, validation and test. Also, say I use a linear regression model; clearly I will do poorly on the test set. Now, what's stopping anyone else from downloading the same MNIST dataset, splitting it into train, validation, test, using a different model (e.g. a neural network) and reporting their test result?
According to our understanding of only being able to use the test set once, their test results are invalid because they have somehow "learnt" that a linear regression model was not good. This does not seem right to me.
AI: The reason for the train/validate/test split (which I found out in a painful way) is that you will get good results if you tweak a model to fit a test set. It could be completely random data, but if you calculate enough features and tweak the hyperparameters of your model, you will get a relatively high (and misleading) level of accuracy.
Not tweaking to your test set is a rule of thumb, which people treat like gospel because they've been burned by not following it. In reality, the more nuanced advice is this:
Train on a test set
test your model on a validation set
tweak your model based on the validation results
once you feel confident, test on a completely external test set.
If you get bad results, you can go back to steps 1-3, but you should be careful not to use your test results from 4 to tweak your model, as that defeats the whole point of separating the validation and test set in the first place.
There are procedures that automate this process, like train/validate/test splits and nested k-fold cross-validation.
H: How to set the priority to Machine leaning algorithms for Binary classification among Four based on accuracy and fitting
Rain Classification in Australia
Under this context, sklearn classification algorithms will be used, namely:
Logistic Regression Classification (Parametric)
Decision Tree Classification (Non parametric)
Random Forest Classification (Non parametric)
K-Nearest Neighbour (KNN) Classification (Non parametric)
Train Test Split is 80-20
Accuracy Score :
Logistic Regression Test Score 0.854062
Logistic Regression Train Score 0.853797
Decision Tree Test Score 0.795838
Decision Tree Train Score 1.000000
Random Forest Test Score 0.858269
Random Forest Train Score 0.999978
K-Nearest Neighbour Test Score 0.817180
K-Nearest Neighbour Train Score 0.831138
`Null accuracy score: 0.7815`
Logistic Regression is performing well without overfitting.
But if we look at accuracy, Random Forest is better.
How do I check for underfitting?
Confusion Matrix
ROC AUC :
ROC AUC For LR : 0.8742
ROC AUC For DT : 0.7072
ROC AUC For RF : 0.8883
ROC AUC For KNN : 0.7928
Classification Metrices
Classification accuracy : 0.8583
Classification error : 0.1417
Precision : 0.9586
Recall : 0.8726
True Positive Rate : 0.8726
False Positive Rate : 0.2286
Specificity : 0.7714
Logistic Regression
precision recall f1-score support
No 0.88 0.95 0.91 17650
Yes 0.74 0.52 0.61 4935
accuracy 0.85 22585
macro avg 0.81 0.73 0.76 22585
weighted avg 0.85 0.85 0.84 22585
Decision Tree
precision recall f1-score support
No 0.87 0.86 0.87 17650
Yes 0.53 0.55 0.54 4935
accuracy 0.80 22585
macro avg 0.70 0.71 0.70 22585
weighted avg 0.80 0.80 0.80 22585
Random Forest Classification
precision recall f1-score support
No 0.87 0.96 0.91 17650
Yes 0.77 0.50 0.61 4935
accuracy 0.86 22585
macro avg 0.82 0.73 0.76 22585
weighted avg 0.85 0.86 0.85 22585
K-Nearest Neighbour (KNN) Classification
precision recall f1-score support
No 0.84 0.95 0.89 17650
Yes 0.66 0.34 0.45 4935
accuracy 0.82 22585
macro avg 0.75 0.64 0.67 22585
weighted avg 0.80 0.82 0.79 22585
Should evaluation be done on Logistic Regression only, or on all four models?
AI: Since you partly overfit with RF, first try to get the RF hyperparameters right. You could do a grid search like:
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rf = RandomForestClassifier(...)  # any fixed settings go here
param_grid = {
    'n_estimators': [200, 300],
    'max_features': [10, 20, 30]
}
cv = GridSearchCV(estimator=rf, param_grid=param_grid, cv=5)
cv.fit(xtrain, ytrain)
In RandomForestClassifier max_depth and max_features are of particular interest. More trees (n_estimators) tend to be "better".
(Single) Decision Trees are usually not a good estimator.
Once you have tuned your RF properly, you could also try to "stack" KNN, logistic regression, and RF since all three of them are not too bad.
Sklearn comes with a convenience class for stacking, namely StackingClassifier.
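A minimal sketch of such a stack (the hyperparameters are placeholders, not tuned values, and xtest/ytest are assumed to be your held-out split):
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

estimators = [
    ('rf', RandomForestClassifier(n_estimators=300, max_features=20, random_state=0)),
    ('knn', KNeighborsClassifier(n_neighbors=15)),
    ('lr', LogisticRegression(max_iter=1000)),
]
# The final estimator learns how to combine the base models' predictions.
stack = StackingClassifier(estimators=estimators, final_estimator=LogisticRegression(), cv=5)
stack.fit(xtrain, ytrain)
print(stack.score(xtest, ytest))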
H: Convolutional neural network low performance
Problem description
I'm doing Signal Modulation Classification using a Convolutional Neural Network, but performance is very low (around 15% accuracy) and I can't figure out why.
Data
The dataset is composed of 220,000 rows like the ones described below. The data is perfectly balanced: I have 20,000 datapoints for each label.
| Dataset column | Type | Range | Form | Notes |
| Signal | i=real, q=real | | [i_0, i_1, ..., i_n], [q_0, q_1, ..., q_n] | n=127 |
| SNR | s=integer | [-18, 20] | s | |
| Label | l=string | | l | There are 11 labels |
Neural Network
The neural network is a CNN composed sequentially of 2 convolutional layers and 3 fully connected layers.
DROPOUT_RATE = 0.5
iq_in = keras.Input(shape=in_shp, name="IQ")
conv_1 = Convolution1D(128, 7, input_shape=(1, 2, 128), padding="same", activation="relu")(iq_in)
dr_1 = Dropout(DROPOUT_RATE)(conv_1)
conv_2 = Convolution1D(128, 5, padding="same", activation="relu")(dr_1)
max_pool = MaxPooling1D(padding='same')(conv_2)
fc1 = Dense(256, name="fc1")(max_pool)
dr_2 = Dropout(DROPOUT_RATE)(fc1)
fc2 = Dense(128, name="fc2")(dr_2)
out_flatten = Flatten()(fc2)
output = Dense(11, name="output")(out_flatten)
model = keras.Model(inputs=[iq_in], outputs=[output])
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.summary()
Training
Training is done by splitting the data into 70% training set and 30% test set.
NB_EPOCH = 100 # number of epochs to train on
BATCH_SIZE = 1024 # training batch size
filepath = NEURAL_NETWORK_FILENAME
history = model.fit(
X_train,
Y_train,
batch_size=BATCH_SIZE,
epochs=NB_EPOCH,
validation_data=(X_test, Y_test),
callbacks = [
keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=True, mode='auto'),
keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, verbose=0, mode='auto')
])
# we re-load the best weights once training is finished
model.load_weights(filepath)
Results
This is the confusion matrix outputted by my evaluation system.
Question
How to improve performance? Can someone criticize my Neural Network?
Thanks.
AI: There are a couple of things I would suggest:
Reshape the input data: It looks to me that you want to analyse a time series of IQ-values, and each time series is 128 datapoints. In this case you probably want to treat I and Q as the channels respectively and convolve over the 128 points. To do this the input data needs to be of shape (128, 2). Right now you treat your data points as 128-dimensional vectors and convolve over the two channels I and Q.
Flatten between convolutional and dense part: Usually you have the Convolution1D and MaxPooling1D layers extract some spatial features. The fully connected layers have no notion of spatial properties, they just "understand" vectors. So most models have the Flatten layer just before the Dense layers.
Activation functions of the dense layers: Unless an activation is specified in the Dense layers, they use a linear activation, which does not really harvest the expressive power of the layer (see here). So fc1 and fc2 should probably get an activation="relu" as well.
Output activation: I would also use a softmax activation for the output, otherwise you can not interpret the output as probabilities for class membership.
With those points fixed I would expect your model to work, at least in principle. A few more points you might want to check:
Number of filters: Right now you use 128 filters for all the Convolution1D layers. That seems like a lot, I would start with maybe 16 and see how far you get. You can increase the number later.
Filter size: That is just an intuition on my part, but you might try smaller filters. In computer vision filter sizes of 7 and 5 respectively would seem oddly large. But since you probably understand the data better, you might have your reasons.
Not sure if this will help, but if you want to reduce the number of trainable parameters you could also try and insert another MaxPooling1D layer between the two convolutions. Usually one tries to compound spatial information with e.g. pooling while learning more features, i.e. increase the number of filters.
Try a model e.g. like this one (untested code, this is just as an example):
DROPOUT_RATE = 0.5
in_shp = (128, 2)
iq_in = keras.Input(shape=in_shp, name="IQ")
conv_1 = Convolution1D(16, 7, padding="same", activation="relu")(iq_in)
dr_1 = Dropout(DROPOUT_RATE)(conv_1)
conv_2 = Convolution1D(16, 5, padding="same", activation="relu")(dr_1)
max_pool = MaxPooling1D(padding='same')(conv_2)
out_flatten = Flatten()(max_pool)
fc1 = Dense(256, name="fc1", activation="relu")(out_flatten)
dr_2 = Dropout(DROPOUT_RATE)(fc1)
fc2 = Dense(128, name="fc2", activation="relu")(dr_2)
output = Dense(11, name="output", activation="softmax")(fc2)
model = keras.Model(inputs=[iq_in], outputs=[output])
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.summary()
With the following output:
Layer (type) Output Shape Param #
=================================================================
IQ (InputLayer) [(None, 128, 2)] 0
_________________________________________________________________
conv1d (Conv1D) (None, 128, 16) 240
_________________________________________________________________
dropout (Dropout) (None, 128, 16) 0
_________________________________________________________________
conv1d_1 (Conv1D) (None, 128, 16) 1296
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 64, 16) 0
_________________________________________________________________
flatten (Flatten) (None, 1024) 0
_________________________________________________________________
fc1 (Dense) (None, 256) 262400
_________________________________________________________________
dropout_1 (Dropout) (None, 256) 0
_________________________________________________________________
fc2 (Dense) (None, 128) 32896
_________________________________________________________________
output (Dense) (None, 11) 1419
=================================================================
Total params: 298,251
Trainable params: 298,251
Non-trainable params: 0
_________________________________________________________________ |
H: Reinforcement Learning - after 800k timesteps agent wont learng
I'm studying and experimenting with reinforcement learning.
Right now I'm using Gym and Stable-Baselines.
My project has a step where I calculate a reward with a function:
-> step() -> calculate_reward() -> return calculated reward -> back in step():
I set total_reward = -1000 and Done = True
But the agent gets interrupted at this step every time because it keeps using the same action.
Tick: 23
Price: 131.067600 - Next Price: 126.127800 - Action: 0 - Reward: 4.939800
Tick: 24
Price: 128.367200 - Next Price: 130.804100 - Action: 0 - Reward: -2.436900
Tick: 23
Price: 131.067600 - Next Price: 126.127800 - Action: 0 - Reward: 4.939800
Tick: 24
Price: 128.367200 - Next Price: 130.804100 - Action: 0 - Reward: -2.436900
Tick: 23
Price: 131.067600 - Next Price: 126.127800 - Action: 0 - Reward: 4.939800
Tick: 24
Price: 128.367200 - Next Price: 130.804100 - Action: 0 - Reward: -2.436900
What is wrong in my concept?
My notebook colab is: https://colab.research.google.com/drive/1HxjONW_QvYPagk7clUrUPFAkpJ59OwZ8?usp=sharing
AI: You could change some parameters of your A2C model, perhaps the learning rate or the alpha/epsilon/gamma/momentum, for instance:
model = A2C('MlpLstmPolicy', env, verbose=1, learning_rate=0.0001, alpha=0.001, momentum = 0.02)
or using a different LSTM architecture, for instance:
policy_kwargs = dict(net_arch=[64, 'lstm', dict(vf=[128, 128, 128], pi=[64, 64])])
model = A2C('MlpLstmPolicy', env, verbose=1,policy_kwargs = policy_kwargs)
I don't know which parameter could be the best, it depends on your data.
Note that stock markets might need data scaling techniques in order to reach better results. |
H: Get the percentage of each class in classification
Let me explain my question with an example.
Imagine our machine learning model is trained on many different classes such as rectangle, circle, triangle, etc.
Now I want to give it a new shape, for instance, a rounded corner rectangle.
What I need is not only the classification of this new shape I need to extract the similarity percentage. For example, 85% rectangle and 14% circle, and 1% other classes.
Is it a multiclass, multioutput problem? I think multioutput problems are also a sort of binary classification. Is this a classification problem, or should I instead treat it as a regression problem?
How can I achieve my goal for example in Scikit learn? What are the keywords?
AI: Many model classes in scikit-learn have a method called predict_proba(X), which returns the predicted probability of each class for a given input. Below is a link from the scikit-learn documentation that shows an example of how this works in practice.
https://scikit-learn.org/stable/auto_examples/calibration/plot_calibration.html?highlight=classification |
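For illustration, a minimal sketch using a synthetic dataset as a stand-in for your shape features (the classifier choice and dataset parameters are placeholders):
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Stand-in for your shape features: 3 classes, e.g. rectangle / circle / triangle
X, y = make_classification(n_samples=500, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Each row sums to 1: the model's probability for each class,
# e.g. something like [0.85, 0.14, 0.01] for a "mostly rectangle" sample.
print(clf.predict_proba(X[:1]))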
H: Why is sequence prediction always the objective in RNN and LSTM like algorithms
The title is pretty much my question. I haven't seen any literature yet that uses a different training objective. The goal is ultimately to learn the hidden states, so why is only one objective so popular, with no others in sight?
AI: Sequence prediction is not the only objective in RNNs and LSTMs.
For instance, we have sequence classification, where the input to the RNN/LSTM is the sequence, and the output is its label; a concrete example of that is sentiment analysis, where the input is a sentence, and the output is whether the answer is positive, negative or neutral. You can see specific models and benchmark datasets at paperswithcode.
Another objective is sequence tagging where, for each timestep, we output a label; an example is part-of-speech (POS) tagging, where the input is a textual sequence and the output is the POS tag each of the words in the text has. You can see specific models and benchmark datasets at paperswithcode. |
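To make the contrast concrete, here is a minimal Keras sketch of the two objectives built on the same LSTM block (vocabulary size, hidden size and label counts are placeholders, not values from any particular benchmark):
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, NUM_CLASSES, NUM_TAGS = 10000, 3, 17

# Sequence classification (e.g. sentiment): one label per sequence
clf = tf.keras.Sequential([
    layers.Embedding(VOCAB, 64),
    layers.LSTM(64),                          # return_sequences=False -> one vector per sequence
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Sequence tagging (e.g. POS tagging): one label per timestep
tagger = tf.keras.Sequential([
    layers.Embedding(VOCAB, 64),
    layers.LSTM(64, return_sequences=True),   # keep one vector per timestep
    layers.TimeDistributed(layers.Dense(NUM_TAGS, activation="softmax")),
])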
H: Annotating NER dataset
I am working on annotating a dataset for the purpose of named entity recognition.
In principle, I have seen that for multi-phrase (not single word) elements, annotations work like this (see this example below):
Romania (B-CNT)
United States of America (B-CNT C-CNT C-CNT C-CNT)
where B-CNT stands for "beginning-country" and C-CNT represents "continuing-country".
The problem I face is that I have a case (not related to countries) where I need to annotate like B-W GAP_WORD C-W C-W.
How should I proceed with the annotation in this case?
If I do annotate like in the schema above, should I expect a BERT-like entity recognition system to learn and detect that a phrase can be like B-W GAP_WORD C-W C-W, or does the "C-W" (continuation word) need to come directly after the B-W (beginning word)?
Which of the following 2 solutions is correct:
B-W GAP_WORD C-W C-W
B-W GAP_WORD B-W C-W
And then, in case 2, find a way to make the connection between the B-Ws (actually corresponding to the same entity)?
AI: As far as I'm aware there's no perfect answer to this question.
I agree with your analysis, the two options make sense:
The first option corresponds to the correct labeling in theory, in the sense that it means exactly what one wants in this case: the words of the entity don't (necessarily) appear continuously.
The second option makes things easier for the NER system by separating the two parts of the entity. This might lead to better results in practice, because NER systems often make errors with the B label.
If possible I would suggest to experiment with the two options. At the stage of annotation this would mean tagging such discontinuous NEs with a special temporary tag, for example:
B-W GAP_WORD B_OR_C-W C-W
this way the special B_OR_C-W can be automatically replaced with either B or C depending on the selected option. |
H: Taking the squared Euclidean distance for kNN classification of images
A problem I'm working on states:
Computes the squared Euclidean distance between each element of the training
set and each element of the test set. Images should be flattened and treated
as vectors.
The training set is a tensor of dimensions: [500, 3, 32, 32]
The test set is a tensor of dimensions: [250, 3, 32, 32]
The dataset is a subsample from CIFAR-10, so these are images. I've flattened both making their dimensions [500, 3072] and [250, 3072] respectively, but I run into a couple questions.
Isn't the process of kNN to find the nearest "majority" neighbor of a particular example? Why is step 1 asking me to find a distance between the training and test set here, which makes no sense? (I currently see it as taking the pixel value difference between two random & unrelated pictures.)
Shouldn't I instead be finding a distance between the training set and a "label", then classify the example as the label with the lowest distance?
I get that the label is a scalar value so taking the actual difference between it and an example is incorrect, I'm just trying to understand it more conceptually. I've tried to not add code to clear headspace, but can toss it in if needed.
AI: Took a bit of time, but I understand what I was asking.
The process of kNN is to find the nearest neighbors and take the majority label among them (the number of neighbors is determined by the parameter k). My misunderstanding came from how I interpreted what the training set was being used for. The training set and the test set are both mapped into the same feature space. To classify the test set images, a distance (usually Euclidean) is taken between each test set example and each training set example (this is why the output matrix will be 500x250).
Each example in the test set is then classified with the majority label of its nearest training neighbors. This is why you measure a distance between the two sets.
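For reference, a minimal NumPy sketch of the distance matrix plus the voting step, using the identity $\|a-b\|^2 = \|a\|^2 - 2a\cdot b + \|b\|^2$ (variable names are placeholders):
import numpy as np

# train: (500, 3072), test: (250, 3072) after flattening
def squared_euclidean(train, test):
    # Pairwise squared distances, shape (n_train, n_test) = (500, 250)
    train_sq = (train ** 2).sum(axis=1)[:, None]   # (500, 1)
    test_sq = (test ** 2).sum(axis=1)[None, :]     # (1, 250)
    cross = train @ test.T                         # (500, 250)
    return train_sq - 2 * cross + test_sq

def knn_predict(dists, train_labels, k=5):
    # Classify each test image by the majority label among its k nearest training images
    nearest = np.argsort(dists, axis=0)[:k, :]     # (k, 250) indices into the training set
    votes = train_labels[nearest]                  # (k, 250) labels of those neighbors
    return np.array([np.bincount(col).argmax() for col in votes.T])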
H: How to measure the accuracy of an NLP paraphrasing model?
I am using the HuggingFace library to do sentence paraphrasing (given an input sentence, the model outputs a paraphrase). How am I supposed to compare the results of two separate models (one trained with t5-base, the other with t5-small) for this task? Can I just compare the validation loss or do I need to use a metric (if so, what metric)?
AI: Evaluation should always be specific to the target task and preferably rely on some unseen test set.
The target task is paraphrasing, so the evaluation should be designed to check externally how good the generated sentences are as paraphrases. Usually this kind of task (the output is similar to Machine Translation) is evaluated by using a gold standard set of paraphrases and measuring how close the generated paraphrase is to the gold standard paraphrase(s). The comparison commonly uses some variant of BLEU score.
Practically the way to know how to evaluate a common task is to search the state of the art: for example this paper uses the method described above with various similarity measures (BLEU, ROUGE and variants apparently). |
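For instance, a minimal sketch with NLTK's sentence-level BLEU; in practice you would average over a full test set (or use corpus-level BLEU/ROUGE), and the sentences here are placeholders:
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "the weather is nice today".split()      # gold-standard paraphrase (tokenized)
candidate = "today the weather is pleasant".split()  # model output (tokenized)

# Smoothing avoids zero scores when higher-order n-grams don't match on short sentences
score = sentence_bleu([reference], candidate, smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")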
H: Calculating Dissimilarity between attributes
I would like to figure out how to calculate the dissimilarity between attributes Jack and Jim. Given the attributes table shown below
Given the Relational table.
and the example calculations
I would like to understand how the dissimilarity is calculated for Jack and Jim.
AI: I understand how to approach the problem now.
$q$: number of attributes equal to 1 for both Jack and Jim.
$r$: number of attributes equal to 1 for Jack but 0 for Jim.
$s$: number of attributes equal to 0 for Jack but 1 for Jim.
$t$: number of attributes equal to 0 for both Jack and Jim.
Therefore,
$d(\text{Jack},\text{Jim}) = {(r+s)\over(q+r+s)} = {(1+1)\over(1+1+1)} = 0.67$ |
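A minimal sketch of the same computation in Python; the attribute vectors below are hypothetical, chosen only so that the counts come out to q=1, r=1, s=1 as in the example (the original table isn't reproduced here):
def asymmetric_binary_dissimilarity(a, b):
    # d(a, b) = (r + s) / (q + r + s) for binary attribute vectors a and b
    q = sum(x == 1 and y == 1 for x, y in zip(a, b))  # both 1
    r = sum(x == 1 and y == 0 for x, y in zip(a, b))  # 1 in a, 0 in b
    s = sum(x == 0 and y == 1 for x, y in zip(a, b))  # 0 in a, 1 in b
    return (r + s) / (q + r + s)

# Hypothetical encodings consistent with q=1, r=1, s=1
jack = [1, 0, 1, 0, 0, 0]
jim  = [1, 1, 0, 0, 0, 0]
print(round(asymmetric_binary_dissimilarity(jack, jim), 2))  # 0.67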
H: "Up or down but not sideways" bimodal time series prediction - what is the best way to model it?
Say I have a time series (e.g. bitcoin price). I want to predict tomorrow's price, specifically tomorrow's % change in price from today. Let's say this is gaussian distributed, with the mean at 0%.
If the market is trending up, the price prediction should be higher (e.g. +3.1%).
If the market is trending down, the price prediction should be lower (e.g. -5.4%).
If the market is trending sideways, the price prediction should be neutral (e.g. 0%).
However, there are times when the market is ready to move "up or down but not sideways". (This may happen because of some controversy in the news, and everyone is waiting to see what everyone else will do, and whoever moves first starts the trend, like a butterfly effect.) This is the part I'm interested in modeling.
If the model is trained to simply predict a target float value (e.g. -5.4), this can tell us about trends up, down, or sideways, but it can't tell us when the market is in a state of controversy ("up or down but not sideways").
If the model is trained instead to predict a mean and variance (i.e. a unimodal gaussian distribution), this tells us about model confidence, but it still isn't enough.
Confident uptrend = mean positive, variance small
Confident downtrend = mean negative, variance small
Confident sideways trend = mean zero, variance small
Up or down but not sideways = mean zero, variance high
Random / model has no fucking clue = mean zero, variance high
This is bad, because "up or down but not sideways" looks the same as a random guess. It's even worse because zero should be the least likely outcome, yet the model has predicted zero to be most likely (because the mean of the distribution is centered at zero).
A mixture of two Gaussians, however, is able to model bimodal distributions like this:
Is this the best way to model this problem? A complication I see here is: gaussians can't be skewed. There must be better, more flexible, distributions.
Plan B--
The other method I thought of was using a softmax over N classes, discretizing the distribution. Then the model gives a probability for each price% range, and even more flexibility on distribution shapes.
If I went the softmax route, what should I be careful of? Does this run into issues with class imbalance? Is there some superior logic to choosing the number of classes, and the range of each class? Should classes be balanced? (ie. each class is equally likely) Or should classes be linearly binned by z-score? (ie. between 0-.5 std, .5-1 STD, 1-1.5 std, 1.5-2 std, etc) A complication I see with these is the artefacts emerging from coincidences of how the bins align with the data points.
What is the best way to model this problem? Are there better, more flexible, continuous distributions?
(Context: neural net in pytorch)
AI: It seems like mixture density networks (MDNs) provide a great solution.
Gaussian mixtures can be numerically compared to both the linear regression MSE loss approach and the softmax cross-entropy loss approach via negative log likelihood. |
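A minimal PyTorch sketch of a mixture density head and its negative-log-likelihood loss (the upstream feature extractor, the layer sizes and the number of components are assumptions):
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureDensityHead(nn.Module):
    # Outputs a K-component Gaussian mixture over tomorrow's % change
    def __init__(self, in_features, n_components=2):
        super().__init__()
        self.pi = nn.Linear(in_features, n_components)         # mixture weights (logits)
        self.mu = nn.Linear(in_features, n_components)         # component means
        self.log_sigma = nn.Linear(in_features, n_components)  # component log std-devs

    def forward(self, h):
        return self.pi(h), self.mu(h), self.log_sigma(h)

def mdn_nll(pi_logits, mu, log_sigma, y):
    # Negative log-likelihood of targets y (shape [batch]) under the predicted mixture
    y = y.unsqueeze(-1)                                # (batch, 1) broadcast against (batch, K)
    log_pi = F.log_softmax(pi_logits, dim=-1)
    log_norm = -0.5 * ((y - mu) / log_sigma.exp()) ** 2 - log_sigma - 0.5 * math.log(2 * math.pi)
    return -torch.logsumexp(log_pi + log_norm, dim=-1).mean()

# Usage: h = backbone(features); pi, mu, ls = head(h); loss = mdn_nll(pi, mu, ls, targets)
In the "up or down but not sideways" regime, a head like this can place the two component means at opposite signs with comparable weights, which a single Gaussian cannot express.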
H: Tensorflow parameters for CNN
I created the below simple model (taken from a Coursera course). It has a total of five convolutions.
model = tf.keras.models.Sequential([
# Note the input shape is the desired size of the image 300x300 with 3 bytes color
# This is the first convolution
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
# The second convolution
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The third convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fourth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fifth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
# 512 neuron hidden layer
tf.keras.layers.Dense(512, activation='relu'),
# Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans')
tf.keras.layers.Dense(1, activation='sigmoid')
])
When I go through the summary, I can see that the # of parameters does not seem to match up with the theory that I have learned.
Summary:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 298, 298, 16) 448
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 149, 149, 16) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 147, 147, 32) 4640
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 73, 73, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 71, 71, 64) 18496
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 35, 35, 64) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 33, 33, 64) 36928
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 16, 16, 64) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 14, 14, 64) 36928
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 7, 7, 64) 0
_________________________________________________________________
flatten (Flatten) (None, 3136) 0
_________________________________________________________________
dense (Dense) (None, 512) 1606144
_________________________________________________________________
dense_1 (Dense) (None, 1) 513
=================================================================
Total params: 1,704,097
Trainable params: 1,704,097
Non-trainable params: 0
The first value $448$ matches the formula:
(filter(width) * filter(height) * no_of_channels * no_of_filters_current * no_of_filters_previous ) + bias
$(3 * 3 * 3 * 16 * 1)+16$.
However the second value $4640$ does not add up for me.
I am supposed to get $(3 * 3 * 3 * 32 * 16)+32 = 13856$
Any idea where I am going wrong in the calculation?
AI: In your second calculation there are three 3's instead of two: you are multiplying five factors where there should only be four. The number of input channels to a convolutional layer equals the number of filters of the previous layer, so no_of_channels and no_of_filters_previous in your formula are the same quantity and must not both appear. For the second convolution the input has 16 channels (the 16 filters of the first convolution), so the extra factor of 3 (the RGB channels of the original image) does not belong there.
If you compute $(3 * 3 * 16 * 32) + 32$, you obtain $4640$.
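As a quick sanity check of the corrected formula (kernel_h * kernel_w * in_channels * out_filters + out_filters), a minimal sketch:
def conv2d_params(kernel_h, kernel_w, in_channels, out_filters):
    # weights per filter: kernel_h * kernel_w * in_channels, plus one bias per filter
    return kernel_h * kernel_w * in_channels * out_filters + out_filters

print(conv2d_params(3, 3, 3, 16))   # 448   -> first conv (3 RGB input channels)
print(conv2d_params(3, 3, 16, 32))  # 4640  -> second conv (16 channels from the previous layer)
print(conv2d_params(3, 3, 32, 64))  # 18496 -> third conv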
H: How are the values for the `sex` feature in sklearn Diabetes dataset obtained?
I'm just starting out with using sklearn for my own Machine Learning project and I'm using sklearn's built-in "Diabetes" dataset.
While performing data exploration on the features, I noticed something a bit confusing to me about the sex feature. Here's the hist plot:
Now there are 2 things I do understand here:
The binary histogram makes sense: there are 2 distinct 'sexes' in this dataset, male and female.
Them being numerical also makes sense, as it appears all features in this dataset have already been 'normalized'.
What I don't understand is why the values are the way they are? (See below for what the values are)
>>> from sklearn import datasets
>>> diab_df = datasets.load_diabetes(as_frame=True)
>>> features = diab_df['data']
>>> features.sex.unique()
array([ 0.05068012, -0.04464164])
How are these numbers derived? At first, I thought it could be some sort of stratified sampling, where if the true population distribution is say, 53% male, 47% female, then I'd maybe expect to see the values in this hist to be -0.47 & 0.53 or something?
AI: The data description says:
Note: Each of these 10 feature variables have been mean centered and
scaled by the standard deviation times n_samples (i.e. the sum of
squares of each column totals 1).
https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html
For more information see: Bradley Efron, Trevor Hastie, Iain Johnstone and Robert
Tibshirani (2004) "Least Angle Regression," Annals of Statistics (with
discussion), 407-499.
http://web.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf
from sklearn import datasets
print(datasets.load_diabetes()) |
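To see where the two values come from, here is a hedged sketch: centering a two-level column and dividing by its norm (so the column's sum of squares is 1) yields exactly two distinct values. The raw coding and the 207/235 split below are assumptions chosen only to reproduce the numbers seen in the loaded data:
import numpy as np

sex_raw = np.array([2] * 207 + [1] * 235)            # assumed raw coding and class counts

centered = sex_raw - sex_raw.mean()
scaled = centered / np.sqrt((centered ** 2).sum())   # sum of squares of the column is now 1

print(np.unique(scaled))                             # approximately [-0.04464164  0.05068012]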
H: Significance of Convex Loss Function with Nonlinear Models
When used with a linear model, a convex loss function makes the objective convex in the parameters, so any local minimum is a global minimum and can be found by local optimization methods.
However, when the model is nonlinear (e.g. MLPs), local minima are possible for a convex loss.
Are there any benefits to a convex loss function when the model is nonlinear? Can convexity be completely disregarded in the nonlinear case?
AI: Another benefit of a convex loss function is that it tends to converge faster, for both linear and nonlinear models. Convergence can be faster still when a momentum term is added to gradient descent.
However, in real-world scenarios and with many model types, the loss function is often not guaranteed to be convex. It is not clear what "convexity be completely disregarded" means here. Machine learning systems should be designed to be robust to non-convex loss functions in order to find useful parameters in a broad range of problems.