H: Hook up PyTorch U-Net model to video
I built a U-Net model in PyTorch that is trained on medical images to detect polyps. The purpose of the model is to do semantic segmentation, so it must predict the location + class of polyps.
Now I want to hook the model up to some videos so it can run inference on them. I can't get this to work, because the input size of each frame is different from the input that the model expects.
The frame I read (with CV2) is of size: [1080, 1920, 3]. The model is of size [64, 3, 7, 7]. I figured 64 is the batch size here, 3 is the channels, but what are the 7, 7? The size of the image I should input? I created a pastebin with the model architecture here: https://pastebin.com/XUV35MbE.
Can someone show me how to input my frame into the model to get a prediction? The code I have now is:
import cv2
import torch

capture = cv2.VideoCapture('data/videos/17.mp4')
success, frame = capture.read()
going = True
model.to('cuda')
model.eval()
while going:
    going, frame = capture.read()
    if not going and frame is None:
        continue
    frame = torch.tensor(frame).transpose(0,2).type('torch.cuda.FloatTensor')
    results = model(frame)
Edit: the error I get is: RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 7, 7], but got 3-dimensional input of size [3, 1920, 1080] instead
AI: The RuntimeError you're getting is caused by an incorrect number of dimensions in the input. The model expects a four-dimensional input, with the first dimension denoting the batch size. If you are feeding in the video frames one by one, the batch size is simply one, so using torch.unsqueeze(frame, 0) should work.
The second RuntimeError has to do with the size of the tensors you are feeding into the model. As you are using a U-Net architecture, the encoder side will end up with a tensor of size 135 (1080 / 2 / 2 / 2), while the decoder side will end up with a tensor of size 136 (68 * 2). When the model then tries to concatenate the two tensors it can't, because of the different sizes. You can solve this by making sure the height and width are divisible by two n times, with n being the number of times you are downsampling. I would expect a height of 1088 to work, so try padding/rescaling your image to that height. |
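For illustration, a minimal sketch of how a frame could be prepared before being passed to the model (the padding amounts and the names frame/model are assumptions for the example, not taken from your code):

import cv2
import torch
import torch.nn.functional as F

frame = cv2.imread('frame.png')                       # placeholder for a frame of shape (1080, 1920, 3)
x = torch.from_numpy(frame).permute(2, 0, 1).float()  # (3, 1080, 1920): channels first, height before width
x = x.unsqueeze(0)                                    # (1, 3, 1080, 1920): add the batch dimension
x = F.pad(x, (0, 0, 4, 4))                            # pad height 1080 -> 1088 (4 px top and bottom)
x = x.to('cuda')
with torch.no_grad():
    prediction = x and model(x)                       # `model` is assumed to be your loaded U-Net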
H: Does "Module does NOT support fine-tuning" mean that it cannot be used for transfer learning?
Some modules on the Tensorflow hub say that they "do not support fine-tuning". I'm finding it difficult to use one for transfer learning but that is also due to my inexperience. But I just wanted to know if my attempts are just ultimately in vain anyway.
Example: https://tfhub.dev/google/openimages_v4/ssd/mobilenet_v2/1
This module does not support fine tuning, but can I still use it as a starting point to add new layers onto the end to look for new objects?
AI: Fine-tuning consists of unfreezing the entire model you obtained (or part of it) and re-training it on the new data with a very low learning rate. This can potentially achieve meaningful improvements by incrementally adapting the pre-trained features to the new data.
So, simply put, it means you cannot retrain these models (i.e. unfreeze their layers), but of course you can add your own layers on top of them and create your new model.
Read more here. |
H: Can I use labelled data in unsupervised learning algorithms like neural network?
I am working on a transaction dataset that consists of some labeled features like gender, product categories, membership types, and so on. There are also some numeric data like the amount of transactions, number of products, and so on.
I use one hot encoder for all the categorical data and minmaxscaler for all the numeric data.
In this case, can I feed all the categorical data to unsupervised learning algorithms like neural networks and KMeans clustering? Do I need to use PCA to convert all the categorical data to numeric data before feeding it to a neural network?
AI: Yes, of course: unsupervised means that it doesn't learn from the labels, but you can use those labels to see if your unsupervised algorithm works well or not.
In your case, I would recommend dimensionality reduction algorithms. PCA is a good option, but you will have better results with non-linear ones like t-SNE or UMAP. |
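A minimal sketch of that workflow (column names and values are invented for the example): one-hot encode the categoricals, scale the numerics, cluster, and only then use the known labels to check the result.

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.compose import make_column_transformer
from sklearn.metrics import adjusted_rand_score
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

# tiny stand-in for your transaction data
df = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M"],
    "membership_type": ["gold", "basic", "gold", "basic", "gold", "basic"],
    "amount": [120.0, 15.0, 99.0, 20.0, 110.0, 18.0],
    "n_products": [4, 1, 3, 1, 5, 2],
})

pre = make_column_transformer(
    (OneHotEncoder(handle_unknown="ignore"), ["gender", "membership_type"]),
    (MinMaxScaler(), ["amount", "n_products"]),
)
X = pre.fit_transform(df)
clusters = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(X)
# the labels are NOT used for training, only to evaluate the clusters afterwards:
print(adjusted_rand_score(df["membership_type"], clusters))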
H: Should I shuffle my `train_test_split` if my time series contains lagged features?
I understand that it is not recommended to shuffle your training and test sets for time series, else the model will not be able to understand the time dependency of the features.
However, I am now using lagged variables for all of my features for my data frame. If I have a 7 day lag for each feature, the model, in this case a Random Forest (RF) has access to the past 7 days of each feature to predict each $\hat{y}_{i}$.
When using sklearn.model_selection.train_test_split can I set my shuffle=True? I have tested the model with a 7 day lag with/without shuffle and it overfits considerably when shuffle=False. The RF performs far better with shuffle=True, with my train/test $MAE$ converging well.
Is there anything wrong setting my shuffle=True when using time-lagged variables for time-series data?
AI: Yes it is wrong to set shuffle=True.
By shuffling the data you allow your model to learn properties of the data distribution that might appear only in the test time periods.
For example, if you have a trend in the data, shuffling will 'help' you handle it.
In a real-time scenario, you'll never have access to those properties of the distribution. |
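A hedged sketch of the alternative: keep shuffle=False and evaluate with a time-ordered cross-validation instead (the data and RandomForestRegressor settings below are placeholders, not taken from the question).

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 7))                  # stand-in for the 7 lagged features
y = X[:, 0] + rng.normal(scale=0.1, size=500)  # stand-in target

tscv = TimeSeriesSplit(n_splits=5)             # each fold trains on the past, tests on the future
model = RandomForestRegressor(random_state=0)
scores = cross_val_score(model, X, y, cv=tscv, scoring="neg_mean_absolute_error")
print(-scores)                                 # one MAE per chronological fold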
H: Can I reverse engineer (parts of) the training data from a machine learning model?
Somebody trained a machine learning model successfully with some data A.
Is it possible to reverse engineer that machine learning model in such a way, that I get insights about that data A or at least parts of it?
With insights I mean: to get an idea about a single row in my data A like: what values or value ranges did certain attributes in that row have.
Extra question: may it perhaps depend on the kind of machine learning model (whether it is a neural networks model or another one)?
AI: Yes, this is possible to various extents.
The most striking example is in predictive text models, as in the well-known xkcd comic on the subject.
See these two Berkeley AI Research blog posts on the topic, and the associated papers. As mentioned there, one important idea in this area is differential privacy. This is a very strong condition; as described in section 2.1 of this paper:
the attacker can know any amount of information about an individual, and even know every single other data point in the dataset, and still not be able to detect the presence of the targeted individual
This level of privacy is failed by even simple linear models; a quick search finds multiple papers about differentially-private versions of logistic regression, and the above link surveys such questions for decision trees.
To your less stringent sense of privacy, you would need to have some way to identify individual rows, or else extracting insights about any given row would be impossible. Knowing some subset of the rows' values may be enough to uniquely identify it, and then some knowledge about the model can give information about the other values in the row as above. Having access to the actual model information can help a lot, from k-nearest-neighbors which has to store the actual training data, to linear models where some information can be gleaned from the coefficients; in most research areas though, we're limited to treating the model as black-box, with at best the ability to query the model freely.
One thing came to mind while preparing this answer, but I don't know if there's anything formalized about it: training data is generally "nicer" than random numbers; perhaps you know that in all the training data, a feature is rounded to the hundredths place. Then even with a simple polynomial regression you may get some interesting hacks. Say the model has a single feature $x$ with a target $y$, and both are rounded to the hundredths place in the training set, and suppose the model interpolates the training data (generally a bad idea, but sometimes fine with say neural networks). An attacker can query $\hat{f}(x)$ for every hundredth-rounded $x$ (in some realistic range), and keep a list of those whose outputs happen to also have no nonzero decimals after the hundredths place; those are the only possible training points because the model interpolated the training set. |
H: Is it always possible to get well-defined clusters from the data?
I have TV watching data and I have been trying to cluster it to get different sets of watchers. My dataset consists of 64 features (such as total watching time, percent of ads skipped, movies vs. shows, etc.). All the variables are either numerical or binary. But no matter how I treat them (normalize them, standardized, leave them as is, take a subset of features, etc.), I always end up getting pictures similar to this:
This particular picture was constructed after applying t-SNE with 2 components from the scikit-learn library. The picture is similar when using PCA and even when using both PCA and t-SNE combined.
It looks like all the watchers are pretty much the same and that we cannot divide them into clusters. But I highly doubt this. Hence, my question is: is it possible that the data is so homogeneous? Or maybe it is just not possible to visualize it like I am trying to do? Are there maybe some advanced visualization techniques?
AI: First of all, a picture should not be taken as defining whether or not there are groups in your data, since no matter what projection you use (linear with PCA or a manifold method like t-SNE), you are reducing a 64-dimensional space to a 2-dimensional one, and that's a lot of information lost.
Secondly, as far as I know, no theorem guarantees that you can find clusters in any given X matrix; if anything, the opposite holds, as per "An Impossibility Theorem for Clustering". So to your first question, I sadly would say no.
So I would give you two pieces of advice to validate if there are such groups in your data:
You can try using a projection algorithm before clustering like you already have, but I recommend using UMAP instead of tSNE or PCA.
Use a metric to evaluate cluster separation, like inertia if you use K-Means or the silhouette score if you use any other algorithm.
This metric should give you a good indication of whether or not you have groups and hopefully how many (according to your metric, while keeping the number of clusters in check).
Once you have found both an algorithm and a number of clusters with good metrics, you can run a cluster profiling (analyse a central tendency measure, like the average of each feature across clusters) to derive some insights from the clusters' feature characteristics.
Then you can plot your 2D scatter again, but this time colouring the points by cluster id, which will be more insightful.
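For instance, a minimal sketch of the silhouette check over a range of cluster counts (the random matrix X is only a stand-in for your scaled 64-feature data):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = np.random.default_rng(0).normal(size=(1000, 64))   # stand-in for your 64 scaled features

for k in range(2, 10):
    labels = KMeans(n_clusters=k, random_state=0, n_init=10).fit_predict(X)
    print(k, silhouette_score(X, labels))   # values near 0 suggest no clear cluster structure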
Hope it helps |
H: When and how to use StandardScaler with target data for pre-processing
I am trying to figure out when and how to use scikit-learn's StandardScaler transformer, and how I can apply it to the target variable as well.
I've read this post and, while the accepted answer maintains that it is not necessary to standardize the target vector, other answers suggested that it might still be beneficial.
So let's assume that I want to go ahead and standardize the target vector.
According to the syntax, the fit_transform method of a StandardScaler instance can take both a feature matrix X, and a target vector y for supervised learning problems.
However, when I apply it, the method returns only a single array. If I try to unpack two values, like in the code below, I get a "ValueError: too many values to unpack (expected 2)" error:
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_scaled, y_scaled = scaler.fit_transform(X,y) # X is some feature array, y is the target vector
# This code will produce an error message
This is consistent with the documentation, which states that the return value is a single output array X_new.
Then my question is: why is there an option to add y to the parameters of the method? Does it change the way in which X is standardized?
If not, should I use something like the code below?:
from sklearn.preprocessing import StandardScaler
scaler_X = StandardScaler()
scaler_y = StandardScaler()
X_scaled = scaler_X.fit_transform(X)
y_scaler = scaler_y.fit_transform(y)
AI: The correct way of scaling both the features and the target in Python with scikit-learn for a regression problem would be with pipelines, as follows. (As for your question: the y parameter is accepted by fit_transform only for API consistency across scikit-learn estimators; StandardScaler ignores it and it does not change how X is standardized, which is why a single array is returned.)
from sklearn.linear_model import LinearRegression
from sklearn.compose import TransformedTargetRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
tt = TransformedTargetRegressor(regressor=LinearRegression(),
                                transformer=StandardScaler())  # note: pass an instance, not the class
model = Pipeline([("scaler", StandardScaler()), ("regressor", tt)]) |
H: Why first fully connected layer requires flattening in cnn?
One can read everywhere on the internet or in books that in convolutional neural networks, between the convolution layers and the first fully connected layer, you should flatten your data.
I managed to understand that Dense layer (=first fully connected layer) requires 1d (= flattened = linearized) data.
However, I failed to figure out WHY the dense layer specifically requires 1d data.
Could you share your explanation if you have a didactical one?
AI: Requiring a fully connected layer to only accept one-dimensional input (a vector) makes for a consistent interface between layers. Strict inputs make the code more straightforward. Otherwise a fully connected layer might have to accept arbitrary inputs (e.g., n-dimensional). |
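A small illustrative sketch (toy layer sizes, not from the question): the Dense layer stores one weight per (input unit, output unit) pair, so the 3D feature maps are flattened into a single vector of input units first.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),  # -> (26, 26, 8)
    layers.MaxPooling2D(),                                            # -> (13, 13, 8)
    layers.Flatten(),                                                 # -> (1352,)
    layers.Dense(10, activation="softmax"),                           # weight matrix: 1352 x 10
])
model.summary()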
H: What metric should I use to achieve perfect score when choosing all possible results?
A guy told me that he can predict which player I would choose from Greece's Euro 2004 Champion football team. Assume my choice was random.
He then goes ahead and names all the players of the team.
He claims that he aced the prediction, since he achieved a perfect score: no chance that he would have missed my random choice when he named all possible choices!
What metric did he optimize?
Accuracy
Precision
AUC
Recall
Related: https://towardsdatascience.com/the-5-classification-evaluation-metrics-you-must-know-aa97784ff226
AI: The prediction algorithm resulted in only positives, and no negatives, i.e. it predicted that every member of the roster was a member of the target class, with the target class being "your pick".
Of the positive results predicted by the algorithm, there was 1 true positive and N-1 false positives, from a roster of size N.
In that case, the recall True Positive / (True Positive + False Negative) = 1 is maximized.
In the absence of negative predictions, the precision and accuracy are equal: True Positive / (True Positive + False Positive) = 1/N
The meaning of the AUC is ambiguous in the absence of an ordering of the predictions. |
H: Kmeans with Word2Vec model unexpected results
I'm trying to play around with unsupervised NLP using Word2Vec. So far, the data I used is very small, but that is because I am just testing to see how KMeans will work.
The Kmeans was performed first (4 clusters) due to the small number of inputs, and the TSNE was used to visualise to 2D:
from gensim.models import Word2Vec
from nltk.cluster import KMeansClusterer, euclidean_distance
from sklearn.manifold import TSNE

model = Word2Vec(sents,
min_count=5,
window=5,
alpha=0.03,
min_alpha=0.0007,
sg=1,
)
model_K = KMeansClusterer(4, distance=euclidean_distance, repeats=50) #cosine distance didnt change as much
assigned_clusters = model_K.cluster(model.wv.vectors, assign_clusters=True)
tsne = TSNE(n_components=2, random_state=0)
vectors = tsne.fit_transform(model.wv.vectors)
As you can see the clustering kind of works, but there are some clusters way off. I'm wondering if that is because I performed the clustering before the reduction in dimensions. But from what I have read, it's better to do KMeans before, if you can.
When I try with 6 clusters I get:
Any reasons why the clustering isn't working as expected will be appreciated. Thanks.
AI: The question is what is expected from clustering?
KMeans works well on Gaussian-distributed data. In this case both clusterings are "right". Some of the overlap comes from the fact that you applied KMeans before visualization, and t-SNE has to compress information that is present in the high-dimensional data. It means some of the overlapping points are not overlapping in the higher-dimensional space, but you see them overlapping here.
Last but not least, when clusters are not well separated, KMeans produces overlapping clusters anyway. |
H: Performance measurement of an event extraction system
I have developed an event extraction system for text documents. It first clusters the data corpus and extracts answers for the what, when and where questions. Final answers are determined by using a candidate scoring function. I am struggling to evaluate the performance of the system. What measurements should I consider? Any suggestion is highly appreciated. An image explaining the problem is attached.
AI: The standard evaluation would be to count the proportion of correct predictions.
The most basic version would be to count 3 instances for every event: where, when, what. For example, if the three questions are answered correctly, the score for this event is 3/3. Note that the case where one of the questions has no gold answer should be counted normally, i.e. the system is correct if it also gives no answer, and it is an error if it does give one.
You might also have the case where an event is not detected at all by the system, in this case it makes sense to count as if it has the three questions wrong: 0/3.
It looks like you can also have several answers for one of the questions. In this case you might want to count partial answers, for example 0.5 if the system finds one correct answer out of 2. There can be different variants of this option.
The final evaluation score is simply aggregated across all the events.
Note that it would be common to count the detailed score for each type of question as well. |
H: One-Class Text Classification
So I have a specific use case where my colleagues have kept thousands of articles across the years deemed as "Good", among hundreds of thousands of other articles deemed as "Bad" that they didn't keep!
My objective is to train an NLP Deep Learning Model to detect which articles are good and which are bad. Since I don't have the "Bad" articles I cannot use Binary Classification.
So my questions are:
1- Is One-Class Text Classification suitable for this task?
1.1- If yes, please let me know how to do it in the context of NLP.
2- Are there other solutions or suggestions for this use case?
P.S.
I have found some research and code for similar use cases like Anomaly Detection and fraud detection, but the nature of this use case is different.
Because, first, I have textual documents, while what I found deals with tabular data.
And second, I have thousands of documents labeled as "Good" among hundreds of thousands labeled "Bad" that were not kept in the database.
But in the case of Anomaly Detection and Fraud detection or other similar use cases, most of the data is labeled as "Good" therefore we're looking for exceptions.
I'm really looking forward to your answers, suggestion, and thoughts and I'm very open to discussions.
Thank You.
AI: Since you mention deep learning, one option is to embed the documents and then cluster them.
Each cluster could be labeled as "Good" or "Not Good". The labeling could be done by hand or automatically by voting with existing labels (e.g., if a majority of the documents are "Good" then the entire cluster is "Good").
The fitted cluster regions could also be used for prediction on new documents. |
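A minimal sketch of that idea, assuming the sentence-transformers library and a list of article texts (the model name, cluster count and placeholder articles are illustrative, not requirements):

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

articles = ["first good article ...", "second good article ...", "some other article ..."]  # your documents

encoder = SentenceTransformer("all-MiniLM-L6-v2")             # any sentence embedding model works
embeddings = encoder.encode(articles)

kmeans = KMeans(n_clusters=2, random_state=0).fit(embeddings)  # increase n_clusters for real data
# Label each cluster "Good"/"Not good" by inspection or by majority vote of the
# known-good articles it contains, then classify a new article by its cluster:
new_cluster = kmeans.predict(encoder.encode(["a new, unseen article"]))[0]
print(new_cluster)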
H: Number of features of the model must match the input. Model n_features is 740 and input n_features is 400
I am getting this error when predicting from a random forest classifier;
could anybody point me to where I am going wrong in this?
(Background information: yes, I am trying to do sentence classification with 2 labels.)
#Initializing BoW
cv = CountVectorizer()
#Test-Train Split
X_train,X_test,y_train,y_test = train_test_split(experiment_df['Sentence'],experiment_df['Label'])
#Transform
train = cv.fit_transform(X_train)
test = cv.fit_transform(X_test)
#Train Classifier
clf = RandomForestClassifier(max_depth=2, random_state=0)
clf.fit(train,y_train)
#Pred
y_pred = clf.predict(test)
Error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-31-a6f8e9da0bb0> in <module>()
1 clf = RandomForestClassifier(max_depth=2, random_state=0)
2 clf.fit(train,y_train)
----> 3 y_pred = clf.predict(val)
3 frames
/usr/local/lib/python3.7/dist-packages/sklearn/tree/_classes.py in _validate_X_predict(s
elf, X, check_input)
389 "match the input. Model n_features is %s and "
390 "input n_features is %s "
--> 391 % (self.n_features_, n_features))
392
393 return X
ValueError: Number of features of the model must match the input. Model n_features is 740 and input n_features is 400
AI: You are currently using the fit_transform method on both your training dataset and your test set. This is incorrect: you should not fit the transformer on your test set, as (depending on the model used) this leaks test information into the fit, and it can give issues with dataset shapes when new columns are created based on the values in the data (count vectorizer, dummy columns, etc.), which is exactly what happens here: the two vocabularies differ in size (740 vs. 400 features). The correct way is to fit and transform the train data and then only transform the test dataset:
#Initializing BoW
cv = CountVectorizer()
#Test-Train Split
X_train,X_test,y_train,y_test = train_test_split(experiment_df['Sentence'],experiment_df['Label'])
#Transform
train = cv.fit_transform(X_train)
test = cv.transform(X_test)
#Train Classifier
clf = RandomForestClassifier(max_depth=2, random_state=0)
clf.fit(train,y_train)
#Pred
y_pred = clf.predict(test) |
H: modeling time series data with large number of variables
I want to model time series data of 52 dependent variables using neural networks in order to forecast these series in the future.
I have tried some architectures of LSTM and CNN (Conv1D) models, but my models always overfit and can't generalize.
Does the number of features impact the results of the models? If yes, how should I deal with data with a large number of variables? Are there any models preferred for this task?
AI: Start by studying the covariance of your features; if you find that hard to interpret, use Pearson's correlation. It will help you detect the correlated features, and you can follow up with Spearman's or Kendall's correlation just to be sure.
Then you can proceed with dimensionality reduction using PCA, projecting the high-dimensional data to a lower-dimensional space, or you can embed some of your features, but for this you need good data analysis to choose the right ones.
You can also use feature selection to get the importance of every feature, choose features based on their score, and then remove those that are correlated. This should improve your results.
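As an illustration, a small sketch of the correlation-based filtering step (the 0.9 threshold and the synthetic DataFrame are assumptions, not recommendations for your data):

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(500, 52)),            # stand-in for your 52 variables
                  columns=[f"var_{i}" for i in range(52)])

corr = df.corr(method="pearson").abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))  # upper triangle only
to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]
reduced = df.drop(columns=to_drop)
print(len(to_drop), "highly correlated columns dropped")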
Now, for overfitting, you need to figure out the source of the low bias and high variance in your model. I suggest you use Keras Tuner (if you have implemented the LSTM and CNN with TensorFlow) or Ray Tune (if you're using PyTorch) for hyperparameter tuning, which will help you reduce the overfitting; you can always also rely on early stopping, regularization techniques, dropout... |
H: How do the linear + softmax layers give out word probabilities in Transformer network?
I am trying to implement a transformer network from scratch in pytorch to understand it. I am using The illustrated transformer for guidance. The part where I am stuck is about how do we go from the output of the final decoder layer to linear + softmax.
From what I have understood, if we have a batch of size B, max output seq length M, embedding dimension D, and vocab size V, then the output of the last decoder layer would be BxMxD which we have to turn into a vector of probabilities of size BxV so that we can apply softmax and get next predicted word. But how do we go from a variable size MxD matrix to a fixed-length V vector?
This post says we apply the linear layer to all M vectors sequentially:
That's the thing. It isn't flattened into a single vector. The linear
transformation is applied to all M vectors in the sequence
individually. These vectors have a fixed dimension, which is why it
works.
But how do we coalesce those transformed vectors into just one single vector? Do we sum them up?
AI: Your understanding is not correct.
You go from a $B \times M \times D$ tensor to a $B \times M \times V$ tensor (i.e. the logits). As you can see, in the final tensor we have M vectors of dimension V (one vector per token), not just a single vector.
To obtain the $B \times M \times V$ tensor, you just perform a matrix multiplication with a $D \times V$ weight matrix (the final linear layer), applied to each of the $M$ positions independently.
This applies to Transformers, but also to most sequence generation models, like LSTMs. |
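A minimal PyTorch sketch of that projection step, with toy sizes (B, M, D, V here are placeholders, not values from the question):

import torch
import torch.nn as nn

B, M, D, V = 2, 5, 512, 10000            # batch, sequence length, model dim, vocab size
decoder_output = torch.randn(B, M, D)    # stand-in for the last decoder layer's output

proj = nn.Linear(D, V)                   # the "linear" layer: a D x V weight matrix
logits = proj(decoder_output)            # shape (B, M, V): one score per vocab word, per position
probs = torch.softmax(logits, dim=-1)    # softmax over the vocabulary, per position
print(probs.shape)                       # torch.Size([2, 5, 10000])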
H: Convert .data file to .csv
I'm using a data file called 'adults.data' and I need to work with it as a '.csv' file. Can anyone guide me on how to change the format?
I tried opening the file in Excel and then saving it as csv, but the new file contains only one column containing all the '.data' columns.
AI: One way is to convert the .data file to Excel/csv using Microsoft Excel, which provides an option to get external data (use the Data tab --> From Text). Check this video for a demo.
Another way: you can use Python to read .data files and work directly with dataframes:
import pandas as pd
df = pd.read_csv("adults.data")
(OR)
df = pd.read_table("adults.data") |
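To complete the conversion, the dataframe can then be written back out as a csv file (a small addition to the snippet above; header=None is only needed if the file has no header row, which I believe is the case for the UCI adult data):

import pandas as pd

df = pd.read_csv("adults.data", header=None)   # header=None because the file has no header row
df.to_csv("adults.csv", index=False)           # writes a regular comma-separated file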
H: OpenCV2 resizing image: recommended size
I am working on an image processing project. I see that it's recommended to resize the image in the data pre-processing step.
How do I decide what size to resize it to? The tutorial I am following resizes it to 128.
AI: It actually depends on your deep learning model and your GPU.
So the rule of thumb is to use images of about 256x256 for ImageNet-scale networks and about 96x96 for something smaller and easier. I have heard there are also people who train using 512x512 on Kaggle.
If you train fully convolutional networks like Faster R-CNN you can take much bigger images (say 800x600) because you have batch size = 1, but overall, once again, it really depends on what deep learning model you are using and the resources you have at hand. |
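For reference, a minimal resizing sketch with OpenCV (the 256x256 target and the file name are just examples):

import cv2

img = cv2.imread("some_image.jpg")                                    # original image, e.g. 1080x1920
resized = cv2.resize(img, (256, 256), interpolation=cv2.INTER_AREA)   # target is (width, height)
print(resized.shape)                                                  # (256, 256, 3)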
H: BERT Optimization for Production
I'm using BERT to transform text into a 768-dimensional vector. It's multilingual:
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('paraphrase-multilingual-mpnet-base-v2')
Now I want to put the model into production, but the embedding time is too long and I want to optimize the model to reduce it. What are the libraries that enable me to do this?
AI: You can start by using TorchScript. It may require changing your whole code and switching to the transformers library (by loading the backbone of the model and the last layers); that way you basically get out of the GIL interpreter, because it does not support multithreading.
With TorchScript you can also run your model in a C++ environment.
There's also ONNX, which I believe enhances performance.
If your use case is not real-time and you are using an API, you can use a queue mechanism like RabbitMQ. |
H: Rearrange step chart in tkinter
I'm developing this project in Python using the Tkinter, ElementTree, Numpy, Pandas and Matplotlib packages:
import random
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker

# Function to extract the Name and Value attributes
def extract_name_value(signals_df, rootXML):
    # print(signals_df)
    names_list = [name for name in signals_df['Name'].unique()]
    num_names_list = len(names_list)
    num_axisx = len(signals_df["Name"])
    values_list = [value for pos, value in enumerate(signals_df["Value"])]
    print(values_list)
    points_axisy = signals_df["Value"]
    print(len(points_axisy))
    colors = ['b', 'g', 'r', 'c', 'm', 'y']
    # Creation Graphic
    fig, ax = plt.subplots(nrows=num_names_list, figsize=(20, 30), sharex=True)
    plt.suptitle(f'File XML: {rootXML}', fontsize=16, fontweight='bold', color='SteelBlue', position=(0.75, 0.95))
    plt.xticks(np.arange(-1, num_axisx), color='SteelBlue', fontweight='bold')
    i = 1
    for pos, name in enumerate(names_list):
        # get data
        data = signals_df[signals_df["Name"] == name]["Value"]
        print(data)
        # get color
        j = random.randint(0, len(colors) - 1)
        # get plots by index = pos
        ax[pos].plot(data.index, data, drawstyle='steps-post', marker='o', color=colors[j], linewidth=3)
        ax[pos].set_ylabel(name, fontsize=8, fontweight='bold', color='SteelBlue', rotation=30, labelpad=35)
        ax[pos].yaxis.set_major_formatter(ticker.FormatStrFormatter('%0.1f'))
        ax[pos].yaxis.set_tick_params(labelsize=6)
        ax[pos].grid(alpha=0.4)
        i += 1
    plt.show()
But I would like the y-axis values to start at 0 in all the subplots and end at the length of the points_axisy variable.
That is to say, I would like the lines I painted freehand in yellow (in the screenshot) to be replaced by the actual values of the graph, but I do not understand how to do it. I've already been testing code with the enumerate function but I can't find the solution.
The xml file to test my code can be taken from: xml file Thank you very much in advance for your help, any comments help.
AI: I leave the answer here in case it helps someone searching for this later.
I just changed the line:
ax[pos].plot(data.index, data, drawstyle='steps-post', marker='o', color=colors[j], linewidth=3)
For this:
x = [-1] + data.index.tolist() + [len(signals_df) - 1]
y = [0] + data.tolist() + [data.iloc[-1]]
ax[pos].plot(x, y, drawstyle='steps-post', marker='o', color=colors[j], linewidth=3)
and it worked. Thank you! |
H: How to plot a line graph with extreme value changes and a large data set?
I am trying to visualize the data that I have recorded with a benchmark. The problem I have is that the value changes are rather big and a single data set consists of 10,000 values. I tried to visualize it with Python, but the resulting line graph doesn't look good. Do you have some advice or ideas on how I could visualize the data better?
AI: I found another style of plot that might be interesting in this case: a boxplot. |
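For illustration, a minimal matplotlib sketch; a logarithmic y-axis is another cheap option when the value range is extreme (the synthetic data here is just a stand-in for the benchmark results):

import numpy as np
import matplotlib.pyplot as plt

values = np.random.default_rng(0).lognormal(mean=3, sigma=2, size=10_000)  # stand-in data

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.boxplot(values)                 # summarizes the distribution instead of drawing 10,000 points
ax2.plot(values, linewidth=0.5)
ax2.set_yscale("log")               # compresses the extreme value changes
plt.show()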
H: Why do we use 'T' when we are to say matrix-vector product?
In the first picture the author uses $T$ when writing the matrix-vector product.
But another website does not use $T$ and just says that $x$ is a vector; I do not understand whether it is important or not.
AI: When a vector or matrix has the superscript $T$, it means the matrix/vector is transposed.
Transposing a matrix or vector means flipping the matrix along its diagonal, that is, changing the elements such as the columns are rows and the rows are columns.
For instance, transposing a $3 \times 2$ matrix yields a $2 \times 3$ matrix:
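$$
\begin{pmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{pmatrix}^T = \begin{pmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{pmatrix}
$$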
When speaking about vectors, we normally understand them as column vectors. This means that a vector with $n$ elements is a matrix of dimensions $n \times 1$.
When two bidimensional matrices $A$ and $B$ are multiplied ($A \cdot B$), the second dimension of $A$ must match the first dimension of $B$. Therefore, if $A$ is a vector of dimensions $n \times 1$ and $B$ is a matrix of dimensions $n \times m$, to compute $A \cdot B$, we need the vector $A$ in transposed form to get the dimensions right, that is, $A^T B$.
On the other hand, if $A$ is a matrix of dimensions $n \times m$ and $B$ is a vector of dimensions $m \times 1$, then we don't need to transpose $B$ to compute $A \cdot B$, because the dimensions already match. |
H: How to choose and create natural language data for machine learning
What is the difference between these two data formats?
For example, for the Named Entity Recognition task, I learned that the index representation and BIO encoding are popular data formats for training.
Do they have different features for machine learning, and should I choose the input data format according to the training model's requirements?
# index representation
"entities": [
    {
        "name": "John J. Smith",
        "span": [4, 8],
        "type": "PERSON"
    }
]
# BIO Encoding
Tokens      IO     BIO    BMEWO  BMEWO+
Yesterday   O      O      O      BOS_O
afternoon   O      O      O      O
,           O      O      O      O_PER
John        I_PER  B_PER  B_PER  B_PER
J           I_PER  I_PER  M_PER  M_PER
.           I_PER  I_PER  M_PER  M_PER
Smith       I_PER  I_PER  E_PER  E_PER
traveled    O      O      O      PER_O
to          O      O      O      O_LOC
Washington  I_LOC  B_LOC  W_LOC  W_LOC
.           O      O
AI: The BIO format (and its variants) is a standard format for training a sequence labeling model, in particular a Named Entity Recognition (NER) model.
Sequence labeling consists in assigning a label to every token in the sequence, so at the "low level" stages of training and predicting the system must deal with the token and its label, as well as (possibly) other features associated with the token. There are several possible choices to represent an entity through the labels: there must be at least two obviously, and it has been proved that adding at least a special B for the first token in the entity is beneficial.
A json-like format like the one you present can be used as a simplified output of a NER system, typically for applications which only need a list of recognized entities with their type. It's usually more convenient to manipulate but it cannot be used directly by the NER system: it doesn't even contain the full text, it's not tokenized and it doesn't have a label for each token. But assuming that the full text is also provided, this format could also be converted to BIO or some variant but it's more work.
If the goal is to provide a dataset which can be fed immediately to train a NER model, then the BIO format is clearly more suitable.
If the goal is to provide a convenient format for other usages then something like this JSON format is fine, it's not really a matter of NER. |
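For illustration, a rough sketch of such a span-to-BIO conversion, assuming whitespace tokenization and character-offset spans (the question's example uses a different span convention, so this is a simplified variant, not a drop-in converter):

def spans_to_bio(text, entities):
    """Convert character-offset entity spans to per-token BIO labels."""
    tokens, labels, pos = [], [], 0
    for token in text.split():
        start = text.index(token, pos)      # character offset of this token
        end = start + len(token)
        pos = end
        label = "O"
        for ent in entities:
            ent_start, ent_end = ent["span"]
            if start >= ent_start and end <= ent_end:
                label = ("B-" if start == ent_start else "I-") + ent["type"]
        tokens.append(token)
        labels.append(label)
    return list(zip(tokens, labels))

text = "Yesterday afternoon, John J. Smith traveled to Washington."
entities = [{"name": "John J. Smith", "span": [21, 34], "type": "PERSON"}]
print(spans_to_bio(text, entities))
# [('Yesterday', 'O'), ('afternoon,', 'O'), ('John', 'B-PERSON'), ('J.', 'I-PERSON'),
#  ('Smith', 'I-PERSON'), ('traveled', 'O'), ('to', 'O'), ('Washington.', 'O')]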
H: Convert any format of phone number to (111) 111-1111 using python
I have a phone number column in my dataset which has numbers in different formats. Here are some examples:
1000101000
111-101-1000
(212)-212-2122
444.456.7890
123 456 7890
+12124567890
How do I format all of the above different formats to (111) 111-1111 format?
AI: # txt is the phone number string that you get from your dataset
filterTxt = list(filter(str.isdigit, txt))   # keep only the digit characters
# (added) drop a leading country code so inputs like +12124567890 also end up with 10 digits
if len(filterTxt) == 11 and filterTxt[0] == '1':
    filterTxt = filterTxt[1:]
formatTxt = ''
for index, number in enumerate(filterTxt):
    if index == 0:
        formatTxt += '(' + number
    elif index == 2:
        formatTxt += number + ') '
    elif index == 5:
        formatTxt += number + '-'
    else:
        formatTxt += number
OK, let me explain my code. First of all, we want to keep only the digits; the easiest way is to use list(filter(str.isdigit, txt)). filter reads the whole string and checks each character with str.isdigit, which detects whether the character is a digit or not. Once we have the list of digits, we just format it with the for loop. |
H: Training neural network to emulate a hash function
A hash function takes an input, performs a set of complex operations and then produces an output. For my purposes the output from the function will always be the same for any given input.
I remember reading long, long ago that neural networks can in theory (assuming infinitely large network) reproduce any algorithm - therefore it stands to reason that a theoretical neural network could replicate a hash function. That all being said, I did a search for such a neural network (just with Google) and I couldn't find anything quite as pure as I've described (possibly because hashing inputs has seen some popularity).
If we limit our scope to looking at cryptographic hashes only and assume we can train using input/output pairs is it reasonable to train a network for this?
AI: I remember reading long, long ago that neural networks can in theory (assuming infinitely large network) reproduce any algorithm - therefore it stands to reason that a theoretical neural network could replicate a hash function.
This is the universal approximation theorem and it does apply to learning hash functions.
However, the theorem says nothing about generalisation, and that is a major problem when learning any pseudo-random functions.
Any hash function which has the following properties:
Output is hard or impossible to reverse back to any original input,
output is highly sensitive to input, such that a single bit changing in the input will (pseudo-randomly) change all bits in the output.
will be impossible to approximate and generalise using statistical techniques. Cryptographic hashing algorithms have these as a design goal, and attempt to meet the ideal of a random oracle.
If you could train a neural network to generalise better than 50% accuracy (bit for bit, and measured over a suitably large test data set to establish significance) on unseen inputs, that would be an indication that the hash was broken for cryptographic use. The chances are that you would not be able to achieve this even for known broken hashes such as MD5.
If we limit our scope to looking at cryptographic hashes only and assume we can train using input/output pairs is it reasonable to train a network for this?
You are very unlikely to create any useful reusable model doing this. It may still be a worthwhile learning exercise for practicing machine learning, with the main benefit being it is very easy to put together training and test datasets. Your expectation of not being able to find anything could be used as a test of statistical knowledge, and for bug hunting. Your first suspicion if you get positive results would be to check your code and assumptions. However, there are other learning scenarios where you have a higher expectation of making a useful model, and those are IMO going to be more rewarding. |
H: How to introduce bias in a machine learning model?
How can I introduce bias for a decision tree model while building an ML application?
e.g.
If I am building a stock trading recommendation algorithm, I would want to recommend a stock only when the model detects a probability of a swing (upturn or downturn); but for a set of stocks that I have defined as volatile, I would like the model to recommend them only when the probability of a swing is above a certain value. Can I define this as bias?
How can I introduce this in a model?
Can I:
Introduce a categorical variable that defines a certain stock as volatile and then fit?
or
Set a value to such a stock as categorical and then fit?
Apologies that I am not able to explain my question better, but essentially I want to introduce bias into a model. What is the correct approach to doing it?
AI: In general it's a bad idea to try to force a model to do something: ML is supposed to be data-driven, so if the data doesn't represent the particular desirable pattern then either there's a good reason for that (i.e. the pattern is not as relevant as one thinks it is) or the data is not suitable for the task (or noisy, incomplete...).
You don't give any detail about the current model so there's no way to know whether introducing a variable will change the model the way you want, it depends how the ranking is calculated (assuming there's a ranking involved).
Keep in mind that there's no reason to make the model do everything itself, especially if it's not based on the data. It might make sense to do some rule-based pre- or post-processing. In the case you mention it would be simple to post-process the prediction: if the stock is volatile and the probability is lower than the threshold then ignore this stock. |
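As a toy illustration of that post-processing idea (the column names, threshold and DataFrame are invented for the example, not taken from the question):

import pandas as pd

preds = pd.DataFrame({
    "ticker":     ["AAA", "BBB", "CCC"],
    "swing_prob": [0.55, 0.72, 0.91],
    "volatile":   [False, True, True],
})

THRESHOLD = 0.8   # stricter requirement for volatile stocks
recommend = (~preds["volatile"]) | (preds["swing_prob"] >= THRESHOLD)
print(preds[recommend])   # BBB is filtered out: volatile and below the threshold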
H: Noise free image dataset
Presently, I am working on image denoising using CNNs. I am curious where I can find a noise-free image dataset?
I am looking for real-world images but not the dataset that belongs to MNIST.
AI: There are tons of image datasets. Wikipedia has a great list to get you started: https://en.wikipedia.org/wiki/List_of_datasets_for_machine-learning_research#Image_data
A few very popular datasets:
CIFAR-10 and CIFAR-100
ImageNet
Open Images |
H: nested cross validation vs. train-test split
I am trying to understand the main benefits of conducting a nested cross-validation compared to a simpler train-test split. Let us say I would like to build a prediction model. I initially split my data so that 80% of it is used for training and the remaining 20% of it for testing. Then, I run CV on the 80% to tune the hyperparameters and finally run the model using the optimal hyperparameters on the test sample, in order to get an unbiased estimate of my model performance.
Now, my understanding is that nested-CV has two main benefits:
You get to use the entire data you have as part of the training process (so the inner CV would essentially get to see all the data at some point).
The model performance estimate you get could be more stable (in the sense that it is not based on a single run using the test data, but on multiple runs).
Am I missing something? And from a practical standpoint, assuming a large-enough database, does one really gain much from adding the computational complexity of a nested-CV compared to a simpler train-test split?
Thanks a lot.
AI: You get to use the entire data you have as part of the training process (so the inner CV would essentially get to see all the data at some point).
The model performance estimate you get could be more stable (in the sense that it is not based on a single run using the test data, but on multiple runs).
You've covered the main benefits. However, it is important to point out that more stable specifically includes the benefit of not being dependent on how you split your data. With hold-out validation it may be that the distribution of your test set differs from your training set thereby violating the key assumption of having training and test data coming from the same distribution in order to obtain an unbiased estimate of the model's performance.
This is more likely to be a problem when the amount of data is limited. Therefore, when you have a very large dataset (and your model takes a long time to train) it is common to apply holdout validation (that is, k-fold CV for validation and holdout CV for testing). With models which are very costly to train (such as Neural Nets often are) it is common to apply holdout validation even to only medium-sized datasets (e.g. where medium-sized refers to not more than $200k$ datapoints as a ballpark figure). |
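A minimal sketch of nested CV in scikit-learn (the estimator, grid and synthetic data are placeholders): the inner loop tunes hyperparameters, the outer loop estimates performance.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

X, y = make_classification(n_samples=500, random_state=0)   # stand-in data

inner = GridSearchCV(RandomForestClassifier(random_state=0),
                     param_grid={"max_depth": [3, 5, None]},
                     cv=KFold(5, shuffle=True, random_state=0))
outer_scores = cross_val_score(inner, X, y, cv=KFold(5, shuffle=True, random_state=1))
print(outer_scores.mean(), outer_scores.std())               # performance estimate across outer folds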
H: Algorithms for SMS spam detection
Which among KNN, logistic regression and Naive Bayes would yield the best results for SMS spam detection? Is there any other efficient approach worth exploring?
I am planning to make a Python application for SMS spam detection.
Any suggestions or resources would be great.
Any suggestions or resources would be great.
AI: It largely depends on what sort of feature engineering you use, except for the case of KNN, which is a poor fit here regardless. Naive Bayes will work well with bag-of-words and TF-IDF features, while logistic regression will perform well with all of them, including Word2Vec. |
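A minimal baseline sketch along those lines (the texts/labels lists are placeholders for your SMS data):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["win a free prize now", "are we still meeting tomorrow", "claim your reward", "see you at lunch"]
labels = ["spam", "ham", "spam", "ham"]          # stand-in data

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["free reward, claim now"]))   # -> likely ['spam']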
H: What is the standard output of the GloVe algorithm?
When I look at the loss function of the GloVe algorithm for generating word vectors, I see that $w$ and $\tilde{w}$ are symmetric:
$$
J=\sum_{i,j=1}^V f(X_{ij})\left(w_i^T\tilde{w}_j+b_i+\tilde{b}_j-\log X_{ij}\right)^2
$$
(eq. 8 from https://nlp.stanford.edu/pubs/glove.pdf)
However, the authors never seem to describe whether $w$ and $\tilde{w}$ are both used to generate the embedding, or only one? When I watch Andrew Ng describe the algorithm (https://www.youtube.com/watch?v=EHXqgQNu-Iw), he suggests averaging the two, but when I try to read the original c-implementation of GloVe (https://github.com/stanfordnlp/GloVe/blob/master/src/glove.c), it seems that they implement both saving $w$, and their concatenation, depending on user settings. What do people usually do?
AI: If you are using the embeddings in a downstream task, better performance is often achieved by averaging or summing them (the factor of 2 makes no difference). Concatenating them is fine also as it preserves that same information, i.e. the downstream model can effectively average them (of course it preserves more info, but it is unknown if that is beneficial). Averaging/summing is obviously more memory efficient than concatenating.
An intuition for why averaging is useful is in this paper, which relates mainly to word2vec, but both models learn similar statistics (which is ultimately what these embeddings capture). This paper examines the effect of adding the embeddings and offers a different explanation.
H: How does the character convolution work in ELMo?
When I read the original ELMo paper (https://arxiv.org/pdf/1802.05365.pdf), I'm stumped by the following line:
The context insensitive type representation uses 2048 character n-gram convolutional filters
followed by two highway layers (Srivastava et al., 2015) and a linear projection down to a 512 representation.
The Srivastava citation only seems to relate to the highway layer concept. So, what happens prior to the biLSTM layer(s) in ELMo? As I understand it, one-hot encoded vectors (so, 'raw text') are passed to a convolutional filter and a linear projection? How should I think of input and output dimensions? I get the feeling that perhaps there used to be a detailed explanation somewhere on allennlp.org (or perhaps their github repo), but it has perhaps been deemed outdated or unnecessary since?
I hope the question makes sense.
AI: One way to understand how ELMo's character convolutions work is by directly inspecting the source code.
There, in the forward method, you can see that the input to the network is a tensor of dimensions (batch_size, sequence_length, 50), where 50 is the maximum number of characters per word. Therefore, before passing the text to the network, it is segmented in words, and each character is encoded as an integer value.
This is what happens in the forward method before the highway layers:
The tensor gets sentence boundary tokens prepended and appended (beginning-of-sentence (BOS), end-of-sentence (EOS)).
The tensor goes through an embedding layer (this is somewhat similar to one-hot encoding and a matrix multiplication, see this answer). This gets us a vector for each character.
The tensor goes through different 1D convolutions of configurable kernel sizes.
The resulting activation maps are concatenated and passed as input to the highway networks
This architecture was proposed by Kim et al. (2015), and is summarized well in one of the figures of that paper. |
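To make the shapes concrete, here is a toy PyTorch sketch of the character-CNN stage described above (the vocabulary size, embedding dimension, kernel sizes and filter counts are illustrative, not ELMo's actual hyperparameters):

import torch
import torch.nn as nn

batch, seq_len, max_chars = 2, 7, 50            # max_chars matches the 50 characters per word
char_vocab, char_dim = 262, 16
kernel_sizes, n_filters = [1, 2, 3], [32, 32, 64]

char_ids = torch.randint(0, char_vocab, (batch, seq_len, max_chars))
char_emb = nn.Embedding(char_vocab, char_dim)
convs = nn.ModuleList([nn.Conv1d(char_dim, f, k) for k, f in zip(kernel_sizes, n_filters)])

x = char_emb(char_ids)                                              # (batch, seq, 50, char_dim)
x = x.view(batch * seq_len, max_chars, char_dim).transpose(1, 2)    # (batch*seq, char_dim, 50)
feats = [torch.relu(conv(x)).max(dim=-1).values for conv in convs]  # max-pool over the characters
token_repr = torch.cat(feats, dim=-1).view(batch, seq_len, sum(n_filters))
print(token_repr.shape)                                             # torch.Size([2, 7, 128])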
H: Standardizing giving worse results
I am training a Decision tree regressor on the famous Boston House Price dataset. I read that tree-based models are fairly immune to scaling, so I tried to check this in practice. Before scaling I was getting MAE = 2.61. After scaling using StandardScaler, I got worse results, which is quite surprising as I expected scaling the data to result in an improvement.
Also if not an improvement, I was hoping to see the same result as I got without scaling since I am using a tree based model.
Also, there are no categorical variables in this dataset, and all the variables included in the scale_list have a large difference between their min and max values, hence they have been included. I have not included any binary (1's and 0's) variables.
train_x, test_x, train_y, test_y = train_test_split(data2, y, test_size = 0.2, random_state = 69)
#STANDARDIZING THE DATA
scale_list = ['CRIM', 'NOX', 'RM', 'AGE', 'DIS', 'TAX', 'PTRATIO', 'B', 'LSTAT']
train_x_stand = train_x.copy()
test_x_stand = test_x.copy()
for i in scale_list:
    scale = StandardScaler().fit(train_x_stand[[i]])
    train_x_stand[i] = scale.transform(train_x_stand[[i]])
    test_x_stand[i] = scale.transform(test_x_stand[[i]])
Any help would be appreciated!
AI: As you well mentioned, tree-based models are not sensitive to feature scaling, although on the contrary it might help with the convergence of finding the minimum during optimization in boosted models.
I replicated your code and I found pretty much the same metrics in both scaling and no scaling versions of the model.
import pandas as pd
from sklearn.datasets import load_boston
from sklearn.tree import DecisionTreeRegressor
from sklearn.pipeline import Pipeline
from sklearn.compose import make_column_transformer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error
boston= load_boston()
X, y = pd.DataFrame(boston.data, columns=boston.feature_names), pd.DataFrame(boston.target)
train_x, test_x, train_y, test_y = train_test_split(X, y, test_size = 0.2, random_state = 69)
scale_list = ['CRIM', 'NOX', 'RM', 'AGE', 'DIS', 'TAX', 'PTRATIO', 'B', 'LSTAT']
preprocessor = make_column_transformer((StandardScaler(),scale_list), remainder="passthrough")
smodel = Pipeline([("scaler",preprocessor),
("model", DecisionTreeRegressor(random_state=42))]).fit(train_x, train_y)
tmodel = Pipeline([("model", DecisionTreeRegressor(random_state=42))]).fit(train_x, train_y)
print(f"metric with scale features: {mean_absolute_error(test_y, smodel.predict(test_x))}\nmetric with no scale features:{mean_absolute_error(test_y, tmodel.predict(test_x))}")
metric with scale features: 2.6941176470588237
metric with no scale features:2.662745098039216
I would venture that something is wrong with the way you are applying the scaling. I recommend you check it or use pipelines.
Also, I was unsure whether you used an ensemble or a simple decision tree, but in both cases remember to set the random_state in order to have reproducible results.
Hope it helps! |
H: GridSearchCV is Giving me ValueError: number of labels does not match number of samples
I'm trying to run a grid CV parameter search using sklearn.model_selection.GridSearchCV. I keep getting a ValueError that is really confusing me.
Below I've included the code for the pipeline I created, which includes a TfidfVectorizer and a RandomForestClassifier. I used train_test_split to separate the features and target, and tried to fit the grid search with the pipeline. Here are my results.
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
features = ['id', 'description']
target = 'ratingCategory'
x_train, x_val, y_train, y_val = train_test_split(
train[features],
train[target],
test_size=0.2,
stratify=train[target],
random_state=95
)
vect = TfidfVectorizer()
clf = RandomForestClassifier()
pipe = Pipeline([
('vect', vect),
('clf', clf)]
)
parameters = {
'vect__min_df': (0.01, 0.05),
'clf__ccp_alpha': (0.1, 0.5)
}
grid_search = GridSearchCV(pipe, parameters, cv=5, n_jobs=4, verbose=1)
grid_search.fit(X=x_train, y=y_train)
Checking the shape of the matrices confirms that x_train and y_train have the same length (the number of rows in both = 3269).
So I'm confused as to why fitting the grid search gives me the following error:
Fitting 5 folds for each of 4 candidates, totalling 20 fits
[Parallel(n_jobs=4)]: Using backend LokyBackend with 4 concurrent workers.
[Parallel(n_jobs=4)]: Done 20 out of 20 | elapsed: 2.0s finished
--------------------
ValueErrorTraceback (most recent call last)
<ipython-input-18-4e85850d6599> in <module>
29 grid_search = GridSearchCV(pipe, parameters, cv=5, n_jobs=4, verbose=1)
----> 30 grid_search.fit(X=x_train, y=y_train)
........................................
ValueError: Number of labels=3269 does not match number of samples=2
What does it mean by number of labels and samples? There should be 3269 samples, since the shape of both X and y matrices is (3269, 2) and (3269, ), respectively.
Any help is super appreciated! Let me know if the full traceback would help, but it was extremely long so I didn't include it.
AI: The problem is that TfidfVectorizer expects a one-dimensional sequence of text documents. Iterating over a two-column DataFrame (train[['id', 'description']]) yields its column names, so the vectorizer effectively saw only 2 "documents", which no longer matches the 3269 labels. Pass only the text column instead. Try:
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
# features = ['description']
# target = 'ratingCategory'
x_train, x_val, y_train, y_val = train_test_split(
# train[features],
train.description,
# train[target],
train.ratingCategory,
test_size=0.2,
# stratify=train[target],
stratify=train.ratingCategory,
random_state=95
)
vect = TfidfVectorizer()
clf = RandomForestClassifier()
pipe = Pipeline([
('vect', vect),
('clf', clf)]
)
parameters = {
'vect__min_df': (0.01, 0.05),
'clf__ccp_alpha': (0.1, 0.5)
}
grid_search = GridSearchCV(pipe, parameters, cv=5, n_jobs=4, verbose=1)
grid_search.fit(X=x_train, y=y_train) |
H: train without having target column
I have data from a year (many years but let's say one year for clarity).
Data has columns like Temperature, Humidity etc.
I want to train a model on the October-to-March data, in order to see whether certain conditions (created during Oct-March) are met that imply, for example, the onset of a disease during April-July. So, the target is 1 or 0.
So, the problem is that I must train a model from Oct-March without having a target column and I must predict the target column from April.
I can't think of a way to make that work.
AI: You seem to have a large conceptual problem.
If each row in the Oct-Mar dataset has data but no target, there is no possibility to train anything. There is no outcome linked to the predictors. You need one specific outcome for each row in the Oct-Mar dataset.
If I understand you correctly, you wish to construct a causal chain of the form (Oct-Mar)->(???)->(Apr-Sep)->Outcome. If this is the case, you can't train just on the Oct-Mar data.
There are different ways you can get at (???):
You can go forward and use an unsupervised technique (e.g. k-means clustering) to distinguish patterns in the (Oct-Mar) data. You can then use the resulting classes as a substitute for the (???) target and try to predict it using the (Apr-Sep) data.
You can even go backward and induce (???) using unsupervised ML from the (Apr-Sep) data and try to predict it with (Oct-Mar).
You can come from both ways and cluster both datasets. Then you can check whether these clusters have anything to do with each other.
A different possibility would be to just link the (Oct-Mar) data to the corresponding (i.e. following) (Apr-Sep) data, so you get one dataset with just one target. |
H: WordNetLemmatizer not lemmatizing the word "promotional" even with POS given
When I do wnl.lemmatize('promotional','a') or wnl.lemmatize('promotional',wordnet.ADJ), I get merely 'promotional' when it should return promotion. I supplied the correct POS, so why isn't it working? What can I do?
AI: "promotional" is not an inflected form of "promotion", therefore "promotion" is not the lemma of "promotional". Actually, "promotion" is a noun and "promotional" is an adjective.
Maybe what you actually want to do is not lemmatisation but stemming. Note that the stem is the root of the word and, certainly, the stem of both "promotion" and "promotional" can be "promot" (or "promotion", depending on the convention). |
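For example, with NLTK's Porter stemmer (a quick sketch; the exact stem string depends on the stemmer you pick):

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
print(stemmer.stem("promotional"))  # both words should map to the same stem,
print(stemmer.stem("promotion"))    # e.g. "promot" with the Porter stemmer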
H: Can I run this job quicker for GridSearchCV?
I am using GridSearchCV for optimising my predictions and it's been 5 hours now that the process has been running.
I am running a fairly large dataset and I am afraid I have not optimised the parameters enough.
df_train.describe():
Unnamed: 0 col1 col2 col3 col4 col5
count 8.886500e+05 888650.000000 888650.000000 888650.000000 888650.000000 888650.000000
mean 5.130409e+05 2.636784 3.845549 4.105381 1.554918 1.221922
std 2.998785e+05 2.296243 1.366518 3.285802 1.375791 1.233717
min 4.000000e+00 1.010000 1.010000 1.010000 0.000000 0.000000
25% 2.484332e+05 1.660000 3.230000 2.390000 1.000000 0.000000
50% 5.233705e+05 2.110000 3.480000 3.210000 1.000000 1.000000
75% 7.692788e+05 2.740000 3.950000 4.670000 2.000000 2.000000
max 1.097490e+06 90.580000 43.420000 99.250000 22.000000 24.000000
df_test.describe():
Unnamed: 0 col1 col2 col3 col4 col5
count 390.000000 390.000000 390.000000 390.000000 0.0 0.0
mean 194.500000 3.393359 4.016821 3.761385 NaN NaN
std 112.727548 4.504227 1.720292 3.479109 NaN NaN
min 0.000000 1.020000 2.320000 1.020000 NaN NaN
25% 97.250000 1.792500 3.272500 2.220000 NaN NaN
50% 194.500000 2.270000 3.555000 3.055000 NaN NaN
75% 291.750000 3.172500 4.060000 4.217500 NaN NaN
max 389.000000 50.000000 18.200000 51.000000 NaN NaN
The way I am using GridSearchCV is as follows:
rf_h = RandomForestRegressor()
rf_a = RandomForestRegressor()
# Using GridSearch for Optimisation
param_grid = {
'n_estimators': [200, 700],
'max_features': ['auto', 'sqrt', 'log2']
}
rf_g_h = GridSearchCV(estimator=rf_h, param_grid=param_grid, cv=3, n_jobs=-1)
rf_g_a = GridSearchCV(estimator=rf_a, param_grid=param_grid, cv=3, n_jobs=-1)
# Fitting dataframe to prediction engine
rf_g_h.fit(X_h, y_h)
rf_g_a.fit(X_a, y_a)
My questions are:
1. How do I optimise GridSearchCV for multicore processing? I don't mind running the system all night long to get best_params_, but somewhere it is being bottlenecked and I cannot seem to understand why.
2. Am I using GridSearchCV correctly? How would I go about choosing param_grid for the best parameters, given that the whole point of this exercise is to get best_params_?
3. What could be an optimal starting point for exploring the best parameters?
Without GridSearchCV, and using the defaults for RandomForestRegressor, the process completes in 10 minutes.
I am running it for the first time on my dataset and I am curious of the results.
I have a relatively capable computer running on Ryzen 7, 8 cores and a 32 GB RAM.
Edit: Corrected code as per suggestions. I saw a massive improvement from all night running with no output to improved output in 9 minutes
AI: To extend my comment:
As I mentioned, you can set the parameter n_jobs to -1, or instead use RandomizedSearchCV (which also accepts the n_jobs parameter).
Regarding the parameter grid, I always select my grid so that the default values are included and, from there, some values smaller and larger than the default (for continuous parameters), with the same logic for the categorical parameters.
Another heuristic is to tune the parameters individually; using validation curves may help to find "adequate" values. Once you have found "the best" hyperparameters individually, you can test the CV score using all of those parameters together (this can also give an idea of the range of parameters for which your model performs well).
The latter approach of course has drawbacks, such as ignoring the fact that the grid is multidimensional: the individually best value of one parameter may not be the best in combination with the full set of parameters.
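As a hedged sketch of the randomized alternative, reusing the X_h and y_h from your code (the parameter ranges below are only illustrative, not recommendations):

from scipy.stats import randint
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

param_distributions = {
    "n_estimators": randint(100, 800),
    "max_features": ["sqrt", "log2"],
    "min_samples_leaf": randint(1, 20),
}
search = RandomizedSearchCV(
    RandomForestRegressor(),
    param_distributions=param_distributions,
    n_iter=20,      # number of sampled parameter settings
    cv=3,
    n_jobs=-1,      # use all cores
    random_state=0,
)
search.fit(X_h, y_h)
print(search.best_params_)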
H: How to find the probability of a word to belong to a dataset of text
I have two text datasets, one with people that have a certain medical condition and another of random patients. I want to figure out what words are more likely to show up in the dataset with that medical condition. My original thought was to use a chi squared test, but it seems like I can't run the test for each word since the tokens are "categories" and a single word is a "value" of the "categorical" variable. For example if the word is "dog", what is the probability for it to show up in our dataset with disease? Similar for the word "drug", what would be the probability?
Would I use something like tfidf? I have all of the frequencies for all of the tokens.
AI: A Chi-squared test makes sense, but it's only going to tell you whether the difference in frequency is significant or not; by itself it's not very informative about the scale of the difference between the classes.
The simple answer to your question is to calculate the conditional distributions for every word $w$ and class $c$. Using the notation $\#x$ for the frequency of $x$ (i.e. number of documents containing $x$):
$$p(w|c)=\frac{\#(w,c)}{\#c}$$
This represents how frequent $w$ is compared to other words within class $c$ (i.e. ignoring the other class).
$$p(c|w)=\frac{\#(w,c)}{\#w}$$
This represents how frequent class $c$ is compared to the other class when considering only word $w$ (i.e. ignoring other words).
The latter is the one which matters when comparing the two classes, but it's a fair comparison only if the two classes contain the same number of documents.
A more advanced (but maybe less intuitive) method is to use a measure like Pointwise Mutual Information (PMI) between a word and a class. The PMI value is high if the word and the class are strongly associated.
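For reference, PMI between a word $w$ and a class $c$ is usually written as:
$$PMI(w,c)=\log\frac{p(w,c)}{p(w)\,p(c)}$$
A large positive value means $w$ occurs with class $c$ more often than expected if the two were independent.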
H: Is feature importance in XGBoost or in any other tree based method reliable?
This question is quite long, if you know how feature importance to tree based methods works i suggest you to skip to text below the image.
Feature importance (FI) in tree based methods is given by looking through how much each variable decrease the impurity of a such tree (for single trees) or mean impurity (for ensemble methods). I'm almost sure the FI for single trees it's not reliable due to high variance of trees mainly in how terminal regions are built. XGBoost is empirically better than single tree and "the best" ensemble learning algorithm so we will aim on it. One of advantages of using XGBoost is its regularization to avoid overfitting, XGBoost can also learn linear functions as good as linear regression or linear classifiers (see Didrik Nielsen). My trouble is about its interpretation has came up due to image bellow:
In upper side i've got the FI in XGBoost for each variable and below the FI (or coefs) in logistic regression model, i know that FI to XGB is normalized to ranges in 0-1 and logistic regression is not but the functions usually used to normalize something are bijective so it won't comprimise the comparation between the FI of two models, logistic regression got the same accuracy (~90) than XGB at cross validation and test set, note that the most three important variables in xgb are v5,v6,v8 (the importances are respective to variables) and in logistic model are v1,v2,v3 so it's totally different to the two models, i'm sure that the interpretation to logistic model is reliable so would xgboost interpretation not be reliable because this difference? if it wouldn't so it wouldn't only for linear situations or in general case?
AI: Your main problem (it turns out, thanks for following up in the comments) is that you used the raw coefficients from the logistic regression as a measure of importance, but the scale of the features makes such comparisons invalid. You should either scale the features before training, or process the coefficients after.
I find it helpful to emphasize that feature importances are generally about interpreting your model, which hopefully but not necessarily in turn tells you about the data. So in this case, it could be that some set of features has predictive interaction, or that some feature's relationship with the target is nonlinear; these will be found important for the xgboost model, but not for the linear one.
Aside from that, impurity-based feature importance for tree models has received some criticism.
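If you want an importance measure that is comparable across both models, one model-agnostic option is permutation importance on held-out data. A minimal sketch, assuming model, X_val and y_val already exist and X_val is a DataFrame:

from sklearn.inspection import permutation_importance

result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
# sort features by mean importance, highest first
for name, imp in sorted(zip(X_val.columns, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.4f}")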
H: What would be a good n_estimators matrix and thus param_grid for this problem?
I am using GridSearchCV for optimising my predictions
I am running a fairly large dataset and I am afraid I have not optimised the parameters enough.
df_train.describe():
Unnamed: 0 col1 col2 col3 col4 col5
count 8.886500e+05 888650.000000 888650.000000 888650.000000 888650.000000 888650.000000
mean 5.130409e+05 2.636784 3.845549 4.105381 1.554918 1.221922
std 2.998785e+05 2.296243 1.366518 3.285802 1.375791 1.233717
min 4.000000e+00 1.010000 1.010000 1.010000 0.000000 0.000000
25% 2.484332e+05 1.660000 3.230000 2.390000 1.000000 0.000000
50% 5.233705e+05 2.110000 3.480000 3.210000 1.000000 1.000000
75% 7.692788e+05 2.740000 3.950000 4.670000 2.000000 2.000000
max 1.097490e+06 90.580000 43.420000 99.250000 22.000000 24.000000
df_test.describe():
Unnamed: 0 col1 col2 col3 col4 col5
count 390.000000 390.000000 390.000000 390.000000 0.0 0.0
mean 194.500000 3.393359 4.016821 3.761385 NaN NaN
std 112.727548 4.504227 1.720292 3.479109 NaN NaN
min 0.000000 1.020000 2.320000 1.020000 NaN NaN
25% 97.250000 1.792500 3.272500 2.220000 NaN NaN
50% 194.500000 2.270000 3.555000 3.055000 NaN NaN
75% 291.750000 3.172500 4.060000 4.217500 NaN NaN
max 389.000000 50.000000 18.200000 51.000000 NaN NaN
The way I am using GridSearchCV is as follows:
rf_h = RandomForestRegressor()
rf_a = RandomForestRegressor()
# Using GridSearch for Optimisation
param_grid = {
'max_features': ['auto', 'sqrt', 'log2']
}
rf_g_h = GridSearchCV(estimator=rf_h, param_grid=param_grid, cv=3, n_jobs=-1)
rf_g_a = GridSearchCV(estimator=rf_a, param_grid=param_grid, cv=3, n_jobs=-1)
# Fitting dataframe to prediction engine
rf_g_h.fit(X_h, y_h)
rf_g_a.fit(X_a, y_a)
How can I optimise param_grid and hence determine best_params_ of the same?
What would be the best matrix for n_estimators for this dataset?
AI: In general there's no way to know the best values to try for a parameter. The only thing one can do is to try many possible values, but:
this mathematically requires more computing time (see this question about how GridSearchCV works)
there is a risk of overfitting the parameters, i.e. selecting a value which is optimal by chance on the validation set.
H: Is it a good idea to use parallel coordinates for visualising outliers?
I tried using parallel coordinates to visualize outliers. Is it fundamentally correct?
AI: Scatter plot and box plots are the most preferred for visualizing outliers.
Parallel plots can also be utilized for detecting outliers. For large datasets they can be a bit confusing, so highlighting the outliers comes in handy.
parallel plot case study
outliers in parallel plot
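A minimal pandas sketch (the column name "label" is hypothetical and is only used to colour the lines):

import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# each numeric column becomes one vertical axis; unusual lines stand out
parallel_coordinates(df, class_column="label")
plt.show()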
H: Would it be a good idea to use PCA output as input in models?
I have some dummy variables that indicate the occurrence of an event. There are so many of them that I used PCA on them, and it appears some of them are rather correlated.
Would it be a good idea to use the PCA dimensions as input to a model?
AI: Yes, this is a very good idea, and it is often done this way when you have a lot of features. PCA is used to obtain fewer but still informative features and train your model more efficiently (this is dimensionality reduction). Note that PCA builds new features from the ones you pass to it (it is different from feature selection).
This will help you train your model faster and make it work better: it will generalize better. You still have to decide how many features you will keep to feed the model. You should keep the meaningful ones, and/or start by keeping 50 features and try fewer afterwards.
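A minimal sketch of wiring PCA into a model with a scikit-learn Pipeline, assuming X_train and y_train already exist (the number of components is only illustrative):

from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

model = Pipeline([
    ("pca", PCA(n_components=50)),   # or n_components=0.95 to keep 95% of the variance
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

Using a Pipeline ensures the same projection learned on the training data is applied at prediction time.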
H: Concat function increases row values and returns df with null values
I am trying to one hot encode my train and test dataset. For my train dataset, I have 2 dataframes with different number of columns but same number of rows.A (with encoded features) = (34164, 293) and B (only contains numerical features) = (34164, 7). I need a final dataframe whose dimensions are C (dataframe with the encoded features and numerical features both) = (34164, 300).
When I use pd.concat function with axis = 1, I get a dataframe with dimensions (44845, 300) and also includes some nan values. I don't get why would it increase my row count when both the initial dataframes have same number of rows? Also from where did those nan values come from? Below is my code.
ohe = OneHotEncoder(handle_unknown = 'ignore', sparse = False)
train_x_encoded = pd.DataFrame(ohe.fit_transform(train_x[['model', 'vehicleType', 'brand']]))
train_x_encoded.columns = ohe.get_feature_names(['model', 'vehicleType', 'brand'])
train_x.drop(['model', 'vehicleType', 'brand'], axis = 1, inplace = True)
train_x_final = pd.concat([train_x_encoded, train_x], axis = 1)
I tried train_x.join function and it returned df with (34164, 300), but there were nan values in it.
train_x_final1 = train_x.join(train_x_encoded)
AI: When applying pd.concat with axis=1 to two dataframes results in redundant rows (usually also leading to NaNs in the columns of the first dataframe for rows that did not previously exist, and NaNs in the columns of the second dataframe for rows that did), you may need to reset the indexes of both dataframes before concatenating:
train_x_final = pd.concat(
    [train_x_encoded.reset_index(drop=True), train_x.reset_index(drop=True)], axis=1)
H: How do you normalise the train+validation sets together?
This question is somewhat related to: Is it correct to join training and validation set before inferring on test-set?
As far as I understand, normalisation in general is done in the following way:
Split the data
Normalise train data
Use mean+std from 2 for normalising validation and test data
Train model and tune hyperparameters
Now we have a model we are happy with and its hyperparameters. The above question suggests that it's then good to train a model using the train+validation data together.
But I am confused, do I leave them normalised as they are and combine them together?
Or do I calculate a fresh new normalisation on the combined sets and then recalculate test data with the new normalisation values?
Thank you!
AI: If you are going for the approach with train, validation and test sets and want to train the model using both the train and validation sets, then yes, it would be appropriate to perform a new normalization on the combined data in its raw form. Then, just as before, you use the resulting mean and standard deviation to normalize the test set data.
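A minimal sketch of that, assuming NumPy arrays X_train, X_val and X_test already exist:

import numpy as np
from sklearn.preprocessing import StandardScaler

X_train_val = np.vstack([X_train, X_val])
scaler = StandardScaler().fit(X_train_val)        # new mean/std from train+validation
X_train_val_scaled = scaler.transform(X_train_val)
X_test_scaled = scaler.transform(X_test)          # test still uses those statistics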
H: what would be the correct representation of categorical variables like sex?
I have a doubt about what will be the right way to use or represent categorical variables with only two values like "sex". I have checked it up from different sources, but I was not able to find any solid reference. For example, if I have the variable sex I usually see this in this form:
id sex
1 male
2 female
3 female
4 male
So I found that one can use dummy variables like this:
(https://www.analyticsvidhya.com/blog/2015/11/easy-methods-deal-categorical-variables-predictive-modeling/)
and also in this way:
(https://stattrek.com/multiple-regression/dummy-variables.aspx)
Therefore, which one would be more adequate way to deal with this variable, for example, in a classification system. I am inclined to go with the dummy variables, but I would like some opinion about it.
Thanks
AI: This case can be simplified with a single boolean feature because the original variable sex is binary: it can only have values male or female.
This implies that the two values are complementary to each other, so there is no need to keep both: $X_1$ contains exactly as much information as keeping both sex_male and sex_female.
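A minimal pandas sketch of keeping a single 0/1 column (the column names are only illustrative):

# one boolean/0-1 column carries all the information of the binary variable
df["sex_male"] = (df["sex"] == "male").astype(int)
df = df.drop(columns=["sex"])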
Note that this simplification cannot be done as soon as the categorical variable can have more than two values.
Side note: sex is not always a binary variable anymore, many surveys would propose a third option such as "doesn't identify as binary".
H: tf-idf for sentence level features
Many papers mention comparing sentences using the tf-idf metric, e.g. Paper.
They state:
The first one is based on tf-idf where the value of the
the corresponding dimension in the vector representation is the number of occurrences of the word in
the sentence times the idf (inverse document frequency) of the word.
While I am familiar with tf-idf weights per token, it is a bit vague for me how to extract a similarity measure between two sentences given the tf-idf weights of their individual tokens.
If the reference to the paper itself was not clear, the question is:
Given a document containing several sentences,
Is there a known measure of similarity between sentences in the document, based on the tf-idf score of the tokens inside each sentence?
AI: It's common to see some confusion about TFIDF so thank you for asking this question :)
TFIDF is not a metric, it's a weighting scheme
This means that it's a way to represent a document, not to compare documents. TFIDF assumes a bag of words (BoW) representation, i.e. a document or sentence is represented as a set of words (their order doesn't matter). The basic BoW representation is to encode every token/word with its frequency (TF); In TFIDF the frequency of the word is multiplied by the IDF (actually the log of the IDF) in order to give more importance to words which appear rarely.
Two important points:
The TF part is specific to the document, whereas the IDF part is calculated across all the documents in the collection.
Each dimension in a TFIDF vector represents a word. The dimensions are the same for all the documents, they correspond to the full vocabulary across all the documents (this way index $i$ always corresponds to the same word $w_i$).
Note that there are other weighting schemes which can be used to represent documents as vectors, for example Okapi BM25.
Cosine-TFIDF is a metric to compare TFIDF vectors
Once documents (or sentences) have been encoded as TFIDF vectors using the same vocabulary (same dimensions), these vectors can be used to calculate a similarity score between any pair of documents (or a document and a query encoded the same way).
The Cosine similarity measure is certainly the most common way to compare TFIDF vectors. It's so common that sometimes people omit to mention it or over-simplify the explanation by saying that they "compare documents with TFIDF" (this is technically incorrect).
Note that other similarity measures can be used as well with TFIDF vectors. Most other measures (e.g. Jaccard) tend to give similar results; they're not fundamentally different from Cosine.
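Putting the two parts together, a minimal scikit-learn sketch (the example sentences are made up):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = ["the cat sat on the mat",
             "a cat was sitting on a mat",
             "stock prices fell sharply"]
tfidf = TfidfVectorizer().fit_transform(sentences)   # one TFIDF vector per sentence
similarities = cosine_similarity(tfidf)              # (n_sentences, n_sentences) matrix
print(similarities)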
H: Possible to use predict_proba without normalizing to 1?
I'm using xgboost multi-class classifier to predict a collection of things likely to fail. I want to run that prediction, and report anything that the classifier identifies with probability > 75%. However if I use xgb.predict_proba(), the sum of the results in the array add up to 1. So, if there are a lot of things likely to fail, they will all have tiny percentages in the result array.
Looking at the predict_proba code, I can see where the array is getting normalized. However I can't figure out how to prevent this.
In the end, I think my code would look something like this (except with the pre-normalized probabilities):
probas = xgb.predict_proba(single_element_dataframe)
for class_name in xgb.classes_:
    class_index = np.where(xgb.classes_ == class_name)
    proba = probas[0][class_index]
    if proba > 0:
        print(f"{class_name}: {proba}")
Any ideas?
AI: Currently you're doing multiclass classification: find the most likely among N classes. Each class $C$ probability indicates how likely class $C$ is for the instance as opposed to any other class. This is why the probabilities sum to 1: in this setting, there is only one "correct" class, so two classes cannot both have high probability.
Based on your description you should use multi-label classification: find all the classes that apply to the instances among N classes. In this case each class $C$ probability indicates how likely this instance has class $C$ as opposed to not having class $C$ (i.e. independently from any other class). Naturally the consequence is that the probabilities don't sum to one, since they are independent of each other.
Note: multi-label classification is exactly the same as training a binary model for every class independently.
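A minimal sketch of that per-class view with generic scikit-learn classifiers; class_names, X_train, the binary indicator matrix Y_train and the new instance x_new are all assumed to exist already:

from sklearn.linear_model import LogisticRegression

models = {}
for j, class_name in enumerate(class_names):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, Y_train[:, j])      # 1 if the instance has this class, else 0
    models[class_name] = clf

# independent probabilities, so they do not need to sum to 1
for class_name, clf in models.items():
    p = clf.predict_proba(x_new)[0, 1]
    if p > 0.75:
        print(f"{class_name}: {p:.2f}")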
H: Model layer getting random two input instead of 1 input
I am running the code mentioned at
link of the code
Here is the code:
import numpy as np
from keras.models import Model
from keras.layers import Input, Dense, Dropout
#from keras.utils import to_categorical # not working in google colab
from tensorflow.keras.utils import to_categorical
from sklearn.model_selection import train_test_split
from keras.callbacks import EarlyStopping
import tensorflow as tf
# Create an input array of 50,000 samples of 20 random numbers each
x = np.random.randint(0, 100, size=(50000, 20))
print(x)
# And a one-hot encoded target denoting the index of the maximum of the inputs
y = to_categorical(np.argmax(x, axis=1), num_classes=20)
print(y)
# Split into training and testing datasets
x_train, x_test, y_train, y_test = train_test_split(x, y)
# Build a network, probaly needlessly complicated since it needs a lot of dropout to
# perform even reasonably well.
i = Input(shape=(20, ))
a = Dense(1024, activation='relu')(i)
b = Dense(512, activation='relu')(a)
ba = Dropout(0.3)(b)
c = Dense(256, activation='relu')(ba)
d = Dense(128, activation='relu')(c)
o = Dense(20, activation='softmax')(d)
model = Model(inputs=i, outputs=o)
es = EarlyStopping(monitor='val_loss', patience=3)
model.compile(optimizer='adam', loss='categorical_crossentropy')
#model.fit= tf.convert_to_tesnsor(model.fit)
model.fit(x_train, y_train, epochs=8, batch_size=8, validation_data=[x_test, y_test], callbacks=[es])
print(np.where(np.argmax(model.predict(x_test), axis=1) == np.argmax(y_test, axis=1), 1, 0).mean())
Error:
ValueError: in user code:
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1298 test_function *
return step_function(self, iterator)
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1282 run_step *
outputs = model.test_step(data)
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1241 test_step *
y_pred = self(x, training=False)
/usr/local/lib/python3.7/dist-packages/keras/engine/base_layer.py:989 __call__ *
input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py:197 assert_input_compatibility *
raise ValueError('Layer ' + layer_name + ' expects ' +
ValueError: Layer model_11 expects 1 input(s), but it received 2 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 20) dtype=int64>, <tf.Tensor 'IteratorGetNext:1' shape=(None, 20) dtype=float32>]
Kindly help in solving this issue, I am not able to run it on Goolge Colab.
Here is the Google Colab link:
google colab link
AI: From the Keras documentation:
validation_data:
Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be: - A tuple (x_val, y_val) of Numpy arrays or tensors. - A tuple (x_val, y_val, val_sample_weights) of NumPy arrays. - A tf.data.Dataset. - A Python generator or keras.utils.Sequence
You have input
validation_data=[x_test, y_test]
The list implies that there are two inputs in the model (multi-input) and that we are providing data for both inputs.
Hence, the error.
Change it to
validation_data=(x_test, y_test)
H: Can I use a different image input size for transfer learning?
Most pre-trained CNN models accept a $224x224$ input size when they were trained. Can I use $256x256$ to get a higher accuracy?
AI: If you change the image size, you will be able to reuse only part of the original network.
Convolutional and pooling layers can be applied to images of any size, so the initial part of the network, which normally consists of convolutions and pooling, will be reusable as-is.
However, the dense layers after the convolutional part assume certain input dimensions. Therefore, as your larger images lead to larger input to the dense layers, you won't be able to reuse the dense layers.
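A minimal tf.keras sketch of reusing only the convolutional base at a new resolution; num_classes is a placeholder for your own number of classes:

import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(256, 256, 3), include_top=False, weights="imagenet")
base.trainable = False                      # optional: freeze the reused convolutional part

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),  # freshly trained head
])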
H: Best choice for splitting data given a quantity and a expected accuracy
I have a dataset with at least 1,000,000 images (from IDs) which I am using to detect the presence of sealed IDs.
The legacy algorithm got nearly 60% accuracy, but my current algorithm yielded almost 80% on a small set. There must be some logic for deciding how many images I should use for training, validation, and test sets. I was tempted to use 500k images for training, 250k images for validation, and 250k for testing. Any ideas would be greatly appreciated.
AI: There is no fixed rule for selecting the size of the training set and testing set. It's all about trial and error, so try out different ratios 80-20, 70-30, 65-35 and pick one that gives the best performance result.
Its suggested in several machine learning research articles to generally opt for
Training dataset to be 70% (for setting model parameters)
Validation dataset to be 15% (helps to tune hyperparameters)
Testing dataset to be 15% (helps to access model performance)
If you plan to keep only split data into two, ideally it would be
Training dataset to be 75%
Testing dataset to be 25%
Like in your case of extremely large datasets which typically can go to millions of records,
a train/validation/test split of 98/1/1 would suffice since even 1% is a huge amount of data.
Note:
Do ensure you split your training, test datasets using an algorithm and not manually.
One major concern while splitting data is to ensure the data is not imbalanced. For instance, you have 5 classes that needs to be classified in your ML problem. You need to ensure the train and test dataset has sufficient data with these 5 classes for model to give best performance. If you manually split the data chances are your dataset might have only 10% of data in varied classes or worse it might only have 3 classes in the test/train data. To ensure data is not imbalanced use stratify in sklearn's train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15, stratify=y)
H: Replace NAs with random string from a list
I have a df with a variable (fruit) that has some NANs. I would like to replace the NANs only in this variable with a random string from a list (eg; apple, banana, peach, pear, strawberry).
Any ideas?
AI: This should do the trick. You set up your vector of possible fruits to select from, and then everywhere there is an NA in df$fruit, pick a random element from the possible fruit vector to overwrite it with:
randomFruits = c("apple", "banana", "peach", "pear", "strawberry")
df$fruit[is.na(df$fruit)] = randomFruits[sample(length(randomFruits), sum(is.na(df$fruit)), replace=TRUE)]
Or a bit shorter:
randomFruits = c("apple", "banana", "peach", "pear", "strawberry")
df$fruit[is.na(df$fruit)] = sample(randomFruits, sum(is.na(df$fruit)), replace=TRUE)
H: Separate numerical and categorical variables
I have a dataset (42000, 10) which contains 7 categorical features and 3 numerical. I would like to separate both the numerical and categorical features into 2 different data frames i.e I would like 2 data frames where one contains only numerical data (42000, 3) and the other only categorical data (42000, 7), perform some pre-processing on both of them, and lastly concatenate them into one data frame.
So, my question is how do I separate my initial dataframe into 2 based on numerical and categorical data?
AI: Simplest way is to use select_dtypes method in Pandas.
This returns a subset of a dataframe based on the column dtypes:
df_numerical_features = df.select_dtypes(include='number')
df_categorical_features = df.select_dtypes(include='category')
Reference documentation of select_dtypes
This will also depend on the column datatypes of your dataframe.
Considering you have categorical columns and few columns are either int64 or float you can go for:
df_numerical_features = df.select_dtypes(exclude='object')
df_categorical_features = df.select_dtypes(include='object')
Use the include/exclude option to choose based on the dtype. Other dtype information is as shown below:
To select all numeric types, use np.number or 'number'
To select strings you must use the object dtype, but note that this will return all object dtype columns
To select datetimes, use np.datetime64, 'datetime' or 'datetime64'
To select timedeltas, use np.timedelta64, 'timedelta' or 'timedelta64'
To select Pandas categorical dtypes, use 'category'
To select Pandas datetimetz dtypes, use 'datetimetz' (new in 0.20.0) or 'datetime64[ns, tz]'
H: How to interprete the feature significance and the evaluation metrics in classification predictive model?
Consider a experiment to predict the Google-Play apps rating using a Random-Forest classifier with scikit-learn in Python. Three attributes 'Free', 'Size' and 'Category' are utilized to predict the apps rating. 'Rating' (label) is not continuous value, instead, grouped into two classes 0 and 1. Where 0 is below 4 star rating and 1 is above 4 rating. Through Random-Forest, feature significance of all three predictive attributes and the F1 Evaluation of the model is also calculated.
Firstly, lets suppose model omits the 'Size' as most significant feature, so what is implied here, having larger size or lower size of an app contribute to the rating? What If there is no ascending or descending order in the attribute, for instance if the 'Category' is most significant, then what category contributed the most?
Secondly, the scikit-learn calculates the evaluation for each individual labels. As there are two labels, 0 and 1, so model yields two separate F1 scores for these labels and also the overall F1 scores of entire model. Now consider, for label 0, F1 is 30% and label 1 its 75%. Whereas overall F1 of entire model is 55%. In general, F1 and all other evaluation metrics must be around 90% for good prediction model. But suppose a scenario where above 70% F1 is considered good. Can I claim that these above mentioned attributes are good predictor of label 1 as its individual F1 is 75% but not for label 0, because it has only 30% F1 individually. If yes, then is it means predictive model cant find any relation between attributes and label 0, but finds a considerable relation with label 1? Or I have to consider the overall F1 which is 55%, and claim there exists very little correctional between 'Rating' and predictive attributes for all the labels. In conclusion, not a good prediction model at entirety for all the labels?
AI: Firstly, lets suppose model omits the 'Size' as most significant feature, so what is implied here, having larger size or lower size of an app contribute to the rating? What If there is no ascending or descending order in the attribute, for instance, if the 'Category' is most significant, then what category contributed the most?
Decision Tree splits the space based on a feature value.
A high feature importance implies that when the model used that feature to split the space, the splits were cleaner [a better Gini drop].
At this point, you can't know the value-specific relationship, i.e. whether the RED color or the GREEN color is causing the high rating. It's just the color as a whole.
If you OHE your data, then each value will become a feature and you may get the required significance value.
Can I claim that these above-mentioned attributes are good predictors of label 1 as its individual F1 is 75% but not for label 0, because it has only 30% F1 individually.
Yes, it should imply that, i.e. splits on the feature separate out Label_1. Though I believe a confusion matrix is a better view for such an analysis.
H: How does missing data occur
I am new to ML.
I found that one of the preprocessing steps is to handle missing data.
My query is
Is there a way to understand the nature of missing data?
I can see that missing data is mostly dropped or replaced with the mean/mode. Is that the right way to go about it?
AI: Looks like your query is a 3 part question
How missing data occur: Missing data can occur when no data value is stored for a variable, or because of nonresponse, or the information is just not available
Understanding missingness: missingness comes in different forms
MCAR(missing completely at random): Missing data values do not relate to any other data in the dataset and there is no pattern to the actual values of the missing data themselves.
MAR(missing at random): Missing data do have a relationship with other variables in the dataset. However, the actual values that are missing are random.
MNAR(missing not at random): The pattern of missingness is related to other variables in the dataset, but in addition, the values of the missing data are not random.
Handling missing data: Once you have understood what missingness is all about, the next step is to handle it. Dropping missing data or replacing it with the mean/mode are a few of the techniques.
Deletion methods
Listwise deletion: ideal for MCAR
Pairwise deletion: ideal for MAR or MCAR
Single imputation:
Mean/Median/Mode substitution
Regression imputation
LOCF(Last observation carried forward)
Model-Based methods:
Maximum likelihood: best maximum likelihood technique is EM (Expectation-Maximization)
Multiple Imputation: MICE algorithm, Amelia(ideal for time series) are few packages that handle multiple imputation.
Look into these for better understanding of your missingness
missing data imputation
handling missing data in R
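As a minimal illustration of single imputation in Python with scikit-learn (the column names are hypothetical):

from sklearn.impute import SimpleImputer

num_imputer = SimpleImputer(strategy="median")         # numeric columns
cat_imputer = SimpleImputer(strategy="most_frequent")  # categorical columns

df[["age", "income"]] = num_imputer.fit_transform(df[["age", "income"]])
df[["gender"]] = cat_imputer.fit_transform(df[["gender"]])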
H: What is the difference between one-hot and dummy encoding?
I am trying to understand
The reason behind encoding (one-hot encoding and dummy encoding)
How one-hot and dummy are different from each other
AI: Most machine learning models accept only numerical variables. This is why categorical variables are converted to numbers, so the model can understand them better.
Now let's address your second query: let's look into what one-hot encoding and dummy encoding are, and then see the difference.
One-hot encoding: Take the example of a column named Fruit which can have different types of fruits like Blackberry, Grape, Orange. Here each category is mapped to a binary variable containing either 0 or 1. Widely utilized when features are nominal.
| Fruit      | Price (dollars per pound) |
|------------|---------------------------|
| Blackberry | 3.82                      |
| Grape      | 1.2                       |
| Orange     | .64                       |
Post one hot encoding the table now looks as shown below
One Hot Encoded table:

| Blackberry | Grape | Orange | Price (dollars per pound) |
|------------|-------|--------|---------------------------|
| 1          | 0     | 0      | 3.82                      |
| 0          | 1     | 0      | 1.2                       |
| 0          | 0     | 1      | .64                       |
Dummy encoding: Similar to one-hot encoding, but while one-hot encoding uses N binary variables for the N categories of a variable, dummy encoding uses N-1 features to represent the N labels/categories.
One Hot Coding Vs Dummy Coding:

| Column     | One Hot Code | Dummy Code |
|------------|--------------|------------|
| Blackberry | 100          | 10         |
| Grape      | 010          | 01         |
| Orange     | 001          | 00         |
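In pandas the two schemes differ only by the drop_first flag, for example:

import pandas as pd

df = pd.DataFrame({"Fruit": ["Blackberry", "Grape", "Orange"]})
one_hot = pd.get_dummies(df["Fruit"])                   # N columns (one-hot)
dummy   = pd.get_dummies(df["Fruit"], drop_first=True)  # N-1 columns (dummy)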
H: What mean a column in zero in confusion matrix?
When training my model and reviewing the confusion matrix, there are completely zero columns for each specific category. What does this mean? Is there an error, or how do I interpret it?
I use the confusion matrix display function and it gives this result
Thanks for your answers
AI: If the plot is correct, it means the model never predicts labels 1 and 2.
H: Neural network / machine learning approach to model specific sequencing-classification problem in industry
I am working on a project which involves developing a machine learning/deep learning for an application in a roll-to-roll industry. For a long time, I have been looking for similar problems as a way to get some guidance but I was never able to find anything related.
Basically the problem can be seen as follows:
An industrial machine is producing a roll of some material, which tends to have visible defects throughout the roll. I have already available a machine learning algorithm capable of analyzing segments of the roll and classifying each segment as having defects or not, so the task it not detect the defects.
What I am actually developing is an algorithm that receives time-series inputs of the production, including the outputs (probabilities) of the machine learning vision model that classify the segments as having defects or not, and evaluates if the machine should stop or not at a specific instant, to avoid further generation of defects.
In many roll-to-roll = continuous production industries, unlike the industries where very 'isolated' parts are produced with very specific reject/don't reject quality criteria (e.g: car parts), you might not want to stop production at the sight of a single defect, but rather when groups of continuous defects start to ruin the production. So the problem is more about detecting those continuous defects by analyzing each timestep of information and be able to 'separate' those from the cases of just single defects.
Hope that the description provides a little context in order to understand the purpose here. I am using an approach based on LSTMs and a sigmoid activation function. I am developing a custom loss function and modeling the learning problem labels based on regions of timesteps in which the machine should stop - it gives a classification at each timestep. Something like:
[0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
- the zeros represent timesteps where no stop should happen
- the ones represent timesteps where at least a stop should happen = continuous defects
The NN should learn to not stop on the places with zeros and stop on the places with ones, by being fed different timestep inputs. There are some particularities of course but I believe this is a simple explanation that I hope can provide some insights.
-> With this, I was curious to know if someone has ever worked on a problem that follows a similar 'logic' and direct me to similar ways of looking at the problem. Would also be interested in similar network architectures/configurations that would lead to a starting point. I am also very curious on any other contribution as a way to look at the problem. Really interested in hearing your perspectives!
AI: I would try to apply techniques from the "change point detection" world. In this kind of problem, you try to identify times when the probability distribution of a stochastic process or time series changes. This is a classic problem with classic solutions, so maybe you don't need a neural network to solve it. In your particular case, you're interested in the online version, that is, you have to detect the change in the distribution in real time.
I leave here some sources I've found that may be interesting
Change detection in streaming data analytics: A comparison of Bayesian online and martingale approaches
Bayesian online change point detection — An intuitive understanding
Online change detection techniques in time series: An overview
If you ask me, I would try a Bayesian approach, where you have a prior distribution on the defects rate and you update the distribution with incoming data. You could model the probability of receiving a defective segment as a beta distribution $B(x; \alpha, \beta)$, which has the nice property that parameters $\alpha$ and $\beta$ are updated as
$$
\alpha_{t+1} = \alpha_t + s_t
$$
$$
\beta_{t+1} = \beta_t + f_t
$$
where $s_t$ is the number of segments without any defect at time $t$ and $f_t$ is the number of segments with a defect at time $t$. Therefore, after observing $T$ segments you would have
$$
P = B(x; \alpha_T, \beta_T)
$$
And with this distribution, you can implement strategies like "Stop the production if the defect rate is higher than some threshold $t$ with probability $>95\%$", i.e. stop if $\int_{t}^1 B(x; \alpha_T, \beta_T) \, dx > 0.95$.
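A minimal sketch of the online update and the stopping rule; note that, unlike the notation above, this sketch counts defective segments in alpha so that the Beta distribution models the defect rate directly, and stream_of_segment_labels is a hypothetical iterable of 0/1 defect flags:

from scipy.stats import beta

alpha, b = 1.0, 1.0                  # uniform prior on the defect rate
threshold, confidence = 0.10, 0.95

for is_defect in stream_of_segment_labels:
    alpha += is_defect               # defective segments
    b += 1 - is_defect               # clean segments
    # P(defect rate > threshold) under the current posterior
    if beta.sf(threshold, alpha, b) > confidence:
        print("stop the machine")
        break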
H: Why do I get an almost perfect fit as well as bias variance tradeoff with my time series forecast?
In order to achieve scalable and robust time series forecast models, I am currently experimenting with metalearner ensembles.
Note, that I am also using a global modeling approach, so all time series are "learning" from another. In my example I want to predict the monthly demand for 12 retail products one year ahead (4 years of training data available) I also choose different datasets to test the following.
As base models, I am fitting 6 XGBoost Models with 6 different learning rates from 0.001 to 0.65. The other parameters I do not tune (parsnip library defaults).
For the metalearner I am also using an XGBoost Model with the following tuning grid:
Note, that I choose a very small range for the eta parameter!
The grid search resulted in mtry = 16, min_n = 13, tree_depth = 7 and a learning rate of 0.262. Trees were set to 1000 and early stopping parameter was set to 50 iterations.
However, my results are too good to be true (realistic) I guess. Below you can see the typical accuracy results from my predictions:
As you can see, the predicted line of the ex post (training forecast on out-of-sample test set) almost perfectly matches with the actual values.
Also there is an almost perfect bias, variance tradeoff. This seems odd, because it should be really hard to achieve in real world problems.
Now I am asking myself, if this approach is just exactly what I need, or if this
AI: Without more details, it seems to me that you have a data leak problem.
How did you split the data into train/test? Notice that you're dealing with a time-series problem, so the standard random split wouldn't work. If you do a random split you would have a data leak, i.e. your model would be trained with data from the future.
In time-series problems, you need to do a split based on time, for example, use all the data from the first 3 years to train the model and then evaluate the model with the remaining data of the last year.
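If you want a ready-made time-aware split for cross-validation, a minimal sketch with scikit-learn, assuming X and y are NumPy arrays sorted chronologically:

from sklearn.model_selection import TimeSeriesSplit

tscv = TimeSeriesSplit(n_splits=4)
for train_idx, test_idx in tscv.split(X):
    # the training fold always precedes the test fold in time
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]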
H: When to prioritize accuracy over precision?
I am working on a simple SVM project for the prediction of hepatitis c. I got my dataset from kaggle. When dealing with null values, I tried two ways, firstly by dropping data with null values, second by filling null values with mean.
The above is the result of the first method. I obtained a fairly high accuracy value, as well as a high precision value for blood donors (healthy people).However, the accuracy value of fibrosis, hepatitis, and suspect is 0. I know this because there are only a few datasets for these three categories, and some of them have null values, so they have to be dropped.
Next is if I replace the null value with the mean of the column. I get a slightly lower accuracy value, but still get precision on fibrosis and hepatitis. I want to know, is it more important to get high accuracy from dropping data, or precision from using mean values?
AI: First it's important to realize that these two tables are not comparable, because the two models are not evaluated on the same dataset: in the first case you remove instances which contain undefined values, probably making it easier to predict the remaining instances. In particular it's clear that the 4 small classes are harder to find for the classifier, but in the first case you have only 9 instances from these classes instead of 14. This is a bias: by removing the difficult instances, the model can easily perform better.
Accuracy is pointless in a case like this where data is highly imbalanced: by always predicting the majority class the classifier reaches more than 90% accuracy. The precision/recall/f1-score measures are much more informative. You may notice that your macro F1-score is actually better in the second case, even though the dataset is harder with the undefined values.
H: Seaborn Heatmap with month & hour of database entry
I have a dataframe with X rows.
For each row, I have the information of the month (value from 1 to 12) and hour (value from 1 to 24) in separate columns
I need to create a heatmap with seaborn in order to display the number of entries crossed with the month/hour.
I do not manage to do it. Any idea how I should proceed?
AI: import numpy as np
import pandas as pd
from numpy.random import RandomState
import seaborn as sns
state = RandomState(0)
df = pd.DataFrame({"month":state.randint(1,12,20),
"hour":state.randint(1,24,20)
})
sns.heatmap(pd.crosstab(df["month"], df["hour"]), cmap="Reds", linewidths=1);
H: How is uncertainty evaluated for results obtained via machine learning techniques?
As machine learning (in its various forms) grows ever more ubiquitous in the sciences, it becomes important to establish logical and systematic ways to interpret machine learning results. While modern ML techniques have shown themselves to be capable of competing with or even exceeding the accuracy of more "classical" techniques, the numerical result obtained in any data analysis is only half of the story.
There are established and well formalized (mathematically) ways to evaluate uncertainties in results obtained via classical methods. How are uncertainties evaluated in a machine learning result? For example, it is (at least notionally) relatively straight forward to estimate uncertainties for fit parameters in something like a classical regression analysis. I can make some measurements, fit them to some equation, estimate some physical parameter, and e.g. estimate its uncertainty with rules following from the Gaussian error approximation. How might one determine the uncertainty in the same parameter as estimated by some machine learning algorithm?
I recognize that this likely differs with the specifics of the problem at hand and the algorithm used. Unfortunately, a simple Google search turns up mostly "hand-wavy" explanations, and I can't seem to turn up a sufficiently understandable scientific paper discussing an ML result with an in-depth discussion of uncertainty estimation.
AI: Here are some of the main approaches I'm aware of. One method is to use Bayesian machine learning, which learns a probability distribution over the entire parameter space (see Joris Baan's A Comprehensive Introduction to Bayesian Deep Learning). However, these methods tend to be computationally expensive.
For classification problems, the most common approach is to use a classifier that can output class probabilities (for example, one trained with the cross-entropy loss). While this probability can be interpreted as uncertainty, it usually is not well calibrated. By calibrated we mean that the model's uncertainty reflects the prediction results. For example, we would expect that 80% of samples that are predicted with >= 80% certainty are correctly classified. So a calibration step can be added after the classification step.
For regression problems, a naïve approach is to train multiple models, either by bagging, or for deep learning models, using different weight initialisations. Then the variance of the predictions from each model can be interpreted as the uncertainty. For instance, we can use an ensemble, where we also use the mean of the predictions as the ensemble prediction. However, this gives over-confident estimations of uncertainty.
For deep learning models, there are a couple of other approaches that I am aware of.
The first is called Monte-Carlo drop-out (Gal and Ghahramani's Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning), which can be applied to any deep learning model that uses dropout. This method uses the randomness from dropout to estimate variance or uncertainty in predictions and can be applied to both regression and classification models.
The next is to change the loss function to the negative log likelihood (NLL) function. When used for regression, it provides an estimate of both the mean and variance. So models using this method have two outputs - one for the mean and the other for the variance. An early work on this is Nix and Weigend's Estimating the mean and variance of the target probability distribution, which uses separate MLPs for the mean and variance. A more recent work (Lakshminarayanan et al.'s Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles) applies this technique to any neural network, and combines it with ensembling, which further improves the uncertainty estimates.
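As a minimal sketch of the Monte-Carlo dropout idea in tf.keras, assuming model already contains Dropout layers and x is a batch of inputs:

import numpy as np

T = 50
# keep dropout active at inference time by calling the model with training=True
preds = np.stack([model(x, training=True).numpy() for _ in range(T)])
mean_prediction = preds.mean(axis=0)
uncertainty = preds.std(axis=0)   # spread across stochastic passes as an uncertainty estimate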
H: Found 0 images belonging to 0 classes
Losely following this tutorial, I'm trying to apply Keras' ImageDataGenerator preprocessing on my custom object dataset. Here is the code:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import categorical_crossentropy
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import Model
from tensorflow.keras.applications import imagenet_utils
from sklearn.metrics import confusion_matrix
import itertools
import os
import shutil
import random
import matplotlib.pyplot as plt
%matplotlib inline
os.chdir('/home/pc3/deep_object/')
mobile = tf.keras.applications.mobilenet.MobileNet()
cwd = os.getcwd()
# Print the current working directory
print("Current working directory to generate: {0}".format(cwd))
train_path = 'data/Object-samples/train'
valid_path = 'data/Object-samples/valid'
test_path = 'data/Object-samples/test'
train_batches = ImageDataGenerator(preprocessing_function=tf.keras.applications.mobilenet.preprocess_input).flow_from_directory(
directory=r'data/Object-samples/train', target_size=(224,224), batch_size=10)
valid_batches = ImageDataGenerator(preprocessing_function=tf.keras.applications.mobilenet.preprocess_input).flow_from_directory(
directory=valid_path, target_size=(224,224), batch_size=10)
test_batches = ImageDataGenerator(preprocessing_function=tf.keras.applications.mobilenet.preprocess_input).flow_from_directory(
directory=test_path, target_size=(224,224), batch_size=10, shuffle=False)
However I get 0 pictures, despite the fact that the folders are already filled with pictures.
Current working directory to generate: /home/pc3/deep_object
Found 0 images belonging to 0 classes.
Found 0 images belonging to 0 classes.
Found 0 images belonging to 0 classes.
my directory structure is like this:
~/deep_object$ tree -L 2
.
├── data
│ ├── Object-samples
│ ├── dogs-vs-cats
│ └── MobileNet-samples
├── deeplizard_tutorial_side_effects_example.ipynb
├── Mobilenet-finetunning-my-dataset.ipynb
So I'm wondering what is wrong here?
AI: Tensorflow expects sub-directories for every single class that you have inside the primary directory. For example: inside /Object-samples/ you'd have two sub-directories /Object-samples/0/ and /Object-samples/1/ which would contain images belonging to that class.
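For example, a layout along these lines (the file names are placeholders) is what flow_from_directory can pick up, with the class labels inferred from the sub-directory names:

data/Object-samples/train/0/img_0001.jpg
data/Object-samples/train/1/img_0002.jpg
data/Object-samples/valid/0/...
data/Object-samples/valid/1/...
data/Object-samples/test/0/...
data/Object-samples/test/1/...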
H: How to convert a string to callable
I have created a string array and called a method with that array along with Train, test data. The purpose of the method is to find Kfold results of each algorithm specifies in the array. Everything works fine except in cross_val_score(model,X,y), it considers the model variable as a simple string instead of a callable model. If I put it like this cross_val_score(RandomForestClassifier(),X,y) it works perfectly fine. Now, how to convert the string model to a callable model. I am new to ML and I may not be able to make you understand the problem properly. Please let me know if you have any question. Thank you.
strarray = ['RandomForestClassifier()','LogisticRegression()','SVC()']
def checkall(array,X,y,Kfold):
    for model in strarray:
        values = cross_val_score(model,X,y)
        print(values)
checkall(strarray,X,y,5)
AI: You can simply change the values stored in the strarray variable to hold the initialized model objects instead of strings:
strarray = [RandomForestClassifier(), LogisticRegression(), SVC()]

def checkall(array, X, y, Kfold):
    for model in array:
        values = cross_val_score(model, X, y, cv=Kfold)
        print(values)

checkall(strarray, X, y, 5)
H: Error rate of a class from confusion matrix
My professor gives a multiclass confusion matrix and asks for the error rate of a certain class. Unfortunately, the professor refuses to give a definition.
I think the closest value to an error rate for a class $j$ is the conditional probability $\mathrm{P}(\mathrm{Pred} \neq j ~|~ \mathrm{Truth} = j)$, i.e. the sum of the offdiagonal entries along the $j$-column divided by the sum of all the $j$-column entries. Do you agree?
(I guess the only alternative would be $\mathrm{P}(\mathrm{Truth} \neq j ~|~ \mathrm{Pred} = j)$, which is computed along the $j$-row.)
AI: I don't know if this is the definition that you're supposed to use, but usually in multiclass classification the most standard method to apply a binary classification measure is one-vs-rest: given a target class $C$ (positive class), consider all the other classes as a single negative class (i.e. as if they are all merged together).
According to this interpretation, the error rate of a target class $j$ would be the probability of an instance having different predicted and true classes, excluding cases where neither the predicted nor the true class is $j$ (these are not counted as errors since both the predicted and true class are "negative").
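With the usual one-vs-rest counts for class $j$ (true positives $TP_j$, false positives $FP_j$, false negatives $FN_j$), one way to write this interpretation is:
$$\mathrm{error}_j=\frac{FP_j+FN_j}{TP_j+FP_j+FN_j}$$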
H: All Categorical data
I need some feedback on a problem I have been working on. I am working with a fairly balanced dataset with all categorical features, and a categorical outcome (classification problem). The data has no continuous numerical features. To predict my outcome on testset I am using xgboost algorithm. Since I have all categorical predictors I am using one-hot encoding to handle my categorical features. Now I am a bit worried that I might be missing something in the process, so I wanted to check if I have all categorical features with a binary outcome is this a valid approach? I don't see any other way to deal with this problem.
FYI the categorical variables are not things like ZIP codes, IDs...they are actually relevant to the outcome...e.g. smoker (yes/no) | high bp (yes/no)
What do you think?
AI: I don't see any problem doing classification with purely categorical features, as long as the features are relevant.
And as always, some precautions when dealing with categorical features:
The choice of model. Some models can handle categorical features off-the-shelf (e.g. tree-based algorithms), some are specifically designed for them (e.g. CatBoost). These models may ease your feature engineering work, and probably give better accuracy.
Cardinality. Sometimes a categorical feature can take a lot of values (plus unknown/unseen ones), which can be a problem. You should think ahead about what to do in these cases.
H: How to handle date in Random Forest prediction?
I want to predict the income for small business from 1/1/2018 to 1/1/2020 for a dataset from historical data(all my variables are numeric except for date) the start date of my data is 1/1/2012. The income is updated each 3 months (so on 1/1/2018 I have x income and on 30/3/2015 I have new income) I'm not sure how to handle the date in this situation.
I did the following:
1- read the data
2- analysis the data
3- convert date column to type date
4- sort date
5- split the data (training data from 1/1/2012 to30/12/2017 and test data from 1/1/2018 to 1/1/2020)
6- convert date back to numeric
7- normalize all my columns except for the date columns
8- perform RF
9- predict using test dataset
In this scenario I didn't change the date format but I feel like this might not be the best way to handle date format.
AI: Random Forest needs to handle dates as numeric data; that's why you can use the day, the weekday, the month, the trimester (quarter) and the year as separate fields.
In addition to that, if your data has a cyclic behavior, you can align your data to it.
For instance, if your data has a cycle of 3 months, you can have the following features: day of the trimester (from 1 to 90), weekday, trimester and year. This would help to find interesting patterns.
Note: after the random forest processing, you may want to convert the day of the trimester back to yyyy-mm-dd for better readability.
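A minimal pandas sketch of deriving numeric calendar fields, assuming the raw column is called date:

import pandas as pd

df["date"] = pd.to_datetime(df["date"])
df["year"] = df["date"].dt.year
df["quarter"] = df["date"].dt.quarter
df["month"] = df["date"].dt.month
df["weekday"] = df["date"].dt.weekday
df["day"] = df["date"].dt.day
df = df.drop(columns=["date"])   # the model only sees numeric fields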
H: Joining on columns with duplicate values - clean before merging or after merging?
I am joining on two data sets on a column which has duplicated values in both datasets. Is it better practice to remove the duplicates and make the values I am joining on a primary key in both datasets before joining the two, or is it okay to first merge the two data sets, then make the joined column the primary key using something like .groupby()?
E.g:
A = pd.DataFrame({'KEY' : ['abc', 'abc', '123', 'wyz'],
'WEIGHT' : [5, 7, 13, 10]
})
B = pd.DataFrame({'KEY': ['abc', '123', '123', 'def'],
'TITLE' : ['cat', 'dog', 'dog', 'elephant']
})
# join first then clean
C = pd.merge(A,B, how='inner', on='KEY')
C = C.groupby('KEY', as_index=False).agg(funcs) # mean for VALUE, first for TITLE
# versus clean then joining
A = A.groupby('KEY', as_index=False).mean()
B = B.groupby('KEY', as_index=False).first()
C = pd.merge(A,B, how='inner', on='KEY')
```
AI: With small datasets it doesn't matter, but for large datasets it is always better to remove duplicates before joining, just for efficiency. There is usually an increase in CPU time when you are joining larger datasets with duplicates, and this is magnified for very large datasets. But, on the other hand, sometimes joining without first removing the duplicates also helps with identifying join problems, when the resulting output does NOT contain exact duplicates. E.g. sometimes a row contains a column you may not be interested in, which is revealed AFTER you do the join and can thus generate additional rows. I have discovered hidden variables in some of my data which I didn't realize changed, by seeing duplicates in the output. That can help with refining your join by including (or eliminating) the column, and can help your model.
In practice we usually join on 1 or 2 keys. So it is always a good idea to do a count of primary keys in the input and output data to make sure you are getting what you want.
H: How do researchers actually code novel architectures and layers?
Disclaimer: I am almost a complete novice when it comes to tensorflow, keras, coding in general, and neural networks/data science.
While reading papers on novel architectures for neural nets, I see diagrams and such describing their ideas, and then they present their results, with no code shown. While learning to apply neural nets, I just load in the data, build a model by stacking layers like LSTM, Dense, etc, and training it. In other words I can't see how to do anything that isn't straight out of the box.
What tools, libraries, etc, do researchers use to implement these architectures when the layers aren't as simple as model.add(Dense(1))
For example, how might we implement the SeqMO algorithm described in this paper?
https://arxiv.org/pdf/1806.05357.pdf
AI: In this particular case, I don't know how they are implementing these complex layers, but in Keras/TensorFlow you can define your own layers by inheriting from tf.keras.layers.Layer. For example, you could define a custom dense layer as (example from the documentation)
import tensorflow as tf

class MyDenseLayer(tf.keras.layers.Layer):
    def __init__(self, num_outputs):
        super(MyDenseLayer, self).__init__()
        self.num_outputs = num_outputs

    def build(self, input_shape):
        # Create the weight matrix once the input shape is known
        self.kernel = self.add_weight("kernel",
                                      shape=[int(input_shape[-1]),
                                             self.num_outputs])

    def call(self, inputs):
        # Forward pass: a plain matrix multiplication (no bias, no activation)
        return tf.matmul(inputs, self.kernel)
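A quick usage sketch of this custom layer inside a model (the shapes are just illustrative):
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    MyDenseLayer(10),                 # the custom layer defined above
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
print(model(tf.random.normal((4, 16))).shape)   # (4, 1)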
If you're interested in this topic I recommend you paperswithcode.com, where you can find a lot of code implementations of research papers. Hope it helps :)
EDIT:
Apparently, for your particular example the code is available here, but they're using PyTorch, not Keras. |
H: Having weird accuracy graph on deep learning binary classification model
I am doing deep learning binary classification on some data and got very weird results with the accuracy metric. In the first few epochs, it doesn't change at all but then it goes on this weird linear path. I have attached a picture below. Can someone tell me what this means since I am new to machine learning and I am used to nice logarithmic graphs?
AI: Assuming that the dataset is balanced, my intuition is the following:
From epoch 1 to 55: the loss being extremely high indicates your model is making random predictions with probabilities near 0 or 1. That is, it randomly assigns each example a probability near 0 or 1. The log-loss formula is
$$
\mathcal{L(y_i, p_i)} = - \left[y_i \log p_i + (1-y_i) \log(1-p_i) \right]
$$
If the real label is $y_i=0$ and your wrong prediction is $p_i \to 1$, then the loss function is $\mathcal{L}(y_i, p_i) \to \infty$. Also, the random predictions explain the 50/50 accuracy (assuming a balanced dataset). During these epochs, your model is not learning, but just calibrating the predicted probabilities to be in a more reasonable range.
From epoch 55 to 300: After epoch 55 seems that your model starts to learn. This is also reflected in your accuracy plot, where the accuracy starts to improve. In your loss plot, it seems the loss is not changing, but this may be an illusion. Try to change the y-range to (0, 1) and I guess you'll see your loss decreasing.
My recommendation is to be careful with the way you initialize your network, since initialization has a lot of impact on the learning process. There are a lot of resources about this topic, like this one.
H: Data Mining of unresearched data for a master's degree final project
So, I have to start thinking about the topic of my final project in a data science master's degree (business oriented, although I can choose any unrelated field) and one of the requirements is to mine and use data that has not yet been analysed in the academic research environment.
I would prefer to avoid the typical scrape of data from twitter or other common scraping sources of information.
I would really appreciate if you could give me some ideas or direction on how to find an accesible source of data which also does not require too much time to get information from.
Thanks a lot for the help!
AI: If it's business oriented, there are many "business Wikipedia" type websites that have lots of data presented in the same format on each page, which will make them a lot simpler to scrape. For example Yahoo Finance for stock data finance.yahoo.com
You can use the BeautifulSoup library executed with a Python script locally to set-up a simple HTML page scraping script given the URL, and then just get it to loop through some set of different page URLs to get all the info you need. |
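A minimal scraping sketch with requests and BeautifulSoup; the URLs and the CSS selector below are placeholders, not real targets:
import requests
from bs4 import BeautifulSoup

urls = ["https://example.com/page1", "https://example.com/page2"]   # placeholder URLs

rows = []
for url in urls:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Placeholder selector: adapt it to the structure of the pages you scrape
    for cell in soup.select("table td"):
        rows.append(cell.get_text(strip=True))

print(rows[:10])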
H: How does gradient descent avoid local minimums?
In Neural Networks and Deep Learning, the gradient descent algorithm is described as going on the opposite direction of the gradient.
Link to place in book. What prevents this strategy from landing in a local minimum?
AI: It does not. Gradient descent is not immune to local minima in non-convex function optimization.
Nevertheless, the noise introduced by stochastic gradient descent (SGD) helps escape local minima. Other hyperparameters, like the learning rate, momentum, etc., also help.
You can check sections 4.1 and 4.2 of Stochastic Gradient Learning in Neural Networks for detailed explanations and mathematical formulation of SGD and its convergence properties. |
H: What's the logic behind such gradient descent
The gradient descent is motivated from the leetcode question of minimal distance:
https://leetcode.com/problems/best-position-for-a-service-centre/
$$\arg\min\limits_{x_c,y_c}\sum\limits_i\sqrt{(x_i-x_c)^2+(y_i-y_c)^2}.$$
And here is one of the correct solution (dist is distance function):
double getMinDistSum(vector<vector<int>>& positions) {
constexpr double kDelta = 1e-6;
int n = positions.size();
vector<double> center(2, 0.0);
for (const auto& pos : positions)
{
center[0] += pos[0];
center[1] += pos[1];
}
center[0] = center[0] / n;
center[1] = center[1] / n;
double minDist = dist(center, positions);
double step = 1.0;
while (step > kDelta)
{
bool reduceStep = true;
for (int y = -1; y <= 1; y++)
for (int x = -1; x <= 1; x++)
{
if (abs(y) + abs(x) != 1)
continue;
double curX = center[0] + x * step;
double curY = center[1] + y * step;
double newDist = dist({ curX, curY }, positions);
if (newDist < minDist)
{
minDist = newDist;
reduceStep = false;
center[0] = curX;
center[1] = curY;
}
}
if (reduceStep)
{
step /= 10.0;
}
}
return minDist;
}
The logic is that:
each time, the center is moved by the current step in one of the four axis-aligned directions $(\pm 1, 0)$, $(0, \pm 1)$ whenever that move decreases the total distance (the check abs(y) + abs(x) != 1 skips the diagonal and zero moves);
if none of the moves decreases it, the step is divided by ten.
I cannot understand why such a descent works.
AI: This is not gradient descent, but instead it is coordinate descent.
The logic is the same as walking up a hill in order to reach the summit.
You do not have to be aware of the entire hill and the area around you is sufficient. As long as you always take a step to a higher point then you will be able to get to the top.
There are caveats, however: the function needs to be convex and reasonably smooth, otherwise you can get stuck at a point that is not the optimum. The sum of Euclidean distances in this problem is convex, which is why the shrinking-step search works here. A Python sketch of the same idea follows.
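The same idea in Python, as a minimal sketch (a direct translation of the logic above, not an optimized solution):
import math

def get_min_dist_sum(positions):
    # Sum of Euclidean distances from (cx, cy) to all points
    def dist(cx, cy):
        return sum(math.hypot(cx - x, cy - y) for x, y in positions)

    # Start from the centroid
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    best, step = dist(cx, cy), 1.0

    while step > 1e-6:
        improved = False
        # Try the four axis-aligned moves of the current step size
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            d = dist(cx + dx, cy + dy)
            if d < best:
                best, cx, cy, improved = d, cx + dx, cy + dy, True
        if not improved:
            step /= 10.0
    return best

print(get_min_dist_sum([(0, 1), (1, 0), (1, 2), (2, 1)]))   # ~4.0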
H: Using k-means to create labels for supervised learning
I want to know if the following is a valid approach to create labels, if I have measurements under some conditions, and the conditions are similar but never exactly the same.
This doesn't correspond exactly to my real problem but for convenience lets say I have two WiFi Networks A and B. I want to know under which conditions A or B performs better.
My first step is to transmit Data over A and over B. I measure the network conditions and the time it takes to transmit the Data. The problem is that the captured conditions are never exactly the same. Hence I can't directly assign a label (e.g. "under this conditions A or B is better").
So I would perform a k-means clustering of the conditions and group similar conditions together. For each point in a cluster I look up whether the transmission was performed with A or B and compare e.g. the medians of the transmission times.
Now I have a label (A better or B better) for each cluster center and can train a supervised model to generalize.
Is this a valid or common approach in those situations?
AI: I'd suggest an alternative approach: train a regression model for each of the two networks A and B, which takes the conditions as input features and predicts the performance of the network under these conditions. Based on these two models, it is possible to directly find out which one is better under any conditions, by applying the two models and comparing their predicted performance.
I think that this approach is more direct in representing how the conditions are likely to impact the results. The clustering approach might work, but it would lose some information in the process: the clustering introduces errors, and the impact of the conditions on the performance wouldn't be directly represented in the model. A minimal sketch of the two-model approach is given below.
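A minimal sketch of this idea with scikit-learn; the data here is random and just stands in for the measured conditions and transmission times:
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Stand-in data: rows = transmissions, columns = measured network conditions
X_a, y_a = np.random.rand(500, 4), np.random.rand(500)   # transmissions over network A
X_b, y_b = np.random.rand(500, 4), np.random.rand(500)   # transmissions over network B

model_a = RandomForestRegressor().fit(X_a, y_a)   # predicts transmission time on A
model_b = RandomForestRegressor().fit(X_b, y_b)   # predicts transmission time on B

# For new conditions, pick the network with the lower predicted transmission time
new_conditions = np.random.rand(10, 4)
use_a = model_a.predict(new_conditions) < model_b.predict(new_conditions)
print(use_a)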
H: An efficient way of calculating/estimating frequency spectrum for an event
This is rather a practical question. I'm looking for an efficient way of calculating the frequency of an event for a large number of samples. Here's a more concrete example.
Let's say that I have a system with millions of users. Each user has so many different features that I can use to categorize them into different classes. Among them, there's an event (let's say clicking) that each user generates once in a while. I'm interested in considering the frequency of clicking as an input feature, how would you calculate that frequency efficiently?
The brute force answer is that each time the user clicks, I store that as a pair (timestamp, 1). Then, for each new incoming event, I can construct a list of such pairs into a window. Each element of this list represents a bucket (time range) and the value of the bucket shows the number of pairs that fall into it. At last, I'll calculate FFT to transform the window in time into a frequency spectrum which is my classification's input feature.
It seems to me doing so for millions of users who are constantly generating events is very heavy processing. I was wondering if there's a lighter way of calculating (or even estimating) such a frequency spectrum for the events that occur over time?
AI: Sounds like more of a resource issue, but it is still related to data science, because of its final objective.
Dealing with millions of users could require a lot of memory and computing power.
That's why client-side processing should be a priority, e.g. counting events in JavaScript before contacting the server.
On the other hand, it is interesting to start with a data analysis about clicks (mean amount of clicks per person, mean time spent in a session, etc.).
This is important to set rules to call the database and save the information.
For instance, you could count clicks on the client side and save the count to the database every (mean time spent)/2.
The aim is to reduce as much as possible the request to the server-side, without having to use a long time-out.
In addition to that, if you collect enough click data, it is possible to do some interesting stats (rush hours, functions performance, most used functions, ...) and adapt the server-side or client-side processing to it. |
H: Why would we add regularization loss to the gradient itself in an SVM?
I'm doing CS 231n on my own. I'm looking at this solution to a question that implements a SVM.
Relevant code:
# average the loss
loss /= num_train
# average the gradients
dW /= num_train
# add L2 regularization to the loss
loss += reg * np.sum(W * W)
# ????
dW += 2 * reg * W
I don't understand why we would add regularization loss to the gradient.
My understanding of regularization is we use it to prefer certain weights, $W$, over others. But... I don't understand
What type of regularization is occurring to dW (L2 regularization operates on the square of all values of the weights -- this is not squaring anything)
Why we would tweak the weights themselves, presumably you want to tweak the loss which will incentivize changing the weights in a certain direction. Why would you tweak the weights (well, their gradients) themselves?
AI: The l2 regularization term is being added to the loss itself. But then you need to find the gradient of this new loss; since gradients are additive, this is the same as the gradient of the unpenalized loss plus the gradient of the l2 term, the latter of which is the quantity specified in the last line of code.
Note that it makes sense: when updating the weights, you will subtract some multiple of the gradient, so are moving the weights opposite their current location, i.e. toward the origin, as you expect regularization to accomplish. |
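For completeness, the added gradient term follows directly from differentiating the penalty with respect to the weights:
$$
\frac{\partial}{\partial W}\left(\text{reg} \cdot \sum_{i,j} W_{i,j}^2\right) = 2 \cdot \text{reg} \cdot W
$$
which is exactly the 2 * reg * W that gets added to dW in the last line of the code.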
H: Optimize daily ice cream profit beased on simulation of all combinations input variables
I have an ice cream sales simulator for which I can simulate ice cream sales on any given day in the past. I want to optimize daily profit. The dependent variables for my ice cream shop which I have control over are: 'scoop size' and 'number of flavours'.
Now, for every day I ran my simulator with all possible combinations of my input variables, resulting in something like this:
| day | scoop size | number of flavours | profit |
|-----|------------|--------------------|--------|
| 1   | 1          | 1                  | 100    |
| 1   | 1          | 2                  | 120    |
| 1   | 1          | 3                  | 140    |
| 1   | 2          | 1                  | 90     |
| 1   | 2          | 2                  | 95     |
| 1   | 2          | 3                  | 105    |
| 2   | 1          | 1                  | 102    |
| 2   | 1          | 2                  | 85     |
| ... | ...        | ...                | ...    |
So for every day, I created a simulation for all possible scoop sizes and number of flavours. Apart from this, other factors might also affect my sales, like weather or day of the week, but I have no control about these.
So given the fact that I have this huge dataset with simulations, how do I go about finding the best combination of scoop size and number of flavours to use in the future?
What I've tried:
Just pick the combination from the row with the highest profit (140 in this case), but this might just return an outlier, for example a day on which the weather was very good, so all profits were higher. I'm looking for the combination with the highest average profit.
Group by both variables individually or create plots to visualize them against profit, but then I'm only optimizing for 1 variables at a time, but both are not independent of each other.
Should I try to add all variables that could have an impact on my profit first before making any predictions, even though there's maybe an infinite amount?
I'm not looking for an exact answer, I'm just wondering what kind of problem I'm trying to solve here, any tips or any resources to read to get a better understanding of this problem would be very welcome. I'm new to data science so I just don't know where to look. Thanks! (I'm using Python btw)
AI: I would propose a solution like this:
Train a regression model which predicts the sales (target variable) based on all the features (both types: those you have control on and those you don't).
Assuming that the model works well, it can predict sales under any conditions.
For example, let's say you want to optimize for tomorrow:
For the uncontrollable parameters, input the current values, for example day of the week tomorrow and weather forecast
For the controllable parameters, try all the possible combinations (repeat the process of applying the model).
Then just pick the combination of controllable parameters which leads to the highest predicted sales. A rough sketch of this loop is given below.
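A rough sketch of that loop with scikit-learn and itertools; here df (the simulation results), the extra weekday feature and tomorrow_weekday are all assumptions, and the model choice is just an example:
import itertools
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# df is assumed to hold the simulation results plus any uncontrollable features
features = ["scoop size", "number of flavours", "weekday"]
model = RandomForestRegressor().fit(df[features], df["profit"])

# Evaluate every controllable combination under tomorrow's (fixed) conditions
candidates = pd.DataFrame(
    [(s, f, tomorrow_weekday) for s, f in itertools.product(range(1, 3), range(1, 4))],
    columns=features,
)
candidates["predicted_profit"] = model.predict(candidates)
print(candidates.sort_values("predicted_profit", ascending=False).head(1))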
H: Why weak supervision works?
I've learned that weak supervision combines multiple labeling methods to generate labels for a large dataset. I can't understand why generated labels can be used to train a more accurate model than all those labeling methods. Can't we use weak supervision as a combined classifier model instead of just labeling the dataset?
AI: Weak supervision only works better than classic hand-labelling when the individual labelling sources are weak but the volume of data is large.
Why? Because it aggregates several noisy, incomplete labelling sources into a general probabilistic label model that accounts for their noise, conflicts and abstentions (rows left unlabelled), which is not possible with classic labelling that usually relies on one or two complete, more deterministic sources. The downstream model trained on the resulting probabilistic labels can then generalize beyond the individual labelling heuristics, which is why it can end up more accurate than simply using them as a combined classifier.
H: Which format is preferrable to publish book dataset (plain or preprocessed)?
When I decide to publish collection of book texts as a dataset, should I do some preprocessing first or should I publish "plain texts"?
For example, https://huggingface.co/datasets/bookcorpus is published as a collection of sentences (so basic preprocessing was done), but https://huggingface.co/datasets/bookcorpusopen is published with raw texts.
AI: It depends on the content and the potential applications but it would not make a huge difference.
Plain text has a slight advantage in making studies at the book level and lets people preprocess it to the sentence level if needed. The other way round is possible but you might lose some information like paragraphs or titles that could be useful in some cases. |
H: How do I not display rows that have an empty value when trying to output a dataframe with pandas
I have this table where there are missing values under the value2 column.
| Value1 | Value2 |
|--------|--------|
| 1000   |        |
| 1000   |        |
| 1000   | 500    |
| 1000   | 560    |
| 1000   | 560    |
What I would like to do is to display the above table but without the empty rows, therefore the table should look like this
| Value1 | Value2 |
|--------|--------|
| 1000   | 500    |
| 1000   | 560    |
| 1000   | 560    |
Any help would be appreciated.
AI: You can simply filter out those rows using pandas indexing: df[df["Value2"].notna()] |
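For example, a minimal reproduction of the table above:
import numpy as np
import pandas as pd

df = pd.DataFrame({"Value1": [1000, 1000, 1000, 1000, 1000],
                   "Value2": [np.nan, np.nan, 500, 560, 560]})

print(df[df["Value2"].notna()])
# Equivalently: df.dropna(subset=["Value2"])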
H: Why can't I reproduce my results in keras using random seed?
I was doing a task using RNN to predict a time series movement.
I want to make my results reproducible. So I strictly followed this post:
https://stackoverflow.com/questions/32419510/how-to-get-reproducible-results-in-keras
My code are as follows:
# Seed value
# Apparently you may use different seed values at each stage
seed_value= 0
# 1. Set the `PYTHONHASHSEED` environment variable at a fixed value
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
# 2. Set the `python` built-in pseudo-random generator at a fixed value
import random
random.seed(seed_value)
# 3. Set the `numpy` pseudo-random generator at a fixed value
import numpy as np
np.random.seed(seed_value)
# 4. Set the `tensorflow` pseudo-random generator at a fixed value
import tensorflow as tf
tf.compat.v1.set_random_seed(seed_value)
tf.random.set_seed(seed_value)
# 5. Configure a new global `tensorflow` session
# for later versions:
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
However, every time I ran my codes, I still got a different result, what could the reasons be?
AI: Are you using a CPU or a GPU?
If you are using a GPU, there is an additional source of randomness.
To confirm this point, you can try running TensorFlow on CPU only, or disable the cuDNN convolution algorithms, although training will then take roughly twice as long. With the old Theano backend of Keras, the corresponding flags are:
THEANO_FLAGS="optimizer_excluding=conv_dnn" python your_file.py
THEANO_FLAGS="dnn.conv.algo_bwd_filter=deterministic,dnn.conv.algo_bwd_data=deterministic" python your_file.py
Source:
https://github.com/keras-team/keras/issues/2479#issuecomment-213987747 |
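As a sketch of the first suggestion, forcing TensorFlow onto the CPU (and, where the installed TF 2.x version supports it, enabling deterministic ops) can look like this:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"    # hide all GPUs before TensorFlow initializes

import tensorflow as tf
tf.config.set_visible_devices([], "GPU")     # belt and braces: no GPU devices visible to TF

# Available in recent TF versions; makes supported ops deterministic
if hasattr(tf.config.experimental, "enable_op_determinism"):
    tf.config.experimental.enable_op_determinism()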
H: Sampling from earths landmass
Given a sphere which resembles earth, I want to sample points where land would be. I am struggling to find a dataset to sample from, and even to find a dataset which I could use to generate a set of such points.
Does anybody know of such a dataset, or a nice workaround?
AI: You can use cartopy to achieve this easily.
pip install cartopy
For instance:
import cartopy.io.shapereader as shpreader
import shapely.geometry as sgeom
from shapely.ops import unary_union
from shapely.prepared import prep
land_shp_fname = shpreader.natural_earth(resolution='50m',
category='physical', name='land')
land_geom = unary_union(list(shpreader.Reader(land_shp_fname).geometries()))
land = prep(land_geom)
def is_land(x, y):
return land.contains(sgeom.Point(x, y))
>>> print(is_land(0, 0))
False
>>> print(is_land(0, 10))
True
Source: https://stackoverflow.com/questions/47894513/checking-if-a-geocoordinate-point-is-land-or-ocean-with-cartopy |
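To actually sample points on land, the is_land helper above can be combined with simple rejection sampling (a sketch; drawing the latitude via arcsin keeps the points uniform over the sphere's surface):
import numpy as np

def sample_land_points(n, seed=0):
    rng = np.random.default_rng(seed)
    points = []
    while len(points) < n:
        lon = rng.uniform(-180, 180)
        lat = np.degrees(np.arcsin(rng.uniform(-1, 1)))   # uniform over the sphere
        if is_land(lon, lat):                             # helper defined above
            points.append((lon, lat))
    return points

print(sample_land_points(5))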
H: High level-Low Level features in U-NET
Why do the first layers of U-Net or CNN generate low-level features? Why not the last layers? What is the logic behind getting low-level features at the beginning of architecture? And yes, high-level features are more "meaningful" but why? Why high-level features are more meaningful than low-level features?
AI: To answer your question let's first go through how CNN works.
When we give a CNN an input image, it sees an array of numbers that correspond to the pixel intensities of the input image. The intensity of a pixel can range from 0 (black) to 255 (white). The CNN will produce numerical values that indicate the likelihood that an image belongs to a particular class.
In order to classify images, the CNN searches for basic features like edges and curves before progressing, through a number of convolutional layers, to more abstract concepts. To identify edges, it scans the image both horizontally and vertically using filters.
Consider filters as weight matrices that are multiplied by pixel intensities to create a new image that preserves the characteristics of the source image. The products are summed and the result is placed in the corresponding pixel position of the output image, which is called a feature map.
The output of the previous CNN layer becomes the input of the subsequent CNN layer. In essence, each layer specifies the places in the original image where specific low-level features can be seen. Dimension reduction is done in the pooling layer. When you then apply a set of filters on top of that, the output will be activations that represent higher-level features. As you move through the network and through more convolution layers, you receive activation maps that represent more and more complicated features.
So a CNN basically works the way we humans look at images: first by the coarse, distinguishable features (low-level) and then by the finer details (high-level).
Why high-level features are more meaningful than low-level features?
Take the example of a dog and a cat: they both have 4 legs (low-level features), but what distinguishes them is maybe their eyes and ears (high-level features), which helps in classifying them better.
why high-level features are extracted in the last layers? Why not in
the first layer?
It's because, for a large image of say (224, 224, 3), connecting a fully connected layer directly to the raw pixels would require 224*224*3 = 150,528 weights per neuron. This full connectivity is not required and can lead to overfitting. So we connect neurons only to a local region of the input volume; the spatial extent of this connectivity is a hyperparameter called the receptive field of the neuron, or filter size.
Rather than using a fully connected layer as the first layer, it is used as the last layer, where it consumes the high-level features extracted by the convolution + pooling layers to classify the image; this reduces the number of weights significantly, so it is much faster and more effective.
Refer:https://cs231n.github.io/convolutional-networks |
H: MAE divided by median metric
I have a regression task for which my best models has a Mean Absolute Error (MAE) of approximately 15,000. The median value of the target variable is approximately 150,000.
I want to report that the error is ~10% of the median. Is there a name for such metric? i.e. dividing the MAE by the median?
If not, is there an alternative error that quantifies percentage?
AI: You are implicitly assuming a Laplace prior over your targets. By itself, a loss value does not carry much meaning; however, if you associate it with a distribution, you can understand how good it is.
The Mean Absolute Error is the MLE of the scale parameter $b$ of the Laplace distribution ("variance" is a slightly abusive term here, since the variance is actually $2b^2$).
However, this is the result of an assumption, homoscedasticity, which might or might not hold.
In other words, saying that the ratio between the loss and the target is 10% just tells you how much noise, on average, your data has with respect to your model.
This means there is not much to say about the ratio itself, since it depends on a strong assumption and on the model you are fitting (maybe with a deeper network you could get a much better value, but that would not mean the model performs as well as the ratio suggests).
In my opinion, you might want to avoid using that measure, since it is misleading and does not convey much information to a human (though you can still use it to compare your model to other models).
H: What are some methods to reduce a dataframe so I can pass it as one sample to an SVM?
I need to classify participants in an NLP study into 3 classes, based on multiple sentences spoken by the participant. I performed a feature extraction on each sentence, and so I am left with a matrix of length (# of sentences spoken x feature vector length for each sentence) for each participant. So, for me, each sample is represented by a matrix of varying length, since some participants spoke more sentences than others. What are some ways for me to reduce the dimensionality of each matrix, and also standardize the length, so I can perform an SVM with each participant as a sample?
I am also interested in learning about other methods to classify my samples, if SVMs are not the best fit. Thank you.
AI: SVMs are not meant to handle arbitrarily long inputs, therefore you have a few choices:
use PCA for sequences; however, it takes very long since it has to build a giant matrix on which to perform PCA
change model and pick a better-suited one (e.g. an RNN)
pad and cut your data (most often the whole sentence is not needed to predict the output)
introduce some prior knowledge, for example rank the most frequent words and remove all of those that don't add anything to the meaning (be careful, this can cause problems, for example if you remove "not")
use recurrent autoencoders to transform a sentence into a fixed-size vector, over which you can run any ML algorithm (but this might cause problems from the point of view of explainability)
In my opinion, cut & pad is the best option to start with: simple to implement and often very effective, though this can change from context to context. A sketch of this idea follows.
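A minimal sketch of the cut & pad option (the sentence feature matrices and labels below are random stand-ins for the real data):
import numpy as np
from sklearn.svm import SVC

# Stand-in data: one (n_sentences, n_features) matrix per participant
participant_matrices = [np.random.rand(np.random.randint(5, 40), 300) for _ in range(60)]
labels = np.random.randint(0, 3, size=60)

MAX_SENTENCES = 20   # cut longer participants, pad shorter ones with zeros

def pad_or_cut(m, max_len=MAX_SENTENCES):
    m = m[:max_len]                                      # cut
    pad = np.zeros((max_len - m.shape[0], m.shape[1]))   # pad
    return np.vstack([m, pad]).ravel()                   # flatten to a fixed-length vector

X = np.vstack([pad_or_cut(m) for m in participant_matrices])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))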
H: Scalar predictor - is it better to have a lot of training data that is less precise? Or fewer training data that is more precise?
I am quite new to this neural network stuff, so please bear with me :)
TL;DR:
I want to train a neural network to predict a scalar score from 32 binary features. Training data is expensive to come by, there is a tradeoff between precision and amount of training samples. Which scenario will likely give me a better result:
Training the network with 100 distinct samples of training data where the output (-1 to 1) is averaged from 100 runs of the same sample, and therefore fairly precise
Training the network with 1000 distinct samples of training data where the output (-1 to 1) is averaged from 10 runs of the same sample, and therefore less precise
Training the network with 10000 distinct samples of training data where the output is just binary (-1 or 1), and therefore very imprecise
Something else?
More context:
I am creating an AI for an imperfect information 4-player card game with 32 cards. I already have implemented a MinMax-based tree search that solves the perfect information version of the game, i.e. this can deliver me the score that is reached at the end of the game, assuming perfect play of all players, for the case that the full card distribution is known to all players. In reality, of course, each player only knows their own hand of cards. For the purposes of the AI I get around this by repeating the perfect information game many times while randomly assigning the unknown cards.
I now want to train a neural network that predicts the win probability that is reached with a given hand of cards (of course, not knowing the cards of the other players). I imagine this would be a value between -1 and 1, where 0 means 50% win probability and 1 means 100% win probability. The input features would be 32 binary values, representing the hand of cards. I want to use my MinMax algorithm to generate the training data for the network.
In a perfect world, I would iterate trough 1 Million random hands of cards and determine a precise win probability for each of them by playing 1 Million randomized perfect information games based on that hand. The reality, however, is that my MinMax algorithm is fairly expensive, and I can't improve it much more. So the total amount of perfect information games I can go through is limited.
Now I am wondering: How do I maximize the effectiveness of my training data generation process? I guess the tradeoff is:
If I go through many perfect information iterations for each given hand, the win probability in my training data will be fairly close to the 'real' win probability, so very precise
If I go through fewer (or in extreme case, only 1) perfect information iterations for each given hand, the win probability in my training data will be less precise. However, statistically it should still all even out in the end. Plus, I will have a lot more training samples, covering a much wider range of situations.
In that context I am wondering which side of this spectrum - precision vs. amount - will give me the better tradeoff.
Side note: For my validation data set, of course I will have to determine a fairly precise win probability for at least some samples, where I will probably use more iterations per sample than for the training data.
AI: super-interesting question!
My approach to the problem would be not to do any preprocessing on the data. This is, feed all the experiments to the network with the target being the 0/1 variable corresponding to lose/win. For example, if you have a dataset like
| hand of cards | game output |
|-------------------|-------------|
| [1, 0, 0, ..., 1] | 1 |
| [1, 0, 0, ..., 1] | 0 |
| [1, 0, 0, ..., 1] | 1 |
| [0, 1, 1, ..., 1] | 1 |
| [0, 1, 1, ..., 1] | 1 |
| [0, 1, 1, ..., 1] | 1 |
instead of training the model with
| hand of cards | winning prob |
|-------------------|--------------|
| [1, 0, 0, ..., 1] | 0.66 |
| [0, 1, 1, ..., 1] | 1 |
I would train the model with the first dataset, and try to predict the game output. This is, instead of using a regression model using a classification model. Of course, with this approach, your dataset will have entries with the same features and different targets, however, this is not a problem, since you can interpret the output of the classification model as the probability of winning or losing. From my experience, when I've dealt with similar problems, this approach is the one that gave the best results.
On the other hand, I would also try tree-based models such as XGBoost or a simple RandomForest, since they tend to work well with this kind of data. A minimal sketch of the classification approach is given below.
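A minimal sketch of the classification approach (the hands and outcomes below are simulated at random):
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Simulated data: each row is a 32-bit hand encoding, the target is the 0/1 game outcome
X = np.random.randint(0, 2, size=(10000, 32))
y = np.random.randint(0, 2, size=10000)

clf = RandomForestClassifier(n_estimators=200).fit(X, y)

# predict_proba gives the estimated win probability for a given hand
hand = X[:1]
print(clf.predict_proba(hand)[0, 1])   # probability of class 1 ("win")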
H: Small difference in metrics in KERAS for the same model
I see that the MSE metric provided by the model.fit (history) is slightly different from the MSE calculated by model.evaluate?
Can anyone help?
# fit model
Hist = model_rna.fit(x_train, y_train,
validation_data=(x_val, y_val),
callbacks=[early_stopping],
verbose=2,
epochs=epochs)
# get last trained mse
hist = pd.DataFrame(Hist.history)
mse_train = [i for i in np.array(hist['mse']).tolist()]
print(mse_train[-1])
The result is 0.03789380192756653
# evaluate the trained model
model_rna.evaluate(x_train, y_train)
The result is:
5/5 [==============================] - 0s 4ms/step - loss: 0.0379 - mse: 0.0379 - acc: 0.0000e+00
[0.03786146640777588, 0.03786146640777588, 0.0]
If I do the "manual" calculation:
Sum_of_Squared_Errors = np.sum((y_train - model_rna.predict(x_train))**2)
print(Sum_of_Squared_Errors/len(y_train))
The result is:
0.03786148292614872
This is exactly what I found via model.evaluate() but slightly different of History of model.fit().
Why am I finding this tiny difference?
My training and validation samples are fixed.
AI: I found explanation here:
https://github.com/tensorflow/tensorflow/issues/29964
https://stackoverflow.com/questions/59118430/keras-model-evaluate-on-training-and-val-set-differ-from-the-acc-and-val-acc
https://stackoverflow.com/questions/44843581/what-is-the-difference-between-model-fit-an-model-evaluate-in-keras
In short: the metrics in the fit() history are averaged over the batches of an epoch while the weights are still being updated from batch to batch, whereas model.evaluate() (and the manual computation) use the final weights obtained at the end of the epoch, hence the tiny difference.
Hope this helps others.
H: I want to make a model that minimizes the heat supply, what should I do?
I am a beginner Python user. My weather data is made up of various variables. It consists of three months of one-minute time data, ambient environmental data (sunlight, ambient temperature, wind speed, etc.), internal environmental data (internal temperature, humidity), smart farm internal control variables (shielding screen, exhaust fan, ceiling fan, etc.), control set temperature (ventilation temperature, heating temperature), energy consumption (target).
Through these three-month data, a model that minimizes energy consumption of smart farms should be created. Thereafter, one month's worth of data is additionally provided, and there is no smart farm internal control variable in this data. My model should be used to predict these internal control variables and check the amount of heat supplied to make them as low as possible. (FYI, 1 of the internal control variables shows fan, 0 shows fan stop, and 0 to 100 shows light shielding or heat shielding, 50% open at 50 or 100% open at 100).)
I'm having a hard time solving this problem. I would appreciate it if you kindly let me know how to proceed with the analysis and related data or analysis techniques.
AI: Modeling any industrial process is quite complex because there are a lot of physical and non-linear events.
That's why I usually recommend simulating the most important processes first with a scientific tool like Scilab, Simulink, or Labview:
https://www.scilab.org/use-cases/powerful-modeling-and-big-data-analysis-energy-transition
A simulation is interesting, not only to understand how physics between things works, but also to apply a partial or complete machine learning model to optimize energy consumption.
Finally, you can apply a machine learning model with Reinforcement Learning:
https://github.com/ADGEfficiency/energy-py
https://github.com/smasis001/smart-grid-peak-tariff-optimization/blob/master/notebooks/OptimizationAlgorithm.ipynb
Or gaussian:
https://github.com/jaimergp/easymecp
These are just examples, there are plenty of existing energy optimization models available on GitHub. |
H: What can be done with same samples with different target?
Suppose that we have a dataset that some samples have the save value but with different target. It can be a regression issue or classification. What we should do with them? Should we remove them or that is normal and we can let these data be in training set?
AI: This is completely normal; leave them in. An easy example is in an ANOVA problem (which can be viewed as a regression) where multiple subjects in the same group (so same group "value" where group is the lone feature) will have different outcomes in $y$.
All this means is that, given your particular feature(s), you cannot get perfect predictions, but you should not expect to be able to get perfect predictions, anyway. |
H: Loss decreases, but Validation Loss just fluctuates
I've been trying to implement object detection using a CNN architecture like this:
model = keras.Sequential([
keras.layers.Input(shape=(320, 320, 1)),
keras.layers.Conv2D(filters=16, kernel_size=(3, 3), activation="leaky_relu", padding="same"),
keras.layers.MaxPool2D((2, 2), strides=2),
keras.layers.Conv2D(filters=32, kernel_size=(3, 3), activation="leaky_relu", padding="same"),
keras.layers.MaxPool2D((2, 2), strides=2),
keras.layers.Conv2D(filters=64, kernel_size=(3, 3), activation="leaky_relu", padding="same"),
keras.layers.MaxPool2D((2, 2), strides=2),
keras.layers.Conv2D(filters=128, kernel_size=(3, 3), activation="leaky_relu", padding="same"),
keras.layers.MaxPool2D((2, 2), strides=2),
keras.layers.Conv2D(filters=256, kernel_size=(3, 3), activation="leaky_relu", padding="same"),
keras.layers.MaxPool2D((2, 2), strides=2),
keras.layers.Conv2D(filters=512, kernel_size=(3, 3), activation="leaky_relu", padding="same"),
keras.layers.MaxPool2D((2, 2), strides=1, padding="same"),
keras.layers.Conv2D(filters=1024, kernel_size=(3, 3), activation="leaky_relu", padding="same"),
keras.layers.Conv2D(filters=1024, kernel_size=(3, 3), activation="leaky_relu", padding="same"),
keras.layers.Conv2D(filters=5, kernel_size=(1, 1), activation="relu", padding="same"),
]);
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.000005), loss=yolo_loss, run_eagerly=True);
However, while the loss seems to decrease nicely, the validation loss only fluctuates around 300.
Loss vs Val Loss
This model is trained on a dataset of 250 images, where 200 are actually used for training while 50 are used for cross-validation. Why could this be? Could my model be too deep? Do I need to reduce my learning rate even more? Or do I just not have enough data?
For reference I am trying to mimic the Tiny YoloV2 architecture shown here
AI: It looks like your model is overfitting: it is learning the training dataset, but what it learns does not generalize to the validation dataset. You can try to reduce the complexity of the model by simplifying it -fewer layers, fewer neurons, fewer filters, etc.- or by adding regularization -l1, l2, dropout, etc. A sketch of adding dropout is given below.
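For example, dropout layers could be inserted after the pooling blocks like this (a sketch only; the rates and placement are things to tune, and the rest of the stack is elided):
model = keras.Sequential([
    keras.layers.Input(shape=(320, 320, 1)),
    keras.layers.Conv2D(filters=16, kernel_size=(3, 3), activation="leaky_relu", padding="same"),
    keras.layers.MaxPool2D((2, 2), strides=2),
    keras.layers.Dropout(0.25),    # randomly drops activations to reduce overfitting
    keras.layers.Conv2D(filters=32, kernel_size=(3, 3), activation="leaky_relu", padding="same"),
    keras.layers.MaxPool2D((2, 2), strides=2),
    keras.layers.Dropout(0.25),
    # ... remaining Conv2D / MaxPool2D layers as in the original model ...
    keras.layers.Conv2D(filters=5, kernel_size=(1, 1), activation="relu", padding="same"),
])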
H: How to exercise Quality Assurance Engineering principles to Artificial Intelligence systems?
In deterministic (software) systems we have a set of business requirements and ideally, given enough resources, such a system can be fully defined of which are the expected outputs for each inputs or set of actions within a context. The functional QA then is defined to merely assess if the system is following the rules as described. Even usability, endurance, stress and other kind of settings can be fully defined and thus become part of the requirements
However how does one test effectively and detect difference between required and actual behaviors of Artificial Intelligence systems ?
AI: Without being sure if the approach makes sense but one could take the various steps of the lifecycle of an Artificial Intelligent system and thus attempt to see how as a Quality Assurance Engineer can ensure that the quality is high in each and every step:
Context
Ensure that there are clear specifications and defined requirements before proceeding with any other testing
Collecting training data
Ensure data has a variety of sources and necessary variety to avoid biases
Ensure that after cleaning enough large dataset has remained
Ensure features are sane and within the expected range after cleaning
View training data and sample them by eye to see if they make sense
Write rule based scripts to check if what is generally expected is found within training data
Ensure that training data represent the targets/outputs in as much the same portion as possible
Testing data
Ensure that test data are not merely a sample of the training data but at least some of them reflect the business goals (defining expected outcomes as test oracles) and are characteristic examples
Ensure that testing data are used only once and then are thrown away otherwise they will be used for the next model
Ensure that testing data, even smaller in size, are still a representative portion of the training data
Ensure testing data represent the very latest samples that we expect and reflect at least the near future
Ensure that the system is tested against totally random inputs (noise) and it is returning outputs that are of low certainty
Ensure that GAN-based metamorphic approaches (see the related arXiv literature) are used to test the AI system with inputs drawn from the same space as the original data
Ensure that QAs will have generated by hand a few new test cases and have manually set (using their brain) the expected output
Ensure that past scenarios executed in production by real users can be replicated fully to be used as test-input
Robustness: refers to the resilience of an AI component towards perturbations
Ensure that small variations, perturbations, in the testing sample will yield similar output to the original and will not yield highly different results (ensure non high variance)
Model wise
Ensure that a baseline model is always there to compare against
Ensure that the proposed model performs better than the baseline model
Ensure that the new proposed model performs better than the latest proposed model
Ensure that easy-to-create dummy models (e.g. Naive Bayes for classification or Linear Regression for regression) do not perform better than the proposed model
Ensure that a low-cost, rule-based (non-AI) model does not work better than the proposed model
Ensure that the model should also provide the probability of the certainty of the model that the output is a good/average/bad prediction
Ensure that the model is non polarized for a few parameters and therefore non prone to AI-attacks (where some inputs are being changed and change the entire output to our own wish)
Ensure that an ensemble model, is not overfitting and it works as good or better than any of the individual underlying models
Ensure that using a Teacher-Student model, that the Teacher is slower yet more accurate model than the Student which is expected to be less accurate but more efficient
Ensure that self-adaptive and self-learning systems (e.g. Reinforcement Learning) are able to self-assess, to make sure that they are not making decisions that degrade their performance or drift away from the defined requirements
Interpretability
Ensure that using the training data to build an interpretable model that fits the predictions of our large model, then the interpretation of the parameters make sense
Ensure that the model is making predictions based on parameters that the current theory supports and does not have any weird pattern which might lead wrong model
Checking output qualitatively
Ensure that the output of the model for very high probability of certainty are truly delivering a good answer
Ensure that the bad answers of the model are handled in such a way that the user retains his/her trust to the overall system instead of being misled
Ensure that the model generates output that is aligned with the business goals and these answers are useful to the user
Performance / Efficiency
Ensure that the model generates answers fast enough in order for the user experience to not be severely impacted by them
Ensure that the time to train the new model will not need so large time as to miss the deadlines
Ensure that minimal resources are provided to AI models which are being under development in comparison to the AI model which is in production and that these are separated without having one (test/staging environment) consuming resources from the other (production)
Production monitoring
Ensure that a feedback system have been set in place in order for users to be able and report unwanted or misleading output of the AI
Ensure that the feedback reported by the users is significantly high
Ensure that the measured error of the system while in production is within the acceptable levels similar to the ones that were measured during the execution of the model to the testing data
Ensure that the measured error of the system remains steady as new inputs are being received and does not have a declining trend
User output
Ensure that the output of the model and its certainty probability are reflected correctly in the app
Ensure the using as input an instance which is very far away from the current distribution of the model will not allow the user to proceed with using the AI system
Ensure that having as output a prediction that has a low certainty will provide the user manual or rule-based alternatives to accomplish his/her tasks
Data privacy: refers to the ability of an AI component to preserve private data information
Example: if a chatbot accumulates knowledge about a certain user, then asking it for information regarding some other user should not deliver that user's data. Each user's language model should be agnostic of the other users' models
Security: measures the resilience against potential harm, danger or loss made via manipulating or illegally accessing AI components
Ensure that process of AI model is transparent and that there is a history of the changes that have happened to the deployed AI model
Fairness: Avoid problems in human rights, discrimination law and other ethical issues
Ensure that the model output will comply to some "values" which are coded in rule based scripts
Example: a rule-based sentiment-analysis check that the output of a language model is never extremely negative
H: Evaluating Models that Return Percentage Present of Multiple Classes
If there is a model that returns a vector of the amount of different classes present in the data as percentages, what would be a good way to evaluate it (with charts and/or statistics)?
Say, for example, that a batch of pond water contains 30% Bacteria1 and 70% Bacteria2 (data is [0.3, 0.7]. Our model returns 35% Bacteria1 and 65% Bacteria2 (output is [0.35, 0.65]). How would we evaluate the accuracy of this model?
Am I right in thinking that we can't use things like confusion matrices or ROC/AUC curves because this isn't a classification problem? I'm not sure if there exist other metrics like these ones for this kind of problem though.
AI: If I understand correctly you're asking about the case where the model predicts a probability distribution or similar.
Assuming one has a test set with the true distribution for every instance, I think the most direct way to evaluate such a model is with a distance/similarity measure between distributions, for example KL divergence or Bhattacharyya. The distance is calculated between the predicted and true distribution for every instance, then the value is aggregated across instances (typically taking the mean).
You're correct that this is not classification, so classification evaluation techniques wouldn't apply (unless one is interested only in which class has the highest predicted probability in the distribution). |
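For example, computing the mean KL divergence between predicted and true distributions with scipy (the arrays below are toy values):
import numpy as np
from scipy.stats import entropy

# Toy example: rows = instances, columns = class proportions
y_true = np.array([[0.3, 0.7], [0.5, 0.5], [0.1, 0.9]])
y_pred = np.array([[0.35, 0.65], [0.4, 0.6], [0.2, 0.8]])

# entropy(p, q) returns the KL divergence KL(p || q) when both arguments are given
kl_per_instance = [entropy(t, p) for t, p in zip(y_true, y_pred)]
print(np.mean(kl_per_instance))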
H: Keras model.predict get the value for the predicion
I'm new to TensorFlow and keras and I'm trying to learn with an example using this code in google's colab
import tensorflow as tf
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from sklearn.model_selection import train_test_split
insurance = pd.read_csv("https://raw.githubusercontent.com/stedy/Machine-Learning-with-R-datasets/master/insurance.csv")
#Create a column transformer
ct = make_column_transformer(
(MinMaxScaler(),["age","bmi","children"]),
(OneHotEncoder(handle_unknown="ignore"),["sex","smoker","region"])
)
X = insurance.drop("charges",axis=1)
y = insurance["charges"]
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2, random_state=42)
#Fit the column transformer to our training data
ct.fit(X_train)
#Transform training and test data with normalization (MinMaxScaler) and OneHotEncode
X_train_normal = ct.transform(X_train)
X_test_normal = ct.transform(X_test)
insurance_model_4=tf.keras.Sequential([
tf.keras.layers.Dense(100),
tf.keras.layers.Dense(10),
tf.keras.layers.Dense(1)
]
)
insurance_model_4.compile(loss=tf.keras.losses.mae, optimizer=tf.keras.optimizers.Adam(),metrics=['mae'])
insurance_model_4.fit(tf.expand_dims(X_train_normal, axis=-1), y_train, epochs=200)
y_preds = insurance_model_4.predict(X_test_normal)
X_test_normal.shape #this gives (268, 11)
y_preds.shape #this gives (268,11,1)
my issue is that I can't figure out how to get the actual values for predictions in y_preds, I was expecting an array with shape (268,1), that is the 268 predictions for each input in X_test_normal.¿How can I get the value of the predictions? Thanks.
AI: There is no need to expand the dimensions of X_train_normal during .fit(): Dense layers act on the last axis, so a 3-D input of shape (268, 11, 1) produces a 3-D output of shape (268, 11, 1), which is exactly the shape you see for y_preds. Remove that part to simply fit on X_train_normal, which will give y_preds the shape (268, 1) you expected. So,
Replace:
insurance_model_4.fit(tf.expand_dims(X_train_normal, axis=-1), y_train, epochs=200)
with:
insurance_model_4.fit(X_train_normal, y_train, epochs=200) |
H: How does Catboost regressor deal with categorical features at predict time?
I understand that Catboost regressor uses target-based encoding to convert categorical features to numerical features when training. But how does Catboost deal with categorical features at predict time when the labels are completely unknown? How does an object at predict time go down the Catboost decision trees if the decision trees are expecting to see categorical feature values as numbers?
I tried looking at the official documentation but could only find when the encoding was done during training when the labels are available.
AI: In a simplified way of putting it, we substitute the category id with the mean value of the training set target for this category. CatBoost implements some tricks like only using the preceding values when encoding the train set, but transforming the test set will use the whole train statistics anyway (https://github.com/catboost/catboost/issues/838).
What happens when a previously unseen category is encountered in the test set? According to https://towardsdatascience.com/categorical-features-parameters-in-catboost-4ebd1326bee5 unseen categories receive a value based upon prior (controlled by CTR arguments). In other words, same as https://catboost.ai/en/docs/concepts/algorithm-main-stages_cat-to-numberic with countInClass being zero. (category_encoders implementation of CatBoostEncoder() seems to just use the average train target value.) |
H: Reproduce Keras training results in Jupyter Notebook
I am using keras and Jupyter notebook and want to make my results reproducible every time I ran it. This is the tutorial I used https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/. I copied his codes in Stacked LSTMs with Memory Between Batches part.
This is my cell1 in Jupyter Notebook, I only used CPU to avoid randomness brought by GPU, making sure the same results can be reproduced every time.
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = ""
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
This is cell2, stricting following the suggestions from this question https://stackoverflow.com/questions/32419510/how-to-get-reproducible-results-in-keras
# Seed value
# Apparently you may use different seed values at each stage
seed_value= 0
# 1. Set the `PYTHONHASHSEED` environment variable at a fixed value
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
# 2. Set the `python` built-in pseudo-random generator at a fixed value
import random
random.seed(seed_value)
# 3. Set the `numpy` pseudo-random generator at a fixed value
import numpy as np
np.random.seed(seed_value)
import tensorflow as tf
tf.compat.v1.set_random_seed(seed_value)
tf.random.set_seed(seed_value)
This is cell3 from his codes, (changed it a little for example, from keras to tensorflow.keras)
import numpy
import matplotlib.pyplot as plt
from pandas import read_csv
import math
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
# convert an array of values into a dataset matrix
def create_dataset(dataset, look_back=1):
dataX, dataY = [], []
for i in range(len(dataset)-look_back-1):
a = dataset[i:(i+look_back), 0]
dataX.append(a)
dataY.append(dataset[i + look_back, 0])
return numpy.array(dataX), numpy.array(dataY)
# load the dataset
dataframe = read_csv('airline-passengers.csv', usecols=[1], engine='python')
dataset = dataframe.values
dataset = dataset.astype('float32')
# normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)
# split into train and test sets
train_size = int(len(dataset) * 0.67)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]
# reshape into X=t and Y=t+1
look_back = 3
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
# reshape input to be [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], trainX.shape[1], 1))
testX = numpy.reshape(testX, (testX.shape[0], testX.shape[1], 1))
And this is cell4,
# create and fit the LSTM network
batch_size = 1
model = Sequential()
model.add(LSTM(4, batch_input_shape=(batch_size, look_back, 1), stateful=True, return_sequences=True))
model.add(LSTM(4, batch_input_shape=(batch_size, look_back, 1), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
for i in range(100):
model.fit(trainX, trainY, epochs=1, batch_size=batch_size, verbose=2, shuffle=False)
model.reset_states()
# make predictions
trainPredict = model.predict(trainX, batch_size=batch_size)
model.reset_states()
testPredict = model.predict(testX, batch_size=batch_size)
# invert predictions
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])
# calculate root mean squared error
trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:,0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:,0]))
print('Test Score: %.2f RMSE' % (testScore))
# shift train predictions for plotting
trainPredictPlot = numpy.empty_like(dataset)
trainPredictPlot[:, :] = numpy.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict
# shift test predictions for plotting
testPredictPlot = numpy.empty_like(dataset)
testPredictPlot[:, :] = numpy.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict
# plot baseline and predictions
plt.plot(scaler.inverse_transform(dataset))
plt.plot(trainPredictPlot)
plt.plot(testPredictPlot)
plt.show()
However, I still found that the training loss is different every time I ran cell4.
But I found that, as long as I add cell2's contents to cell4, I can get the same training loss curve every time I ran cell4.
So my question is, to reproduce my results, why should I set the random seed every time I run my model in the cell(cell4), instead of just setting it in the beginning of my jupyter notebook once and for all?
AI: Did you reset the random seeds before each run?
The seeds work for the first run, but the pseudo-random state advances every time random numbers are drawn (weight initialization, etc.), so when you re-run cell 4 the generators are no longer in the seeded state. You therefore need to reset the seeds at the start of every run of the cell.
For example, you can do:
def reset_seed(seed_value):
    random.seed(seed_value)
    np.random.seed(seed_value)   # note: np.random.RandomState(seed_value) alone would not reset the global state
    tf.compat.v1.set_random_seed(seed_value)
    tf.random.set_seed(seed_value)

for i in range(100):
    reset_seed(seed_value)
    model.fit(trainX, trainY, epochs=1, batch_size=batch_size, verbose=2, shuffle=False)
    model.reset_states()
Otherwise, you can also use a function with a random number generator:
https://albertcthomas.github.io/good-practices-random-number-generators/ |
H: Grid Searching seed in randomized machine learning
I was wondering if tuning a seed with cross-validation in order to maximize the performance of an algorithm heavily based on a randomness factor is a good idea or not. I have created an Extra Tree Classifier which performs very bad with basically every seed except the one I found by using grid search. I think this is not a problem because I really don't care about how the conditions were set as long as they classify correctly, therefore I should have the ability to try running the algorithm with different seeds until it works, in order to find the best set of casual conditions for each split. Also, note that the test is done with Leave One Out Cross Validation.
Am I right?
AI: It's definitely an error to select an "optimal" random seed.
If performance depends a lot on the random seed, it means that the the model always overfits, i.e. the patterns used by the model depend on the specific subset used as training data and the performance on the test set is due mostly to chance.
In your scenario, the model doesn't really work better with seed X, it happens that this particular seed leads to good performance on this particular test set.
It wouldn't work for another model trained with the same seed on a different subset
It wouldn't work for a different test set.
Also I assume that you didn't apply the correct methodology for tuning hyper-parameters, otherwise you would certainly have seen the problem. When applying grid search, the performance of the best parameters (and only these) should be estimated again on a fresh test set (because parameter tuning is a kind of training, so performance on the training set is not reliable).
I think this is not a problem because I really don't care about how the conditions were set as long as they classify correctly,
This is a mistake: the classifier doesn't classify correctly, actually the performance you obtain is not reliable, it's an artifact. Testing the model on a fresh test set is the only way to obtain a reliable estimate of the performance. |
H: Creating a grid type 3D data array from data points
I have 3 data columns $(X, Y, Z)$, each ranging over $(min, max)$. For example,
$X = (0, 5)$, $Y=(0, 3)$, $Z=(0, 2)$. By using them I need to create a numpy array in the form of
$[(0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 1, 0), (0, 1, 1), (0, 1, 2), (0, 2, 0)...]$
So in total there will be $6 \times 4 \times 3 = 72$ data points.
Is there a simple command to do this ?
AI: You can use itertools.product to get a possible combinations of x, y, and z and then convert to resulting list to a numpy array:
import itertools
import numpy as np
x = range(0, 5 + 1)
y = range(0, 3 + 1)
z = range(0, 2 + 1)
np.array(list(itertools.product(x, y, z))) |
H: PCA followed by UMAP then go into Random Forest
Is it a valid procedure to apply PCA to your dataset and then apply UMAP clustering on the PCA data, before sending the embedded cluster data to a Random Forest classifier?
Summary of process:
X_train --> x_PCA --> UMAP -->Random Forest
Is this a valid procedure to generate a predictive model??
AI: They are 3 different algorithms: they work better in parallel rather than in series, because they have different purposes.
In addition, each of their outputs carries some uncertainty (especially PCA's), and that uncertainty compounds when the output is reused as the input of another algorithm.
PCA is mainly used to understand the features better: their variance and their linear correlations.
UMAP is non-linear and produces meaningful clusters for data exploration.
There is an interactive explorer here to see the difference between PCA and UMAP.
Random Forest is an actual prediction or classification algorithm, but whether it is the best choice depends on your data: in the case of time series, LSTMs or XGBoost could be better.
In conclusion, PCA and UMAP will give you a better understanding of your data, which allows you to do good data preprocessing for your prediction algorithm.
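As a rough sketch of using the three tools in parallel for their separate purposes, rather than chaining them (this assumes scikit-learn, umap-learn and generic X, y arrays, and is only an illustration):
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
import umap

# 1) PCA to understand variance and linear structure
pca = PCA().fit(X)
print(pca.explained_variance_ratio_[:5])

# 2) UMAP for non-linear exploration / visualization
embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(X)

# 3) Random Forest trained on the original features for prediction
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(scores.mean())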
H: How can I implement lambda-mart with lightgbm?
I have a learning to rank task at hand and I want to use the lightgbm implementation of LambdaMART. I'm also following this notebook.
param = {
"task": "train",
"num_leaves": 255,
"min_data_in_leaf": 1,
"min_sum_hessian_in_leaf": 100,
"objective": "lambdarank",
"metric": "ndcg",
"ndcg_eval_at": [1, 3, 5, 10],
"learning_rate": .1,
"num_threads": 2}
res = {}
bst = lgb.train(
param, train_data,
valid_sets=[valid_data], valid_names=["valid"],
num_boost_round=50, evals_result=res, verbose_eval=10)
In the params, the objective is set to lambdarank, which is another learning-to-rank algorithm. My question is: how do I implement LambdaMART with lightgbm? What set of parameters should I use to implement LambdaMART with lightgbm?
AI: Looks like the implementation of lambdaMART in the notebook referenced in the question is correct.
From the paper titled, From RankNet to LambdaRank to LambdaMART: An Overview, it is clearly mentioned in the first line of the paper that :
LambdaMART is the boosted tree version of LambdaRank, which is based on
RankNet.
So, the code pasted above sets the objective function to LambdaRank. There is one more argument, boosting_type, which is set to gbdt by default. LambdaRank + gbdt (gradient-boosted decision trees) is, in essence, LambdaMART.
So, just pasting the above code for completeness' sake:
param = { "task": "train",
"num_leaves": 255,
"min_data_in_leaf": 1,
"min_sum_hessian_in_leaf": 100,
"objective": "lambdarank",
"boosting_type": "gbdt",
"metric": "ndcg",
"ndcg_eval_at": [1, 3, 5, 10],
"learning_rate": .1,
"num_threads": 2 }
res = {}
bst = lgb.train(
param, train_data,
valid_sets=[valid_data], valid_names=["valid"],
num_boost_round=50, evals_result=res, verbose_eval=10)
This is how we can use lightgbm to train lambdaMART.
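One practical detail worth adding (an assumption about the asker's setup, since the Dataset construction is not shown above): with the lambdarank objective, LightGBM needs to know which rows belong to which query, so train_data and valid_data must carry per-query group sizes, for example:
import lightgbm as lgb

# X_train, y_train, train_group_sizes, etc. are placeholders for your own feature
# matrix, relevance labels, and the number of documents per query (in row order)
train_data = lgb.Dataset(X_train, label=y_train, group=train_group_sizes)
valid_data = lgb.Dataset(X_valid, label=y_valid, group=valid_group_sizes, reference=train_data)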
H: In LSTM why h_t output twice?
According to the LSTM design:
The hidden state (ht) is output twice (1 and 2 in the picture).
If they are the same, why we need them twice ?
Is there a different use for each one of them ?
According to nn.LSTM, there are 3 outputs (output, h_n, c_n).
I didn't understand what the difference is between output and h_n. (Don't they need to be the same?)
AI: ht was initially defined as a differential function, which value is the same in output and in the next LSTM cell.
LSTM uses the previous steps in a sequential way and chooses whether to memorize or forget according to h(t-1) and C(t-1) and the inner weights, to set h(t) and C(t). h(t) is the cell's output, and it is sent to the next cell in order to keep a sequential logic.
It is quite complex to explain in a few words but let's say that the forget and memorize weights are set during the training process thanks to an auto-regulated system (named "the constant error carrousel") that takes into account several scenarios and at the same time avoids the neurons to diverge during training.
See main publication: https://www.researchgate.net/publication/13853244_Long_Short-term_Memory
Note: Google spent 10 years understanding LSTM's publication. It's very complex but very interesting.
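Regarding the second part of the question (output vs. h_n in torch.nn.LSTM): output stacks the last layer's hidden state at every time step, while h_n contains only the hidden state of the final time step (for each layer and direction). A minimal sketch to verify this for a single-layer, unidirectional LSTM:
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=4, hidden_size=8, num_layers=1, batch_first=True)
x = torch.randn(2, 5, 4)              # (batch, seq_len, features)

output, (h_n, c_n) = lstm(x)
print(output.shape)                   # (2, 5, 8): hidden state at every time step
print(h_n.shape)                      # (1, 2, 8): hidden state at the last time step only

# for this single-layer, unidirectional case the two coincide at the last step
assert torch.allclose(output[:, -1, :], h_n[0])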
H: What does this statement relative to neural network weight initialization mean?
"The same value in all the parameters makes all the
neurons have the same effect on the input, which causes
the gradient with respect to all the weights is the same and, therefore,
the parameters always change in the same way."
Taken from my course.
AI: Consider the following image of a simple neural network. Note that the network uses a linear activation function and that there are no bias terms (this makes the intuition easier).
Each path from the input to the output is as follows
$$ f(x) = (x*w1)*w4 = (x*0.5)*0.5 $$
$$ f(x) = (x*w2)*w5 = (x*0.5)*0.5 $$
$$ f(x) = (x*w3)*w6 = (x*0.5)*0.5 $$
When you perform gradient descent, the change in the weights will always be the same as each path is identical.
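To see this numerically, here is a small sketch (my own illustration, not taken from the course) that initializes a tiny PyTorch network with a constant weight value and shows that all weights in a layer receive identical gradients:
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 3, bias=False), nn.Linear(3, 1, bias=False))
for layer in net:
    nn.init.constant_(layer.weight, 0.5)   # every parameter starts at the same value

x = torch.tensor([[2.0]])
loss = (net(x) - 1.0).pow(2).mean()
loss.backward()

# all three hidden units receive exactly the same gradient, so they stay identical
print(net[0].weight.grad)
print(net[1].weight.grad)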
H: What did Sentence-Bert return here?
I used sentence bert to embed sentences from this tutorial https://www.sbert.net/docs/pretrained_models.html
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer('all-mpnet-base-v2')
These are the event triples t that I forgot to concatenate into sentences:
[('U.S. stock index futures', 'points to', 'start'),
('U.S. stock index futures', 'points to', 'higher start')]
model.encode(t) returns a 2d array of shape (2, 768) with two identical 768-dimensional vectors, and its value is different from both model.encode('U.S. stock index futures') and model.encode('U.S. stock index futures points to start'). What could it possibly have returned?
It is the same situation for other models on huggingface such as https://huggingface.co/sentence-transformers/stsb-distilbert-base
AI: There are two valid inputs to MPNet's tokenizer:
Union[TextInputSequence, Tuple[InputSequence, InputSequence]]
When you give a list of tuples as input, only the first two elements of each tuple, i.e. "U.S. stock index futures" and "points to", are used for encoding, similar to a sentence pair in BERT's next sentence prediction pre-training task.
This is tokenized and converted to input_ids with special tokens as follows:
['<s>', 'u', '.', 's', '.', 'stock', 'index', 'futures', '</s>', '</s>', 'points', 'to', '</s>']
<s> indicates the start of a sentence and </s> indicates the end of a sentence. I'm not sure about the </s> before 'points', but this is the output from the model's tokenizer. Since such NLU models can work with one or two sentences together, these special tokens are added to indicate how many sentences are in the input and to help separate them.
Thus, you get two same vectors from
model.encode(t)
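To double-check the tokenization shown above yourself, you can call the underlying tokenizer directly (a sketch; it assumes the SentenceTransformer object exposes its Hugging Face tokenizer as model.tokenizer, which is the case in recent versions of the library):
# encode the two elements as a sentence pair and inspect the resulting tokens
encoded = model.tokenizer("U.S. stock index futures", "points to")
print(model.tokenizer.convert_ids_to_tokens(encoded["input_ids"]))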
If you want the same vector without using a tuple, pass the following input string to encode:
"U.S. stock index futures </s> </s> points to"
H: tensorflow beginner demo, is that possible to train a int-num counter?
I'm new to tensorflow and deep learning. I wish to get a general idea through a beginner's demo, i.e. training an (int-)number counter to indicate the most repeated number in a set (if the most repeated number is not unique, the smallest one is chosen).
e.g.
if seed=[0,1,1,1,2,7,5,3](int-num-set as input), then most = 1(the most repeated num here is 1, which repeated 3 times);
if seed = [3,3,6,5,2,2,4,1], then most = 2 (both 2 and 3 repeated most/twice, then the smaller 2 is the result)
Here I didn't use the widely used demos like an image classifier or the MNIST data set, for a more customized perspective and an easier way to get a data set. So if this is not an appropriate problem for deep learning, please let me know.
The following is my code, and apparently the result is not as expected; may I have some advice?
For example:
Is this kind of problem suitable for deep learning to solve?
Is the network structure appropriate for this problem?
Is the input/output data (or data type) right for the network?
import random
import numpy as np
para_col = 16 # each (num-)set contains 16 int-num
para_row = 500 # the data-set contains 500 num-sets for trainning
para_epo = 100 # train 100 epochs
# initial the size of data-set for training
x_train = np.zeros([para_row, para_col], dtype = int)
y_train = np.zeros([para_row, 1], dtype = int)
# generate the data-set by random
for row in range(para_row):
seed = []
for col in range(para_col):
seed.append(random.randint(0,9))
most = max(set(seed), key = seed.count) # most repeated num in seed(set of 16 int-nums between 0~9)
# fill in data for trainning-set
x_train[row] = np.array(seed,dtype = int)
y_train[row] = most
# print(str(most) + " @ " + str(seed))
# define and training the network
import tensorflow as tf
# a simple network according to some tutorials
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(para_col, 1)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# train the network
model.fit(x_train, y_train, epochs = para_epo)
# test the network
seed_test = [5,1,2,3,4,5,6,7,8,5,5,1,2,3,4,5]
# seed_test = [1,1,1,3,4,5,6,7,8,9,0,1,2,3,4,5]
# seed_test = [9,0,1,9,4,5,6,7,8,9,0,1,2,3,4,5]
x_test = np.zeros([1,para_col],dtype = int)
x_test[0] = np.array(seed_test, dtype = int)
most_test = model.predict_on_batch(x_test)
print(seed_test)
for o in range(10):
print(str(o) + ": " + str(most_test[0][o]*100))
The training result looks like it converged, according to:
...
Epoch 97/100
16/16 [==============================] - 0s 982us/step - loss: 0.1100 - accuracy: 0.9900
Epoch 98/100
16/16 [==============================] - 0s 1ms/step - loss: 0.1139 - accuracy: 0.9900
Epoch 99/100
16/16 [==============================] - 0s 967us/step - loss: 0.1017 - accuracy: 0.9860
Epoch 100/100
16/16 [==============================] - 0s 862us/step - loss: 0.1082 - accuracy: 0.9840
But the printed output looks unreasonable and random; the following is a result from one of the training runs:
[5, 1, 2, 3, 4, 5, 6, 7, 8, 5, 5, 1, 2, 3, 4, 5]
0: 0.004467500184546225
1: 0.2172523643821478
2: 2.9886092990636826
3: 1.031165011227131
4: 69.71694827079773
5: 12.506482005119324
6: 1.0543939657509327
7: 0.2930430928245187
8: 8.086799830198288
9: 4.100832715630531
Actually 5 is the right answer (repeated five times, the most), but isn't the printed output indicating that 4 is the answer (at a probability of 69.7%)?
AI: This type of problem is not really suited to deep learning. Each node in the neural network expects numeric input, applies a linear transformation to it, followed by a non-linear transformation (the activation function), so your inputs need to be numeric. While your inputs are numbers, they are not being used numerically, as they could just as well be replaced by letters or symbols. Also, your network looks like it is overfitting: it is very large for the number of inputs and so is probably just memorising the training data, which is why you appear to get good results on your training data.
Tensorflow has a tensorflow-datasets package (installed separately from the main TF package) which provides easy access to a range of datasets (see https://www.tensorflow.org/datasets for details). Maybe look here to find a suitable dataset to use.
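If you still want to experiment with the model from the question, one way to make the overfitting visible (a sketch reusing the model and data defined above; only the fit call changes) is to hold out part of the training data for validation and compare the two accuracies:
history = model.fit(x_train, y_train, epochs=para_epo, validation_split=0.2)

# a large gap between these two numbers indicates memorisation rather than generalisation
print(history.history['accuracy'][-1])
print(history.history['val_accuracy'][-1])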