H: Best practice to test (unit and integration) a neural network implementation
I'm implementing my own neural network (by implementing I mean writing the code that runs the neural network itself, not training it).
I implemented it for didactic purposes, but I do not know how to proceed with validating what I have done.
So I would like to ask if there is some best practice for testing neural networks (e.g. is there a dataset of neural nets with weights, inputs and expected outputs)?
AI: Make your code testable. You should be able to write plenty of tests for individual components without having to mock up an entire training pipeline for each one. A "property-based testing" library like Hypothesis can be helpful for this.
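For example, here is a minimal sketch of a property-based test with Hypothesis — the softmax below is a stand-in for whichever component you implemented yourself:
import numpy as np
from hypothesis import given
from hypothesis import strategies as st

def softmax(x):
    # stand-in for your own implementation
    e = np.exp(x - np.max(x))
    return e / e.sum()

@given(st.lists(st.floats(-10, 10), min_size=1, max_size=50))
def test_softmax_is_a_probability_distribution(values):
    out = softmax(np.array(values))
    assert np.all(out >= 0)
    assert np.isclose(out.sum(), 1.0)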
Train a standard model on a standard dataset and make sure the accuracy numbers are similar.
See here for an in-depth discussion, with more and better advice than I've been able to provide. |
H: Relation between amount of training samples and model depth?
When I add more hidden layers to my CNN (e.g. Dense Layers) it seems that the model needs more training samples to produce good results for classes with few training samples. In the single layer case the model provided better results, even for classes with few samples. (multi-class text classification with >10 classes)
Is there any evidence that my assumption is correct?
AI: Yes, this is common knowledge. Every time you add a parameter to a model, you will need to give it more data in order for it to learn as well as the simpler model. Every individual weight in a neural network is a parameter. The more weights, the more parameters, and the more data is needed.
One of the fundamental tasks in model-building is finding a good tradeoff between having enough parameters to learn fine details and having enough data to train all those parameters. Too many parameters lead to overfitting in part because if you don't have enough data you end up memorizing individual cases, as opposed to learning about and averaging over a spread across several cases.
This is one reason why class imbalance is a problem. If you don't have enough data on one of the classes, then that class will be poorly understood by the model. |
H: In supervised learning, how to get info from correlation?
I am trying to build a classification model so I have tried to check the correlation between the features.
Here Loan_Status is my target variable.
I just don't know how to extract information from this. Please help.
I have questions like:
Is the -0.0047 correlation of ApplicantIncome with Loan_Status useful?
AI: The matrix shows the correlation between each pair of features. For example, the number one in the image you gave appears all along the diagonal of the matrix; those ones represent the 100% linear correlation of each feature with itself. This image might help:
As you can see negative correlations mean that as one feature value increases, the other feature value decreases. A correlation of 0 means that the features appear to have no linear correlation.
If you want to merely concentrate on the correlations between the label and the features, then here is some python code (the language I assume you are using) to help you:
# Your data should be a pandas core dataframe
yourdata = ...
# To find correlations, use the corr() function
corr_matrix = yourdata.corr()
corr_matrix["your label"].sort_values(ascending=False)
# This should print out a correlation list.
# If it doesn't then wrap the last line of code in print( )
# You are going to notice that some features will be missing from the list.
# That is because the corr() function only includes numeric features; non-numeric (e.g. categorical) columns are dropped.
# If you still see every feature, then all of your features are numeric. |
H: How filters are made in a CNN?
I am new to Data Science and CNN.
My understanding of CNN is that:
An image's pixel data is convoluted over with filters which extract features like edges and their position.
This creates filter maps.
Then we apply max pooling which will down sample the data.
Then we feed this data to a neural network which learns to classify.
Now I know I can do things with convolution like blurring or getting edges from an image.
But let's say I am trying to make a "Hotdog or not" app, how do I make my filters? What will the matrix be?
AI: First of all, feature maps are the output of the convolution after an activation function (e.g. ReLU or sigmoid) is applied, not the matrix that the image is convolved with. Usually, this is called a filter.
The magical thing about CNNs is that we don't know what the filters should look like for any given problem. The CNN works out what each filter should look like automatically. This is done through the backpropagation procedure. Without getting heavy into the math of it all, essentially every time a training example (or a batch of examples) goes through the network, the values inside each filter get updated by some small amount. This small amount is determined using the derivatives of a loss function. As each step of the training procedure is completed, the values inside each filter (if all goes well!) slowly converge towards a value that minimises the loss function, thereby producing the best quality predictions.
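As a small, hedged illustration in Keras (the layer sizes here are arbitrary): the filter values are just trainable weights, initialised randomly and then updated by backpropagation during fit().
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(8, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(2, activation='softmax'),  # e.g. "hotdog" vs "not hotdog"
])
model.compile(optimizer='adam', loss='categorical_crossentropy')

filters = model.layers[0].get_weights()[0]  # shape (3, 3, 3, 8): 8 random 3x3x3 filters
# calling model.fit(...) on labelled images is what changes these values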
For more information on the math, I encourage you to read the chapter on backprop in the free online book Neural Networks and Deep Learning, available here.
A simpler explanation is also provided here. |
H: Recurrent neural network (LSTM) dimensions error
I have data in a dataframe named ddf as follows:
labels X
L1 [1,2,3,7,8,9...]
L1 [4,2,6,9,8,7...]
...
L2 [5,6,8,9,6,3...]
L2 [7,8,5,6,9,0...]
...
There are 250 rows, 7 labels and 2000 elements in every list under X. These 2000 elements are values of a signal over a period of about 60 seconds.
I am trying to build a recurrent neural network for above data. Following is my code:
Xall = ddf['X'].values
Xall = np.array(Xall)
ydf = pd.get_dummies(ddf.drop('X', axis=1))
Yall = np.array(ydf.values)
# Split the data
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(Xall, Yall, test_size=0.1, random_state=0)
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
model_lstm = Sequential()
model_lstm.add(Embedding(2000, 128))
model_lstm.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
model_lstm.add(LSTM(200, dropout=0.2, recurrent_dropout=0.2))
model_lstm.add(Dense(Yall.shape[1], activation='softmax'))
model_lstm.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model_lstm.fit(X_train, Y_train, epochs=50, verbose=True, validation_data=(X_test, Y_test))
However, I am getting error at second LSTM layer:
ValueError: Input 0 is incompatible with layer lstm_2: expected ndim=3, found ndim=2
I think this has something to do with LSTM arguments. Also are arguments of Embedding layer OK? How are both these adjusted? Where is the error coming from and how can it be solved? Thanks for your help.
AI: EDIT after comments
Also are arguments of Embedding layer OK?
Yes. But you need to pass return_sequences=True in the first LSTM layer so that it passes full sequences to the next LSTM layer.
From the docs
return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence.
How are both these adjusted?
From the docs.
input_dim:
Size of the vocabulary, i.e. maximum integer index + 1. The largest integer in the input data should be no larger than the vocabulary size.
output_dim: Dimension of the dense embedding. int >= 0
input_length: Length of input sequences, when it is constant. This argument is required if you are going to connect Flatten then Dense layers upstream (without it, the shape of the dense outputs cannot be computed).
I am posting code below for dummy data. The vocabulary size has been taken as 100.
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

input_array = np.random.randint(100, size=(250, 2000))
input_y = np.random.randint(7, size=(250))
Y_dumy = pd.get_dummies(input_y)
X_train, X_test, Y_train, Y_test = train_test_split(input_array, Y_dumy,
                                                    test_size=0.1, random_state=0)

model = Sequential()
model.add(Embedding(input_dim=100, output_dim=64, input_length=2000))
model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2, return_sequences=True))
model.output_shape
# Output shape at this point:
# model.output_shape = (None, 2000, 100)
# a 3D tensor with shape (batch_size, sequence_length, units)
model.add(LSTM(200, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(Y_dumy.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, Y_train, epochs=50, verbose=True, validation_data=(X_test, Y_test))
Where is the error coming from and how can it be solved?
I believe the error is coming because of absence of input_length. For similar errors please have a look at this post.
After the comments
The error is coming because return_sequences is False (the default) in the first LSTM layer. |
H: Clustering Customer Data
I don't know if this kind of question is allowed, but I have hit a wall. I know about some clustering algorithms and have already implemented Fuzzy C-Means and a Gaussian Mixture Model, but I don't really know what the efficient way to cluster customer data is, and there are no labels at all.
Since it's company data I can't share the details, but if it helps, here are the columns of the data:
First of all, I group the data by customer using pandas so there are no duplicates. Then I just soft-cluster into 15 clusters. Why 15? I used the number of product categories as the number of clusters (since this is what my supervisor asked for, unless I can propose a better method).
Is this the right way to do it, or are there papers that explain better methods?
What I make will be used for real marketing on an e-commerce site, so I'm worried my method could hurt the company.
AI: The answer could be anything according to your data! As you cannot post your data here, I propose spending some time on EDA to visualize your data from various points of view and see what it looks like. My suggestions:
Use only price and quantity for a 2-d scatter plot of your customers. In this task you may need feature scaling if the scale of prices and quantities are much different.
In the plot above, you may use different markers and/or colors to mark category or customer (as one customer can have several entries)
Convert "date" feature to 3 features, namely, year, month and day. (Using Python modules you may also get the weekday which might be meaningful). Then apply dimensionality reduction methods and visualize your data to get some insight about it.
Convert date to an ordinal feature (earliest date becomes 0 or 1 and it increases by 1 for each day) and plot total sale for each customer as a time-series and see it. You may do the same for categories. These can also be plotted as cumulative time-series. This can also be done according to year and month.
All above are just supposed to give you insight about the data (sometimes this insight can give you a proper hint for the number of clusters). This insight sometimes determines the analysis approach as well.
If your time series become very sparse then time-series analysis might not be the best option (you can make them denser by using a coarser time step, e.g. weekly, monthly, yearly, etc.)
The idea in your comment is pretty nice. You can use this cumulative features and apply dimensionality reduction methods to (again) see the nature of your data. Do not limit to linear ones. Try nonlinear ones as well.
You may create a graph out of your data and try graph analysis as well. Each customer is a node, and so is each product; each edge represents a purchase (directed from customer to product) and the weight of that edge is the price and/or quantity. This gives you a bipartite graph. Try some analysis on this graph and see if it helps.
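As a rough sketch of the date-related suggestions above (the column names are assumptions, since the real data cannot be shown):
import pandas as pd

df = pd.DataFrame({
    'customer': ['c1', 'c1', 'c2'],
    'date': ['2018-01-05', '2018-02-11', '2018-01-20'],
    'price': [10.0, 25.5, 7.2],
})

df['date'] = pd.to_datetime(df['date'])
df['year'] = df['date'].dt.year
df['month'] = df['date'].dt.month
df['day'] = df['date'].dt.day
df['weekday'] = df['date'].dt.weekday  # 0 = Monday
df['day_index'] = (df['date'] - df['date'].min()).dt.days  # ordinal date: earliest day is 0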
Hope it helps and good luck! |
H: Missing Values in Classification
I'm working on a classification problem. I'm trying to build a model which can predict if a bank client will get a loan or not. Some clients have a co-borrower and the majority don't.
I also have information on co-borrowers like salary, etc., but as the majority of clients don't have a co-borrower, I have missing values.
How can I impute this missing data ?
thank you
AI: As @shadowtalker indicated, a binary feature indicating the existence of a co-borrower should be helpful anyway. "Imputing missing values" in the sense of guessing the missing values is meaningless here, as it would produce values for something that does not exist. For salary and similar numeric data, simply put 0 (or whatever the neutral value is), and for nominal variables add a new category equivalent to None. In the end, all clients without a co-borrower must have exactly the same values for these features.
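A minimal sketch of this idea in pandas — the column names here are assumptions:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'salary': [50000, 42000, 61000],
    'cob_salary': [np.nan, 30000, np.nan],   # co-borrower salary
    'cob_job': [np.nan, 'teacher', np.nan],  # co-borrower job (nominal)
})

df['has_coborrower'] = df['cob_salary'].notna().astype(int)  # binary indicator
df['cob_salary'] = df['cob_salary'].fillna(0)                # neutral numeric value
df['cob_job'] = df['cob_job'].fillna('None')                 # explicit "no co-borrower" category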
Also, one practical idea could be dividing your data into single-borrower and two-borrower cases at the very beginning of classification, so the whole model becomes like this:
The above model is somewhat equivalent to the feature-selection property of some classifiers such as Decision Trees. In a Bayesian Network, such a feature could be the root node. |
H: What is the immediate reward in value iteration?
Suppose you're given an MDP where rewards are attributed for reaching a state, independently of the action. Then when doing value iteration:
$$ V_{i+1}(s) = \max_a \sum_{s'} P_a(s,s') \left(R_a(s,s') + \gamma V_i(s')\right)$$
what is $R_a(s,s')$ ?
The problem I'm having is that terminal states have, by default, $V(s_T) = R(s_T)$ (some terminal reward). Then when I'm trying to implement value iteration, if I set $R_a(s,s')$ to be $R(s')$ (which is what I thought), I get that states neighboring a terminal state have a higher value than the terminal state itself, since
$$ P_a(s,s_T) ( R_a(s,s_T) + \gamma V_i(s_T) ) $$
can easily be greater than $V_i(s_T)$, which in practice makes no sense. So the only conclusion I seem to be able to get is that in my case, $R_a(s,s') = R(s)$.. is this correct?
AI: what is $R_a(s,s')$ ?
In this case, it appears to represent the expected immediate reward received when taking action $a$ and transitioning from state $s$ to state $s'$. It is written this way so it could be implemented as a series of square matrices, one for each action. Those matrices might of course be very sparse if only a few transitions are possible, but it's a nice generic form for describing MDPs.
Notation varies between different RL tutorials, so do take care when looking at other sources.
The problem I'm having is that terminal states have, by default, $V(s_T) = R(s_T)$ (some terminal reward).
No, $V(s_T) = 0$ by definition. The value of a state, is the expected discounted sum of future rewards. A terminal state has no future rewards, thus its value is always $0$.
The "terminal reward" in your system occurs on the transition $s \rightarrow s_T$, so its expected value should be represented by $R_a(s,s_T)$
If the terminal state is some goal state, or a bad exit from an episode, and it doesn't matter how you arrive there, then in the formulation you have given, you still represent it as $R_a(s,s_T)$, just that the values of $a$ and $s$ don't matter for your case. In general they might matter, for other MDPs.
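A minimal numerical sketch of this convention, using a made-up deterministic chain $s_0 \rightarrow s_1 \rightarrow s_T$ with a single action, where the reward of +10 is attached to the transition into the terminal state:
import numpy as np

gamma = 0.9
P = np.array([[0., 1., 0.],   # s0 -> s1
              [0., 0., 1.],   # s1 -> s_T
              [0., 0., 0.]])  # s_T is terminal: no outgoing transitions
R = np.array([[0., 0., 0.],
              [0., 0., 10.],  # R(s1, s_T) = 10
              [0., 0., 0.]])

V = np.zeros(3)               # V(s_T) stays 0 by definition
for _ in range(50):
    for s in (0, 1):          # the terminal state is never updated
        V[s] = np.sum(P[s] * (R[s] + gamma * V))
print(V)  # approximately [9., 10., 0.]: the neighbour of s_T is worth 10, s_T itself 0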
So the only conclusion I seem to be able to get is that in my case, $R_a(s,s') = R(s)$.. is this correct?
Not in general. However, in some problems this might be a reasonable simplification. If you make that simplification, then you should take care when having a value associated with a single state to be consistent about whether the reward is for entering a particular state, or exiting it. There cannot be a reward for "being in a state", the closest you can get to that is granting a consistent reward when entering a state (ignoring the previous state and action that caused the transition).
It appears that you have suggested here that you want a reward for exiting a particular state, ignoring which action is taken in it or what the next state is. I don't know the details of your MDP, so cannot say whether this would work for you. Maybe not, given the rest of your question.
If you are asking in general, then you have the answer $R_a(s,s') \ne R(s)$
If you are asking in order to work on a specific MDP, I suggest take another look at your original problem, with the extra information in this answer. |
H: Question related to Numpy
With Numpy, what’s the best way to compute the inner product of a vector of size 10 with each row in a matrix of size (5, 10)?
AI: Here are a few ways, using some dummy data:
In [1]: import numpy as np
In [2]: a = np.random.randint(0, 10, (10,))
In [3]: b = np.random.randint(0, 10, (5, 10))
In [4]: a
Out[4]: array([4, 1, 0, 6, 3, 3, 6, 6, 1, 8])
In [5]: b
Out[5]:
array([[9, 0, 6, 1, 1, 1, 4, 7, 4, 7],
[5, 8, 8, 3, 4, 8, 7, 3, 0, 4],
[2, 2, 5, 3, 9, 6, 1, 5, 8, 3],
[2, 0, 4, 3, 5, 3, 3, 4, 3, 3],
[3, 3, 6, 4, 7, 5, 8, 6, 7, 3]])
Because of the dimensions you asked for, in order to compute inner products (a.k.a. scalar products and dot products), we need to transpose the matrix b so that the dimensions work out.
With a vector of length 10, numpy gives it shape (10,). So it looks like 10 rows and no columns, although this is somewhat ambiguous; numpy will essentially do what it has to in order to make the dimensions work. We could force it into shape (10, 1) by using a.reshape((10, 1)), but it isn't necessary. The matrix has a defined second dimension, so it has shape (5, 10). In order to multiply these two shapes together, we need to make the dimensions match in the middle. This means making (10,) * (10, 5). Performing the transpose on the matrix reverses its dimensions to give us that (10, 5). Those inner 10s will then disappear and leave us with a result of shape (5,).
That all being said, we can use any of the following to get equivalent answers:
The standard dot-product:
In [7]: a.dot(b.T)
Out[7]: array([174, 174, 141, 119, 190])
The convenient numpy notation:
In [6]: a @ b.T
Out[6]: array([174, 174, 141, 119, 190])
The efficient "Einstein notation", a subset of Ricci calculus (I leave the interested reader to search online for more information):
In [8]: np.einsum('i,ij->j', a, b.T)
Out[8]: array([174, 174, 141, 119, 190])
Here, as in the comments from shadowstalker:
In [9]: np.array([np.dot(a, r) for r in b])
Out[9]: array([174, 174, 141, 119, 190])
If your matrices are of dimensions (100, 100) or smaller, then the @ method is probably the fastest and most elegant. However, once you start getting into matrices that make you wonder if your laptop will handle it (e.g. with shape (10000, 10000)) - then it is time to read the documentation and this blog about Einstein notation and the amazing einsum function within numpy! |
H: How to make two parallel convolutional neural networks in Keras?
I created two convolutional neural networks (CNN), and I want to make these networks work in parallel. Each network takes different type of images and they join in the last fully connected layer.
How to do this?
AI: You essentially need a multi-input model. This can only be done through keras' functional api and can work with the pretrained nets in keras.applications. To create one you can do this:
from keras.layers import Input, Conv2D, Dense, concatenate
from keras.models import Model
1) Define your first model:
in1 = Input(...) # input of the first model
x = Conv2D(...)(in1)
# rest of the model
out1 = Dense(...)(x) # output for the first model
2) Define your second model:
in2 = Input(...) # input of the second model
x = Conv2D(...)(in2)
# rest of the model
out2 = Dense(...)(x) # output for the second model
3) Merge the two models and conclude the network:
x = concatenate([out1, out2]) # merge the outputs of the two models
out = Dense(...)(x) # final layer of the network
4) Create the Model:
model = Model(inputs=[in1, in2], outputs=[out])
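5) To train such a multi-input model, you pass one array per input. A minimal hedged sketch — X_images_1, X_images_2 and y are placeholder names for your two image sets and the shared labels:
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit([X_images_1, X_images_2], y, epochs=10, batch_size=32) |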
H: saving the images to a folder with custom filenames
I'm new to Python and need assistance. I am performing segmentation with my own medical data set. The test images in my folder are named like "1.0.2.34.56_1.png". I would like to remove the actual image extension and append the image names with "_mask.png". The current code throws an error.
def labelVisualize(num_class,color_dict,img):
img = img[:,:,0] if len(img.shape) == 3 else img
img_out = np.zeros(img.shape + (3,))
for i in range(num_class):
img_out[img == i,:] = color_dict[i]
return img_out / 255
def saveResult(save_path,npyfile,flag_multi_class = False,num_class = 2):
for i,item in enumerate(npyfile):
img = labelVisualize(num_class,COLOR_DICT,item) if flag_multi_class else item[:,:,0]
io.imsave(os.path.join(save_path,"%d_mask.png"%i),img)
saveResult("data/membrane/test/",results)
AI: Most of the code you provided doesn't do much to help your actual problem of renaming the files. I will just focus on that part by showing you how to get the filenames you want.
You can rename the files, as you describe them, like this - assuming you have a list of filenames that match your description.
filenames = ['1.0.2.34.57_1.png', '1.0.2.34.58_1.png', '1.0.2.34.59_1.png']
for file in filenames:
# Do what you want with the file...
# Image processing and modelling, etc.
# Replace the extension with your custom ending
final_filename = file.replace('.png', '_mask.png')
Another way to do it, breaking down the removal and addition into two steps:
for file in filenames:
    file_without_ext = file.replace('.png', '')
    # Add your custom ending
    final_filename = file_without_ext + '_mask.png'
    # Another variant - joining the ending to file_without_ext using "_"
    final_filename = '_'.join([file_without_ext, 'mask.png'])
The three variants of final_filename will be identical.
Now you can save the output of your image processing using the final_filename. |
H: Machine learning algorithm which gives multiple outputs from single input
I need some help. I am working on a problem where I have the OCR output of an invoice image and I want to extract certain data from it, like invoice number, amount, date etc., which is all present within the OCR text. I tried a classification model where I individually passed each sentence from the OCR to the model to predict whether it is the invoice number, the date or anything else, but this approach takes a lot of time and I don't think it is the right approach.
So I was wondering whether there is an algorithm where I can have an input string and have outputs mapped from that string, like the invoice number, date and amount that are present within the string.
E.g:
Inp string: the invoice #1234 is due on 12 oct 2018 with amount of 287
Output: Invoice Number: #1234, Date: 12 oct 2018, Amount 287
So my question is: is there an algorithm which I can train on several invoices and then use to make predictions?
AI: The Keras functional API is one way you can solve your problem. Using the Keras functional API, we can build models that look more like graphs, such as this:
In order to build a model like this, you can use keras as follows:
from keras.models import Model
from keras import layers
from keras import Input
input_layer = Input(shape=(100,), dtype='float32', name="Input")
split_layer = layers.Dense(32, activation='relu', name='split_layer')(input_layer)
first_layer = layers.Dense(32, activation='relu', name='first_layer')(split_layer)
second_layer = layers.Dense(32, activation='relu', name='second_layer')(split_layer)
model = Model(input_layer,[first_layer, second_layer])
model.summary()
In order to compile this model, we can define different loss functions for different layers
model.compile(optimizer='adam',
              loss={'first_layer': 'mse', 'second_layer': 'binary_crossentropy'},
              metrics=['accuracy'])
Once you are done building the network, you can simply fit your data as follows:
model.fit(X,
{'first_layer': first_layer_Y,
'second_layer': second_layer_targets},
epochs=10
) |
H: How to access substrings in pandas column and store it into new columns?
I'm working on a dataset for building permits. In the dataset there is a column that gives the location (latitude and longitude) for the building permit. The data in the location column look like this:
0 (37.785719256680785, -122.40852313194863)
1 (37.78733980600732, -122.41063199757738)
2 (37.7946573324287, -122.42232562979227)
3 (37.79595867909168, -122.41557405519474)
4 (37.78315261897309, -122.40950883997789)
Name: location, dtype: object
As you can see, the data is stored as strings. I wanted to store the latitude and longitude in two separate columns, so I wrote the following code to accomplish this:
df.location = df.location.str.replace('(','')
df.location = df.location.str.replace(')','')
for i in range(len(df)):
if df.location[i] == np.nan:
df['lattitude'] = np.nan
else:
df['lattitude'] = df.location[i][0:df['location'][i].index(',')]
for i in range(len(df)):
if df.location[i] == np.nan:
df['longitude'] = np.nan
else:
df['longitude'] = df.location[i][0:df['location'][i].index(',')]
There is some missing data in the column, 1700 entries to be exact. So in order to avoid a key error, I wrote the if-else statement to fill in the new columns with np.nan anytime the loop would iterate to a missing entry.
When I ran the code, I got the following error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-72-81826748e81c> in <module>()
7 df['lattitude'] = np.nan
8 else:
----> 9 df['lattitude'] = df.location[i][0:df['location'][i].index(',')]
10
11 for i in range(len(df)):
AttributeError: 'float' object has no attribute 'index'
Can anybody spot the error I'm making?
AI: This is perhaps more suited for StackOverflow. I would also use a better/more descriptive title for the question itself; that way others that are facing a similar problem are able to find it.
The reason you are seeing that error is because of the nan values, which are of type float. So while most of the rows in df['location'] contain strings, every row instance of an nan in the column is a float, and str.index() is not available for floats.
Your check of if df.location[i] == np.nan: is pointless, because np.nan == np.nan is always False due to the very definition of nan. Refer to this question on the topic. Because your check fails, the loop enters the else block and encounters a float object attempting to invoke a string method.
In my opinion you are using a very complicated approach to get what you want.
Replace your code with this. It should give you what you are looking for. Any nan values encountered will be handled by pandas (the .str methods simply propagate NaN).
df['location']=df['location'].str.replace(" ","").str.strip('(').str.strip(')')
df['latitude']=df['location'].str.split(',').str[0]
df['longitude']=df['location'].str.split(',').str[1]
I tested this using the following code segment:
df=pd.DataFrame()
df['location']=(
"(37.785719256680785, -122.40852313194863)",
"(37.78733980600732, -122.41063199757738)",
"(37.7946573324287, -122.42232562979227)",
"(37.79595867909168, -122.41557405519474)",
"(37.78315261897309, -122.40950883997789)",
np.nan,
"(37.78615261897309, -122.405550883997789)")
df['location']=df['location'].str.replace(" ", "").str.strip('(').str.strip(')')
df['latitude']=df['location'].str.split(',').str[0]
df['longitude']=df['location'].str.split(',').str[1]
print(df[['latitude','longitude']])
This produces the output:
latitude longitude
0 37.785719256680785 -122.40852313194863
1 37.78733980600732 -122.41063199757738
2 37.7946573324287 -122.42232562979227
3 37.79595867909168 -122.41557405519474
4 37.78315261897309 -122.40950883997789
5 NaN NaN
6 37.78615261897309 -122.405550883997789 |
H: Find ID for the name in the list from csv file in python
The CSV file contains:
ID,Name
1,AS
2,er
3,rtf
4,addfs
The list contains names, for example (er, rtf).
I want to find the IDs corresponding to the names mentioned above.
How can I find the IDs using Python code?
Thanks in advance
AI: This question should be posted on StackOverflow.
Store your CSV into a Pandas DataFrame and operate on that.
import pandas as pd
df=pd.read_csv('nameofyourcsv.csv')
names=('er','rtf')
for each in names:
print(df.loc[df['Name'] == each, 'ID'].iloc[0]) |
H: What does it mean if a high or low number of my components describes a given percentage of the cumulative explained variance?
In the following code, run after PCA, I can see that X components explain Y% of the cumulative explained variance (CEV).
I would like to know:
1- What percentage of the CEV is typically acceptable, e.g. 95% or 99%? (Or is it decided on a case-by-case basis?)
2- If 20 out of 200 components explain 95% of the CEV, what does this say about my data, and what about when 200 out of 200 explain 95%?
pca = PCA().fit(X_train)
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('Number of components')
plt.ylabel('cumulative explained variance');
AI: I always think that Principal component analysis is a very interesting tool. Many know its applications, few know exactly what is going on.
If you care about the math, you need to study eigenvalues and eigenvectors, but here let me explain PCA in a very simple way. Imagine that you have a point in a 2-d space (x, y); this same point can be represented by another two values, let us say (r, theta), where r is the distance from the origin and theta is the angle with respect to the positive x-axis.
Given any point (x,y), you can find the point (r, theta) corresponding to the point (x,y). You should be able to move between the two coordinates easily. Now let us ask the following question:
If I decide to ignore the theta dimension (for example, set it to zero), how many points in the (x,y) plane can you still recover? All the points along the positive x-axis can be recovered correctly. Now what about the r dimension: if we set it to zero, we can recover only one point, the origin.
Principal component analysis helps find this new coordinate system depending on the data, the number of dimensions and the variation across each dimension. The higher the variation across a dimension, the more important it is to preserve that dimension. It is very important to know that PCA proposes both the new coordinates and a measure of how important each dimension is for recovering the points in the original coordinates.
For example, let us take these points,
(1,1)
(2,2)
(3,3)
Both x and y dimensions are important and they experience the same variation.
If we transform these data points into a new coordinate system (k, m) such that
k = x and m = 0,
then k is very important (100%) and m is not important at all (0%).
You can recover the original points by using x = k and y = k. Note that the variation across the m dimension is zero and it is high across the k dimension.
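A quick way to see this with scikit-learn (a minimal sketch using the three points above):
import numpy as np
from sklearn.decomposition import PCA

X = np.array([[1, 1], [2, 2], [3, 3]])
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)  # [1., 0.]: one direction carries all of the variance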
How much variance you want to preserve depends mainly on the application; usually > 95% is good.
For the second question, it is possible to recover less than 100% when you use fewer dimensions, but when you use 200 of 200 you should be able to recover 100%. |
H: Autoregression with multiple factors
I am not sure if this is the right place to ask this question. Anyway, I am working on forecasting using spending data. Using autoregression, I am able to predict the next number decently well, but I would like to improve this. I have two other factors that don't predict nearly as well as the lagged factor but could still be useful in the model.
My question is this: would it be valid to include those factors in the model even though I am using a lagged version of the predictor? Does an autoregression imply that the only factor to be used is the lagged version of the predicted value?
I can't show the data as it is sensitive and my question is more about how autoregression and forecasting work in general.
Thanks for any help.
AI: The idea of adding external variables is not new. ARIMAX model (https://robjhyndman.com/hyndsight/arimax/) was specifically created to include them.
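For example, a hedged sketch of fitting an autoregressive model with external regressors using statsmodels — the series and column names here are made up:
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

df = pd.DataFrame({
    'spend': [10, 12, 13, 15, 14, 16, 18, 19, 21, 22],
    'promo': [0, 1, 0, 0, 1, 0, 1, 0, 1, 1],
    'price': [5.0, 4.8, 5.1, 5.0, 4.7, 5.2, 4.9, 5.0, 4.6, 4.8],
})

model = SARIMAX(df['spend'], exog=df[['promo', 'price']], order=(1, 0, 0))
result = model.fit(disp=False)
print(result.params)  # AR(1) coefficient plus one coefficient per external variable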
If you are asking whether it is OK to include the external variables of some particular period, then the answer is that you need to understand when they become available for prediction. If you predict a company's profit at the end of a year based on today's stock price, then it is OK: the profit and loss statement appears much later than the information available in the stock price, and the stock price should contain some information on expected profits according to financial theory. If you want to predict tomorrow's stock price based on tomorrow's number of sun spots, then it is a bad idea, because the number of sun spots tomorrow is not available today.
And generally you can use any combination of lags for different variables. Taking today's stock price and last year's profit to predict this year's profit is totally valid and even has financial theory supporting the relationship. |
H: How to transform rows to columns and columns to rows in python pandas?
I have a large dataset
I want to transform this dataset into this format
I have tried it through transpose but I couldn't figure it out.
AI: Use pandas melt function.
## init dataframe
import pandas as pd
df = pd.DataFrame({'item': ['a', 'a', 'a', 'b', 'b', 'b'],
                   'class_a': [1, 1, 2, 3, 3, 1],
                   'class_b': [2, 1, 2, 3, 3, 1],
                   'class_c': [1, 2, 2, 3, 1, 3]})
## shape it into the desired format
pd.melt(df, id_vars='item', value_vars=['class_a', 'class_b', 'class_c']) |
H: Prediction on timeseries data using tensorflow
I have an input and output of below format:
(X) = [[ 0 1 2]
[ 1 2 3]]
y = [ 3 4 ]
It's timeseries data. The task is to predict the next number. Basically, the input was crafted by the below snippet:
def split_sequence(arr,timesteps):
arr_len = len(arr)
X,y = [],[]
for i in range(arr_len):
end_idx = i + timesteps
if end_idx > arr_len-1:
break
input_component,output_component = arr[i:end_idx],arr[end_idx]
X.append(input_component)
y.append(output_component)
return np.array(X), np.array(y)
Now, I would like to train the model on the input and predict the next number. For instance, x = [81,82,83] and the predicted output would be y = 84. I learned how to do this in keras. However, I would like to try to do this in tensorflow as well.
Below is the code in tensorflow:
# Data generator
def generate_batch(X,y,batch_size):
m = X.shape[0]
indexes = range(m)
n_batches = m // batch_size
for batch_index in np.array_split(indexes,n_batches):
yield X[batch_index],y[batch_index]
# parameters
n_inputs = 3
n_epochs = 1000
batch_size = 40
learning_rate = 0.01
n_steps = 3
# generate the input and output using split_sequence method
input, output = split_sequence(range(1000),n_steps)
# Define the input variables
X = tf.placeholder(tf.int32,shape=(None,n_inputs),name='X')
y = tf.placeholder(tf.float32,shape=(None),name='y')
theta = tf.Variable(tf.random_uniform([n_steps,1],-1.0,1.0),name='theta')
# predictions and error
y_predictions = tf.matmul(X,theta,name='predictions')
error = y_predictions - y
mse = tf.reduce_mean(tf.square(error),name='mse')
# train the model
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
with tf.Session() as session:
session.run(init)
for epoch in range(n_epochs):
for X_batch,y_batch in generate_batch(input,output,batch_size):
if epoch % 10 == 0:
print('epoch',epoch,'MSE=',mse.eval())
session.run(training_op,feed_dict={X:X_batch,y:y_batch})
To be honest, I am completely stuck with the below error:
You must feed a value for placeholder tensor 'X' with dtype float and shape [?,3].
My input is an integer, so that was the reason behind defining:
X = tf.placeholder(tf.int32,shape=(None,n_inputs),name='X')
Can someone help me fix this? Also, if I wanted to add bias variable will I be able to achieve for the above input?
AI: The error is caused by this line:
print('epoch',epoch,'MSE=',mse.eval())
This happens because the tensor mse also depends on the placeholders X and y. One way to fix this would be to change the training loop to be:
for X_batch,y_batch in generate_batch(input,output,batch_size):
mse_val, _ = session.run([mse, training_op],feed_dict={X:X_batch,y:y_batch})
if epoch % 10 == 0:
print('epoch',epoch,'MSE=',mse_val)
Also you will need to switch X back to tf.float32, since tf.matmul cannot mix int and float operands. The data will automatically be cast once you feed it in.
To add a bias variable, you can do it similarly to how you define theta.
b = tf.Variable(0.0, dtype=tf.float32, name='b')
...
y_predictions += b |
H: Difference between bagging and boosting
Can anyone explain the basic difference between bagging and boosting, and which technique can be used in which scenario?
AI: Bagging: Also known as Bootstrap Aggregation, this is an ensemble method. First, we create random samples of the training data set (subsets of the training data set, drawn with replacement). Then, we build a classifier for each sample. Finally, the results of these multiple classifiers are combined using averaging or majority voting. Bagging helps to reduce the variance error.
Boosting provides sequential learning of the predictors. The first predictor is learned on the whole data set, while the following ones are learned on the training set based on the performance of the previous one. It starts by classifying the original data set and giving equal weight to each observation. If classes are predicted incorrectly by the first learner, then it gives higher weight to the misclassified observations. Being an iterative process, it continues to add learners until a limit is reached in the number of models or accuracy. Boosting has shown better predictive accuracy than bagging, but it also tends to over-fit the training data.
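As a quick, hedged illustration with scikit-learn defaults (decision trees as base learners) on synthetic data:
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

bagging = BaggingClassifier(n_estimators=50, random_state=0)
boosting = AdaBoostClassifier(n_estimators=50, random_state=0)

print(cross_val_score(bagging, X, y, cv=5).mean())
print(cross_val_score(boosting, X, y, cv=5).mean())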
Algorithms based on these techniques:
Bagging: Random Forest
Boosting: AdaBoost, Gradient Boosting, XGBoost, etc. |
H: Why does my LSTM perform better when randomizing training subset vs. standard batch training?
I am training a simple LSTM network using Keras to predict time series values. It is a simple 2-layer LSTM.
I get the best performance when I train on subsets of the training set that start at random points. Each subset has a training size of 100 samples and a validation size of 30 samples. At each sample the model has a batch size of 16, trains for 100 epochs with an early stop after 20 epochs of little improvement. I run this training 10 times.
How is this different from simply training my model as such:
model.compile(loss='mean_squared_error', optimizer='adam')
results = model.fit(X_train, y_train, epochs=100, batch_size=32, shuffle=False,
validation_split=0.3, verbose = 1, steps_per_epoch=10,
callbacks = [EarlyStopping(monitor='val_loss', min_delta=5e-5, patience=20, verbose=1)])
Doesn't Keras automatically select random subsets of the training data for training and validation through the mini batches? Is it simply because I am training the model so many more times?
Is this effectively a k-fold-esque training implementation?
However, using sk-learn's KFold or Repeated Kfold doesn't get results as good.
NB: I do not want to use shuffle as this is time series data and that would distort the model training.
AI: Couple of points:
1) When you 'run this training ten times' does that mean you completely re-initialize the model and train from scratch?
2) I'm not entirely sure that you're loading the data correctly from the code you've supplied. What are the shapes of X_train and y_train?
3) It may depend on your data and your selection process. Time-series data is typically temporally correlated -- the closing value of the Dow Jones on Thursday is more likely to be close to the values on Wednesday and Friday than on Monday or in two weeks. It may be that if you sample 100 data points in a row and they're temporally correlated, you're giving your model an easier problem, because those values are more like each other than values elsewhere in the series.
Good luck! |
H: Regularization in simple math explained
I read a lot of articles online about how regularization works, and most of them just show the equations with regularization terms but do not use example numbers to explain how the coefficient values change as lambda increases.
For example:
L1 Regularization
Theory states that when lambda is large, coefficients tend to 0.
X =
How do the coefficient (beta, I assume) values change when lambda changes?
Is it because X is a fixed value?
For example, does beta have to decrease when lambda increases to ensure that X stays the same?
AI: The function you have described is a loss function. It is the function which we want to minimize in order to train our model. The loss function is called ordinary least squares. We can also see that the model we are trying to fit is a linear function with model parameters $\beta$. This is an optimization problem, and regularization is often used in optimization problems to attain a solution which is less likely to be a result of overfitting.
No regularization
Without any regularization the solution to a linear regression is
$\beta=(X^TX)^{-1}X^TY$
L2 regularization
To understand regression it is much easier to first start with the more widely used L2 regularization, ridge regression. This is defined as
$\min_\beta (Y-X\beta)^T(Y-X\beta) + \lambda ||\beta||^2$
We can solve this to get a closed-form solution using a derivative
$0 = -2X^TY + 2X^TX\beta+2\lambda\beta$
$X^TY = (X^TX + \lambda I)\beta$
$\beta = (X^TX + \lambda I)^{-1}X^TY$
Let's analyze this result in contrast to the solution without regularization and see what it means. We can see that the only difference is the added regularization term $\lambda$ inside the inverse. As we should know, taking the inverse of a larger value gives a smaller result: for example, 1/5 (the inverse of 5) is smaller than 1/2 (the inverse of 2). So by adding the regularization term we actually shrink the associated weights $\beta$.
This is why regularizers are often called a penalty term. They further constrain the optimization function. By adding an additional cost associated to each weight, we can reduce the impact that each weight will have on our model. Thus reducing the potential for overfitting.
L1 regularization
L1 regularization is the penalty used for LASSO regression; its cost function is defined as
$\min_\beta(Y-X\beta)^T(Y-X\beta) + \lambda||\beta||$
From what we learned above we can already tell that this additional cost will cause the resulting weights to be penalized. Unfortunately, L1 regularization does not have a closed-form solution because it is not differentiable when a weight $\beta$ falls to 0. Thus it requires some more work to solve: LASSO refers to this L1-penalized regression, and it is typically solved with iterative algorithms such as coordinate descent or LARS.
First let's remember that our constraint here is $||\beta||\leq c$, where c is a real number which is inversely related to $\lambda$. The plan is to find an appropriate $\lambda$ such that the minimum of the function falls within or on the contour of the constraint. This constraint region has very sharp corners (vertices) which lie on each coordinate axis at a distance $c$ from the origin. You can imagine it like a diamond in 2D space, an octahedron in 3D space, etc.
In high-dimensional space these spikes have a very high likelihood of being hit by the function you wish to optimize, and this causes many of the features to have an associated weight of 0. For example, compare the regularization of this line in 2D space using both L2 and L1 regularization.
In this picture the distance $r$ is equivalent to the factor $c$ I used above which is inversely related to the regularization term $\lambda$. The function being optimized (the cost function) is the described using the ellipses which represent their levels (like an altitude map). The shape at the center of the axis is the regularizer which is used to constrain the weights associated to each feature.
We can first notice that the original function is minimized exactly at its minimum. However, choosing this solution has a high probability of being one which overfits the training data.
We see that L2 regularization did add a penalty to the weights, we ended up with a constrained weight set. However both weights are still represented in your final solution. The function being optimized touches the surface of the regularizer in the first quadrant.
However, in the L1 case we can see that $\theta_2$ is the only feature that is required for this regression problem and $\theta_1$ will have its weight set to 0, so we can ignore it. It just happens that the level sets of smooth functions, when they meet a constraint region with pronounced corners, tend to touch those corners (the vertices on the axes) with higher probability than the flat sides. I do not know the proof for this, but I have seen it in a course as supplemental material and it was complicated. This is why L1 regularization is often used for feature selection.
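A small, hedged sketch with scikit-learn: on data where only a few features are informative, Lasso (L1) drives many coefficients exactly to 0, while Ridge (L2) only shrinks them.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso

X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=5.0, random_state=0)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=1.0).fit(X, y)

print(np.round(ridge.coef_, 2))  # small but generally non-zero everywhere
print(np.round(lasso.coef_, 2))  # several coefficients are exactly 0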
Combining both together
What is often done is to first use L1 regularization to find out which features have lasso weights that go to 0; these are then removed from the original feature set. Then, with this new feature set, we can apply L2 regularization, which still constrains the remaining features and controls for overfitting. |
H: Complex-Valued input to CNN
I want to train a CNN. However, my input is images of size 100*100 with complex numbers. I have run the model, but it failed and the loss didn't decrease. Then I found out that because my inputs are complex, they do not train very well. Actually, I think the activation functions, the convolutions, the optimizer, backpropagation and the cost should be changed. I wonder how I can implement my CNN with complex-valued input. I would be thankful if anyone could help me.
AI: There has been some research into building very specialized deep learning models that use complex numbers and complex algebra, but these tools are not yet user-friendly. So unless you want to go down the rabbit hole, try to understand their implementations and then write your own, I would avoid this.
In the past, I worked with MRI data, which is a 3D complex-valued matrix. I took inspiration from the work done on color images, where 3 channels are used to represent red (R), green (G) and blue (B). I thus separated the real and imaginary parts and treated them as separate channels in the deep learning model, all as real numbers.
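In code, that channel split is just (a minimal numpy sketch for a 100x100 complex image):
import numpy as np

x_complex = np.random.randn(100, 100) + 1j * np.random.randn(100, 100)
x_real = np.stack([x_complex.real, x_complex.imag], axis=-1)
print(x_real.shape)  # (100, 100, 2): two real-valued channels, usable as ordinary CNN input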
1D example:
From a complex vector, we get 2 channels where the imaginary part can then be treated as a real number.
$[1+1j,\ 2+2j,\ 3+3j] \;\rightarrow\; [1, 2, 3],\ [1, 2, 3]$ |
H: How to detect influence on behavior
From a behavioral study, data was extracted. The study was about how people change their eating behavior following visual cues. There were two groups of people: one was shown visual cues and then it was recorded what they chose to eat, and the other group was shown random stimuli or nothing and then it was recorded what they chose to eat. This experiment was repeated with the same people on a large number of occasions, at different times of day.
I have a dataset that precisely records personal information about each individual as well as their eating behaviour each time the experiment was carried out.
The goal is to predict whether the visual cue will make someone eat the item presented in the cue or not. The problem is that it is unclear to me how to separate the influence of the treatment from the behavior that they might have shown anyway. I.e., suppose a person sees an image of a cake and then, when he is presented with a variety of different foods, eats cake. How can we know that it was actually the image that influenced him, and that he did not want to eat cake anyway, so the image actually changed nothing?
Thus I can't directly treat it as a binary classification and define a categorical feature where I assign "1" if someone ate what was in the picture and "0" if he didn't, since that way I may also capture those who wanted to eat what was in the cue before it was shown to them. How can I solve this problem?
AI: I think you can approach it as a logistic regression problem, where you will have an on/off (1/0) feature called "exposure_to_image". Your goal will then be to test whether that coefficient is statistically significantly different from 0. If it is, then the probability of eating the item was influenced by the exposure.
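A hedged sketch of that test with statsmodels — the column names and data here are made up; in practice you would also add the recorded personal covariates to the formula:
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    'ate_cued_food':     [1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1],
    'exposure_to_image': [1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1],
})

model = smf.logit('ate_cued_food ~ exposure_to_image', data=df).fit(disp=False)
print(model.summary())  # check the p-value on the exposure_to_image coefficient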
As for the assumption that they would have eaten it anyway:
That's the whole point of having the control group. So your dataset will have people for whom exposure_to_image=1, and people for whom exposure_to_image=0. All of them will have some baseline probability of wanting to eat the cake, but that coefficient tells you by how much that baseline is affected. |
H: pandas: How to impute the categorical column by the nearest neighbors?
I have a categorical column with values such as right ('r'), left ('l') and straight ('s'). I expect these to occur in contiguous periods in the data and want to impute NaNs with the most plausible value in the neighborhood. At the beginning of the input signal you can see NaNs embedded in an otherwise continuous 's' episode. My definition of what characterizes an episode is the occurrence of the corresponding symbol at least 5 times in a row. Also, it is in my interest to give more weight to 'r' and 'l' when they are tied with 's'.
iput = ['s','s','s','s','s','s',np.nan,'s',np.nan,'s','s','s','s','r',np.nan,np.nan,'r','r','r','r','s','s','s','s','s',np.nan,np.nan,'s','s','s','l','l','l','l','l',np.nan,'l','l','l']
oput = ['s','s','s','s','s','s','s','s','s','s','s','s','s','r','r','r','r','r','r','r','s','s','s','s','s','s','s','s','s','s','l','l','l','l','l','l','l','l','l']
I tried KNN as follows, but it is rather suited to numerical columns and it also imputes NaNs with zeros.
I was hoping for some ideas on how to tackle this problem.
from fancyimpute import KNN
knnimpute = KNN(k=5)
>>>x = np.array([0,np.nan,1,1,1,np.nan,2,2,2,2,np.nan,2,3,3,3,3,np.nan,1,1,2,2,np.nan,1,3,3,3,3])
>>>x2 = knnimpute.fit_transform(x.reshape(-1,1))
>>>x2
>>>
array([[0.],
[0.],
[1.],
[1.],
[1.],
[0.],
[2.],
[2.],
[2.],
[2.],
[0.],
[2.],
[3.],
[3.],
[3.],
[3.],
[0.],
[1.],
[1.],
[2.],
[2.],
[0.],
[1.],
[3.],
[3.],
[3.],
[3.]])
AI: The following script assigns the most frequent item in the neighborhood to each NaN value. The window is a list of 7 items, since it covers the three samples before the NaN, the NaN itself and the three samples after it.
iput = ['s','s','s','s','s','s',np.nan,'s',np.nan,'s','s','s','s','r',np.nan,np.nan,'r','r','r','r','s','s','s','s','s',np.nan,np.nan,'s','s','s','l','l','l','l','l',np.nan,'l','l','l']
for i in range(len(iput)):
    if isinstance(iput[i], float):  # np.nan is the only float in this list
        window = [v for v in iput[max(i - 3, 0):i + 4] if isinstance(v, str)]
        iput[i] = max(window, key=window.count) |
H: Free parameters in logistic regression
When applying logistic regression, one is essentially applying the following function $1/(1 + e^{\beta x})$ to provide a decision boundary, where $\beta$ are a set of parameters that are learned by the algorithm, and $x$ is an input feature vector. This appears to be the general framework provided by widely available packages such as Python's sklearn.
This is a very basic question, and can be manually implemented by normalization of the features, but shouldn't a more accurate decision boundary be given by: $1/(1 + e^{\beta (x - \alpha)})$, where $\alpha$ is an offset? Of course an individual can manually subtract a pre-specified $\alpha$ from the features ahead of time and achieve the same result, but wouldn't it be best for the logistic regression algorithm to simply let $\alpha$ be a free parameter that is trained, like $\beta$? Is there a reason this is not routinely done?
AI: You get the same effect by including a bias (intercept) term, i.e. $1/(1 + e^{\beta x + b})$, and the default in most software is to include such a bias.
To see why, you can do the math: expanding $\beta(x - \alpha) = \beta x - \beta\alpha$ shows that the constant $-\beta\alpha$ is just a bias term, so training a free bias already captures any offset $\alpha$. |
H: packages installed after activating conda environment
I want to know what happens after creating and activating a conda environment in the terminal, say venv:
conda create -n venv
source activate venv
Then the prompt will come with the enviroment name (venv).
would the packages installed after environment activation (say conda install tensorflow without --name venv) only affect the environment venv? Or would they affect things outside venv?
I don't want to mess up my environments.
AI: It will only affect the environment that you activated. |
H: Calculation of distance between samples in data mining
I am confused about a little issue related to distance calculation. What I want to know is: while calculating the distance between samples in classification or regression, is the label or output class also used, or is the distance calculated using all the other attributes, excluding the label attribute?
AI: Distances in data mining are most often calculated using the Euclidean distance; you can read about that further here. If I understand your question correctly, the answer is no: the distance is computed over the feature attributes only, excluding the label. The Euclidean distance can only be calculated between numerical values, so it would not make sense to include a (typically categorical) label in the calculation. |
H: Neural network recommendations if only few features
Can there be some general recommendations for architecture of neural network if there are only a few features, say 2-5 features?
What should be the number of hidden fully connected layers here? How many neurons may be there in each layer? Do number neurons in different layers need to stay constant (e.g. 16,16,16) or rise (e.g. 8,16,32) or fall (32,16,8) or rise and fall (e.g. 16,32,16) or fall and rise (e.g. 32,16,32)?
AI: There are all sorts of rules of thumb for the structure of a neural net (if there are n features, use n+1 or 2n or 1.5n nodes, etc.). I wouldn't take any of them as gospel.
The size and structure of your neural network chiefly depends on the complexity of the data you're trying to learn. If your data is already pretty well represented by, say, linear regression, then you don't really need much additional size to your neural network to represent it well. On the other hand, if your five inputs were, say, the move and shoot commands of a recording of a game of Doom, you're gonna need a large neural network to accommodate all the deep and meaningful interactions between the individual commands.
Each successive layer of nodes of a neural network deals with increasingly complicated shapes. The first hidden layer is all sigmoid or ReLU functions, the second layer is combinations of the first layer, the third is combinations of the second layer, etc.
The general structure of the hidden layers is cone-shaped, tapering towards the right. There are lots of nodes in the first layer, and fewer in each successive layer. You can have constant numbers of nodes per layer, though in my playing around it doesn't seem to help much, and many of the layer nodes go relatively unused. The opposite direction (having few first layer nodes and many last layer nodes) is actively harmful, because the later nodes can only ever be combinations of the earlier nodes, and if the first nodes don't create enough shapes for the later nodes to combine, it won't be able to fit anything.
I've provided a couple of interactive examples through tensorflow's playground.
Normal structure (Decreasing nodes per layer)
Same no. of nodes per layer
Rising no. of nodes per layer
The first two should converge relatively easily, whereas the third shouldn't converge at all. |
H: How can we use machine learning to distinguish between similar-looking images
How can I build a model which can distinguish between Milk and Phenyl? I want to predict whether a given item is edible or not. If I train a model with thousands of labelled photos of Milk and Phenyl, won't the model get confused between them because they look very, very similar? No matter how much I train, I will be getting poor results, won't I? Another such scenario is the case of salt and sugar.
I want to know if there is a way to tackle this situation. Please let me know if I can achieve better results with any other approach.
Also Please let me know If I'm unclear.
AI: From the images alone, assuming the links you posted are representative, I doubt a model will ever be able to predict better than random guessing.
There is a chance the model could effectively learn to read the label, but it seems like a long (and somewhat pointless) path to go down, unless you will only be predicting on labelled bottles in the future!
You will need to introduce different information with each sample, perhaps the price, amount of liquid and other such feature. At this point, you will likely want to ditch your CNN for images and instead use a standard classifier, such as Support Vector Machines or a simple feed-forward neural network. |
H: Can I call this graph as a gaussian?
My program is a chatbot. It has rule to represent the state that user is talking to the bot at node level n. I have 1 to 9 nodes in the application. Here is the summary of each states
1 3331
2 695
3 1381
4 945
5 1754
6 5303
7 2235
8 1664
9 3844
Name: visited, dtype: int64
If the counts for nodes 1 and 9 were not so high, I would not have a question.
Question:
Is it safe to call this distribution a Gaussian?
AI: If you left out the large blue and yellow peaks, then maybe. Otherwise, no.
With all three distinct peaks, you might call it a multi-modal Gaussian - meaning it is a mixture of three standard Gaussian distributions. This illustrates the idea:
Important: As pointed out by Spacedman in the comments, this comparison would only strictly apply if the data itself could be approximated as a continuous variable. This would mean your x-axis variable (state) should not be discrete, or the values would need to be put into bins so that your graph is effectively a histogram of the underlying data. Please have a look at this question for more details.
We normally describe distributions as being fat-tailed when the extremes on the left and right of the curve never really head towards zero, but seeing as your curve shoots up again at both ends, I don't think that would be a useful description here.
H: Mapping column values of one DataFrame to another DataFrame using a key with different header names
I have two data frames df1 and df2 which look something like this.
cat1 cat2 cat3
0 10 25 12
1 11 22 14
2 12 30 15
all_cats cat_codes
0 10 A
1 11 B
2 12 C
3 25 D
4 22 E
5 30 F
6 14 G
I would like a DataFrame where each column in df1 is created but replaced with cat_codes. Column header names are different. I have tried join and merge but my number of rows are inconsistent. I am dealing with huge number of samples (100,000). My output should ideally be this:
cat1 cat2 cat3
0 A D C
1 B E Y
2 C F Z
The resulting columns should be appended to df1.
AI: You can convert df2 to a dictionary and use that to replace the values in df1
import pandas as pd

cat_1 = [10, 11, 12]
cat_2 = [25, 22, 30]
cat_3 = [12, 14, 15]
df1 = pd.DataFrame({'cat1':cat_1, 'cat2':cat_2, 'cat3':cat_3})
all_cats = [10, 11, 12, 25, 22, 30, 15]
cat_codes = ['A', 'B', 'C', 'D', 'E', 'F', 'G']
df2 = pd.DataFrame({'all_cats':all_cats, 'cat_codes':cat_codes})
rename_dict = df2.set_index('all_cats').to_dict()['cat_codes']
df1 = df1.replace(rename_dict)
If you still have some values that aren't in your dictionary and want to replace them with Z, you can use a regex to replace them.
df1.astype('str').replace({'\d+': 'Z'}, regex=True) |
H: How to interpret the mean for output clusters for expected-maximization?
I am trying to cluster data using scikit's expectation-maximization. So I created two different data sets from a normal distribution which is I have shown in the graph below.
The mean for each of the distribution is:
Mean of distr-1: 0.0037523503071361197
Mean of distr-2: -0.4384554574756237
But after I run the EM using scikit, I get the mean as follows:
Mean after EM: [[-0.12327634 0.39188704]
[-1.31191255 -4.4292102 ]]
How am I supposed to interpret this mean? I am trying to create two clusters from the data. Here is my code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.mixture import GaussianMixture
distr_1 = np.sin(2 * np.random.randn(100) + np.random.randn())
distr_2 = (3 * np.random.randn(100)) + np.random.randn()
x = list(range(0,100))
X_train = np.concatenate((distr_1, distr_2))
plt.scatter(x,distr_1)
plt.scatter(x,distr_2)
plt.gca().legend(('sin', 'linear'))
plt.savefig('cluster_data.png')
plt.clf()
print("Mean of distr-1:",np.mean(distr_1))
print("Mean of distr-2:",np.mean(distr_2))
gmm = GaussianMixture(n_components=2)
gmm.fit(X_train.reshape(100,2))
print("Mean after EM:",gmm.means_)
Am I doing this incorrectly? What does the output mean?
AI: It seems like you are trying to create a mixture of 2 univariate distributions, but you happen to get a bivariate distribution (which is why you have a 2x2 array).
This is because you reshaped the X_train array to (100, 2), which tells scikit-learn you have 100 samples with 2 features each. To fit a mixture of univariate distributions over all 200 values, keep them as a single feature column. You need to change your penultimate line to:
gmm.fit(X_train.reshape(-1, 1))
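A hedged sketch of the corrected fit: with a single feature column, gmm.means_ has shape (2, 1), one mean per component, which you can compare to the two sample means printed earlier (although, with heavily overlapping distributions, the components won't necessarily line up with the original groups):
gmm = GaussianMixture(n_components=2)
gmm.fit(X_train.reshape(-1, 1))
print("Component means:", gmm.means_.ravel())    # two scalars
print("Component weights:", gmm.weights_)        # mixing proportions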
H: Opensource Speech Recognition Library that is secure and trained on large data
For all those who are working on developing a chatbot/assistant and care about the privacy of the users consuming the speech recognition library: can you suggest an open source library which is trained on a large amount of data? The big concern is privacy, which is why I am not going for Google Dialogflow, IBM Watson, Amazon Lex or Wit.
Would appreciate a lot if someone can suggest a good library.
AI: I'd recommend having a look at DeepSpeech.
You can find a maintained implementation of the project in here.
You may clone it, and train a model in order to have your very own model for speech recognition. |
H: What is the relation between input into LSTM and number of cells?
I want to train an LSTM network for time-series predictions, and want to get to the bottom of LSTM's.
In my understanding, the number of cells in a single LSTM layer can vary. However, since each cell takes an input at time-step t, wouldn't the number of cells need to be equal to t?
For example (from TensorFlow tutorial):
t=0 t=1 t=2 t=3 t=4
[The, brown, fox, is, quick]
[The, red, fox, jumped, high]
words_in_dataset[0] = [The, The]
words_in_dataset[1] = [brown, red]
words_in_dataset[2] = [fox, fox]
words_in_dataset[3] = [is, jumped]
words_in_dataset[4] = [quick, high]
batch_size = 2, time_steps = 5
Wouldn't the maximum number of cells in each layer be 5, since after the 5th input we no longer have any other information to feed in? However, I've seen many networks with a higher number of cells than that. Why is this possible?
AI: Consider an RNN as a simple MLP which, at each time step t, takes the inputs of that step together with the outputs of the previous step. The same cell is re-used: the network is simply unrolled across time. Consequently, the number of cells (units) has no relation to the input size or the length of the time series. Take a look here for a better understanding.
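A hedged Keras sketch illustrating the point (the sizes are assumptions): the number of LSTM units is chosen independently of the 5 time steps in the input.
# 5 time steps, 10 features per step, but 64 LSTM units:
# the unit count is unrelated to the sequence length.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential([
    LSTM(64, input_shape=(5, 10)),   # (time_steps, features_per_step)
    Dense(1)
])
model.summary()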
H: How to implement keras LSTM time series
I am learning how to implement Keras LSTM on a simple time series data.
The dataset I'm using has $12$ columns and $300k$ rows. Each group of $200$ rows represents one time-series cycle. Then the time starts again from zero and runs for the next $200$ rows. I want to make predictions for six of the columns.
My questions are, how many (multiple of 200?) rows do I choose for train/test sets? Is my batch size $200$? What is the basis for choosing number of neurons?
AI: You've not provided the dataset but I will try to answer based on your descriptions.
how many (multiple of 200?) rows do I choose for train/test sets?
Actually, the train/validation/test splits do not really depend on the length of the time series. It is more dependent on the number of examples, time series here. $300k / 200 = 1.5k$ which means the size of your dataset is small and you can use the customary percentages for your splits. Something like 60/20/20 for train/validation/test.
Is my batch size 200?
$200$ is not your batch size, it is the length of your signal. In LSTM networks you usually deal with temporal data: for each sample, you have 12 features at each time step, and $200$ means each sample has $200$ steps. In vectorised implementations of LSTMs it is customary to define a batch size. That is, you stack $m$ (the batch size) samples together to accelerate training; I will skip the details.
What is the basis for choosing number of neurons?
Take a look here. Although the details are described for MLPs, they can be generalised to LSTMs too. Consider RNNs like simple MLPs which take the inputs of the current time step and the outputs of the previous step. A minimal sketch of the setup follows.
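A minimal sketch of the setup described above, with placeholder data standing in for your 1500 cycles of 200 steps x 12 features and 6 targets (the layer sizes, batch size and epoch count are assumptions):
# Sketch: reshape into (n_cycles, 200, 12), split roughly 60/20/20, fit a small LSTM.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

X = np.random.rand(1500, 200, 12)   # placeholder for your reshaped data
y = np.random.rand(1500, 6)         # placeholder targets (the 6 columns to predict)

X_train, X_val, X_test = X[:900], X[900:1200], X[1200:]
y_train, y_val, y_test = y[:900], y[900:1200], y[1200:]

model = Sequential([
    LSTM(64, input_shape=(200, 12)),
    Dense(6)
])
model.compile(optimizer='adam', loss='mse')
model.fit(X_train, y_train, validation_data=(X_val, y_val), batch_size=32, epochs=10)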
H: How to migrate R decision tree to Java
I have trained a conditional inference decision tree in R using library party with function ctree and saved the model in an .Rda file.
I need to migrate this model from R to Java so that I can utilise the tree to make predictions in a Java environment. Can someone please point me in the right direction on how I can do this?
AI: You could convert your decision tree model to the PMML format. In Java you could use JPMML to parse/read the model and predict. |
H: TypeError: unhashable type: 'numpy.ndarray'
I'm trying to do majority voting on the predictions of two deep learning models. The shape of both y_pred and vgg16_y_pred is (200, 1) and the type is 'int64'.
max_voting_pred = np.array([])
for i in range(0,len(X_test)):
max_voting_pred = np.append(max_voting_pred, statistics.mode([y_pred[i], vgg16_y_pred[i]]))
I run into the following error:
TypeError: unhashable type: 'numpy.ndarray'
How should I pass the data?
AI: The problem is that you're passing a list of numpy arrays to the mode function.
It requires either a single list of values, or a single numpy array with values (basically any single container will do, but seemingly not a list of arrays).
This is because it must make a hash map of some kind in order to determine the most common occurences, hence the mode. It is unable to hash a list of arrays.
One solution would be to simply index the value out of each array (which then means mode gets a list of integers), changing the main line to:
max_voting_pred = np.append(max_voting_pred, statistics.mode([y_pred[i][0], vgg16_y_pred[i][0]]))
Let me know if that doesn't fix things.
If you want something that is perhaps easier than fixing your original code, try using the mode function from the scipy module: scipy.stats.mode.
This version allows you to pass the whole array and simply specify an axis along which to compute the mode. Given you have the full vectors of predictions from both models:
Combine both arrays to be the two columns of one single (200, 2) matrix
results = np.concatenate((y_pred, vgg16_y_pred), axis=1)
Now you can perform the mode on that matrix across the single rows, but all in one single operation (no need for a loop):
import scipy.stats
max_votes = scipy.stats.mode(results, axis=1)
The results contain two things.
the mode values for each row
the counts of that mode within that row.
So to get the results you want (values that would match your original max_voting_pred), you must take the first element from max_votes:
max_voting_pred = max_votes[0]
H: Does CNN take care of zoom in images?
Suppose a convolution neural network is trained on small images of an object, say flower, as in following 3 training images:
Will this CNN correctly classify if the same object is present in zoomed form in a test image? As in following example:
What if the situation is reverse, i.e. training on large sized objects and testing with small sized object?
What is the best way to take care of different size of object that may be present for correct classification of images?
(The images are from: https://www.shutterstock.com/video/search/flowers )
AI: I think more information on the project would help. For instance, in your example you just need to detect the flower region in the whole image and crop it. Then you simply resize your images to the same size before feeding them to your NN. In this case, detecting the target region and scaling it is part of the pre-processing, and the learning phase stays the same. A small sketch of that step follows.
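A minimal sketch of that pre-processing step, assuming you already have the bounding box of the flower (the file name, coordinates and target size are placeholders):
# Sketch: crop a detected region and resize it to a fixed CNN input size (using PIL).
from PIL import Image

img = Image.open("flower.jpg")                  # placeholder path
left, top, right, bottom = 50, 30, 250, 230     # assumed detected bounding box
crop = img.crop((left, top, right, bottom))
crop = crop.resize((224, 224))                  # fixed size expected by the network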
Another approach is to learn this scaling. It means that the learning algorithm needs scale-invariant features, but NNs are supposed to create features in their hidden layers through learning. So if you have a sufficiently big dataset in which all size variants are well represented, your NN will learn it (though possibly with poor results). The process is intuitive ML-wise:
The NN extracts features from the input images, i.e. when the labels of the same object at different sizes are the same, it captures features which discriminate these objects in a scale-invariant manner.
This was simplified! The trick is that CNN can learn resizing using a technique called Pooling. But you are the one who plays with parameters to make your NN working!
Probably the best option - when your dataset is not that huge and rich, you do not want to play with many parameters, or you want to increase the quality of the results - is to focus on finding scale-invariant features. Either you extract them from your images and feed them to the NN, or you design the NN in a way that it finds them itself. A more intuitive way is to train different models on different scales and ensemble them.
Hope it helps :) Good Luck!
PS: SIFT is patented. Be careful not to use it for commercial purposes. |
H: Why is eulers number used as a constant in sigmoid
I was asking myself why Euler's number is used in the sigmoid function 1/(1+e^-x) instead of any other constant, like for example 2 or 3?
I am pretty new to data science stuff, but I read somewhere that Euler's number describes natural growth of a curve, so would this mean that Euler's number is used in the sigmoid function because it makes it possible to output values that are evenly distributed between 0 and 1?
AI: Euler's number pops up in a lot of places naturally; not quite something to do with growth rates but arises easily in common limits.
The form of the sigmoid function wasn't chosen because its derivative has a nice property, although that's true. It also wasn't chosen because it's a function with range (0,1); many functions do that.
The sigmoid function arises because it's the correct answer to a common type of problem. We think of logistic regression as a common classifier predicting a class probability, but it really is regression under the hood. It's not regressing the probability though, but log-odds of the probability (logit function). This is the right way to apply regression, given assumptions about the distribution of errors in a classification problem, which aren't the same as in a simple linear regression.
The sigmoid function is the inverse of the logit link function. That's why it's there. It gets from the regression output to the actual desired output, a probability. The logit function is there because it is implied by the assumption about the distribution of the 0/1 dependent variable.
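To make the link explicit (these are the standard definitions, not something specific to this answer): the regression is done on the log-odds, and inverting that relation gives the sigmoid,
$$\text{logit}(p) = \log\left(\frac{p}{1-p}\right) = z \quad \Longleftrightarrow \quad p = \frac{1}{1+e^{-z}}$$
The natural logarithm in the log-odds is exactly what puts $e$ into its inverse, the sigmoid.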
That's actually it. It has to be there in certain types of common problems because it's the answer to those problems, not because it was chosen for nice properties. |
H: AUC with sklearn vary each time script is started
I'm using the following code to perform a tree classification. I set up an int value for random_state in train_test_split function but each time I got different values for auc or accuracy_score.
I don't see what I am missing...
X_train, X_test, y_train, y_test = train_test_split(X, Y, random_state=1,stratify=Y, test_size=0.33)
clf = clf.fit(X_train, y_train)
predicted_probas = clf.predict_proba(X_test)
y_predict = clf.predict(X_test)
print(accuracy_score(y_test, y_predict))
print(classification_report(y_test, y_predict))
classes = np.unique(y_test)
probas = predicted_probas
fpr = {}
tpr = {}
roc_auc = {}
for i in range(len(classes)):
fpr[i], tpr[i], _ = roc_curve(y_test, probas[:,i],pos_label=classes[i])
roc_auc[i] = auc(fpr[i], tpr[i])
print(classes[i])
print(fpr[i], tpr[i])
print("roc_auc")
print(roc_auc[i])
AI: OK, I missed the fact that you can also set random_state here to remove this variability... but I don't see exactly how it works here, so if someone wants to explain, you are welcome. (For what it's worth: scikit-learn's decision trees randomly permute the candidate features at each split, so when several splits are equally good the chosen one can differ between runs; fixing random_state makes that choice deterministic.)
clf = tree.DecisionTreeClassifier(random_state=5) |
H: K-nearest neighbors complexity
Why does the complexity of K-Nearest Neighbors increase with a lower value of k? And when does the plot for k-nearest neighbours have a smooth or complex decision boundary? Please explain in detail.
Also, given a data instance to classify, does k-NN compute the probability of each possible class using a statistical model of the input features, or does it just pick the class with the most points in its favour?
AI: The complexity in this instance is discussing the smoothness of the boundary between the different classes. One way of understanding this smoothness complexity is by asking how likely you are to be classified differently if you were to move slightly. If that likelihood is high then you have a complex decision boundary.
For the $k$-NN algorithm the decision boundary is based on the chosen value for $k$, as that is how we determine the class of a novel instance. As you decrease the value of $k$ you end up making more granular decisions, thus the boundary between different classes becomes more complex.
You should note that this decision boundary is also highly dependent of the distribution of your classes.
Let's see how the decision boundaries change when changing the value of $k$ below. We can see that nice boundaries are achieved for $k=20$, whereas $k=1$ has blue and red pockets in the other regions; this is said to be a more complex decision boundary than one which is smooth.
First let's make some artificial data with 100 instances and 3 classes.
from sklearn.datasets.samples_generator import make_blobs
X, y = make_blobs(n_samples=100, centers=3, n_features=2, cluster_std=5)
Let's plot this data to see what we are up against
Now let's see how the boundary looks like for different values of $k$. I'll post the code I used for this below for your reference.
$k$ = 1
$k$ = 5
$k$ = 10
$k$ = 20
The code
The code used for these experiments is as follows taken from here
import numpy as np
from sklearn import neighbors

k = 20  # set to 1, 5, 10 or 20 to reproduce the plots above
clf = neighbors.KNeighborsClassifier(k)
clf.fit(X, y)
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
h = 0.02  # step size in the mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                     np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.show() |
H: How can we use Neural Networks for Decision Making intead of Bayesian networks or Desicion Trees?
I am working on decision making in self-driving cars and I am wondering how I can use neural networks (is there any particular type?) that can replace or mimic Bayesian networks or decision trees for the decision-making process.
AI: I would like more details regarding your actual problem, but here is my suggestion to apply artificial neural networks for decision making.
One way of approaching this problem is by using the prior information as one of the inputs to a deep neural network. This can be treated much like a classification problem on the basis of supervised learning.
The desired decisions and their parameters can be labelled, and the output layer can have as many neurons as the desired number of decisions. Then a series of training and validation runs can be conducted until the training reaches the expected accuracy.
Best,
Sangathamilan Ravichandran. |
H: AI that maximizes the storage of rectangular parallelepipeds in a bigger parallelepiped
As you can see in the title, I'm trying to program an AI in Java that would help someone optimize his storage.
The user has to enter the size of his storage space (a box, a room, a warehouse etc...) and then enter the size of the items he has to store in this space. (note that everything must be a rectangular parallelepiped) And the AI should find the best position for each item such that the space is optimized.
Here is a list of what I started to do :
I asked the user to enter the size of the storage space (units are trivial here except for the computing cost of the AI later on I'm guessing), telling him that the values will be rounded down to the unit
I started by creating a 3-dimensional array of integers representing the storage space's volume, using the 3 values taken earlier. Filling it with 0s, where 0s would later represent free space and 1s occupied space.
Then, store in another multidimensional array the sizes of the items he has to store
And that's where the AI part should be starting. First thing the AI should do is check whether the addition of all the items' volumes doesn't surpass the storage space's volume. But then there are so many things to do and so many possibilities that I get lost in my thoughts and don't know where to start...
In conclusion, can anyone give me the proper terms of this problem in AI literature, as well as a link to an existing work of this kind ?
Thanks
AI: This looks very much like a variation of the bin packing problem. The bad news is that this is mathematically a hard problem, NP-hard in fact (which means that no efficient polynomial-time algorithm is known, and none is believed to exist unless P = NP). The good news is that it is heavily studied, with multiple approaches to solving it, or at least optimising solutions to within reasonable bounds.
Which approach to optimisation will work best for you will depend on how physically accurate you want your model to be. Your voxel approach already makes some decisions in that regard, but you may also want to consider issues such as gravity, stability of any piles of structures you create, and access to items. These things add additional constraints that make modelling the problem and implementing an optimiser harder, so you might want to start with more unrealistic model initially.
One simple approach might be to use a simple heuristic e.g. pack largest objects first, fill close to edges first etc. Then attempt to swap or revise the order of packing of a few pieces and see if it makes an improvement. You can make those changes greedily, or look into making them more randomly but with a system that rewards better improvements. Approaches such as simulated annealing or genetic algorithms can be used as stochastic approaches, many others are possible. There are a very large number of global optimisers that might help with such a combinatorial problem.
Here is an example of solving a bin packing problem using Simulated Annealing.
Here is an example of solving a bin packing problem using a Genetic Algorithm |
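As a starting point only, here is a hedged sketch of the simple "largest first" heuristic mentioned above, on a voxel grid like the one you describe (no rotations, no gravity or stability constraints; the space and item sizes are made up):
# Sketch: greedy first-fit-decreasing placement on a 3D occupancy grid.
import numpy as np

def try_place(grid, dx, dy, dz):
    """Scan for the first empty dx*dy*dz block; mark it occupied if found."""
    X, Y, Z = grid.shape
    for x in range(X - dx + 1):
        for y in range(Y - dy + 1):
            for z in range(Z - dz + 1):
                if not grid[x:x+dx, y:y+dy, z:z+dz].any():
                    grid[x:x+dx, y:y+dy, z:z+dz] = 1
                    return (x, y, z)
    return None  # no room for this item

storage = np.zeros((10, 10, 10), dtype=np.int8)       # assumed 10x10x10 storage space
items = [(4, 3, 2), (5, 5, 5), (2, 2, 2), (6, 1, 3)]  # assumed item sizes

# Place the biggest items first, then report where each one went.
for item in sorted(items, key=lambda s: s[0] * s[1] * s[2], reverse=True):
    print(item, "->", try_place(storage, *item))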
H: training when Multiple labels per image
I have multiple labels per image. Is it better to train taking each label separately, or should I mark all the labels present as 1 in the same image? Which method is better? I will be using a CNN architecture.
AI: Assuming you want to classify the images (and not use bounding boxes to locate classes within each image), a common way it to create a target vector for each image, which holds the information regarding all classes and is what the model would eventually predict.
If you have a dataset with, say 5 classes, and your first example image contains classes 1 and 4, you would create your target vector for that image to be:
example_sample = ... # your image array
example_sample_y = [1, 0, 0, 1, 0]
This is a kind of one-hot encoding, as the vector has a placeholder for each of the 5 classes, but only a 1 when the class is present.
Have a look at this high-level walkthrough.
I think your other suggestion - training on the same image once per label - is not a good idea. You want to learn some kind of joint probability between the classes, and in my opinion, training on the same image with different outcomes (e.g. the sample image above twice, producing either a 1 or a 4) will not only be very inefficient during training, but will also be mathematically confusing. The same input can give 2 possible outputs! This implies your underlying function that maps images to classes is not well-defined. That isn't usually a good thing!
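A hedged Keras sketch of this setup: one sigmoid output per class with a binary cross-entropy loss, so each class is predicted independently (the backbone layers, input size and class count are placeholders):
# Sketch: multi-label CNN head with 5 independent class probabilities.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

n_classes = 5
model = Sequential([
    Conv2D(16, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    MaxPooling2D(),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(n_classes, activation='sigmoid')   # one probability per class
])
model.compile(optimizer='adam', loss='binary_crossentropy')
# target for the sample above containing classes 1 and 4: [1, 0, 0, 1, 0]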
H: remove special character in a List or String
Input_String is Text_Corpus of Jane Austen Book
output Should be :
['to', 'be', 'or', 'not', 'to', 'be', 'that', 'is', 'the', 'question']
But getting this Output :
['to', 'be,', 'or', 'not', 'to', 'be:', 'that', 'is', 'the', 'question!']
AI: Regular expressions can be used to create a simple tokenizer and normalizer:
from __future__ import annotations
import re
def tokens(text: str) -> list[str]:
    "List all the word tokens in a text."
    return re.findall(r'\w+', text.lower())
assert tokens("To be, or not to be, that is the question:") == ['to', 'be', 'or', 'not', 'to', 'be', 'that', 'is', 'the', 'question']
Otherwise, use an established library like spaCy to generate a list of tokens. |
H: Doubt with SVM math
I have a question about SVM that some of you may help me with…
I know that y(xi), by convention, would be -1 or 1 depending on which class the Xi belongs to.
But I don't fully understand why it's established that the hyperplane equation should be:
w·xi + b >= 1 or w·xi + b <= -1
Where do those "1" and "-1" come from?
Shouldn't it be that, for any point, depending on its classification, the hyperplane equation would be like this?
w·xi + b >= yi * margin/2 (yi = -1 or 1)
There is no 1 or -1 anywhere…
Thanks in advance
AI: The -1 and +1 you see in many SVM proofs are due to a scaling factor that is applied to the distance between the margin and the 2 closest points. This is useful for simplifying the math, but most textbooks gloss over this super important detail, which makes it seem complicated.
This is all based on the idea that given a dataset you can scale it, and your decision boundary however you want without affecting its performance. For example, if you have dataset with units in the millions of dollars and decision boundary also in the millions of dollars, you can scale these all to the dollar scale to get the same performance.
The math
We agree that for an arbitrary dataset the SVM algorithm wants to maximize the distance of the boundary $\gamma$. The decision boundary is described by a linear line as described in our first constraint. The second constraint ensures that the functional margin is equal to the geometric margin, and thus we can employ the scale invariability property of the geometric margin. This optimization is described as
$max_{\gamma, w, b} \gamma$
$\text{s.t. } y^{(i)}(w^Tx^{(i)} + b) \geq \gamma, i = 1, ..., m \text{ and } ||w|| = 1$.
We then want to find the maximum possible margin between the positive and negative classes.
We can reformulate this problem such that we get rid of the $||w|| = 1$ constraint to get
$max_{\hat{\gamma}, w, b} \frac{\hat{\gamma}}{||w||}$
$\text{s.t. } y^{(i)}(w^Tx^{(i)} + b) \geq \hat{\gamma}, i = 1, ..., m$
where
$\gamma = \frac{\hat{\gamma}}{||w||}$
Now all we will do is we will force the margin of the separation to be 1 in order to simplify our math. In other words we will want $\hat{\gamma} = 1$. So we now have
$max_{\gamma, w, b} \frac{1}{||w||}$
$\text{s.t. } y^{(i)}(w^Tx^{(i)} + b) \geq 1, i = 1, ..., m$
Finally, maximizing $\frac{1}{||w||}$ is equivalent to minimizing $||w||$, which is equivalent to minimizing $||w||^2/2$ and thus the usual quadratic optimization SVM equation is born
$min_{\gamma, w, b} \frac{1}{2}||w||^2$
$\text{s.t. } y^{(i)}(w^Tx^{(i)} + b) \geq 1, i = 1, ..., m$ |
H: How can I merge 2+ DataFrame objects without duplicating column names?
This is for work. TLDR: Bottom-line question at the bottom.
I am gathering and parsing test results produced by an old test setup whose output formatting is not likely to change anytime soon. I've made good progress on parsing the output data into lists of strings, booleans, etc., but I'm having trouble pulling the data together into an easily searchable and retrievable whole. The data looks something like this:
- test_case_A
- pass/fail file
- big header info with test case metadata
- test 1 pass/fail supporting data
- test 2 pass/fail supporting data
- ...
- test 100 pass/fail supporting data
- test_case_B
- pass/fail file
- big header info with test case metadata
- test 1 pass/fail supporting data
- test 2 pass/fail supporting data
- ...
- test 70 pass/fail supporting data
- test_case_C
- pass/fail file
- big header info with test case metadata
- test 1 pass/fail supporting data
- test 2 pass/fail supporting data
- ...
- test 10 pass/fail supporting data
- test_case_D
- pass/fail file
- big header info with test case metadata
- test 1 pass/fail supporting data
- test 2 pass/fail supporting data
- ...
- test 30 pass/fail supporting data
I parse them into individual DataFrame objects like so:
df_case_A_pass_fail = pd.DataFrame({
"case" : "test case A",
"header" : *big header string*,
"test ID" : [*list of IDs*],
"test passed" : [*list of bool*],
"test data" : [*list of strings*],
})
*repeat for test case B, C, and D*
Now I try to merge them together.
big_df = fancy_merge_step_probably_involving_reduce_and_a_lambda(...)
Problem 1: The "case" and "header" strings appear to be duplicated all the way down their DataFrame. Like so:
case test ID test passed header supporting data
0 "test case A" "test 1 ID" True "big header string" "all kinds of stuff"
1 "test case A" "test 2 ID" False "big header string" "all kinds of stuff"
...
99 "test case A" "test 100 ID" True "big header string" "all kinds of stuff"
I checked the DataFrame size according to this get_real_size(...) algorithm and the size explodes exponentially as I merge in more results (~80KB for 1 test case, ~800KB for 2 test cases), so the big header string is definitely getting duplicated. I want to establish one->one relationship between test case and header and one->many between test case and test data, but all I'm seeing is duplication until there is a unique permutation of every line. Am I making the DataFrame wrong for what I need?
Problem 2: (Possibly related to problem 1.) I attempt to merge the DataFrames with an outer join and get the NaN results for places where the column sizes don't match up (expected), but also get duplicated columns "case", "header", "test passed", "test data" (any column that wasn't merged on), appendend with automatic suffixes ("_x", "_y"). I know that pandas does this automatically when there is a column name class, but it is now a problem. Result: searching on column "case" fails because the merged DataFrame has no column "case". All columns formerly named "case" are now "case_x" or "case_y".
I want to query like this:
match1 = (big_df["case"] == "test case D")
match2 = (big_df["test ID"] == "test 3")
single_test_df = big_df.loc[match1 & match2]
match1 = (big_df["case"] == "test case A")
match2 = (big_df["header"] == "header")
header_str = big_df.loc[match1 & match2].values[0]
Question: How do I set up these DataFrame objects and merge them so that I can query the test data as mentioned earlier?
AI: Welcome to the site!
Problem 1 seems to be an issue that could be solved by treating your dataframes like tables in a relational database. By this I mean: you have your main data table and secondary tables called 'headers' and 'supporting_data'. You store each header in the 'headers' table once, with an ID (a key, maybe just an integer) assigned to it, and in the main table you just store the integer corresponding to that header. If you ever need the header data, you look up the integer and go over to the 'headers' table to retrieve it. You can do something similar with the supporting data, but not knowing what that data is, it's up to you to figure out whether you should parse out parts of it or store it as whole text blocks. If you do this, you'll just have two integers on each row of the main table instead of all that superfluous repeated data.
Problem 2 I'm less sure about, but I'd look into the pandas concat function. I think you should probably just have a 'case' column containing 'A', 'B', etc., and append the rows from those files into the main table; concat should be able to do that if all the data types work out. A rough sketch of both ideas follows.
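Here is a hedged sketch of both ideas together; the column names mirror your question, everything else is assumed:
# Sketch: headers stored once in a lookup table, per-test rows stacked with concat.
import pandas as pd

headers = pd.DataFrame({'case': ['test case A', 'test case B'],
                        'header': ['big header string A', 'big header string B']})

df_a = pd.DataFrame({'case': 'test case A',
                     'test ID': ['test 1 ID', 'test 2 ID'],
                     'test passed': [True, False]})
df_b = pd.DataFrame({'case': 'test case B',
                     'test ID': ['test 1 ID'],
                     'test passed': [True]})

big_df = pd.concat([df_a, df_b], ignore_index=True)   # rows stacked, columns aligned by name

# Query exactly as intended in the question:
single_test = big_df.loc[(big_df['case'] == 'test case A') & (big_df['test ID'] == 'test 2 ID')]
# The header lookup happens in the small table, so it is never duplicated:
header_str = headers.loc[headers['case'] == 'test case A', 'header'].iloc[0]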
Good Luck! |
H: how to split available data into training and testing (Information security)
I was advised to ask my question here.
Recently, I made a post about finding suitable dataset for SIEM (Security Information and Event Management) systems. The goal was to work on classification and correlation to detect security attacks.
I decided to use the dataset from the Honeynet Project Challenges. The problem is this: I don't know if I should use the whole dataset for my project, because if you look, for example, at the KDD99 dataset, it is divided into two parts: 10% for training and 90% for testing.
I have seen some researchers use dataset A for training and dataset B for testing; do you have any other ideas? I am really stuck on the training vs. testing part.
If my question is too broad. I don't mind some reading materials that will help me deal with my dataset.
Bests,
AI: You should check out the k-fold cross-validation technique. It's a quality-control technique which checks that the predictions/classifications made by a model generalize well to unseen data. The idea is to split the dataset into k folds and use them iteratively to train and cross-validate your model; the final model can then be evaluated on held-out test data to report the actual accuracy.
Here is a link for reference :
https://machinelearningmastery.com/k-fold-cross-validation/ |
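A minimal scikit-learn sketch of the idea; the classifier and the synthetic data are placeholders standing in for your model and dataset:
# Sketch: 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=500, n_features=10, random_state=0)  # stand-in for your data
clf = RandomForestClassifier()                                            # placeholder model

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, val_idx in kf.split(X):
    clf.fit(X[train_idx], y[train_idx])
    print(accuracy_score(y[val_idx], clf.predict(X[val_idx])))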
H: C++ return array from function
I would like to implement a machine learning algorithm in C++ without using any C++ machine learning library. So I'm writing this initializer function for generating zero matrices but can't figure out how I can accomplish this. I'm actually trying to write C++ code for simple logistic regression for now.
float * intializer_zero(int dimension){
// z = wx + b.
float b = 0;
float w[dimension]= { };
return w,b;
}
It's throwing error "cannot convert 'float' to 'float' in return."
How can I write this initializer function in C++?
AI: You can use vector from the Standard library to store your matrix with a variable size.
#include <vector>
#include <iostream>  // for std::cout in the printing loop below
Then you define your function to initialize it to 0:
void fill_zero( std::vector<std::vector<float>> &matrix, int row, int column)
{
for(int i = 0; i<row; ++i)
{
std::vector<float> vector(column,0);
matrix.push_back(vector);
}
}
This fills a row x column matrix with 0s. The matrix is passed by reference (&matrix), so the function modifies the caller's vector in place. Note that the 'auto' keyword and the range-based for loops used below require a C++11 compiler.
Then you just need to create the matrix before calling the function:
std::vector<std::vector<float>> matrix;
fill_zero(matrix, 3, 4); // e.g. a 3 x 4 matrix of zeros
//this is just for printing the matrix to see if it matches your requirements!
for(auto vector : matrix)
{
    for(auto value : vector)
    {
        std::cout << value << " ";
    }
    std::cout << std::endl;
}
I didn't test this on my own compiler, but it works here:
C++ Shell compiler with the code above, if you want to test it! (The linked version fills the matrix with ones instead of zeroes, just to be sure that the compiler doesn't initialize the vector values to 0 by default. That way I am sure that the function works as we want it to.)
H: How to implement a word-to-word co-occurrence matrix in Python
To implement a co-occurrence matrix in such a way that it counts the number of times word1 occurred in the context of word2, within a neighbourhood of a given size, let's say 5. There are 100 words and a list with 1000 sentences. So how can I calculate a co-occurrence matrix of size (100 x 100) using Python?
AI: from nltk.tokenize import word_tokenize
from itertools import combinations
from collections import Counter
sentences = ['i go to london', 'you do not go to london','but london goes to you']
vocab = set(word_tokenize(' '.join(sentences)))
print('Vocabulary:\n',vocab,'\n')
token_sent_list = [word_tokenize(sen) for sen in sentences]
print('Each sentence in token form:\n',token_sent_list,'\n')
co_occ = {ii:Counter({jj:0 for jj in vocab if jj!=ii}) for ii in vocab}
k=2
for sen in token_sent_list:
for ii in range(len(sen)):
if ii < k:
c = Counter(sen[0:ii+k+1])
del c[sen[ii]]
co_occ[sen[ii]] = co_occ[sen[ii]] + c
elif ii > len(sen)-(k+1):
c = Counter(sen[ii-k::])
del c[sen[ii]]
co_occ[sen[ii]] = co_occ[sen[ii]] + c
else:
c = Counter(sen[ii-k:ii+k+1])
del c[sen[ii]]
co_occ[sen[ii]] = co_occ[sen[ii]] + c
# Having final matrix in dict form lets you convert it to different python data structures
co_occ = {ii:dict(co_occ[ii]) for ii in vocab}
display(co_occ)
Output:
Vocabulary:
{'london', 'but', 'goes', 'i', 'do', 'you', 'go', 'not', 'to'}
Each sentence in token form:
[['i', 'go', 'to', 'london'], ['you', 'do', 'not', 'go', 'to', 'london'], ['but', 'london', 'goes', 'to', 'you']]
{'london': {'go': 2, 'to': 3, 'but': 1, 'goes': 1},
'but': {'london': 1, 'goes': 1},
'goes': {'london': 1, 'but': 1, 'you': 1, 'to': 1},
'i': {'go': 1, 'to': 1},
'do': {'you': 1, 'go': 1, 'not': 1},
'you': {'do': 1, 'not': 1, 'goes': 1, 'to': 1},
'go': {'london': 2, 'i': 1, 'to': 2, 'do': 1, 'not': 1},
'not': {'do': 1, 'you': 1, 'go': 1, 'to': 1},
'to': {'london': 3, 'i': 1, 'go': 2, 'not': 1, 'goes': 1, 'you': 1}}
PS
Do the text preprocessing yourself (remove punctuation, lemmatization, stemming, etc.)
Continue the code for any conversion you want. You have the dict, so you can convert it to a sparse matrix or a pandas DataFrame, for example as sketched below.
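For example, a hedged one-liner to turn the nested dict into a dense pandas co-occurrence matrix (missing pairs become 0):
import pandas as pd

co_occ_df = pd.DataFrame(co_occ).fillna(0).astype(int)
print(co_occ_df)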
H: Data Cleaning without pandas
How can I clean a data csv file with the restriction of only using python and its standard library? No third party programmes such as pandas can be used. For example: removing a column from the dataset, correcting spelling mistakes, inconsistencies in data formatting, null entries etc.
AI: The following solution uses the standard-library csv module. The if statement drops a column from each row as the file is read. You will need to create your own list of punctuation to remove. Note that if a field contains an unquoted comma, it will be split across two keys in your dictionary; in other words, you should deal with stray commas beforehand. Moreover, the stopwords list should be in lowercase letters, since the first function lowercases the sentences. The result is a list of ordered dictionaries. I hope this answer helps you with your problem.
path=r"C:\Users\****\****\file_name.csv".replace('\\', '/')
import csv
my_data=[]
punctuations_list=[':','.','!','#']
stopwords_list=['i','there','it','this','is']
def remove_character(give_string):
return "".join([letter for letter in give_string.lower() if letter not in punctuations_list])
def remove_words(give_string):
return " ".join([word for word in give_string.split(" ") if word not in stopwords_list])
with open(path) as csvfile:
reader = csv.DictReader(csvfile, fieldnames=('1stcol','2ndcol','3rdcol'))
for row in reader:
if '2ndcol' in row.keys(): #Deletes a column(rather a pair of key value from dictionary)
del(row['2ndcol'])
row['3rdcol']=remove_words(remove_character(row['3rdcol'])) #Make some cleaning to the text in a specific column
my_data.append(row) |
H: Twitter tweet classification
I am trying to do a small project on my own to find job openings using Twitter data. I saved data using Flume and converted it to .csv for analysis. My problem is I don't know how to classify tweets - whether a tweet is a job vacancy or just some news on, say, machine learning. I read online about neural networks and word2vec but I am not sure if they will solve my problem. Can anyone suggest some ways to do this based on tweet text and hashtags? I don't have training and test data, I just have tweets stored using Flume. Also, what kind of analysis will I be able to do with it? I am new to data science :/
AI: You should read about text classification with supervised learning techniques. You could choose a neural network such as a CNN implementation and apply it to your data. But you first have to prepare your data by cleaning it up and labelling it with respect to your desired categories. For example,
You should collect the tweet data and label each tweet or set of words as job or news with respect to their original category.
Then you should use the prepared data and labels to train the chosen network.
Read about k-fold cross-validation and segregate the data in such a way that systematic iterative training and validation can be done and tweaks to the network can be made.
Finally once the training is completed you could use the model to test it on unseen test data.
These are common steps involved in text classification and should be customized according to your needs. You could also google "profanity filter", which is usually built with an NLP-style classifier; that would help you a lot. A small baseline sketch follows.
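Once you have labelled some tweets, a quick sanity check before building a CNN could be a simple scikit-learn baseline (TF-IDF + logistic regression). This is a simpler alternative to the CNN described above, and the example tweets and labels below are made up:
# Sketch: baseline text classifier on a handful of labelled tweets.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tweets = ["We are hiring a data engineer in Berlin, apply now",
          "New paper shows progress in machine learning research",
          "Job opening: junior Python developer, remote",
          "Conference keynote on deep learning announced"]
labels = ["job", "news", "job", "news"]          # you must label these yourself

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(tweets, labels)
print(clf.predict(["Looking for a machine learning intern"]))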
H: What is the reason behind taking log transformation of few continuous variables?
I have been doing a classification problem and I have read many people's code and tutorials. One thing I've noticed is that many people take np.log (or the log) of continuous variables like loan_amount or applicant_income, etc.
I just want to understand the reason behind it. Does it help improve our model's prediction accuracy? Is it mandatory? Or is there any logic behind it?
Please provide some explanation if possible. Thank you.
AI: This is done when the variables span several orders of magnitude. Income is a typical example: its distribution is "power law", meaning that the vast majority of incomes are small and very few are big.
This type of "fat tailed" distribution is studied in logarithmic scale because of the mathematical properties of the logarithm:
$$log(x^n)= n log(x)$$
which implies
$$log(10^4) = 4 * log(10)$$
and
$$log(10^3) = 3 * log(10)$$
which transforms a huge difference $$ 10^4 - 10^3 $$ into a smaller one $$ 4 - 3 $$
Making the values comparable. |
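A tiny illustration of that compression effect (np.log1p is used here instead of np.log so that zero values are handled gracefully; that choice is an assumption, not part of the answer above):
import numpy as np

incomes = np.array([20_000, 45_000, 80_000, 1_000_000, 10_000_000])
print(np.log1p(incomes))   # roughly [ 9.9  10.7  11.3  13.8  16.1 ]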
H: Can I use xgboost on a dataset with 1000 rows for classification problem?
I have used all types of classification algorithms on my dataset, yet I couldn't improve my score no matter what I try.
So I've read about the XGBoost classifier, and I was wondering whether it is practical to use XGBoost on a dataset with around 1000 rows.
Please let me know.
AI: Yes, XGBoost is famous for having been demonstrated to attain very good results using small datasets often with less than 1000 instances.
Of course, when choosing a machine learning model to fit your data, the number of instances is important and is related to the number of model parameters you will need to fit. The greater the number of parameters in the model, the more data you will need to keep the variance of your final model under control (i.e. to avoid overfitting). If you do get good results using a complex model on very few instances then there is a high probability that you are overfitting. For example, 1000 instances is hardly enough to fit a deep neural network.
That being said, the distribution of your classes and the noise in the data is always going to be a limiting factor to how well any model you select will fit your data. |
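A minimal hedged sketch of fitting XGBoost on a dataset of that size; the synthetic data and the hyperparameters are placeholders, not recommendations:
# Sketch: XGBoost on roughly 1000 rows, kept deliberately small to limit overfitting.
from xgboost import XGBClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = XGBClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))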
H: how to create new columns in pandas using some rows of existing columns?
I have a dataset like this
My desired format is like this
I tried using index slicing, e.g.
dll.loc[:4,'category'] = "CAPITAL FUND"
dll.loc[5:10,'category'] = "BORROWING"
but this idea is risky, so is there a better way to solve this?
AI: Instead of iterating over each row and filling the gaps as required, I would suggest trying to do it via indexing. The solution is:
df['category'] = df.where(~df.id.isnull())['item'].ffill()
Here I break down my solution to help you understand why it works.
Imagine your dataframe is called df. I created a small version of yours as follows:
In [1]: import pandas as pd
In [2]: df = pd.DataFrame.from_dict(
{'id': [1, None, None, 2, None, None, 3, None, None],
'item': ['CAPITAL FUND', 'A', 'B', 'BORROWINGS', 'A', 'B', 'DEPOSITS', 'A', 'B']})
In [3]: df # see what it looks like
Out[3]:
id item
0 1.0 CAPITAL FUND
1 NaN A
2 NaN B
3 2.0 BORROWINGS
4 NaN A
5 NaN B
6 3.0 DEPOSITS
7 NaN A
8 NaN B
I get the dataframe back where the id column is not null (~ reverses the isnull()). On the resulting dataframe, I take only the item column (using [item]) and then fill the missing gaps, using the previous valid value in that column.
In [4]: df['category'] = df.where(~df.id.isnull())['item'].ffill()
In [5]: df
Out[5]:
id item category
0 1.0 CAPITAL FUND CAPITAL FUND
1 NaN A CAPITAL FUND
2 NaN B CAPITAL FUND
3 2.0 BORROWINGS BORROWINGS
4 NaN A BORROWINGS
5 NaN B BORROWINGS
6 3.0 DEPOSITS DEPOSITS
7 NaN A DEPOSITS
8 NaN B DEPOSITS
The trick is to understand this part: df.where(~df.id.isnull())['item']
It returns really the whole dataframe, with the values where ~df.id.isnull() is True. Then only the item dataframe. The result is this:
In [6]: df.where(~df.id.isnull())['item']
Out[6]:
0 CAPITAL FUND
1 NaN
2 NaN
3 BORROWINGS
4 NaN
5 NaN
6 DEPOSITS
7 NaN
8 NaN
Now it should be clear why the final .ffill() works as we would like. It forward fills the missing values, using the last known valid value. |
H: I want to create an additional feature(column) based on some manipulation of values from existing features
Consider my data-frame to be like this ('x','y','z' are features):
I want to create a python function which will take an expression as a string (something like this: 'x+y-2z') and create a new feature by evaluating the expression. Output should be like:
I want to generalize this function so that it will work for different data-frames with different column(feature) names in the expression.
Edit- I have a prototype of the desired function(named 'parser'):
def parser(exp):
df['new_col'] = df.apply(lambda row: row.x+row.y-2*row.z, axis=1)
However, I want to generalize this part - row.x+row.y-2*row.z so that it will adjust itself according to the string(i.e. expression) provided as its argument.
AI: Welcome to the community!
The code below is a starter. You can go on by naming the column and adding it to the original DataFrame:
import pandas as pd
data = pd.DataFrame({'x':[1,2,3], 'y':[10,20,30],'z':[100,200,300]})
print(data)
def my_fun(data,expression,variables):
for v in variables:
expression = expression.replace(v,'data.'+str(v))
return eval(expression)
my_fun(data,'2*x+y',['x','y'])
output
x y z
0 1 10 100
1 2 20 200
2 3 30 300
0 12
1 24
2 36
dtype: int64
There are two ways in general:
Use eval function to evaluate/execute expression in string form as I did.
Use symbolic libraries, most commonly used is SymPy, to use symbolics directly as variables.
Hope it helps. Good Luck! |
H: Need input on which features to drop in classification model
This is the correlation of the features with my target variable. I have done all the feature engineering but I am left with these features.
Any input on which columns to keep for model training and which to drop? Are there any criteria for dropping features that I don't need? It seems credit history is the only feature that has a high correlation.
Loan_ID 0.011610
Gender 0.017987
Married 0.091478
Education -0.085884
Self_Employed -0.003700
ApplicantIncome -0.004710
CoapplicantIncome -0.059187
LoanAmount -0.037318
Loan_Amount_Term -0.022549
Credit_History 0.561678
Total_Income -0.031271
Total_Income_Log 0.007240
LoanAmt_Log -0.037536
CH__0 -0.540556
CH__1 0.432616
EMI -0.011552
EMI_Log -0.028496
Dependents_1 -0.038740
Dependents_2 0.062384
Dependents_3 -0.026123
Property_Area_1 0.136540
Property_Area_2 -0.043621
Loan_Status 1.000000
Name: Loan_Status, dtype: float64
AI: Welcome to the community Sai!
Let's assume your problem is a regression problem (i.e. you have a continuous target).
There are some points:
First of all there is no written rule for this. Feature Engineering is an EDA kind of thing. There is no final solution. Your model selection strategy will choose some.
Just as a reminder, if these are Pearson correlations, be careful that nonlinear correlations might exist which is not captured by linear correlation analysis. Plus the fact that linear correlation analysis always comes with visual inspection.
Negative correlations also carry information. If whenever one variable increases the other decreases, then knowing one tells you about the other! So take them into account. You are better off using Mutual Information for checking dependencies.
Sparse Linear Models seem to be fruitful here. I suggest letting LASSO or Ridge Regression choose the final set of features (see the sketch after this list).
In case you really insist on your current way (e.g. if your supervisor asked for it), use the threshold as the hyperparameter of your model selection and find the optimal. It means, you train and validate your model using different threshold (which results in different sets of features) and according to empirical error (validation error) you choose the best threshold.
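A hedged sketch of point 4: since Loan_Status looks binary, this uses an L1-penalised logistic regression as the feature selector rather than plain LASSO regression, which is an adaptation, not the exact prescription above. The synthetic data stands in for your encoded loan features:
# Sketch: let an L1-penalised linear model pick the features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=600, n_features=20, n_informative=5, random_state=0)

selector = SelectFromModel(LogisticRegression(penalty='l1', solver='liblinear', C=0.1))
pipe = make_pipeline(StandardScaler(), selector)
pipe.fit(X, y)
print(selector.get_support())   # boolean mask of the kept features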
Hope it helped. Good Luck! |
H: How to normalize data of a different nature?
I am working on a price prediction LSTM model for the stock market. I am using multiple features: Open, Close, High, and I would like to add the Volume.
The first 3 features are of the same nature but the volume takes much higher values.
What would be the safest way to keep consistency during the normalization process?
AI: You would usually just scale all of them to be within the same range.
You can do this by using something like the Scikit-Learn MinMaxScaler, or just a simple function like this:
def scale(data, new_min=-1, new_max=1):
    """Scale values of data to be within the range [new_min, new_max].
    data must be a numpy array or a Pandas DataFrame/Series"""
    return (data - data.min()) / (data.max() - data.min()) * (new_max - new_min) + new_min
Between -1 and +1 is just nice, as the data is centered around zero. You could play with those values.
You can think about and perhaps experiment to see if you should scale all variables together (meaning one global min and max in the dataset), or whether to scale the individual columns/features of your dataset, so each one lies in the given range.
A tip for financial data is to use the log returns - that means to take your raw prices, compute the logarithm of those values, then take the difference between the closing prices of each day.
The reason for this is that the resulting values are approximately normally distributed, which is an underlying assumption of many models you will subsequently use (Boosting, ARIMA, GARCH for volatilities etc.). There are also other reasons of convenience - check out this article. A minimal sketch follows.
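A small pandas sketch of that tip; df and the 'Close' column name are assumptions about your data:
# Sketch: daily log returns from a closing-price column.
import numpy as np

log_returns = np.log(df['Close']).diff().dropna()   # df: your OHLCV DataFrame (assumed)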
H: Stacking LSTM layers
Can someone please tell me the difference between those stacked LSTM layers?
First image is given in this question and second image is given in this article. So far what I learned about stacking LSTM layers was based on the second image. When you build layers of LSTM where output of one layer (which is $h^{1}_{l}, l=..., t-1, t, t+1...$) becomes input of others, it is called stacking. In stacked LSTMs, each LSTM layer outputs a sequence of vectors which will be used as an input to a subsequent LSTM layer. However, in the first image, the input variables are fed again into second layer. Can someone tell me whether there is something wrong about stacking LSTM layers like given in the first image?
AI: You are correct that "stacking LSTMs" means to put layers on top of one-another as in your second image.
The first picture is a "bi-directional LSTM" (BiLSTM), whereby we can analyse a point in a series (e.g. a word in a sentence) from both sides. We care about the context of that point.
The most common example I know of is within NLP. Here we want to know the representation of a word in a gap, how it is found between other words. If we have the entire sentence, we can look at the words before and the words after our word. In this case, we could use a bi-drectional LSTM to process the sequence in the opposite direction, which your first diagram shows.
Let's play a game, and say you need to guess the missing word in this text snippet:
i need to review an __________ ...
What could it be? "article", "iPad", "aerial image" ?
Here is the solution:
i need to review an article, ...
It was incredibly hard to get that right - perhaps impossible! Well, maybe not if you have some context with it. How about I give you both sides of that snippet:
i need to review an ________, for tomorrow's newspaper.
A BiLSTM would be fed the sentence from both sides, thus letting it see some more context to understand each word.
Have a look at this article, which eventually get the bi-directional networks.
Here is a similar question to yours with a few nice answers.
In time-series data, such as device readings from IoT devices or the stock market, using such a bi-directional model wouldn't make sense, as we would be violating the temporaneous flow of information i.e. we cannot use information from the future to help predict the present. That isn't a problem in text analysis, voice-recordings or network analysis on sub-network traffic flow. |
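A hedged Keras sketch showing the two constructs side by side (the sizes are arbitrary): stacking requires return_sequences=True so each layer passes its full output sequence upward, while Bidirectional wraps a single LSTM so the sequence is read in both directions.
# Sketch: stacked LSTMs vs. a bidirectional LSTM.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Bidirectional, Dense

# Stacked: the first layer emits a full sequence that feeds the second layer.
stacked = Sequential([
    LSTM(32, return_sequences=True, input_shape=(20, 8)),
    LSTM(32),
    Dense(1)
])

# Bidirectional: one layer processes the sequence forwards and backwards.
bidir = Sequential([
    Bidirectional(LSTM(32), input_shape=(20, 8)),
    Dense(1)
])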
H: Multivariate VAR model: ValueError: x already contains a constant
I have already read this question and the associated answer.
I have removed any 'all zero' columns, as recommended in the answer. I have 3,169 columns remaining.
datavals_no_con = datavals.loc[:, (datavals != datavals.iloc[0]).any()]
I checked whether any were missed, for some bizarre reason:
varcon = np.asarray([np.var(datavals_no_con[datavals_no_con.columns[i]]) for i in range(len(datavals_no_con.columns))])
print np.where(varcon==0.) #empty array.
Also checked the minimum column variance value, which ended up being 4.306x10^(-7)
This was generated by a column that has no zero entries.
When I run this:
model = VAR(datavals_no_con)
results = model.fit(2)
I still get:
Traceback (most recent call last):
File "vector_autoregression.py", line 163, in <module>
results = model.fit(2)
File "/user/anaconda2/lib/python2.7/site-packages/statsmodels/tsa/vector_ar/var_model.py", line 438, in fit
return self._estimate_var(lags, trend=trend)
File "/user/anaconda2/lib/python2.7/site-packages/statsmodels/tsa/vector_ar/var_model.py", line 457, in _estimate_var
z = util.get_var_endog(y, lags, trend=trend, has_constant='raise')
File "/user/anaconda2/lib/python2.7/site-packages/statsmodels/tsa/vector_ar/util.py", line 32, in get_var_endog
has_constant=has_constant)
File "/user/anaconda2/lib/python2.7/site-packages/statsmodels/tsa/tsatools.py", line 102, in add_trend
raise ValueError("x already contains a constant")
ValueError: x already contains a constant
How can I resolve this?
EDIT: It occurred to me that the problem would be that x contains a constant, not that x contains all 0s. So the original answer suggested in the previous question was not entirely sufficient.
To test whether any of my columns contained 'all the same value' (e.g. a column of all 0.5), I tried this:
ptplist = []
for i in range(len(datavals_no_con.columns)):
ptplist.append(np.ptp(datavals_no_con[datavals_no_con.columns[i]], axis=0))
ptparray = np.asarray(ptplist)
print any(ptparray==0.) #FALSE
So none of my columns are constant, unless I'm still missing something.
EDIT 2: I have found the root cause of the problem.
Suppose my input matrix (that is, my set of endogenous variables) is a 5x5 identity matrix, for the sake of argument, and that my lag value is 2 (that is, I'm looking for an AR(2) model: y_{t+1} = A + B_1y_{t} + B_2y_{t-1} + error) :
y = np.eye(5)
1 0 0 0 0 (row 1)
0 1 0 0 0 (row 2)
0 0 1 0 0 (row 3)
0 0 0 1 0 (row 4)
0 0 0 0 1 (row 5)
In the get_var_endog function in /statsmodels/tsa/util.py, under lags=2, the y matrix gets rearranged to this general idea:
[row 2, row 1] (i.e. concatenate these two)
[row 3, row 2]
[row 4, row 3]
And this new matrix could have zero columns, in places where my original data matrix did not. In fact, this is exactly what was happening. Following my example, the np.array Z in get_endog_var looks like this:
0 1 0 0 0 1 0 0 0 0
0 0 1 0 0 0 1 0 0 0
0 0 0 1 0 0 0 1 0 0
So now columns 0, 4, 8, and 9 are completely 0, which throws the ValueError.
Two possible approaches to a solution come to mind:
1) Remove the zero columns from the Z matrix.
2) Edit the original data set such that these zero columns never occur in the first place (much harder, because then the Z matrix here would never have existed, so how can you know which columns to remove...catch 22).
I chose option 1, but now I'm dealing with shape issues down the line. Because, of course, when doing the least squares fit, the shape of the parameters is going to be different from the shape of the original data set (some columns don't exist in the parameters, because I removed them, that do exist in the original data set).
Now, this looks like it should be a relatively frequent problem. A lot of the time, we're working with high-dimensional sparse data, which would generate this issue.
Does anyone have a more robust solution than what I've proposed?
AI: It seems you may have things working, but maybe I can still help
Catching those pesky non-varying features
Assuming a is the output of your_dataframe.describe() (numerical columns only, features as columns), you can try the following, which lists the features of zero variance - i.e. those where the mean value equals the minimum value, so the value cannot vary:
a.T.where(a.T['mean'] == a.T['min']).dropna()
They check is slightly differently in the source code of the VAR model which raises your error:
result = (np.ptp(s) == 0.0 and np.any(s != 0.0))
So they check that the value doesn't have any peaks (ptp = "peak-to-peak"), so it doesn't vary - and also that the value is not equal to zero.
In any case, there is little tolerance for it. You have done right by trying to prune out features with low (near zero) variance.
Another possibility, which I didn't notice you having tried yet, would be to utilise the other arguments of the fit() method of the VAR model class. It has the following signature:
VAR.fit(maxlags=None, method='ols', ic=None, trend='c', verbose=False)
If you were to change the trend argument to 'nc', meaning no constant, you may also get around your error. Check out the documentation.
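For example (a minimal sketch reusing your variable name; note that in recent statsmodels versions this option is spelled 'n' instead of 'nc'):
from statsmodels.tsa.api import VAR
model = VAR(datavals_no_con)          # your endogenous DataFrame
results = model.fit(2, trend='nc')    # lag order 2, no constant term added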
Source code analysis
Looking through the contents of the file that raised your error, it seems it occurs while processing the "original input data" - as defined by the function add_trend (where your error was raised).
This suggests that your reasoning about the error being raised on the data after the lagged features are produced would be incorrect in this case (although it's a clever idea!)
Clearing my name...
You said:
... So the original answer suggested in the previous question was not entirely sufficient.
I would like to point out that my answer to your linked question was indeed sufficient, because I had written that there are constants - not that the values must all be equal to zero! :-P |
H: How to choose PCA or KernelPCA a priori?
I am learning about dimensionality reduction and I understood that one of the most used techniques in ML is PCA.
If I understood correctly, I use PCA whenever I want to reduce the number of features and the data has a mostly linear structure (i.e. the features are linearly correlated, so the data lies close to a linear subspace).
When the data structure is non-linear (the data lies on a curved manifold rather than a linear subspace), a nonlinear technique is required to reduce the dimensionality of a dataset and therefore I use KernelPCA.
Question: Supposing that what I just wrote is correct, how can I know in advance whether the data structure is linear or non-linear before I decide which technique to use? So far the only way I was able to "guess" is by plotting the features after a PCA and checking if I can separate the new features through straight lines/surfaces. If I am not able to do so, then I apply Kernel PCA. Is this approach even correct?
Note:
Feel free to modify my question where the "?" are :)
My features don't have labels. It's an unsupervised problem.
AI: Lots of dimensionality-reduction problems are trial and error at first.
Only very few datasets are linear "manifolds" that can be described with PCA. But it's a good start to see if there are disjoint sets, or if you can figure out some structure in 2D or 3D.
KPCA is a good technique, but there are many different kernels you can try. You may want to start with other "better" techniques (better as in more deterministic) like ISOMAP, Laplacian Eigenmaps or LLE.
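In practice you can simply project with both and inspect the results (a minimal sketch, assuming X is your unlabeled feature matrix):
from sklearn.decomposition import PCA, KernelPCA
X_pca = PCA(n_components=2).fit_transform(X)
X_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=0.1).fit_transform(X)
# scatter-plot both 2D projections and see which one reveals more structure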
One of the things to remember is that you need enough data to populate the embedded space. |
H: pandas: how to change the specific column as index and change index into various columns
Hi, I'm new to data science; I'm learning it from Coursera.
I have a pandas data frame as follows:
time value
A 9 5
A 8 4
A 7 3
B 9 3
B 8 2
B 7 1
C 9 3
C 8 2
C 7 1
I want to convert this to:
A B C
9 5 3 3
8 4 2 2
7 3 1 1
As I start to write query for this, it is getting complicated. Is there any easy way to do this? Thanks for the help.
AI: For me, when it comes to reshaping a dataframe (switching columns/indices/rows and such), it's fairly intuitive to use the pivot_table function.
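For example, first rebuild the frame from the question (a quick sketch):
import pandas as pd
my_df = pd.DataFrame({'time': [9, 8, 7, 9, 8, 7, 9, 8, 7],
                      'value': [5, 4, 3, 3, 2, 1, 3, 2, 1]},
                     index=list('AAABBBCCC'))
and then the one-liner below does the reshape: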
my_df.pivot_table(index='time', columns=my_df.index, values='value') |
H: Post training classifier configuration
I have a behaviours vector representing some identity. I need to binary classify [malicious or benign] each instance [ideally with a normalised severity score].
For that I can use a variety of linear classifiers/kernelized SVM/Random Forest etc...
The issue is that once the classifier has been trained I'd like to allow the user the ability to configure which behaviours are more (or less) critical.
For example, one behaviour might be encryption done by some process and a user fearing ransomware might want to make this behaviour more significant.
Given a linear classifier (which I'd like to avoid), simply multiplying the learned W by the given configuration would do the trick. What can be done in kernelized SVM/Random Forest/DNN etc. for an equivalent result?
AI: From the description of your problem, it sounds like applying a weight to the input prior to sending the data to the classifier might do the trick and would be easy to implement. The weight could be positive or negative. |
H: I am getting a Type Error in this Line
Diff = [i - j for i,j in zip(text_features, author_signature)]
Diff is a List ,
text_features = [1, 2, 3] ,
author_signature = [3, 2, 1]
AI: You must be getting a type error because the elements of text_features and/or author_signature are not able to be subtracted in the part i - j.
Depending on what they are, you could try converting them. For example, if the elements are numeric strings you could cast them to numbers first:
Diff = [float(i) - float(j) for i,j in zip(text_features, author_signature)] # cast both to numbers
(Converting the elements to strings would not help, since Python strings do not support subtraction.)
To give better advice, I'd need an example of the two lists that go into the zip()
EDIT
Based on the update in the question, I see that it works as expected for me:
In [1]: text_features = [1, 2, 3]
In [2]: author_signature = [3, 2, 1]
In [3]: Diff = [i - j for i,j in zip(text_features, author_signature)]
In [4]: Diff
Out[4]: [-2, 0, 2] |
H: What's a difference between the neoperceptron and CNN?
What's a difference (in terms of architecture) between the neoperceptron and CNN?
Both ANNs have hidden layers and scanners, as I understood, but many sources subdivide them in two classes.
AI: According to the research paper, neoperceptrons are a class of CNN that are not sensitive to rotations.
One of the issues with traditional kernels (that was the case before CNN and it is still true with them) is that the rotation of the input image would lead to different results, because the neurons in the dense layer would have different levels of activations.
With these new neurons, you don't get an issue with orientation. So in theory, if you have a gradient in your image, no matter what the orientation is, you would get the same value.
For a traditional CNN, you would get maximum activation with the original orientation, inverse with a 180° rotated image, and no activation with 90° or 270°. |
H: Is SVD non-linear while PCA (by eigendecompostion) is linear?
I am quite confused because a colleague of mine recently told me that he preferred using SVD instead of PCA (by eigendecomposition) because, contrary to the latter, the former is non-linear so it can identify also some non-linear patterns.
However, I cannot see exactly in what way SVD is non-linear since I have the impression that it simply applies a series of linear matrix multiplications (see also this StackExchange answer).
I know that t-SNE is certainly non-linear and for this reason it is sometimes called as non-linear PCA.
Is SVD non-linear while PCA (by eigendecompostion) is linear?
AI: To the best of my knowledge no.
SVD and PCA are both linear dimensionality reduction algorithms. Some nonlinear dimensionality reduction algorithms are e.g. LLE, Kernel-PCA, Isomap, etc.
About t-SNE I would like to add a point: it reduces the dimensionality (and does it pretty well!) but it is only for visualization and cannot be used in the learning process! So be careful putting all of these next to each other. In other words, they are all dimensionality reduction algorithms; however, PCA and SVD can be used for feature extraction while t-SNE cannot. All can be used for visualization purposes (in EDA).
I certainly recommend reading this answer. Probably the fact that "the square roots of the eigenvalues of $XX^⊤$ are the singular values of $X$" confused your friend into thinking it's a nonlinear method.
Hope it helps. Good Luck! |
H: Classification method when ideal conditions are known?
Dataset: Concrete measured on 8 sets of properties. ~4000 data points.
Known: under ideal condition, value of 8 properties for 10 different types of concrete.
The objective is to find, in 8-dimensional space, the 'type of concrete' to which a given data point is nearest.
I think the image explains the question well if my words are confusing. Black = ideal condition. Red = points whose category needs to be identified.
Clarification:
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=8)
knn.fit(X_train, y_train)
pred = knn.predict(X_test)
If I understand correctly, it would be acceptable that X_train is an array of shape 10 by 8 (one row per concrete type, one column per property) and y_train an array of length 10. Is this correct?
AI: Welcome to the community Martan!
If I understood your question well, you have a set of patterns (you call them ideal conditions) and a set of query points (samples), and you want to determine (predict) their labels according to their affinity/similarity/closeness to the patterns.
If that's right, then the K-Nearest Neighbor algorithm is what you are looking for. Please note that in high dimensions the Euclidean distance gets distorted; however, 8 dimensions is fine.
Hope it helps and in case I did not understand well, please comment here so I can update my answer. Good Luck!
Update
What you mentioned in your comment about training in KNN is right. Let me clarify the thing.
Classification as a Supervised Learning Process: This means that you already had some data and their classes, so you can partition your space according to all those labels. The image below shows the partitioning into different classes (I am not the best painter, especially in MS Paint :D). Having this, a new point falls into one of these partitions, so you can determine its label. Building such a partitioning is done in the training process, as you said.
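In scikit-learn terms, your setup boils down to something like this (a minimal sketch with hypothetical variable names; n_neighbors=1 is enough here because you have exactly one ideal pattern per concrete type):
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
X_train = np.array(ideal_patterns)   # shape (10, 8): one row per concrete type (placeholder name)
y_train = np.array(type_labels)      # shape (10,): the 10 type names (placeholder name)
knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
pred = knn.predict(X_query)          # X_query: the red points, shape (n_points, 8)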
Nearest Neighbor Search: But isn't this method pretty natural? You really do not need to know Machine Learning to perform such an algorithm, and indeed you use KNN in daily life (today you see your friend with some of his colleagues and a new person you don't know, and you instantly think he is probably a colleague; the other day you see your friend with his family and a guy you don't know, and you guess he is most probably a relative. That is conceptually KNN!). In your example you don't need to learn how to partition the space, as you already have your predefined labels with fixed positions (we do training to find this out; you already have it, so just go on!). Now you can just do a nearest neighbor search and say to which class a point belongs. |
H: Evaluation of linear regression model
I want to evaluate the performance of my linear regression model. I have the true values of y (y_true). I am thinking of two ways for evaluation but am not sure which one is correct.
Let's assume that we have 2 samples and each sample has two outputs as following:
y_true = [[0.5, 0.5],[0.6, 0.3]]
y_pred = [[0.3, 0.7],[0.9, 0.1]]
- Approach#1 :
One way is to calculate the sum of the absolute differences between the actual and predicted values for each sample vector and then average over all samples, as follows:
sum_diff_Vector(1) = abs( 0.5 - 0.3 ) + abs( 0.5 - 0.7 ) = 0.4
sum_diff_Vector(2) = abs( 0.6 - 0.9 ) + abs( 0.3 - 0.1 ) = 0.5
Then avg ( sum_diff_Vector(1) , sum_diff_Vector(2) ) = 0.45
- Approach#2 :
Another way to use the mean absolute error provided by sklearn.metrics in python. The thing with this metric, as opposed to the previous method, it calculates the mean absolute error for each output over all samples independently and then average all of them, as follows:
MAE_OUTPUT(1) = ( abs( 0.5 - 0.3 ) + abs( 0.6 - 0.9 ) ) / 2 = 0.25
MAE_OUTPUT(2) = ( abs( 0.5 - 0.7 ) + abs( 0.3 - 0.1 ) ) / 2 = 0.2
Then avg ( MAE_OUTPUT(1) , MAE_OUTPUT(2) ) = 0.225
Which way is correct and which should I use? Please advise.
AI: The only difference in your example is that the second approach divides by an additional two, because it takes the mean per vector instead of the sum. Correctness does not really come into play here: for comparison between different models the only difference is a constant factor, and for interpretability it depends on the problem you are solving.
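You can check this numerically (a small sketch):
import numpy as np
from sklearn.metrics import mean_absolute_error
y_true = np.array([[0.5, 0.5], [0.6, 0.3]])
y_pred = np.array([[0.3, 0.7], [0.9, 0.1]])
approach_1 = np.abs(y_true - y_pred).sum(axis=1).mean()   # 0.45
approach_2 = mean_absolute_error(y_true, y_pred)          # 0.225
# approach_1 == 2 * approach_2: they differ only by the constant number of outputs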
The mean absolute error punishes mistakes linearly while the mean squared error punishes larger mistakes more heavily. This means this depends a bit on what you want to measure, based on the problem you are solving. Next to proper evaluation you could use this same measure to change the KPI you are optimizing directly with a different loss function. |
H: Is PCA (by eigendecomposition) or SVD better in decorrelating the predictors of a machine learning model?
Is there any reason to think that SVD is better than PCA (by eigendecomposition) in decorrelating the predictors of a machine learning model?
AI: To the best of my knowledge, the answer to your question is no. Regarding finding the correlations of different variables, they work the same. They both capture linear associations and do not capture nonlinear ones. The difference between them is mostly about numerical computation which makes SVD more handy than traditional PCA. I recommend having a look at this answer and this explanation.
As a final remark, let’s discuss the numerical advantages of using SVD. A basic approach to actually calculating PCA on a computer would be to perform the eigenvalue decomposition of $X^TX$ directly. It turns out that doing so would introduce some potentially serious numerical issues that could be avoided by using SVD. |
H: Spectral clustering with heat kernel weight matrix
I am studying normalized graph cuts, and one of the ways to define the weight matrix is using the heat kernel, which is $W_{ij} = e^{-\frac{\|x_i - x_j\|^2}{\sigma^2}}$.
I want to ask: what's the meaning of sigma? Does it affect the partition of the data? How do we pick sigma? And what happens if it's too large or too small?
What happens to the Laplacian matrix L as $\sigma → 0$ and $\sigma → \infty$? What are the eigenvectors and eigenvalues of this matrix?
AI: $\sigma$ represents a typical distance between points. If all points of one cluster of your graph are separated from all points of another by a distance that is significantly higher than $\sigma$, then the spectral clustering will probably use this as a cut.
If you already know the number of clusters (or cuts) that you want to make, $\sigma$ does not need to be tuned very finely. But it becomes really important if you have no idea how many cuts you want to make.
A good approach is to make a parametric study over $\sigma$, and make the decision through the eigenvalues. Here's an example from one of my study cases, giving the eigenvalues (in ascending order) of the Laplacian matrix, for 4 different values of $\sigma$:
What you are looking for is a break in the increase of the eigenvalues: theoretically, a wide gap between two consecutive values ($n$ and $n+1$) should tell you that it is a good idea to make $n$ clusters.
As you can see, small sigma values (upper left plot) lead to very low eigenvalues, sometimes even numerically negative. Very small values tend to make very small clusters containing outliers, and one very big cluster with all other points. High sigma values (lower right plot) will make your matrix look like the identity, with high eigenvalues; this usually won't bring anything interesting.
What I usually do is try a wide window for sigma values, and shorten the range progressively, until I find a satisfying result. This is all graphical, and depends on the prior knowledge about your problem (how many clusters do you approximately expect? 2, 10, 100?).
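A simple way to run such a parametric study (a rough sketch using the unnormalized Laplacian; the normalized-cut variant follows the same pattern, and X is assumed to be your (n_points, n_features) array):
import numpy as np
from scipy.spatial.distance import cdist

def laplacian_eigenvalues(X, sigma):
    W = np.exp(-cdist(X, X, 'sqeuclidean') / sigma**2)   # heat-kernel weights
    L = np.diag(W.sum(axis=1)) - W                       # unnormalized graph Laplacian
    return np.sort(np.linalg.eigvalsh(L))

for sigma in [0.1, 0.5, 1.0, 5.0]:
    print(sigma, laplacian_eigenvalues(X, sigma)[:10])   # look for a gap among the smallest eigenvalues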
Eigenvalues represent the quality of the cut. Small values correspond to very distant clusters. For instance, in my upper right case, I could make 4 very well separated clusters (the first 4 eigenvalues are almost 0). The next cut (to reach 5 clusters) would be less effective than the 3 previous ones. 13 clusters could also have been a good try. But I actually selected 9 clusters, because it was closer to what I actually expected. |
H: Multi-Class Neural Networks | different features
This may be a wrong question or something so feel free to correct me :).
I have been studying neural networks for weeks now. I came across the multi-class classification model that uses neural networks.
As we see in this picture, the model allows you to classify your input into different classes. But this also assumes your inputs always use the same number of features, right? Meaning if I want to recognize handwritten digits, I should have images with the same dimensions (for example 24x24) so I can use the same number of features (in this case 24x24=576). But what if one class, for example the number 6, requires a different number of features (say the images of the handwritten digit 6 are 30x30 pixels)?
I know that the logical way to do this is to have two different neural networks, but is there any way to simultaneously train a multi-class classification model where inputs might use different features? What does research say about this?
PS: If you have good reading materials about this, please feel free to link them too.
Bests,
AI: It is essential for all input patterns to have the same number of features. The reason is that each input feature is connected to specific neurons, and those connections have specific trained weights. A typical solution is to resize your input while preserving its aspect ratio. Note that if you want good accuracy, you have to apply the same operation while training the network too. Even in convolutional networks it is essential to have the same input size: although the convolutional layers won't bother you, the fully connected layers attached to the flattened layer after the convolutional layers will have a problem if you don't use the correct input shape; the dimensions won't match.
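For images this usually just means bringing everything to a common size before it reaches the network (a sketch using PIL with a hypothetical file name):
import numpy as np
from PIL import Image
img = Image.open('digit_6.png').convert('L')   # hypothetical 30x30 grayscale image of a 6
img = img.resize((24, 24))                     # or pad first to preserve the aspect ratio
x = np.array(img).reshape(1, -1)               # 24*24 = 576 features, as in your example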
I have not seen recent studies about this, but a typical solution can be employing PCA and using, let's say, its top-10 components as the input of a fully connected network, although for input patterns with a huge difference in the number of input dimensions I guess it is not reasonable. |
H: How to calculate temporary/periodic similarities of an increasing series in real time?
Consider two series over time where new data is added to each series every n seconds. The series might have periodic similarity/dissimilarity between them. How can I calculate the correlation between the series values in real time?
AI: Well, this is an interesting question! Let's formulate it again to be precise in the answer:
I have two real-time streams of values. There are temporary similarities (transient similarities) between two. How to capture it?
As you are talking about similarity, I did not limit the scope to correlation. One may define another measure of similarity. I would say you should define your similarity function first (you may choose correlation. I do as well for my example) and a period of inspection (it's simply a window of length $m$ ending at current timestamp).
Using these two, and starting from the $(m+1)$-th sample, you can calculate your similarity measure (here correlation) as a time series. This output series starts after the $m$-th sample of the input series, so keep in mind to keep the similarity measure and the original timestamps synchronized. The interval between two consecutive points in the output (the similarity series) is defined by the resolution you need: after the $(m+1)$-th sample you can shift your window one step at a time and calculate the similarity accordingly, and then the intervals in the output will be the same as in the input.
Of course this takes you further from real time, as the frequency and number of calculations is at its maximum. You can define a margin for yourself (let's say $d$ samples) and calculate the similarity every $d$ samples. Then you lose some information (discarding the samples in between) but you get closer to real time (you will have $d \cdot n$ seconds to perform the computation).
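With pandas this sliding-window similarity is essentially one line (a sketch, assuming s1 and s2 are the two streams as Series indexed by timestamp, m is the window length and d the stride):
import pandas as pd
sim = s1.rolling(window=m).corr(s2)   # one correlation value per incoming sample, NaN for the first m-1
sim_coarse = sim.iloc[::d]            # keep only every d-th value to reduce the computational load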
Please let me know if it needs clarification. Welcome to the community BTW :) |
H: How to handle “not label Y” in a multi class machine learning problem?
I have a train data set that comprises information in the form:
feature 1, ..., feature N, label
1 x1, ..., xn, A
2 x1, ..., xn, B
3 x1, ..., xn, C
...
4 x1, ..., xn, not A
5 x1, ..., xn, not B
6 x1, ..., xn, D
7 x1, ..., xn, E
...
In the test dataset, the labels A, B, C, D, E should be predicted.
I would like to use the information in rows 4 and 5 in order to avoid overfitting.
Which techniques / tools can I use to utilize the "not A" and "not B" information in rows like 4 and 5?
AI: I think you should start by looking into Multi-label Classification, as your problem seems to be a subset of MLC, where one example can have multiple correct labels. If one class is "Pictures with people in them" and another is "Pictures with guitars in them", one picture could certainly belong to both - this is an example of multi-label classification.
In your case, you could formulate your problem by considering that most examples have only one appropriate label, but 'Not A' has four: 'B', 'C', 'D', and 'E'. (If you have more specific information about the possible distributions of the correct label, you could use that information here too)
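One crude way to set this up (a rough sketch with hypothetical variable names, using a one-vs-rest wrapper that accepts a multi-label indicator matrix):
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Columns correspond to classes A, B, C, D, E
# a row labelled 'A'      -> [1, 0, 0, 0, 0]
# a row labelled 'not A'  -> [0, 1, 1, 1, 1]   # every label except A is treated as possible
Y = np.array(indicator_rows)                    # shape (n_samples, 5), built as above (placeholder name)
clf = OneVsRestClassifier(LogisticRegression()).fit(X, Y)
At prediction time you can simply take the class with the highest score, since your test labels are single classes.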
There are several methods for MLC, not all of which I'm familiar with, but I know plenty of research has been done on this using the CelebA image dataset, for example (it comes along with a number of potential label vectors).
Good Luck, and welcome to the site! |
H: how to do Time Based splitting of Amazon fine food reviews dataset
I want to do time-based splitting on the Amazon fine food reviews dataset (https://www.kaggle.com/snap/amazon-fine-food-reviews). But I don't understand the time format, and also how I can divide the data after it is sorted according to time.
AI: Those dates are Unix timestamps. Try to convert them using:
from datetime import datetime
datetime.fromtimestamp(1540574790)
where "1540574790" is a moment ago for example.. for more, check https://en.wikipedia.org/wiki/Unix_time |
H: Can training examples with almost the same features but different output cause machine learning classification algorithms to perform poorly?
We usually filter out features (columns) that have low correlation or no significant impact on the target variable. How would an algorithm perform when trained on a high-dimensional data set (let's say, more than a thousand features) containing rows that are highly correlated with one another but have different target values? Wouldn't it confuse the ML algorithm in a classification task?
Let me give a simple example to explain what I mean. Assume we are given the price of a car and the task is to classify it as one of 'Cheap Car', 'Budget Car', 'Luxury Car', and 'Elite Car'. Further assume the distance between two rows of different classes is generally expected to be greater than 1000. For example, if a row describes a car with price 1000, the next higher level of car in our classification is expected to cost at least 2000. What if there is some anomaly in the data set, like a car with price 1000 classified as 'Cheap' whereas a car with price 1050 is classified as 'Elite'? That is grossly wrong. We eliminate irrelevant features; shouldn't there be something to eliminate confusing training examples?
AI: The answer is yes, highly similar instances in your dataset that have different target classes will cause your model to perform poorly.
The reason for this is at the core of how all classification machine learning algorithms work. The goal of a classifier is to find a function which can separate the classes. Thus, if two classes are very mixed, then the probability of making a classification error increases and you will lose precision in your resulting classification.
One method to correct this problem is to add more features to your dataset. You should try to find features which will distance the distributions of these two classes. For example if classifying cats and dogs, it would not be a good idea to use features such as: number of legs, number of eyes, etc. This will cause the classes to be indistinguishable. Try adding features such as: weight, frequency of cry, etc. This can be difficult to do often as collecting additional data is expensive. You can also try to transform your data to a new feature space. A transformation mapping your features to a different space can cause their distributions to distance themselves. |
H: What are the techniques for anomaly detection of Unsupervised learning problem
I have sufficient and properly formatted data in millions without labels.
I have to find out the anomalies.
I have heard that Isolation Forest and Mahalanobis distance can identify anomalies in unsupervised learning.
Are these ok to try?
Are their any other techniques we can try?
Thanks
AI: You can try these techniques and many more - see All anomaly detection techniques.
As discussed in the article, these are outlier detection techniques. Are you looking for outliers? If you can, it's better to get some known abnormalities and build a classification model.
If a supervised approach is not possible, try to fit one of these approaches:
ABOD for identifying abnormalities in high dimensional data
Should clustering be based on distance or density to find outliers(abnormalities)
Connectivity based outlier detection technique
There are other techniques like PCA-based, regression-based, auto-encoders, kNN, weighted kNN and even self-organizing maps (SOM). Let me know if you need some more information.
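For example, Isolation Forest (which you mentioned) is only a few lines with scikit-learn (a sketch, assuming X is your unlabeled numeric data and roughly 1% anomalies):
from sklearn.ensemble import IsolationForest
iso = IsolationForest(n_estimators=100, contamination=0.01)
iso.fit(X)
labels = iso.predict(X)               # -1 = anomaly, 1 = normal
scores = iso.decision_function(X)     # lower = more anomalous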
Important: know your abnormalities well before jumping to machine learning. I have experienced that even a Q-Q plot, or just flagging data points more than 3 standard deviations away, might give better anomaly detection. |
H: How to derive the sum-of squares error function formula?
I'm attending a Machine Learning course and I'm studying linear models for classification right now. Slides present approaches to learn linear discriminants (Least squares, Fisher's linear discriminant, Perceptron and SVM), more specifically, how to compute the weight matrix $\tilde{\textbf{W}}$ to determine the discriminant function:
\begin{equation}y = \tilde{\textbf{W}}^T \tilde{\textbf{x}} + w_0. \end{equation}
My problem is about least squares: I don't understand how the minimization of sum-of-squares error function:
\begin{equation}E(\tilde{\textbf{W}}) = \frac{1}{2} Tr\Bigl\{(\tilde{\textbf{X}}\tilde{\textbf{W}} - \textbf{T})^T(\tilde{\textbf{X}}\tilde{\textbf{W}} - \textbf{T})\Bigr\} \end{equation} (where $Tr$ is the trace).
is derived and how it is possible to reach the closed formula solution:
\begin{equation}\tilde{\textbf{W}} = (\tilde{\textbf{X}}^T\tilde{\textbf{X}})^{-1}\tilde{\textbf{X}}^T\textbf{T}\end{equation}
Can someone explain me the main steps in the simplest and clearest possible way to make sense of these formulas? I'm a beginner.
P.S. These formulas come from C.Bishop. Pattern Recognition and Machine Learning.
AI: There are two interpretations of this formula; I will explain one of them. (Regarding the trace form itself: setting the gradient $\nabla_{\tilde{\textbf{W}}} E = \tilde{\textbf{X}}^T(\tilde{\textbf{X}}\tilde{\textbf{W}} - \textbf{T}) = 0$ gives exactly the normal equations that the steps below solve.)
\begin{equation} Xw = y \end{equation}
\begin{equation} X^tXw = X^ty \end{equation}
The step above makes sure that you get a square matrix that has an inverse. It is possible that $X^tX$ does not have an inverse, but the chance of that in linear regression problems is low. The reason is that you have a matrix $X$ which belongs to $R^{m \times n}$, where $m$ represents the number of samples and $n$ represents the number of features. Usually, the number of samples is much larger than the number of features. Next,
\begin{equation}(X^tX)^{-1}(X^tX)w = (X^tX)^{-1}X^ty\end{equation}
\begin{equation}w = (X^tX)^{-1}X^ty\end{equation}
Consequently, you have found a closed form for the $w$ linear regression problem which can be generalised to non-linear regression.
Be aware that $(X^tX)^{-1}X^t$ is called the pseudo-inverse of the matrix $X$. The reason is that $X$ is not a square matrix and does not have a regular inverse, but by the mentioned formula you can find its pseudo-inverse. Just multiply it by $X$ (from the left) and you will get the identity matrix.
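You can verify the closed form numerically against a standard least-squares solver (a small sketch):
import numpy as np
X = np.random.randn(100, 4)                      # design matrix (samples x features)
T = np.random.randn(100, 2)                      # targets (here with 2 outputs, as in the multi-output case)
W_closed = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(T)
W_lstsq = np.linalg.lstsq(X, T, rcond=None)[0]
print(np.allclose(W_closed, W_lstsq))            # True, up to numerical precision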
There is another interpretation of this. You can find it here. |
H: What exactly does the model generation mean in this diagram?
I've been trying to grasp a research paper on image colorization using neural networks (here), and I am stuck at this diagram. What I need help with is the Model Generation step after Feature Extraction. What exactly do we do in this step?
AI: A model is a simplified representation of a complex real world object/phenomena. As a simple example, I see a mountain and because I cannot study the exact shape of the mountain I represent it by a model that it is a triangle. Representing the mountain by a triangle is a model. Someone else comes and makes a better model by saying it is a cone. Every model captures some features of the real world phenomena. A model is better if it is able to capture the true features of the object/phenomena.
From a machine learning perspective, this corresponds to exploring the dataset and then thinking about the dataset and creating a model of the dataset. For instance, if I have the famous Titanic dataset, I would be thinking how different features affect the probability of survival. One "model" will be to think that different attributes independently affect the chances of a person's survival. I would be wanting to use a naive bayes classifier to learn the parameters of this model and see how it performs. A different model would be to assume that different attributes probably follow a hierarchy. Maybe, first comes gender. Then among those of the same gender, the deciding factor is probably class and so on. To learn the parameters of this model I would want to train the data with a decision tree classifier or a random forest classifier. This is precisely the process of model generation. It involves thinking about the data by combining your insights from the exploratory data analysis and the domain knowledge you have. You then make certain assumptions. And then test your model. |
H: Do we need to use off-policy methods for policy shaping?
Let's say that there is a reinforcement learning task and an agent in an environment. I want a human teacher to manually modify the policy of the agent (policy shaping) to speed up the agent's learning. Do I have to use off-policy methods or can I get away with on-policy? Why?
AI: You can use Policy Gradients for this which is on-policy. One of the most straightforward ways to do that is to provide the teacher/expert's probabilities over actions at each time-step. Then you can use the usual policy gradient loss plus the cross entropy between expert's actions distribution and agent's action distribution:
$$\mathcal{L}(\boldsymbol{\theta}) = (R_t - V_{\boldsymbol{\theta}}(\boldsymbol{s}_t)) \log \pi_{\boldsymbol{\theta}}(a_t|\boldsymbol{s}_t) + \lambda H(\pi_{expert},\pi_{\boldsymbol{\theta}})$$
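Spelled out for a single time-step with discrete actions, this looks roughly like the following (a numpy sketch with placeholder variable names; the exact signs depend on whether your optimizer minimizes or maximizes):
import numpy as np
# agent_probs, expert_probs: arrays of action probabilities; action: the sampled action index
advantage = R_t - V_s_t                                    # R_t - V(s_t), both placeholders
pg_term = advantage * np.log(agent_probs[action])          # policy-gradient term
ce_term = -np.sum(expert_probs * np.log(agent_probs))      # cross entropy H(pi_expert, pi_theta)
objective = pg_term + lam * ce_term                        # lam is the lambda weight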
You could take a look at this paper for basic ways to affect on-policy learning in PG methods.
As for why this works, I will give you a descriptive explanation: cross-entropy is related to the KL divergence, which is a "measure" of the distance between two distributions. With the extra term in the loss function we are trying to minimize the distance between the expert's distribution and the agent's distribution (i.e. making the agent's one more similar to the expert's one). |
H: Multi-label classification model in python?
Assume you have the following artificial dataset
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))
df['sex'] = [np.random.choice(['male', 'female']) for x in range(len(df))]
df['weight'] = [np.random.choice(['underweight',
'normal', 'overweight', 'obese']) for x in range(len(df)) ]
This produce the following artificial dataset
df.head()
Out[50]:
A B C D sex weight
0 0.955136 0.802256 0.317182 -0.708615 female normal
1 0.463615 -0.860053 -0.136408 -0.892888 male obese
2 -0.855532 -0.181905 -1.175605 1.396793 female overweight
3 -1.236216 -1.329982 0.531241 2.064822 male underweight
4 -0.970420 -0.481791 -0.995313 0.672131 male obese
I am trying to predict both the sex and the weight based on the values of the features A, B, C, D. I learned that this is a multi-label classification problem and there is a nice python library that should help (e.g. scikit-multilearn). However, I do not know how this is achieved. Can someone show me how I could train a model and test its accuracy on this artificial dataset? Specifically:
1. Assuming that A, B, C, D are the features and sex and weight are the labels, how do I create test and training sets? The following will not work:
X=df[list('ABCD')]
y=[['sex','weight']]
X_train, X_test, y_train, y_test = train_test_split(X, y)
How do you train and test the model?
AI: First of all you would need to encode your target columns. We can use sklearn.preprocessing.MultiLabelBinarizer here:
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
X=df[list('ABCD')]
Y=pd.DataFrame(mlb.fit_transform(df[['sex','weight']].values), columns=mlb.classes_, index=df.index)
yields:
In [75]: Y
Out[75]:
female male normal obese overweight underweight
0 1 0 0 1 0 0
1 1 0 0 0 0 1
2 1 0 1 0 0 0
3 1 0 1 0 0 0
4 1 0 0 0 0 1
5 1 0 1 0 0 0
6 0 1 0 0 0 1
7 1 0 1 0 0 0
8 0 1 0 0 0 1
9 0 1 0 0 0 1
.. ... ... ... ... ... ...
90 1 0 0 1 0 0
91 0 1 0 0 0 1
92 1 0 0 0 0 1
93 1 0 0 0 1 0
94 1 0 1 0 0 0
95 1 0 0 0 0 1
96 0 1 0 0 0 1
97 0 1 0 0 1 0
98 1 0 0 0 1 0
99 1 0 1 0 0 0
[100 rows x 6 columns]
now we can use one of the classifiers that support multi-label classification (see Support multilabel:)
Example:
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
knc = KNeighborsClassifier()
X_train, X_test, Y_train, Y_test = train_test_split(X, Y)
knc.fit(X_train, Y_train)
Y_pred = knc.predict(X_test) |
H: how to convert multiple columns into single columns in pandas?
I have a dataframe like this:
My desired format is like this:
How can I do this?
AI: Let's say you have the following data:
import pandas as pd
import numpy as np
df = pd.DataFrame({'values': ['1', '2', '3', '4', '5', '6'],
'month1': ['January', 'March', np.nan, np.nan, np.nan, np.nan],
'month2': [np.nan, np.nan, 'February', 'April', np.nan, np.nan],
'month3': [np.nan, np.nan, np.nan, np.nan, 'May', 'October']})
values month1 month2 month3
0 1 January NaN NaN
1 2 March NaN NaN
2 3 NaN February NaN
3 4 NaN April NaN
4 5 NaN NaN May
5 6 NaN NaN October
You can use the following solution:
df['month']=np.nan
for month in [col for col in df.columns if 'month' in col and col != 'month']:
    df['month'].fillna(df[month], inplace=True)
It first creates an empty column named "month" with NaN values, and you fill the NaN with the values from the "monthX" columns, concretely it gives you:
values month1 month2 month3 month
0 1 January NaN NaN January
1 2 March NaN NaN March
2 3 NaN February NaN February
3 4 NaN April NaN April
4 5 NaN NaN May May
5 6 NaN NaN October October |
H: How to compute the maximum likelihood hypothesis?
The Bayes theorem states that:
\begin{equation}
P(h|D) = \frac{P(D|h)P(h)}{P(D)}
\end{equation}
where $D$ is the dataset and $h$ is a hypothesis from the hypothesis space $H$. Now (I'm not sure, so if I'm wrong please correct me) I can consider:
$P(h|D)$ = the probability $h$ has generated the dataset $D$. More specifically, for each $h$ we have a probability that it has generated the dataset $D$.
$P(D|h)$ = the probability that $D$ has been generated by $h$. More specifically, for each possible dataset $D$, a certain hypothesis $h$ (that we have) can have generated it.
And I can represent them visually, for example:
Now, if we know the prior probability $P(h)$ then we can compute the maximum a posteriori hypothesis with the following formula:
\begin{equation}
h_{MAP} = argmax_{h \in H} P(h|D) = argmax_{h \in H} \frac{P(D|h)P(h)}{P(D)}
\end{equation}
Otherwise, we can consider the maximum likelihood hypothesis:
\begin{equation}
h_{ML} = argmax_{h \in H} P(D|h)
\end{equation}
At this step I don't understand how I compute $h_{ML}$ because if I consider $P(D|h)$ represented as in the previous example in the cartesian space we have $D$ in the x-axis, so if I consider the $argmax P(D|h)$ I will find the best $D$ and not the best hypothesis $h$.
What am I doing wrong? Are probabilities $P(h|D)$ and $P(D|h)$ not well interpreted in the cartesian space?
AI: We can define a function $f(D,h) = P(D|h)$, that is, a function of both $h$ and $D$, and we want to maximize it over $h$.
In the event that $D$ is already fixed to be $\hat{D}$, then the goal should be to maximize the function $$g(h) = f(\hat{D}, h) = P(\hat{D}|h).$$
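As a concrete illustration (a sketch with a coin-flip hypothesis space, where $h$ is the coin's bias and $\hat{D}$ is a fixed dataset of 7 heads in 10 flips):
import numpy as np
heads, flips = 7, 10
h_grid = np.linspace(0.01, 0.99, 99)
likelihood = h_grid**heads * (1 - h_grid)**(flips - heads)   # g(h) = P(D_hat | h) as a function of h
h_ml = h_grid[np.argmax(likelihood)]                         # approximately 0.7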
We can plot the function $g$, here $g$ is a function of $h$ rather than $D$. |
H: How many Hidden Layers and Neurons should I use in an RNN?
I am very new to neural networks and machine learning and I have been making a Bitcoin price predictor to learn it. I was wondering about the number of hidden layers I'd need in a recurrent neural net using LSTM cells.
I have 60 inputs for 30 previous days' close prices in 12-hour intervals and require 1 output for the future 12 hours.
I am doing this with Keras in python 3.6.
Any help would be awesome!
AI: The number of layers is a hyperparameter. It should be optimized based on a train-test split. You can also start with the number of layers from a popular network. Look at kaggle.com and see how many layers people use in competitions. |
H: How to tell if the “clusters” I see in my pair plots are statistically significant or occurring by random chance?
I have a data set with one row per subject. Some variables include laboratory parameters for blood chemistry, hematology, etc. I also have some flag variables: any = 1 if the subject experienced an adverse event, 0 if not; and ser_flag = 1 if the subject experienced a serious adverse event, 0 if not.
There doesn't seem to be any difference in the distribution of laboratory parameters between subjects who experienced an adverse event (any=1) and subjects who did not (any = 0). When I do a pair plot of all the lab parameters against each other and color by the any flag, there doesn't seem to be any clustering or separation of subjects.
However, when I do the same pair plot and color by ser_flag - I notice that the 20 subjects who experienced a serious adverse events seem to be clustered together in many of the plots.
What test (if any) can I use to determine if these clusters I think I am seeing are occurring randomly, by chance... or if they are statistically significant?
AI: Regarding clustering in machine learning, there is the Hopkin's statistic which basically compares your data to randomly generated data and returns a score of how likely your data shows tendencies of clusters.
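A rough implementation is short enough to try directly (a sketch, assuming X is a numeric array of your lab parameters, scaled so that Euclidean distance is meaningful):
import numpy as np
from sklearn.neighbors import NearestNeighbors

def hopkins(X, m=None, seed=0):
    # Hopkins statistic: ~0.5 suggests uniformly random data, values near 1 suggest clustering
    rng = np.random.RandomState(seed)
    n, d = X.shape
    m = m or max(1, n // 10)
    sample = X[rng.choice(n, m, replace=False)]
    uniform = rng.uniform(X.min(axis=0), X.max(axis=0), size=(m, d))
    nn = NearestNeighbors(n_neighbors=2).fit(X)
    u = nn.kneighbors(uniform, n_neighbors=1)[0].sum()       # random points -> nearest real point
    w = nn.kneighbors(sample, n_neighbors=2)[0][:, 1].sum()  # real points -> nearest other real point
    return u / (u + w)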
However, you need to be able to define a distance function between two data points which I don't know is possible in your scenario. |
H: How to train f(x)=x*x using Artificial neural network?
let's take some training data of size 100
x_input = [1,2,3,4,.....,100]
y_label = [1,4,9,16,....,10000]
Now, let's consider that we don't know the function f where f(x_input) = x_input^2
How should we train it?
AI: The answer to your question lies here.
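As a quick illustration that a plain feed-forward network can learn this mapping (a sketch using scikit-learn; scaling both inputs and targets matters here because y spans 1 to 10000):
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

x = np.arange(1, 101, dtype=float).reshape(-1, 1)
y = (x ** 2).ravel()
xs = StandardScaler().fit_transform(x)
ys = y / y.max()
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000)
model.fit(xs, ys)
print(model.predict(xs[:5]) * y.max())   # should be roughly 1, 4, 9, 16, 25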
Personally, since I can see the high autocorrelation of your input signal x (obviously x(t+1) = x(t) + 1), you could treat it as a time series and use an LSTM-RNN to model the nonlinear function y. For simplicity, you could always treat your data as i.i.d. and, under this assumption, use an MLP. |
H: Linear Regression in python with multiple outputs
I have a time series dataset which is represented as follows:
x=[
[12.19047619, 18.28571429, 6.0952381 ] ,
[ 80.98765432, 14.17283951, 11.13580247 ] ,
[ 50.82644628, 16.26446281, 9.14876033 ] , .... ]
and the values to be predicted are:
Y = [13.9, 18, 14.987]
How can I use LASSO and SVR regression models in python to predict Y (which is represented as a vector as shown in the above example)?
AI: Both Lasso and SVR are available in the sklearn library. Lasso: sklearn.linear_model.Lasso. SVR: sklearn.svm.SVR
An example from Lasso page:
>>> from sklearn import linear_model
>>> clf = linear_model.Lasso(alpha=0.1)
>>> clf.fit([[0,0], [1, 1], [2, 2]], [0, 1, 2])
Lasso(alpha=0.1, copy_X=True, fit_intercept=True, max_iter=1000, normalize=False, positive=False, precompute=False, random_state=None, selection='cyclic', tol=0.0001, warm_start=False)
>>> print(clf.coef_)
[0.85 0. ]
>>> print(clf.intercept_)
0.15...
In your case clf.fit looks like this:
clf.fit(X, Y)
X should be the size (nn,n)
Y should be the size nn
Where nn is the number of observations (points) and n is the number of variables. So rows in X are observations and columns are different variables.
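Applied to your example data (a sketch - I'm assuming each row of x is one observation and the corresponding entry of Y is its single target value):
from sklearn.linear_model import Lasso
from sklearn.svm import SVR
X = [[12.19, 18.29, 6.10], [80.99, 14.17, 11.14], [50.83, 16.26, 9.15]]
Y = [13.9, 18.0, 14.987]
lasso = Lasso(alpha=0.1).fit(X, Y)
svr = SVR(kernel='linear').fit(X, Y)
print(lasso.predict(X), svr.predict(X))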
If you have more variables than observations then you should read this post about the problems you can have with it and how to solve them. |
H: Folds in Cross validation
I am performing 10-fold cross-validation to evaluate the performance of a series of models (variable selection + regression) with R. I created the folds manually with this code.
At the moment I'm performing first variable selection, then hyperparameters tuning through cv, and finally testing the performance with RMSE and MAE for all the models, but I have a doubt.
Is it correct to "use" the same fold for all the models? Or should I do a separate cv for each model?
AI: I recommend trying both (more than once), and exploring any differences. In my experience, using the same set of folds for all models or using a new set of folds for each model doesn't make any material difference. Post if you find different!
Regarding "I'm performing first variable selection, then hyperparameters tuning through cv", maybe watch https://www.youtube.com/watch?reload=9&v=S06JpVoNaA0 to be sure you are not introducing any bias. |
H: Is it better to use a MinMax or a Log Return normalization to predict stock price movements?
I am trying to use a LSTM model to predict d+2 and d+3 closing prices. I am not sure whether I should normalize the data
with a MinMax scaler (-1,+1)
using the log return
(P(n)-P(0))/P(0) for each sample
I have tried quite a lot of source code from Github and they don't seem to converge on any technique.
AI: Log returns are symmetric compared to percentage change. log(a/b) = - log(b/a) and this (less skewness), in theory, leads to better results for most models (linear regression, neural networks).
Neural networks like lstm work better if the values are close to zero, but the difference in normalizations is usually not that big.
Any returns (log or percentage) are better than raw values because prices change according to previous prices. Their absolute (raw) values have almost negligible influence compared to previous price.
I would recommend first converting to log returns and then normalizing. If it is daily prices then I would divide the log returns by something like 0.05. Price changes have a very heavy-tailed distribution, so I would not suggest using min-max scaling, because then you divide by something like 0.5 (an extreme that probably dates back to the Great Depression) and get all values too close to zero. Dividing by the standard deviation should also be good.
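For example (a sketch, assuming prices is a 1-D numpy array of closing prices):
import numpy as np
log_returns = np.diff(np.log(prices))
scaled = log_returns / 0.05            # or: log_returns / log_returns.std()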
But reality is different than theory, so it is better to benchmark. Maybe percentage changes are better because this is the number people see and react to. And markets are a lot about psychology.
And be prepared to see very high errors and bad models. Financial markets are badly predictable both in practice and theory. According to economic theory if they were predictable and people are rational and have unlimited credit lines then any possibility of earning additional money compared to the whole market will be closed in milliseconds. Only if you find some way to analyze data that noone is currently using only then will you be able to earn money. Neural networks were discussed in 1990s to predict financial markets. So LSTM is not really new in 2018. |
H: How to save a Numpy array output of an autoencoder as an image
I have a 256*256*3 numpy array "SP" out of an autoencoder decoder layer which I want to save and open as an .jpg image. I used something like the following python code snippets:
img = Image.fromarray(SP, 'RGB')
img.save('my.jpg')
img.show()
However I have noticed the array "img" is 256*256 in dimension and the image is just noise. What is the right way to display the image? I have attached the array as an output.npy file: ---> https://ufile.io/410iu
AI: That is right. The size you see is the frame size of the image, i.e. height and width. It does not refer to the full dimensionality of a color image array. To see it, try this:
rgb = np.zeros((255, 255, 3), dtype=np.uint8)
img = Image.fromarray(rgb, 'RGB')
r = img.getchannel("R")
g = img.getchannel("G")
b = img.getchannel("B")
print(np.array(r.getdata()))
print(np.array(g.getdata()))
print(np.array(b.getdata()))
where output is
[ 0 1 2 ... 252 253 254]
[55 55 55 ... 55 55 55]
[ 1 0 255 ... 5 4 3]
So you have 3 dimensions (or color channels). And the point about the noise is the dtype=np.uint8. Convert your array to this and it will work.
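Concretely, reusing the imports above (an assumption: if SP holds floats in [0, 1], rescale before casting; if it already holds 0-255 values, just cast):
img = Image.fromarray((SP * 255).astype(np.uint8), 'RGB')
img.save('my.jpg')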
You could also simply try
from scipy.misc import imsave
imsave("file_name.jpg", SP)
It does the job.
Good Luck! |
H: Is it correct to use non-target values of test set to engineer new features for train set?
Suppose I have a dataset with a feature_1 value and a target value. Now I want to engineer a new feature by creating a relative value, i.e. subtracting the mean from each value.
Question: Can I (1) use the feature_1 values of the test set to calculate the mean, or (2) should I use only the train set values?
If (1) is correct than I can use the same mean for test set and train set by calculating the mean of feature_1 for all dataset. I'm not sure it's legal, because here we use information from the test set in the train set. On the other hand, we don't use target value, so it might be ok.
If (2) is correct, then, I suppose, we can't use test to calculate the mean for train set, but we can use train set feature_1 values to calculate mean for test set. But then train and test sets' means might be different and influence the correctness of the model for test set. I could use the mean of train set for test set, but again I'm not sure it's correct.
It might be irrelevant for a large dataset, because the influence of each value on the mean is negligible, so I suppose (1) is ok here. But suppose I have a very small dataset of, say, less than 30 samples, or I want to generate a new feature by calculating the relative value of feature_1 for each category of some categorical feature_2 (i.e. using the mean over all samples belonging to the same category). Then it might turn out that there are just a few samples in some category of feature_2, so each sample would influence the mean greatly.
AI: Definitely (2). You should not include the testing values when calculating features over the data (or normalizing or scaling the data, etc).
Let's take a step back for a second. Why are we holding out a test set at all? Well, we're training a model on a dataset so that in the future, when we collect more data from the same domain, we can use our model to predict the target for that new data. We don't just want to do as well as possible on the data that we have, we already have the labels for it. So we hold out a test set to measure the performance on, so that we can get a sense of how well the model's predictions generalize to new data.
If we give the model information about the test set (which putting the test samples into a mean calculation would do), we'd improve the model's performance against that particular test set -- but presumably not against any new data that was collected later, when the model was actually deployed and in use! All we'd be doing is making our performance metrics less effective measures of whether or not the model generalizes.
So even while the performance would appear better, it would both possibly not reflect an actual improvement (depending on the actual distributions of train, test, and post-deployment collected data), and would harm our ability to say for sure whether our choices in model design improved our generalization performance at all.
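In code this simply means computing the statistic on the training split only and reusing it everywhere (a sketch with pandas-style frames X_train / X_test):
train_mean = X_train['feature_1'].mean()
X_train['feature_1_rel'] = X_train['feature_1'] - train_mean
X_test['feature_1_rel'] = X_test['feature_1'] - train_mean   # same mean, so no information leaks from the test set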
I actually talked about this a little bit with respect to feature scaling in this answer. |
H: Timeseries of odds in race - how to pick a model
Being new to AI/ML I'd like some pointers to where to begin.
I got data from horse races. Specifically, I got the odds
for each runner during the race - ca 5 times per second.
t1 r1 r2 r3 ...
1 5.25 2.04 3.25
2 5.10 2.50 2.75
...
I also know if the runner won/placed/lost
My goal is to be able to say that runner X will win/place after ca. 50-75% of the expected race time with, say, 80% accuracy.
My problem is that I don't know how to model this situation.
I've seen tournament strategies - i.e. which of two runners will win - but here there's more data, both in time and in participants.
What model should I pay attention to?
/Björn
AI: Odds are directly connected to prediction accuracy. Odds of 4 mean that you lose your money in 4 cases and win in one case. This is a 20% probability $\left(\frac1{4+1}\right)$. If the bet goes down to 0.25 then the probability is 80% $\left(\frac1{0.25+1}\right)$. This logic holds if the spread between betting for the horse and against the horse is low. Betting is a financial market where people with the best betting algorithms bet the most and effectively decide what the odds should be. If there is a situation on the market where someone could earn money, it will be closed in milliseconds by trading robots. This is called the Efficient Market Hypothesis. In your data you can actually check if the hypothesis is true for horse betting.
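For instance, the odds-to-probability mapping used above is just (a small sketch):
def implied_probability(odds):
    return 1.0 / (odds + 1.0)
print(implied_probability(4.0), implied_probability(0.25))   # 0.2 and 0.8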
If you would like to use ML algorithms to try to predict the outcomes better:
There are several ways to interpret the data.
You predict the probability that each horse is a winner.
You predict a rank of each horse.
You predict time difference between horses and the winner
You predict time
Some algorithms could be better at predicting one problem or the other.
Possible algorithms:
Linear regression. I always start with this because it is easy to calculate and easy to interpret. You build separate models for each horse number. Can be used for all problem definition.
Logistic regression. Similar to regression, but can only be used for the first problem definition.
LGBoost and XGBoost. Most likely the best models for such data, because the data is structured (not an image, sound). Can be used for all problem definitions.
Neural networks. For example MLP (multilayer perceptron).
You can also calculate some additional features. For example, multiply some odds, or calculate inverse normal distribution from the odds.
I would also recommend to collect more data. For example weather data. Then it is good to know the horse. Some horses could be more popular than others and receive better odds. Also time of the year because the horses could have a training plan according to the time of the year. |
H: How do I use a model after it's fitted to predict the class of a single string?
After a model is built, how can I use it to predict the class of a single string?
model.predict() is returning something like [[0.41100174 0.5889983 ]] instead of its predicted class (0 or 1).
Say I just built model like so:
hist = model.fit(data.x_train,
data.y_train,
validation_data=(data.x_test, data.y_test),
epochs=500,
batch_size=50,
shuffle=False,
verbose=2,
callbacks=[checkpoint, estopping, tensorboard])
I'm looking to predict a string's class using model.predict(), but it returns something like [[0.41100174 0.5889983 ]] instead of it's predicted class (0 or 1).
The shape of data.x_test (used for validation data) is the same shape as data.x_data (reformatted string to predict): (1, 250, 70) (except the number of rows, obviously)
Here's how I'm trying to use the model to predict the class of a string.
def predict_string(model,s):
df = pd.DataFrame([s], columns=['text'])
df = df.reset_index(drop=True)
df['label'] = [0]
df.label = pd.to_numeric(df.label, errors='coerce') # Convert to integer
df = df.dropna()
df = df[df.label.apply(lambda x: x !="")]
df = df[df.text.apply(lambda x: x !="")]
vocab_len = 70
data = char_preproc(df.text, df.label, vocab_len, True, None)
y_pred = model.predict(data.x_data)
return y_pred
s = "Best movie ever" # Out: [[0.41100174 0.5889983 ]]
# s = "Worst movie ever" # Out: [[0.5436389 0.45636114]]
y_pred = predict_string(model, s)
print("Review: {}\"\nPredict: {}".format(s, y_pred))
I'm not sure it matters, but for testing, I'm classifying movie reviews as good (1) or bad (0) using a Character-level CNN trained on the Rotten Tomatoes Movie Review dataset, running on GPU via Google Colab.
AI: [[0.41100174 0.5889983 ]] means that the probability of class 0 is 0.411 and the probability of class 1 is 0.589. Since the probability of class 1 is greater than the probability of class 0, the input belongs to class 1.
import numpy as np
a = np.array([[0.41100174, 0.5889983]])
np.argmax(a)          # for a batch of rows, use np.argmax(a, axis=1)
Output : 1
np.argmax will get you the class. |
H: .h5 file size is same before and after training?
learner = ConvLearner.pretrained(arch, md, ps=0.5) #dropout 50%
learner.load('ResNet34_256_1-2')
learner.fit(lr,1)
learner.save('ResNet34_256_1')
h5 file in load and save is having same size. Should it increase after training?
How do I know that the saved model is better than the loaded one?
AI: This is not surprising.
The .h5 file stores the model's weights. The number of weights does not change during training (their values are modified, though); therefore, your file should have the same size.
To know if your model after training is better, you have to measure it: make predictions on a test set and evaluate them with the usual metrics like accuracy or F-score and so on. Look at this to see the usual metrics used in machine learning. |
H: TF-IDF Features vs Embedding Layer
Have you guys tried to compare the performance of TF-IDF features* with a shallow neural network classifier vs a deep neural network models like an RNN that has an embedding layer with word embedding as weights next to the input layer? I tried this on a couple of tweet datasets and got surprising results: f1 score of~65% for the TF-IDF vs ~45% for the RNN. I tried the setup embedding layer + shallow fully connected layer vs TF-IDF + fully connected layer but got almost same result difference. Can you guys give some opinion on how TF-IDF features can outperform the embedding layer of a deep NN? Is this case common? Thanks!
I've used unigrams and bigrams to produce the TF-IDF features
AI: It is common for TFIDF to be a strong model. People constantly get high places in Kaggle competitions with TFIDF models. Here is a link to the winning solution that used TFIDF as one of its features (1st place Otto product classification). You will most likely get a stronger model if you combine the TFIDF and RNN into one ensemble. Other results from Kaggle:
2nd place: https://www.kaggle.com/c/stumbleupon/discussion/6184
4th place: https://www.kaggle.com/c/avito-demand-prediction/discussion/59881
3rd place: https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/52762
https://www.kaggle.com/c/avito-demand-prediction/discussion/56897:
A good number of kernels are going the traditional route with
CountVectorizer/TF-IDF, and some brave souls (I say brave because
training is slower and the results don't seem as spectacular so far)
have been experimenting with embeddings, as per the previous
competitions. |
H: How can I combine two single-column datasets into a single Pandas data frame?
I'd like to import the Rotten Tomatoes Movie Review dataset into a single data frame. How can I combine two datasets that are 1-column strings into a text, label shape?
Here's where I'm at so far (you can duplicate in Google Colab) :
import os
import pandas as pd
# Reset
!rm -rf "rt-polarity.csv"
def fetch_rt_polarity_data():
# Fetch Data
if not os.path.isfile("rt-polaritydata.tar.gz"):
!wget -q http://www.cs.cornell.edu/people/pabo/movie-review-data/rt-polaritydata.tar.gz
!tar -xzf rt-polaritydata.tar.gz
!mv rt-polaritydata/rt-polarity.pos rt-polarity.pos
!mv rt-polaritydata/rt-polarity.neg rt-polarity.neg
!rm -rf rt-*
# Format Data
df_pos = pd.read_csv("rt-polarity.pos", encoding='latin-1', sep="\t", lineterminator="\n")
df_pos = df_pos.reset_index(drop=True)
df_pos.columns = ['text']
df_pos['label'] = 1
df_neg = pd.read_csv("rt-polarity.neg", encoding='latin-1', sep="\t", lineterminator="\n")
df_neg = df_neg.reset_index(drop=True)
df_neg.columns = ['text']
df_neg['label'] = 0
# Combine
df = pd.concat([df_pos, df_neg], ignore_index=True)
df.head(10)
df.to_csv("rt-polarity.csv")
df = pd.read_csv("rt-polarity.csv", encoding='latin-1', sep="\t", lineterminator="\n")
return df
df = fetch_rt_polarity_data();
df.head(5)
AI: I would import the datasets in pandas separately, mold them as you please, and then you can use the pd.concat function. This will assume that the instances are aligned by the automatically assigned index in pandas. If there is more data in one list than the other, the missing values will be NaN.
df1 = pd.DataFrame(data=[1,2,3])
df2 = pd.DataFrame(data=['a','b','c','d'])
dfs = pd.concat([df1, df2], axis=1)
If you have an index to link the text to the labels, then you can use the pd.merge function.
df1 = pd.DataFrame(data=[1,2,3])
df2 = pd.DataFrame(data=['a','b','c','d'])
dfs = df1.merge(df2, left_index=True, right_index=True)  # merge on the index; 'on' would require an actual column name
H: Convolution and Cross Correlation in CNN
What would be the intuition behind using the convolution and cross-correlation operations inside Convolutional Neural Networks? I am interested in putting together the theory from Digital Image Processing, where these two operations are defined, and CNNs. Could anyone help me connect the dots?
AI: A fully connected neural network feeds every input into every neuron, and each neuron produces an output through an activation function. When applying this approach to images, we can quickly see that it is not ideal: a pixel is far more strongly related to its neighboring pixels than to pixels on the other side of the image.
The convolution operation in deep learning was introduced for exactly this purpose. It is better to focus on the neighborhood of an input before considering its correlation with distant pixels, so instead we apply a mask (a kernel) that mixes each pixel with its neighborhood.
Before we get into the theory, it is important to note that although the operation in CNNs is called a convolution, it is actually cross-correlation. This is a technicality: in a CNN we do not flip the filter, as is required in a true convolution. Except for this flip, however, the two operations are identical.
Discrete convolutions
From the wikipedia page the convolution is described as
$(f * g)[n] = \sum_{m=-\infty}^{\infty} f[m]\,g[n-m]$
For example, assume $a$ is the function $f$ and $b$ is the convolution kernel $g$.
To solve this using the equation, we first reverse (flip) the function $b$, because of the $-m$ that appears in the equation. Then we calculate the summation for each value of $n$. As $n$ changes, the original function does not move, while the flipped kernel is shifted accordingly. Starting at $n=0$,
$c[0] = \sum_m a[m]b[0-m] = 0 * 0.25 + 0 * 0.5 + 1 * 1 + 0.5 * 0 + 1 * 0 + 1 * 0 = 1$
$c[1] = \sum_m a[m]b[1-m] = 0 * 0.25 + 1 * 0.5 + 0.5 * 1 + 1 * 0 + 1 * 0 = 1$
$c[2] = \sum_m a[m]b[2-m] = 1 * 0.25 + 0.5 * 0.5 + 1 * 1 + 1 * 0 + 1 * 0 = 1.5$
$c[3] = \sum_m a[m]b[3-m] = 1 * 0 + 0.5 * 0.25 + 1 * 0.5 + 1 * 1 = 1.625$
$c[4] = \sum_m a[m]b[4-m] = 1 * 0 + 0.5 * 0 + 1 * 0.25 + 1 * 0.5 + 0 * 1 = 0.75$
$c[5] = \sum_m a[m]b[5-m] = 1 * 0 + 0.5 * 0 + 1 * 0 + 1 * 0.25 + 0 * 0.5 + 0 * 1 = 0.25$
As you can see, that is exactly what we get in the plot of $c[n]$: we slid the flipped function $b[n]$ across the function $a[n]$.
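Since the plots from the original post are not reproduced here, a quick sketch with hypothetical sequences (any $a$ and $b$ will do) shows the same mechanics and lets NumPy check a hand computation:

import numpy as np

# Hypothetical sequences -- not the ones from the plots above
a = np.array([0.0, 0.0, 1.0, 0.5, 1.0, 1.0])
b = np.array([1.0, 0.5, 0.25])

# Full discrete convolution: c[n] = sum_m a[m] * b[n - m]
c = np.convolve(a, b, mode="full")

# Manual computation of one output sample as a sanity check, e.g. n = 2
n = 2
manual = sum(a[m] * b[n - m] for m in range(len(a)) if 0 <= n - m < len(b))
assert np.isclose(c[n], manual)
print(c)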
2D Discrete Convolution
For example, if we have the matrix in green
with the convolution filter
Then the resulting operation is an element-wise multiplication and addition of the terms, as shown below. Much like the Wikipedia page shows, this kernel (orange matrix) $g$ is shifted across the entire function (green matrix) $f$.
You will notice that there is no flip of the kernel $g$, unlike in the explicit computation of the convolution above. This is a matter of notation: strictly speaking the operation should be called cross-correlation, since it is not a true convolution. Computationally, however, this difference does not affect the performance of the algorithm, because the kernel is trained so that its weights are best suited for the operation; adding the flip would simply make the algorithm learn the weights in different cells of the kernel to accommodate it. So we can omit the flip.
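A small sketch with SciPy (illustrative values only, not taken from the figures above) makes the relationship concrete: cross-correlation with a kernel equals convolution with the same kernel flipped in both directions, which is why CNN frameworks can skip the flip:

import numpy as np
from scipy.signal import convolve2d, correlate2d

image = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]], dtype=float)
kernel = np.array([[1, 0],
                   [0, -1]], dtype=float)

# Cross-correlation (what a CNN layer actually computes)
xcorr = correlate2d(image, kernel, mode="valid")

# True convolution with the kernel flipped both horizontally and vertically
conv_flipped = convolve2d(image, np.flip(kernel), mode="valid")

assert np.allclose(xcorr, conv_flipped)
print(xcorr)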
H: No accuracy in Keras RNN Model with Bitcoin Data
I am very new to machine learning and have made an RNN-LSTM model that shows no accuracy. My data has been normalized with MinMaxScaler from sklearn and has an input shape of (3, 2)...
My normalization steps:
import time
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

def get_data(currency):
url=f'https://coinmarketcap.com/currencies/{currency}/historical-data/?start=20130428&end={time.strftime("%Y%m%d")}'
data=pd.read_html(url, flavor='html5lib')[0]
data=data.assign(Date=pd.to_datetime(data['Date']))
data['Volume']=(pd.to_numeric(data['Volume'], errors='coerce').fillna(0))
data.columns=[col.lower() for col in data.columns]
data.columns=[col.strip('*') for col in data.columns]
return data
df=get_data('bitcoin')
df=df.sort_values(by='date')
def split_data (data, trainsize):
return np.array(data[:int(trainsize*len(data))]), np.array(data[int(trainsize*len(data)):])
scaler=MinMaxScaler(feature_range=(0,1))
def create_inputs(data, window):
inputs=[]
for i in range(len(data)-window):
inputs.append(data[i:(i + window)].values)
close,volume=[],[]
for x in range(len(inputs)):
close.append(inputs[x][:,0])
volume.append(inputs[x][:,1])
close=np.array(close)
close=scaler.fit_transform(close)
volume=np.array(volume)
volume=scaler.fit_transform(volume)
inputs=[]
for i in range(len(close)):
rows=[]
for x in range(len(close[i])):
row=[close[i][x], volume[i][x]]
rows.append(row)
inputs.append([rows])
inputs=np.vstack(inputs)
return inputs
def create_outputs(data, window):
return scaler.fit_transform(data['close'][window:].values.reshape(-1,1))
# VARIABLES
df=df.filter(['date', 'close', 'volume'], axis=1)
df=df.sort_values(by='date')
df[df.columns] = df[df.columns].apply(pd.to_numeric, errors='coerce')
train,test=split_data(df,0.8)
train=pd.DataFrame(train, columns=df.columns)
test=pd.DataFrame(test, columns=df.columns)
train=train.drop('date',1)
test=test.drop('date',1)
xtrain,ytrain=create_inputs(train, 3), create_outputs(train, 3)
xtest,ytest=create_inputs(test, 3), create_outputs(test, 3)
Here is part of my training data (1607, 3, 2) fetched from CoinMarketCap's Bitcoin History after scaling:
[[[0.01363717 0. ]
[0.01577874 0. ]
[0.01463021 0. ]]
[[0.01577874 0. ]
[0.01463021 0. ]
[0.01006721 0. ]]
[[0.01463021 0. ]
[0.01006721 0. ]
[0.00762504 0. ]]...]
My model has 3 layers with 1024 LSTM cells each, plus a Dense layer with 1 neuron:
from tensorflow import keras  # assuming tf.keras; adjust if standalone Keras is used

model = keras.models.Sequential()
model.add(keras.layers.CuDNNLSTM(1024, input_shape=(3,2), return_sequences=True, name='input'))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.CuDNNLSTM(1024, return_sequences=True, name='lstm1'))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.CuDNNLSTM(1024, name='lstm2'))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(1, activation='tanh', name='output'))
# Compile model
model.compile(
loss='mse',
optimizer='adam',
metrics=['accuracy'],
)
history=model.fit(xtrain, ytrain, batch_size=64, epochs=1000, validation_data=(xtest, ytest), verbose=1)
AI: Remember, accuracy is a classification metric; it is used to evaluate models that predict membership in one of a few discrete classes. No accuracy in this case means that you haven't predicted any value exactly correctly, which is a pretty common occurrence in regression problems, that is, problems that measure the scale of a phenomenon or "how much" of something happens.
A better metric for evaluating your regression model would be MSE, the same quantity you are already using as the loss function.
Try:
# Compile model
model.compile(
loss='mse',
optimizer='adam',
metrics=['mse'],
) |
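If you want to report the error on the original price scale, one possible follow-up (a sketch, assuming the model, xtest, ytest, and the fitted scaler from the question are available) is to invert the scaling before computing the error:

import numpy as np

# Evaluate on the test set in original price units
pred_scaled = model.predict(xtest)
pred = scaler.inverse_transform(pred_scaled)
actual = scaler.inverse_transform(ytest)

rmse = np.sqrt(np.mean((pred - actual) ** 2))
print("Test RMSE in price units:", rmse)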
H: Can continuous variables decrease classification model accuracy?
I have been playing around with the Titanic dataset. Here the Fare column is a continuous variable. I've read people stating that in a classification model it is better to have categorical variables than continuous features. So I was wondering: if I convert a continuous feature such as Age into categorical bins, will it improve my model accuracy?
AI: There's no concrete property of classifiers in general that lend themselves to using categorical variables rather than continuous variables.
You can definitely try binning the variables, but many classifiers (particularly, tree-based classifiers) will implicitly bin the variables optimally within the algorithm itself.
If you add the continuous variable and the performance of your classifier doesn't improve, it's very possible that the variable is not predictive of the target, or that all of the information (in the information-theory sense) the variable carries is already expressed by other variables in the model.
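If you do want to try it, a minimal sketch of binning with pandas (assuming the Titanic data is loaded in a DataFrame df with 'Age' and 'Fare' columns) looks like this; you can then compare model accuracy with and without the binned columns:

import pandas as pd

# Fixed-width bins for Age
df["AgeBin"] = pd.cut(df["Age"], bins=[0, 12, 18, 35, 60, 120],
                      labels=["child", "teen", "young_adult", "adult", "senior"])

# Quantile-based bins for Fare (roughly equal-sized groups)
df["FareBin"] = pd.qcut(df["Fare"], q=4, labels=False)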
H: Example data source for educational use
I'm doing project on subject of affinity analysis for my statistical class in college.
In order to complete it, I have to acquire a sales database with at least 200-300 records, each containing a list of products bought by a single client.
Are there any example sales databases available for free to use for such educational purposes?
AI: The description of the needed data is quite general, but I have found this dataset that describes sales during Black Friday; it is possible to identify products bought by the same customer using the customer_id column.
Also, this dataset describes transactions from a bakery. The products that were bought in the same transaction (and so by the same customer) are identifiable as they share the same #Transaction value.
Generally, below are two sources where you can find free datasets that can be used for training/educational purposes:
Kaggle: https://www.kaggle.com/datasets (both datasets that I suggested were found here)
EU Open Data Portal: https://data.europa.eu/euodp/en/data |
H: how to take CSV file input in list of tuples
I have a .txt file (data.txt) containing CSV data like:
X Class
15.0001 Yes
18.00 NO
17.07 Yes
I need to make a function that returns a list of tuples, one per sample. So far I have:
import csv
def readAllData(str):
with open(str,'r') as f:
f.readline()
data=[tuple(line) for line in csv.reader(f)]
return (data)
Output:
[('15.001\tYES',),
('18.00\tNo',),
('17.07\tYes',),]
I want:
[(15.001, Yes), (18.00, No), (17.07, Yes)]
AI: Don't bring everything into memory at once; that won't work for many cases (e.g. large files).
CSVs are comma-separated files and TSVs are tab (\t) separated; they fall into the same category, and the csv module handles both through the delimiter argument.
tqdm is optional and only used for the progress bar.
import csv
from tqdm import tqdm_notebook  # optional, only for the progress bar

# Stream the tab-separated input and write it out as a comma-separated file
with open('sample.txt', 'r', newline='') as tsvin, open('new.csv', 'w', newline='') as csvout:
    reader = csv.reader(tsvin, delimiter='\t')
    writer = csv.writer(csvout)
    for row in tqdm_notebook(reader):
        writer.writerow(row)
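For the exact output the question asks for, a minimal sketch (assuming the file is tab-separated with a header row, as the question's output suggests) could stream the rows and build the tuples directly:

import csv

def read_samples(path):
    samples = []
    with open(path, 'r', newline='') as f:
        reader = csv.reader(f, delimiter='\t')
        next(reader)  # skip the header row (X, Class)
        for row in reader:
            if not row:
                continue  # skip blank lines
            x, label = row[0], row[1]
            samples.append((float(x), label))
    return samples

print(read_samples('data.txt'))
# -> [(15.0001, 'Yes'), (18.0, 'NO'), (17.07, 'Yes')]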