H: How to tell a boosting model that 2 features are related and should not be interpreted stand-alone? I am using XGBoost for a machine learning model that learns from tabular data. XGBoost uses boosting method on decision trees. When I look at the decision-making logic of decision trees, I notice the logic is based on 1 feature at one time. In real life, certain multiple features are related to each other. Currently, when I feed data to the model, I simply feed all the features to it without telling the model how certain features are related to each other. Let me describe a hypothetical example to be clearer. Suppose I have 2 features - gender and length of hair. In this hypothetical problem, I know from my domain knowledge that if gender is female, length of hair matters in determining the outcome. If gender is male, length of hair is irrelevant. How do I tell the machine learning model this valuable piece of information so that the model can learn better? I am using XGBoost on python 3.7 AI: I am going to talk about some ways you could do it later but first I want to talk about whether you should! If the relation that you describe exists XGB will be able to learn and detect it! There is no real benefit in "hard-coding" a rule into the algorithm, it won't speed up the training, it won't benefit accuracy, etc. Simply put the benefit of ML algorithms is that they are able to detect exactly these relationships and model them in the best possible way. Now if you still insist that this is something that must be done, you can. The easiest way to achieve this would be feature engineering: Introduce NAs -simply leave out hair length of male respondents and fill with NA Create interaction factors - instead of having hair length and gender as a simple variables you could also code it in a way that represents the known interaction like this: gender_hair = [male, female_short,female_medium,female_long] # example factor levels But again if you compare models with those engineered features to a simpler model you will see no benefit I'd wager.
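For concreteness, here is a minimal pandas sketch of the two feature-engineering options mentioned above (the column names and values are hypothetical, mirroring the gender/hair-length example):

    import pandas as pd

    # hypothetical toy data matching the example in the question
    df = pd.DataFrame({
        "gender": ["male", "female", "female", "female"],
        "hair_length": ["short", "short", "medium", "long"],
    })

    # option 1: blank out hair length for males (introduce NAs)
    df["hair_length_f"] = df["hair_length"].where(df["gender"] == "female")

    # option 2: a single interaction factor such as male / female_short / female_long ...
    df["gender_hair"] = df.apply(
        lambda r: "male" if r["gender"] == "male" else f"female_{r['hair_length']}",
        axis=1,
    )

    # XGBoost needs numeric input, so the factor would still be one-hot encoded, e.g.:
    X = pd.get_dummies(df[["gender_hair"]])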
H: CNN always predicts either 0 or 1 for binary classification I am using a Kaggle dataset on stress characteristics, derived from ECG signals, and I would like to train a CNN to recognize stress/non-stress situations. I have built a model in Keras:

    model = tf.keras.models.Sequential([
        tf.keras.layers.Conv2D(U1, (3, 1), activation = 'relu', input_shape = (num_features, 1, 1)),
        tf.keras.layers.Conv2D(U2, (3, 1), activation = 'relu'),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(U3, activation = 'relu'),
        tf.keras.layers.Dense(1, activation = 'sigmoid')
    ])

where U1, U2 and U3 are the parameters I have been changing to find the right combination to ensure the best performance. Specifically, what I did is: divide the samples into a training and a test set (I don't have a validation set as the number of available samples is small); normalize both sets by dividing them by the maximum value found in the samples; train the network with various combinations of U1, U2 and U3 to find the ones ensuring the best performance. The training is done as follows:

    model.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
    history = model.fit(x_train, y_train, epochs = 50, batch_size = 160, validation_data = (x_test, y_test))
    score = model.evaluate(x_test, y_test, batch_size=128)

The network performs really well on both the training and test set, achieving an accuracy of 99.3% on the test set (and 99.5% on the training set). However, when applied to real data (by taking someone's ECG, computing the features and normalizing them by the same normalization value used on the training and test sets above), the network always predicts: a label of 0.0 for "normal" ECGs; a label of 1.0 for noisy ECGs (which are taken as stressed ECGs). It bugs me that no labels other than exactly 0.0 or 1.0 are ever returned. It is true that the network sometimes predicts exactly 0.0 or exactly 1.0 for samples in the training and validation set too, but never seeing anything in between on real-world data seems strange. Is this a problem with the network? Or with the dataset? Could it be related to the fact that the real-world ECG data are not drawn from the same distribution as the Kaggle dataset? I do see, for instance, that the real-world values differ substantially from the Kaggle data (so much that even after normalization the values are not really normalized), but I don't know if that is enough to explain the problem. AI: Sigmoid functions can saturate. The values the sigmoid receives are probably too far from zero, so it returns 'extreme' results (i.e. 0 or 1). I suggest you keep your signal zero-centered by putting BatchNormalization() layers between your layers (certainly between the Dense() ones, but also between the conv layers if you prefer). Additionally, a single sigmoid output is often not the best-performing configuration. The most common configuration for classifiers is to use one output node for each target category (in this case, 2) with softmax activation; your loss would then be categorical cross-entropy, which requires one-hot encoding of your target variable. I would also try other regularization tricks, such as dropout and L1/L2 regularization, or different initializations. There is no rule of thumb here; you have to play with these hyperparameters and figure out the best configuration for your task.
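As a rough illustration of the BatchNormalization and softmax suggestions, here is a sketch only — U1, U2, U3 and num_features are placeholders taken from the question, not tuned values:

    import tensorflow as tf

    U1, U2, U3, num_features = 32, 64, 64, 128  # hypothetical values

    model = tf.keras.models.Sequential([
        tf.keras.layers.Conv2D(U1, (3, 1), activation='relu', input_shape=(num_features, 1, 1)),
        tf.keras.layers.BatchNormalization(),   # keep activations zero-centered
        tf.keras.layers.Conv2D(U2, (3, 1), activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(U3, activation='relu'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(2, activation='softmax')  # one output node per class
    ])
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    # with softmax + categorical_crossentropy the labels must be one-hot encoded, e.g.
    # y_onehot = tf.keras.utils.to_categorical(y, num_classes=2)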
H: What is seed in Keras? I have seen a 'seed' parameter in many Keras code examples. Here is one:

    validation_generator = data_generator.flow_from_directory(
        train_data_dir,
        target_size = (img_width, img_height),
        batch_size = batch_size,
        shuffle = True,
        class_mode = 'categorical',
        seed = 42,
        subset = 'validation')

Can you explain what it is? And when to use it? AI: Most statistical models (regression, neural networks, etc.) involve randomness somewhere in their optimization process, e.g. random weight initialization or random shuffling of the data. You need a seed to reproduce your results. A seed is a fixed value used to initialize the pseudo-random number generator, so that the "random" numbers it produces are identical on every run. If you run your model several times without fixing the seed, you will get slightly different estimates, because the starting values of the optimization differ slightly every time. EDIT Some additional information on the optimization process when dealing with neural networks. What you optimize in a neural network are the weights (connections) between the neurons. Connections with strong predictive character become somewhat stronger (more weight) relative to connections with less predictive power. This happens when you "train" on your data. However, to train your network you need some initial weights, which are normally assigned at random (there are different approaches, but this is the most common one). As said above, the initial weights have an impact on the optimization and the results. So that other users can check your model and reproduce your results, you need to be able to regenerate the same initial weights — hence the seed.
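For completeness, a minimal sketch of fixing the relevant seeds in a Keras/TensorFlow workflow (the exact call is tf.random.set_seed in TF 2.x; older versions used tf.set_random_seed):

    import random
    import numpy as np
    import tensorflow as tf

    SEED = 42
    random.seed(SEED)          # Python's own RNG
    np.random.seed(SEED)       # NumPy (used for weight init and shuffling)
    tf.random.set_seed(SEED)   # TensorFlow graph-level seed

    # the generator-level seed (seed=42 above) controls how flow_from_directory
    # shuffles and augments images, so the train/validation split is the same on every run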
H: Would keeping all punctuation make any sense in word2vec? I am trying to learn how word2vec works to get to more complicated stuff like LSTMs. Because I will use the same training data (so with the same vocabulary) and I want to predict punctuation too, I decided to keep it. Punctuation is obviously limited to .!,? — anything else is discarded (the sentence itself, to avoid loss of meaning). Also, every symbol is converted into a string-like representation to avoid any kind of problems with encoding. Now, does it make sense to keep punctuation? Should I discard it completely (and if I do, how can I generate punctuation for a seq2seq model later?)? AI: You are correct to keep punctuation if you want to be able to predict it. Tokenization of your input should work for any character, be it a letter or punctuation. In fact there have already been examples of people modelling mathematics using Word2Vec and then generating very realistic-looking maths via $\LaTeX$! Look at the subsection called Algebraic Geometry in Karpathy's now famous blog post. There is a good note on the matter here, with a specific example given for seq2seq learning (basically translation within the realm of NLP). Be sure to read the comments on the accepted answer there. To answer your final question: I don't think it would be possible to use your generated model to place the required punctuation back into the output, as something like an LSTM would not have a representation for, say, a comma if it has never seen one.
H: Which tool should I use for combining this large dataset? This is my first foray into data science and I've hit a snag before even getting to the analysis. I have 40 CSV files; each one contains 2 columns - a time and a value column. I would like to consolidate this into one table by doing an outer join on the time column for all files, so that the final file would contain 1 time column and 40 value columns. I attempted this using pandas' merge method, but my local machine ran out of memory before it could complete. I made sure there was nothing fundamentally wrong with the code by combining only 8 of the 40 files, which gave the desired result. At that point I decided to spin up a more powerful cloud compute instance on AWS; I chose one with absurdly high RAM so I'm working with 190 GB rather than 8. It got further, but hit the same memory error around the 30th file. I should also mention that each file has quite a few rows - around 180K. At that point I decided that I must be going about this the wrong way. I don't think pandas is meant as a tool for combining such large datasets. In my last job I used SQL pretty extensively and something like that seems better equipped. My next idea was to try to do it in AWS Athena, which is a SQL-like service that can integrate with the CSV files in S3. I assume there is no standard solution to this issue, but I just want to see if I'm way off base or going in the right direction. Thanks!! AI: I would suggest trying to do it in batches. The underlying issue could well still be memory related in some way, as the merge method makes copies of its input and so is not memory efficient at all. As an example, you could read in 10 files and create the desired output, as you have done already. Repeat this for files 10-20, then 20-30, then 30-40. Finally, merge the four intermediate files you have created. It is a bit of an annoyance, but sometimes these little workarounds get the job done. [EDIT] Another option might be to use more involved memory management while reading, via the chunksize argument of pd.read_csv(). This reads the file into memory in chunks of a given number of rows, as the name suggests. If you do this in a loop, it puts an upper limit on memory usage. For example (untested):

    chunksize = 500_000  # number of rows per chunk (chunksize counts rows, not bytes)

    for single_file in list_of_file_paths:
        for i, chunk in enumerate(pd.read_csv(single_file, chunksize=chunksize)):
            if i == 0:
                result = chunk
            else:
                result = pd.merge(result, chunk, copy=False, how=...)  # fill in the join type you need

You may need to do something with the individual chunks before merging them. Additionally, note that I set the copy argument of merge to False, which might help - the documentation is a little vague as to how it saves memory.
H: How to resolve too many indices for array Index Error I'm performing a binary classification in Keras and attempting to plot the ROC curves. When I tried to compute the fpr and tpr metrics, I get the "too many indices for array" error. Here is my code: #declare the number of classes num_classes=2 #predicted labels y_pred = model.predict_generator(test_generator, nb_test_samples/batch_size, workers=1) #true labels Y_test=test_generator.classes #print the predicted and true labels print(y_pred) print(Y_test) '''y_pred float32 (624,2) array([[9.99e-01 2.59e-04], [9.97e-01 2.91e-03],...''' '''Y_test int32 (624,) array([0,0,0,...,1,1,1],dtype=int32)''' #reshape the predicted labels and convert type y_pred = y_pred.argmax(axis=-1) y_pred = y_pred.astype('int32') #plot ROC curve fpr = dict() tpr = dict() roc_auc = dict() for i in range(num_classes): fpr[i], tpr[i], _ = roc_curve(Y_test[:,i], y_pred[:, i]) roc_auc[i] = auc(fpr[i], tpr[i]) fig=plt.figure(figsize=(15,10), dpi=100) ax = fig.add_subplot(1, 1, 1) # Major ticks every 0.05, minor ticks every 0.05 major_ticks = np.arange(0.0, 1.0, 0.05) minor_ticks = np.arange(0.0, 1.0, 0.05) ax.set_xticks(major_ticks) ax.set_xticks(minor_ticks, minor=True) ax.set_yticks(major_ticks) ax.set_yticks(minor_ticks, minor=True) ax.grid(which='both') lw = 1 plt.plot(fpr[1], tpr[1], color='red', lw=lw, label='ROC curve (area = %0.4f)' % roc_auc[1]) plt.plot([0, 1], [0, 1], color='black', lw=lw, linestyle='--') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristics') plt.legend(loc="lower right") plt.show() The shape of y-pred and Y_test are: y_pred float32 (624,2) array([[9.99e-01 2.59e-04], [9.97e-01 2.91e-03],... Y_test int32 (624,) array([0,0,0,...,1,1,1],dtype=int32) AI: Your code is broken in two places. The first is because you took the argmax of your class probabilities from y_pred. The line y_pred = y_pred.argmax(axis=-1) reshapes your prediction vector into (624,) to match your vector of classes. Thus, when you try to slice your array later with y_pred[:,i] it's going to bark since you no longer have a second dimension. This isn't really the behavior you want either, since the roc_curve function is interested in the exact class probabilities your model produces! The second is for the same reason, attempting to index the second dimension of a one dimensional numpy array, but for the Y_test vector. So if you're interested in capturing TPR/FPR for both classes by treating each as the positive class, you need to drop these lines #reshape the predicted labels and convert type y_pred = y_pred.argmax(axis=-1) y_pred = y_pred.astype('int32') and you need to change the first line of your for loop to: fpr[i], tpr[i], _ = roc_curve(Y_test, y_pred[:, i]) hope this helps
H: Multivariate and multi-series LSTM I am trying to create a pollution prediction LSTM. I've seen an example on the web that caters for a multivariate LSTM to predict the pollution levels for one city (Beijing), but what about more than one city? I don't really want a separate network for every city, I'd like a single generalised model/network for all x cities. But how do I feed that data into the LSTM? Say I have the same data for each city, do I... 1) Train on all data for one city, then the next city, and so on until all cities are done. 2) Train data for all cities on date t, then data for all cities on t+1, then t+2 etc. 3) Something completely different. Any thoughts? AI: I would say that option 1 will not work out too well: in my experience, the model will end up being good only for the first or the last city you train on, depending on how much freedom you give the algorithm to change the weights as time goes on (e.g. with the learning rate). You really need to decide what you are going to be predicting. Is it the pollution level for a single city? Which features do you have for each city? It can make sense to train all cities at the same time if the features you have are general ones that really can explain the target variable. So if you have temperature, humidity, some transport statistics for that city, etc., then training everything together could make sense. I would think of each sample as leading to one target pollution level; if that sample has enough information (based on the features) to distinguish itself from the samples of the other cities, the model should pick up on and leverage those subtleties in the data.
H: Understanding input of LSTM I am a little confused with the input of LSTM. Basicaly my train input data is of shape (53394, 3). I reshaped my 2D data into 3D data in order to set it according to the input of LSTM. I have two configurations: trainX = (53394, 3, 1) model.add(LSTM(32,input_shape =(trainX.shape[1], trainX.shape[2])) trainX = (53394,1, 3) model.add(LSTM(32,input_shape =(trainX.shape[1], trainX.shape[2])) I want to understand how input is feed into the LSTM in both the cases. like one neuron is taking one column value or one complete row with three columns is going as input to one neuron in input layer. AI: LSTM layers work on 3D data with the following structure (nb_sequence, nb_timestep, nb_feature). nb_sequence corresponds to the total number of sequences in your dataset (or to the batch size if you are using mini-batch learning). nb_timestep corresponds to the size of your sequences. nb_feature corresponds to number of features describing each of your timesteps. Thus an LSTM layer will work like this : Let $t_i$ be the $i^{th}$ timestep of sequence $seq_j$, with $i \in [0, nb\_timestep], \ j \in [0, nb\_sequence]$. An LSTM layer will make a prediction $p_i$ according to the $nb\_feature$ descriptors of $t_i$ with respect to its hidden state which is a representation of the timesteps $t_0$ to $t_{i-1}$. Now, let's see what this means for your two configurations. For the sake of the explanation, I will suppose that we have sequences of words For (53394, 3, 1) the LSTM will work on 53394 different sequences. Each sequence is 3 words long and each word is described through one feature only. For the first word of each sequence, the LSTM will make a prediction on its sole descriptor. For the second word, the prediction will be done from the unique descriptor with respect to what the first word was. Finally, for the third and last word in the sequence, the LSTM will emit a prediction from the unique descriptor with respect to what the two previous words were. Then, the LSTM begins the process anew for the following sequence. For (53394, 1, 3), your sequences contain only one word which is described through 3 features. Sequences of one word are not really sequences, so the LSTM layer will not be useful in this case. Hope it clears up how data are fed to an LSTM ! NB. Not related to the question but it may help : from your original shape, it seems your dataset contains 53394 words described with 3 features. If I am right, you would need a 3D shape like (53394, nb_timestep, 3) but with $nb\_timestep \neq 1 $. What you need then, is to define some window instead of reshaping your data.
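Following the NB at the end of the answer, here is a minimal sketch (the window length of 10 and the random data are placeholders) of how the 2D array from the question could be turned into overlapping windows with more than one timestep for an LSTM:

    import numpy as np

    # hypothetical stand-in for the (53394, 3) training array from the question
    train_2d = np.random.rand(53394, 3)

    def make_windows(data, n_timesteps):
        """Slice a (samples, features) array into overlapping windows of
        shape (n_sequences, n_timesteps, n_features) for an LSTM."""
        windows = [data[i:i + n_timesteps] for i in range(len(data) - n_timesteps + 1)]
        return np.stack(windows)

    trainX = make_windows(train_2d, n_timesteps=10)
    print(trainX.shape)  # (53385, 10, 3): sequences of 10 timesteps, 3 features each

    # the corresponding Keras layer would then be, e.g.:
    # model.add(LSTM(32, input_shape=(trainX.shape[1], trainX.shape[2])))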
H: Types of Recurrent Neural Networks I have a question about types of RNN. Ian Goodfellow in his book Deep Learning writes: Some examples of important design patterns for recurrent neural networks include the following: • Recurrent networks that produce an output at each time step and have recurrent connections between hidden units, illustrated in figure 10.3. • Recurrent networks that produce an output at each time step and have recurrent connections only from the output at one time step to the hidden units at the next time step, illustrated in figure 10.4 • Recurrent networks with recurrent connections between hidden units, that read an entire sequence and then produce a single output, illustrated in figure 10.5. CHAPTER 10. SEQUENCE MODELING: RECURRENT AND RECURSIVE NETS Next I read about RNN article by Andrej Karpathy http://karpathy.github.io/2015/05/21/rnn-effectiveness/ and his describe rnn architecture like relation model. My question is: description of types RNN by Ian Goodfellow are equal to Andrej Karpathy? If not what it the diffrence beetween this descriptions? AI: They match up fairly well. The first Goodfellow description is Karpathy's final "many to many" image. The output at each time step is based on the previous hidden state of the net and the input. The third Goodfellow description directly corresponds to Karpathy's "many to one" image. This model reads an entire input sequence, and then produces one output. The only difference is that the second description from Goodfellow's text isn't captured by Karpathy's image. Here's my rendition of what that description states.
H: Anomaly Detection for Large Time Series Data I am working on detecting anomalies within a large time series data set. It is updated on a regular basis and consists of more than 30 parameters. I am using R as a reference language. It is a first for me working on this type of projects and I am unfamiliar with most of the techniques. I have 6 weeks to implement a good analytical toolbox to enhance the quality of the control checks on the production line. I have found a couple of potential methods to analyze it including statistical machine learning, deep learning using auto-encoded neural networks or clustering approaches. The purpose of the chosen method is to detect the anomalies/outliers by itself. It doesn't really need to be real-time analysis. What approach would you recommend to implement for the scope of the project, given the structure of the data? AI: Following J.Tukey, you should plot, draw graphs, visualize, etc... until you have a solid pack of examples. Then make Tukey' fences on each of the 30 parameters. Let $q_1$ and $q_3$ be the 1st and 3rd quartiles, $d=q_3-q_1$ the inter-quartile distance, and define as outlier any observation outside the interval $q_1-k\cdot d < x < q_3+k\cdot d$, where $k$ is a constant. Traditionally, $k=1.5$ indicates an outlier and $k=3$ indicates the data is far out. However, the real value of $k$ should be tested against your examples. Then make a cluster analysis (with a k-nearest neighbor) and define as outlier any point isolated in one cluster. Again, use your example to test various values of $k$.
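The question mentions R, but in keeping with the rest of this collection here is a minimal Python sketch of the Tukey-fence rule described above (the data and the choice of k are placeholders):

    import numpy as np

    def tukey_outliers(x, k=1.5):
        """Flag values outside [q1 - k*d, q3 + k*d], where d is the inter-quartile distance."""
        q1, q3 = np.percentile(x, [25, 75])
        d = q3 - q1
        lower, upper = q1 - k * d, q3 + k * d
        return (x < lower) | (x > upper)

    # applied to each of the ~30 parameters of the time series, e.g. an array of shape (n_samples, 30)
    data = np.random.randn(1000, 30)                               # placeholder data
    outlier_mask = np.apply_along_axis(tukey_outliers, 0, data)    # boolean mask, one column per parameter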
H: Do we really need <unk> tokens? I am wondering, do we really need <unk> tokens? Why do we limit our vocabulary? Is it for speed? Accuracy? If we disable all limitations, what do you predict happens? AI: The <unk> tags can simply be used to tell the model that there is stuff which is not semantically important to the output. This is a choice made via the selection of a dictionary. If a word is not in the dictionary we have chosen, then we are saying we have no valid representation for that word (or we are simply not interested). Other tags are commonly used to group such things together, not only (j)unk. For example <EMOJI> might replace any token that is found in our list of defined emojis. We are keeping some information, i.e. that there is a symbol representing emotion of some kind, but we are neglecting exactly which emotion. You can think of many more examples where this might be helpful, or where you just don't have the right (labelled) data to make the most of the content semantically.
H: Is ensemble learning using different classifier combination another name for Boosting? For implementation I am following the Matlab code for AdaBoost. Based on my understanding, AdaBoost uses weak classifiers known as base classifiers and creates several instances of it. For example, a weak classifier is a decision tree. So, AdaBoost can create maximum N decision trees (where N = number of samples) and combine the prediction results. This is a Homogeneous boosting method. But I have seen some examples such as this one in Matlab and ensemble-toolbox which have confused me. Can somebody please explain the following concepts with respect to the implementations and what is going on in the code? 1) Does the Matlab code for AdaBoost combine different classifiers? The combination method is unclear to me - -whether they do sum or majority voting or something else. If they are combining several classifiers, then technically it is a Heterogenous ensemble method and the term for it is stacking and not boosting. Please correct me where I am wrong. In Boosting methods, the based classifiers are the same. But the given in Matlab code for AdaBoost combines different classifiers, I am not sure. 2) Is ensemble learning or the example in the ensemble toolbox the same as the Adaptive Boosting Matlab code (second link)? Is ensemble learning the same as Adaptive boost? AI: Boosting is a type of Ensemble Learning, but it is not the only one. Apart from stacking, bagging is also another type of Ensemble Learning. Ensemble Learning is the combination of individual models together trying to obtain better predictive performance that could be obtained from any of the constituent learning algorithms alone. Boosting involves incrementally building an ensemble by training each new model instance to emphasize the training instances that previous models mis-classified. It is an iterative technique which adjust the weight of an observation based on the last classification. If an observation was classified incorrectly, it tries to increase the weight of this observation and vice versa. Boosting in general decreases the bias error and builds strong predictive models. Sometimes they may over fit on the training data. Stacking involves training a learning algorithm to combine the predictions of several other learning algorithms. Bagging tries to implement similar learners on small sample populations and then takes a mean of all the predictions. In generalized bagging, you can use different learners on different population. As you can expect this helps us to reduce the variance error.
H: How can this CNN for the portfolio management problem be implemented in keras? In the paper "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem" the CNN in the picture below is applied to the portfolio management problem. I am trying to understand how this network works and how it can be implemented in keras. What I don't understand is: why do we have 20+1 feature maps after the second convolution? And how am I supposed to include the portfolio w from the last period in keras? My current implementation is as follows:

    def network(self):
        input1 = Input(shape=(3, 50, 4))
        conv1 = Conv2D(2, (1, 3), activation='relu')(input1)
        conv2 = Conv2D(10, (1, 48), activation='relu')(conv1)
        conv3 = Conv2D(1, (1, 1), activation='relu')(conv2)
        flat1 = Flatten()(conv3)
        preds = Activation('softmax')(flat1)
        model = Model(input1, preds)
        model.compile(optimizer='rmsprop', loss='mse')  # todo loss function
        return model

Thanks for any help. AI: You can concatenate your tensors in Keras. I have written the model below:

    from keras.models import Sequential, Model
    from keras.layers import Concatenate, Dense, LSTM, Input, concatenate, Flatten, Conv2D, Activation
    from keras.optimizers import Adagrad

    price_history = Input(shape=(11, 50, 3))
    feature_maps = Conv2D(2, (1, 3), activation='relu')(price_history)
    feature_maps = Conv2D(20, (1, 48), activation='relu')(feature_maps)

    # Add the w from the last period (this gives the 20+1 feature maps)
    w_last = Input(shape=(11, 1, 1))
    feature_maps = concatenate([feature_maps, w_last], axis=-1)
    feature_map = Conv2D(1, (1, 1), activation='relu')(feature_maps)
    feature_map = Flatten()(feature_map)

    # Add the cash bias
    cash_bias = Input(shape=(1,))
    feature_map = concatenate([feature_map, cash_bias], axis=-1)
    w = Activation('softmax')(feature_map)

    model = Model(inputs=[price_history, w_last, cash_bias], outputs=w)
    print(model.summary())

    ada_grad = Adagrad()
    model.compile(optimizer=ada_grad, loss='binary_crossentropy', metrics=['accuracy'])

I am not sure about the loss function, I have not read the paper in detail. But this model describes what they have in the paper.
H: Is there an analog to SQL's STRING_AGG (or FOR XML PATH) function in Python? Asked this in SE but maybe this is too data-oriented so trying to post it here. I am trying to find the analog to the SQL function STRING_AGG so that I can concatenate across columns and rows of a table (or dataframe). (The original post shows the input and output tables as images.) With SQL I can easily group by the ID_No and also specify the order via the RUN_No. The syntax for achieving what I want would be:

    SELECT ID_NO,
           STRING_AGG(CONCAT('(', RUN_No, ') ', Start, ' to ', Stop))
             WITHIN GROUP (ORDER BY RUN_No ASC) AS "Sequence"
    FROM X_TBL
    GROUP BY ID_NO

So what would be the way to achieve the same grouping, concatenating and ordering in Python? I do have my data stored as a dataframe. I am able to concatenate across columns using the following code, but then wasn't sure how to group by the "ID_No", or concatenate across the rows within each ID_No.

    sample['Merge'] = sample['Start'].map(str) + ", " + sample['Stop']

AI: Here is a versatile solution. As you can see, you can modify the aggregation function in order to format the data however you want.

    # the original DataFrame
    df = pd.DataFrame({'ID_NO': [20, 20, 30, 30, 30],
                       'RUN_NO': [1, 2, 1, 2, 3],
                       'START': ['F2', 'F3', 'F9', 'F11', 'F14'],
                       'STOP': ['F3', 'F2', 'F11', 'F14', 'F6']})

    # convert 'RUN_NO' to string; this makes the aggregation formula easier to read
    df['RUN_NO'] = df['RUN_NO'].apply(str)

    # the aggregation function
    def agg_f(x):
        return pd.Series(dict(Sequence=' '.join('(' + x['RUN_NO'] + ') ' + x['START'] + ' to ' + x['STOP'])))

    df_agg = df.groupby('ID_NO').apply(agg_f)

For ID_NO 20, for example, this produces the string "(1) F2 to F3 (2) F3 to F2".
H: how to input the data set in to a word2vec by keras? I am new to using the word2vec model and, as a result, I do not know how to prepare my dataset as input for word2vec. I have searched a lot, but the datasets in tutorials were in CSV format or just one txt file, whereas my dataset has this structure: 2 folders, one for blood cancer and the other for breast cancer. Each folder contains 1000 txt files, each of which contains 40 sentences. I do not have any idea how to create a vocabulary as input for the word2vec model in keras with a tensorflow backend. I use python 3.5 on ubuntu 17.10. Any guidance will be appreciated. AI: I have already searched for a solution and found the approach below, which suits this kind of dataset (note that it uses gensim's Word2Vec rather than keras directly). First, concatenate the 2 folders into one folder, then apply this code:

    import os
    import gensim, logging

    logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

    class MySentences(object):
        def __init__(self, dirname):
            self.dirname = dirname

        def __iter__(self):
            for fname in os.listdir(self.dirname):
                for line in open(os.path.join(self.dirname, fname)):
                    yield line.split()

    # a memory-friendly iterator over the merged folder
    sentences = MySentences('./DatasetBinaryClassClassification/alldataset/')
    model = gensim.models.Word2Vec(sentences, size=200, window=10, min_count=2, workers=5)

I found the 'class MySentences' approach in an online tutorial (the original link was lost). I hope it is helpful.
H: Does it make sense to visualize data with a linear relationship using tSNE? I have used tSNE several times to visualize high dimensional data for cluster analysis, and it has always worked quite well when data falls into clusters. However, does it make any sense to use tSNE to visualize the independent variables in a dataset which has a linear relationship $y = X \theta + b$, where $y$ is the target value and $X$ are the independent variables (b is the bias)? As I understand tSNE, it is used more for capturing local structures in high dimensional data, so it is probably not suited for visualizing data for linear regression - is this a wrong assumption? AI: Your assumption is right, the results are in general misleading. Suppose your (linearly correlated) data have missing points in some range: Than for t-SNE the two subset of data will be two different clusters, even if they lie on the same linear distribution: But, if you are actually interested in the fact that those two structures are separated, then t-SNE is a good choice to visualize it. So a proper answer should be: it depends on what you need. P.S. Here the code used for this example: import matplotlib.pyplot as plt import numpy as np from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt from sklearn.manifold import TSNE def function(x, y): return 4+3*x + 4*y x1=np.random.rand(100) x2=np.random.rand(100)+1.2 X=np.concatenate([x1, x2]) Y=0.5+2*X Z=function(X, Y) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.scatter(X, Y, Z, c='r', marker='o') plt.show() data=[] for x, y, z in zip (X, Y, Z): data.append([x, y, z]) data_embedded = TSNE(n_components=2).fit_transform(data) plt.scatter([x for x, y in data_embedded], [y for x, y in data_embedded], color='r') plt.show()
H: When should we use binning to reduce noise? Or: how do we find out that we have noise? I have read several times that binning is helpful for reducing the noise in data. But how can we find out whether our data has noise? What if our data is clean and binning just reduces the accuracy of the data? Is there any method to measure the noise of an attribute? When should we do binning? AI: You can plot your data points and see if there are many outliers in the dataset. Making frequency plots and heatmaps helps a lot in this case. Binning helps to remove smaller erroneous data points. There are other methods of reducing noise in a dataset, like applying the interquartile range (IQR). If you want to reduce noise further, apply the IQR after binning, but only if you have a large dataset. See: https://www.wikihow.com/Find-the-IQR
H: Why averaging the gradient works in Gradient Descent? In Full-batch Gradient descent or Minibatch-GD we are getting gradient from several training examples. We then average them out to get a "high-quality" gradient, from several estimations and finally use it to correct the network, at once. But why does averaging the gathered gradient work? Each training sample ends up in a distant, completely separate location on the error-surface Each sample would thus have its direction of a steepest descent pointing in different directions compared to other training examples. Averaging those directions should not make sense? Yet it works so well. In fact, the more examples we average, the more precise the correction will be: full-batch vs mini-batch approach AI: Each training sample ends up in a distant, completely separate location on the error-surface That is not a correct visualisation of what is going on. The error surface plot is tied to the value of the network parameters, not to the values of the data inputs. During back-propagation of an individual item in a mini-batch or full batch, each example gives an estimate of the gradient in the same location in parameter space. The more examples you use, the better the estimate will be (more on that below). A more accurate representation of what is going on would be this: Your question here is still valid though: But why does averaging the gathered gradient work? In other words, why do you expect that taking all these individual gradients from separate examples should combine into a better approximation of the average gradient over the error surface? This is entirely to do with how the error surface is itself constructed as the average of individual loss functions. If we note cost function for the error surface as $C$, then $$C(X, \theta) = \frac{1}{|X|}\sum_{x \in X} L(x, \theta)$$ Where $X$ represents the whole dataset, $\theta$ are your model's trainable parameters, $L$ is an individual loss function for $x$. Note I have rolled labels into $X$ here, it doesn't matter for this argument whether loss is due to comparison of model output with some part of the training data - all we care about is finding a gradient to the error surface. The error gradient that you want to calculate for gradient descent is $\nabla_{\theta} C(X, \theta)$, which you can therefore write as: $$\nabla_{\theta} C(X, \theta) = \nabla_{\theta}(\frac{1}{|X|}\sum_{x \in X} L(x, \theta))$$ The derivative of the sum of any two functions is the sum of the derivatives, i.e. $$\frac{d}{dx}(y+z) = \frac{dy}{dx} + \frac{dz}{dx}$$ In addition, any fixed multiplier that doesn't depend on the parameters you are taking the gradient with (in this case, the size of the dataset) can just be treated as an external factor: $$\nabla_{\theta} C(X, \theta) = \frac{1}{|X|}\sum_{x \in X} \nabla_{\theta} L(x, \theta)$$ So . . . the gradient of an average of many functions, is equal to the average of the gradients of those functions taken separately. Taking any completely random subset of $X$ will result in an unbiased estimate of the mean gradient, same as taking a random subset of any variable and taking its mean will give you an unbiased estimate of the population's mean. This will not work if your samples are somehow correlated, hence why you will often see recommendations to shuffle the data prior to training, in order to make it i.i.d. This will also not work if your cost function combines examples in any way other than addition. 
However, it would be an unusual cost function that combined separate training examples by multiplying their loss functions, or some other non-linear combination.
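A small numerical sanity check of the identity above, using a linear model with squared-error loss (all values are synthetic): the mean of the per-example gradients equals the gradient of the averaged cost.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))          # 100 examples, 3 features
    y = rng.normal(size=100)
    theta = rng.normal(size=3)

    # per-example squared-error loss L(x, theta) = (x.theta - y)^2,
    # whose gradient w.r.t. theta is 2 * (x.theta - y) * x
    per_example_grads = 2 * (X @ theta - y)[:, None] * X

    # gradient of the averaged cost C = (1/N) * sum_i (x_i.theta - y_i)^2,
    # computed directly as 2/N * X^T (X theta - y)
    grad_of_mean = 2 * X.T @ (X @ theta - y) / len(X)

    # averaging per-example gradients gives the gradient of the average cost
    print(np.allclose(per_example_grads.mean(axis=0), grad_of_mean))   # True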
H: The Area Under an ROC Curve (AUC) vs Confusion Matrix for classifier evaluation? When should I use The Area Under an ROC Curve (AUC) or the Confusion Matrix for classifier evaluation? The clasifier evaluation is for example the prediction of customers for possible future sales. AI: A confusion matrix can be used to measure the performance of a particular classifier with a fixed threshold. Given a set of input cases, the classifier scores each one, and score above the threshold are labelled Class 1 and scores below the threshold are labelled Class 2. The ROC curve, on the other hand, examines the performance of a classifier without fixing the threshold. Given a set of input cases, the classifier scores each one. The ROC curve is then generated by testing every possible threshold and plotting each result as a point on the curve. The ROC curve is useful when you want to test your classifier over a range of sensitivities/specificities. This may or may not be a desirable thing to do. Perhaps you want very high sensitivity and don't care much about specificity - in this case, the AUC metric will be less desirable, because it will take into account thresholds with high specificity. The confusion matrix, on the other hand, could be generated with a fixed threshold known to yield high sensitivity, and would only be evaluated for that particular threshold. A confusion matrix evaluates one particular classifier with a fixed threshold, while the AUC evaluates that classifier over all possible thresholds.
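A small scikit-learn sketch of the distinction (the dataset, model and the 0.7 threshold are arbitrary choices for illustration):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix, roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clf = LogisticRegression().fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]

    # confusion matrix: one fixed threshold (here 0.7, e.g. chosen for high specificity)
    print(confusion_matrix(y_te, (scores > 0.7).astype(int)))

    # AUC: summarizes performance over every possible threshold
    print(roc_auc_score(y_te, scores))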
H: Does out-of-bag sampling make Random Forests inherently less robust than other classifiers? All of the popular evaluation metrics (ROC-AUC, Confusion Matrices, etc.) require two lists as parameters: a list of the actual y labels associated with some arbitrary group of training examples (x's), and a parallel list of the predicted labels given to those x's by the model. To construct such lists, you must separate a testing/validation data set from the training set you feed the model on. However, random forests automatically partition 1/3 of the training set you give it to calculate the out of bag score. I don't believe you can stop the bagging process that causes this, since I think it is vital to how the random forest functions. Because the RF model never sees 1/3 of the training set (due to bagging), do RFs create a less thorough image of the dataset then, say, a neural network would whenever a testing set is reserved for evaluation? AI: The out-of-bag-error is calculated from the samples that are not used anyway for the particular tree. The original set of records gets bootstrapped. So, a new set is generated that does not contain all samples anyway. The out-of-bag set can then be used to monitor the performance of the model. When the out-of-bag-error goes up (given it has a significant size) it means that the current tree is over-fitting the training sample. So, out-of-bag sampling can be used to prevent over-fitting and therefore makes the model more robust rather than less robust. So, in fact all samples are used to train the random forest model. Though each tree only uses a subset of the dataset at a time. Don't confuse the RF model with the individual trees! Look here for a more detailed description.
H: Cost function dependence on size - batch gradient descent I am applying the simple least mean square update rule using python but somehow the values of theta, I get, become very high. from pylab import * data = array( [[1,4.9176,1.0,3.4720,0.998,1.0,7,4,42,3,1,0,25.9], [2,5.0208,1.0,3.5310,1.50,2.0,7,4,62,1,1,0,29.5], [3,4.5429,1.0,2.2750,1.175,1.0,6,3,40,2,1,0,27.9], [4,4.5573,1.0,4.050,1.232,1.0,6,3,54,4,1,0,25.9], [5,5.0597,1.0,4.4550,1.121,1.0,6,3,42,3,1,0,29.9], [6,3.8910,1.0,4.4550,0.988,1.0,6,3,56,2,1,0,29.9], [7,5.8980,1.0,5.850,1.240,1.0,7,3,51,2,1,1,30.9], [8,5.6039,1.0,9.520,1.501,0.0,6,3,32,1,1,0,28.9], [9,16.4202,2.5,9.80,3.420,2.0,10,5,42,2,1,1,84.9], [10,14.4598,2.5,12.80,3.0,2.0,9,5,14,4,1,1,82.9], [11,5.8282,1.0,6.4350,1.225,2.0,6,3,32,1,1,0,35.9], [12,5.303,1.0,4.9883,1.552,1.0,6,3,30,1,2,0,31.5], [13,6.2712,1.0,5.520,0.975,1.0,5,2,30,1,2,0,31.0], [14,5.9592,1.0,6.6660,1.121,2.0,6,3,32,2,1,0,30.9], [15,5.050,1.0,5.0,1.020,0.0,5,2,46,4,1,1,30.0], [16,5.6039,1.0,9.520,1.501,0.0,6,3,32,1,1,0,28.9], [17,8.2464,1.5,5.150,1.664,2.0,8,4,50,4,1,0,36.9], [18,6.6969,1.5,6.9020,1.488,1.5,7,3,22,1,1,1,41.9], [19,7.7841,1.5,7.1020,1.376,1.0,6,3,17,2,1,0,40.5], [20,9.0384,1.0,7.80,1.50,1.5,7,3,23,3,3,0,43.9], [21,5.9894,1.0,5.520,1.256,2.0,6,3,40,4,1,1,37.5], [22,7.5422,1.5,4.0,1.690,1.0,6,3,22,1,1,0,37.9], [23,8.7951,1.5,9.890,1.820,2.0,8,4,50,1,1,1,44.5]]) x = zeros( (len(data[:,4]) ,2)) x[:,0] ,x[:,1] = 1, data[:,4] y = data[:,-1] theta = array([100.0,100.0]) alpha = 0.4 iternum = 100 for i in range(iternum): theta -= alpha*dot(transpose(x),(dot(x,theta)-y)) print theta I get the answer to be [7.18957001e+150 1.19047264e+151] which is unrealistic for the given code. However if I alter the internum loop to be for i in range(iternum): theta -= alpha*dot(transpose(x),(dot(x,theta)-y))/size(data[:,4]) #Basically divide by the total number of training examples print theta I get the correct answer. However, as per what I have learned, the cost function does not necessarily depend on training example size. Can somebody point to the source of the problem? Apologies if the explanation of the problem was a little convoluted. AI: Your second change (calculating the average error) is the correct method. Imagine this: if there are billion examples in your training set, even if you make (very) small error on each, the sum total is going to add up to a large number. So the cost function in case of least squared is mean squared error, not just sum.
H: Scikit-learn decision tree in production I'm working at building a decision tree model that will be used in production. In documentation here pickle is used to serialize the model however the concerns about this technique make me think there's maybe a better solution to export a model to production. pickle (and joblib by extension), has some issues regarding maintainability and security. Because of this, Never unpickle untrusted data as it could lead to malicious code being executed upon loading. While models saved using one version of scikit-learn might load in other versions, this is entirely unsupported and inadvisable. It should also be kept in mind that operations performed on such data could give different and unexpected results. So my question is : Using scikit-learn, is there a safe and convenient technique to export the model into production. PS: Converting the dot data to a python function can be a solution but i'm surprised there's no built-in solution for this. AI: Persisting the model parameters is the only out of the box solution in sklearn. You should ensure that: Your model is reproducible from data The versions of dependencies used (incl sklearn) is locked Your QA system checks correctness of your model If you have these things down then the possible changes in model persistence in sklearn is a non-issue. If it happens, you just update sklearn in a controlled manner, retrain and deploy as usual. And you avoid a whole range of other problems, for instance bugs in sklearn/numpy/whatever catching you by suprise. Or new data needs new models trained and deployed. You can of course build your own persistence mechanism but you then need to maintain the compatibility. Better spend the effort one the above.
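A minimal sketch of the controlled-persistence idea above, assuming joblib is acceptable once reproducibility and version pinning are in place (file names and the iris model are illustrative):

    import json
    import joblib
    import sklearn
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    model = DecisionTreeClassifier(random_state=0).fit(X, y)

    # persist the model together with the exact library version used to train it
    joblib.dump(model, "model.joblib")
    with open("model_meta.json", "w") as f:
        json.dump({"sklearn_version": sklearn.__version__}, f)

    # at load time, refuse to run against an unexpected version
    meta = json.load(open("model_meta.json"))
    assert meta["sklearn_version"] == sklearn.__version__, "retrain before upgrading sklearn"
    restored = joblib.load("model.joblib")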
H: Binary classification to predict various targets I am using keras neural networks for a binary classification task. I have a large dataset with 19 columns. Each of these columns is of binary type, and therefore the entire dataset is just 1s and 0s, like so (obviously this is not the entire dataset, which has 19 columns and many more rows): I want to make 19 predictions, one for each column, where the other 18 are used as independent variables. So essentially, first I'll make col 1 the target, using the other 18 columns to predict it. Then I'll make col 2 the target and use the other 18 to predict it etc etc...until column 19. Is there any way to do this effectively without making 19 separate models for each column? Thanks! AI: If you create 19 models, each model will learn weights corresponding to predictors and target variable values. For example- When you create one model by taking col1 as target, the model will learn weights for giving output col1. When you take another column, lets say col2 as target, if you use the previous model, it won't identify the difference between col1 and col2 (i.e., target) as your data is binary. The model will continue to learn weights for each column, but it won't be the correct one as it will be result of learning all target columns. You could have used multi-label classification if you wanted to predict different targets at same time using same predictors for each target. Since this is not the case, the only way for getting 19 predictions is to create 19 different models, one for each column as target.
H: When should I normalize data? I often see that numeric values in machine learning are scaled to a 0-1 range. Why is this better? I have some temperature values in my training set. What if I have to predict on values that lie outside the training-set range? I mean that e.g. in the training set I have a range of temperatures like 5-20, MinMaxScaler fits to these values, and then I have to predict on 25. AI: As @Daniel Chepenko pointed out, there are models that are robust w.r.t. feature transformations (like Random Forest). But for models that perform arithmetic operations on the features (like neural networks), you usually need to normalize the data for three reasons: 1) Numerical stability: computers cannot represent every number, because the electronics they are built from deal with binary values (zeros and ones), so they use a floating-point representation. In practice, this means that the numerical behaviour in the range [0.0, 1.0] is not the same as in the range [1'000'000.0, 1'000'001.0]. Having two features with very different scales can therefore lead to numerical instability, and ultimately to a model unable to learn anything. 2) Control of the gradient: imagine that you have one feature that spans the range [-1, 1] and another that spans the range [-1'000'000, 1'000'000]: the weights associated with the first feature are much more sensitive to small variations, so their gradient will be much more variable in the direction described by that feature. This can lead to other instabilities: some values of the learning rate (LR) can be too small for one feature (so convergence is slow) but too big for the other (so you jump over the optimal values). At the end of the training process you then end up with a sub-optimal model. 3) Control of the variance of the data: if you have skewed features and you don't transform them, you risk that the model will simply ignore the elements in the tails of the distributions. In some cases, the tails are much more informative than the bulk of the distributions.
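Regarding the concern about values outside the training range: a fitted MinMaxScaler simply maps them outside [0, 1]; it does not fail. A small sketch (the 5-20 temperature range is taken from the question, the rest is illustrative):

    import numpy as np
    from sklearn.preprocessing import MinMaxScaler

    # training temperatures span 5-20, as in the question
    train_temps = np.array([[5.0], [12.0], [20.0]])
    scaler = MinMaxScaler().fit(train_temps)

    print(scaler.transform([[25.0]]))   # [[1.333...]] -- out-of-range inputs simply
                                        # land outside [0, 1]

    # if that is a problem, newer scikit-learn versions offer MinMaxScaler(clip=True),
    # or you can np.clip the transformed values yourself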
H: Collection of several learners I have a few questions for which I could not extract answers from textbooks and online tutorials. I would be extremely grateful if the following points could be clarified. 1) Suppose I want to apply SVM, MLP and decision trees and combine the prediction results from these learners. I could have a mixed ensemble of SVM, MLP and decision trees, OR I could use one learner type only and have an ensemble of decision trees, an ensemble of SVMs and an ensemble of MLPs. Is my understanding correct? How do I combine the prediction results from different models? 2) In ensemble learning, be it homogeneous or heterogeneous, is each learner trained on the same data set? Is this technique known as bagging or simply ensemble learning? Would I be using the same subset of data for training each learner, or different subsets? 3) I have read in many tutorials that bagging and boosting are performed on similar learners. But ensemble learning can be homogeneous or heterogeneous. So can bagging and boosting be performed on different learner types? 4) In boosting, the learners are trained sequentially and therefore often only a learner of one kind is used, as it is probably difficult to know whether the sequence influences the learning procedure. Am I correct? 5) How is stacking different from the rest? AI: Here's my two cents: Yes, if you want to perform ensembling, each model should be trained on the same set of data. The technique you described is stacking, because you "stack" each prediction and simply use the majority. Ensemble learning, to me, is more "complex"; in fact you can combine the different predictions from k models via a linear combination, where you estimate the coefficients of the combination itself. More precisely, you have k predictions for each of your test observations, from k models (i.e. 3 or more), and you need to find the best combination (for each observation) of these to get the best "accuracy" (or any other metric). Keep in mind that it is best to combine models that have low or even opposite correlation, so that you "diversify" your predictions, which tends to give better results. Often people train lots of different models, then fit a simple neural network using the predictions as inputs and the true labels from the test set as the target, just as a normal classification model would. See the papers in the References here as a starting point. Bagging is basically a special case of stacking via majority vote, see: Bagging. You present the same type of model (e.g. decision trees) with the training dataset multiple times, each time making a new random extraction with replacement. This is like submitting new entries, but basically you are using the same data over and over. Then you average all the predictions; this helps with overfitting, but not always, because you are using the same model and so each learner is highly correlated with the others. Yes, in boosting, just like bagging, you use the same model (usually a weak learner). At each stage, the learner learns from the errors made at the previous stage. Here is an excellent visual example of what's going on. To be clear, stacking and ensemble learning can be used with lots of different models on the same training set (it's even better if the models in the "ensemble" are quite different). You can combine the predictions with different techniques, from a simple average to the more complex use of neural networks. Bagging and boosting are based on the use of the same model: the former uses the same observations multiple times (resampled with replacement), the latter uses the same weak learner on the same data, but each learner makes predictions using the errors made by the previous one, in a loop. Hope this helps you a little.
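A minimal scikit-learn sketch of the three flavours discussed above — a heterogeneous voting ensemble, bagging, and boosting (toy data and near-default hyperparameters, for illustration only):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier, BaggingClassifier, AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=500, random_state=0)

    # heterogeneous ensemble: SVM + MLP + tree combined by majority vote
    voting = VotingClassifier([('svm', SVC()),
                               ('mlp', MLPClassifier(max_iter=1000)),
                               ('tree', DecisionTreeClassifier())]).fit(X, y)

    # bagging: the same learner fitted on bootstrap resamples of the same data
    bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50).fit(X, y)

    # boosting: the same weak learner trained sequentially on re-weighted errors
    boosting = AdaBoostClassifier(n_estimators=50).fit(X, y)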
H: Noise has 0 mean Why do we assume that in a dataset the error, presented as the random error term, has mean 0 ? To me seems impossible that every event we can study in everyday life has an error with 0 mean... AI: In regression settings, we want to approximate the response $y$ by a function $f$ of the input vector $x$ as good as possible by estimating $f$ from the data. This can be written as $$ y = f(x) + \varepsilon $$ where $\varepsilon$ is a random variable with some properties. Assume now that $E(\varepsilon) = \alpha \ne 0$. That is the situation you are fearing. Then we can rewrite above equation as $$ y = \underbrace{f(X) - \alpha}_{g(X)} + \varepsilon' $$ with $E(\varepsilon') = 0$. Most algorithms that assume mean zero noise automatically estimate $g$, not $f$. So in practice, this is not an issue. In linear regression, however, the restriction is much stronger. There one assumes (amongst other) that $E(\varepsilon \mid x) = 0$ (strict exogeneity). This assumption is not automatically fulfilled and requires the right regressors to be present in the model.
H: Using scikit Learn - Neural network to produce ROC Curves I want to verify that the logic of the way I am producing ROC curves is correct. (irrelevant of the technical understanding of the actual code). I have a data set which I want to classify. I am using a neural network specifically MLPClassifier function form python's scikit Learn module. I am passing a training data set to the fit function and then using the predict function with the testing data set. I am then outputting a confusion matrix with a false positive value and a true positive value. what I would like to do is calculate a ROC curve where I need a set of true positive and false positive values. would it make sense to run the neural network (MLPClassifier) multiple times with different targets each time and record the different true positive and false positive values? It seems simple enough for me. Is it the right way to produce a ROC curve with a neural network? AI: You could use sklearn.metrics.roc_curve. Besides, Here is an example of what you want to do. from sklearn.metrics import roc_curve, auc fpr2, tpr2, threshold = roc_curve(y_test, clf.predict_proba(X_test)[:,1])
H: The effect of the image type and the image conversion on deep learning CNN model Does the type of the image (jpg, png, bmp) affect the CNN deep learning algorithm? Does converting the image type affect the CNN deep learning algorithm (e.g. converting bmp to jpg or ppm to jpg)? Thanks AI: I haven't seen this discussed anywhere really, so it is an interesting thought. I think the answer is that it doesn't matter much. I think you should stick to one format (which one is not important, but don't mix them). Once the images are read into memory for processing and training a model, they are just numbers in an array, regardless of the format they were loaded from. Perhaps the numbers will differ slightly, but we usually normalise the input to something like the range [0, 1], so that wouldn't matter. If your file formats change the image in a way such that some features of the image are different (like the hue), then perhaps results will differ slightly, but the contents and spatial positioning of objects remain the same relative to themselves. EDIT: I should mention that converting between "image types" is nothing more than using a different compression algorithm to store the data. So when a photo is taken, the camera software compresses the raw sensor data into the target format (jpg, png, ...) and that format will have characteristic artefacts. Perhaps we don't recognise them, but they are there, and there is research trying to reverse the information loss due to such compression algorithms. Here is just a random example.
H: Simple 3 layers Neural Network cannot be trained #3 layers neural network import numpy as np from __future__ import division def nonlin(x,deriv=False): #activation function if(deriv==True): return np.exp(x)/(1+np.exp(x))**2 return 1/(1+np.exp(-x)) X = np.array([ [0,0,1], [0,1,1], [1,1,1], [1,0,0], [0,1,0], [1,1,0]]) Y = np.array([[0,1,0,1,1,0]]).T np.random.seed(1) l0 = X syn0 = 2*np.random.random((3,30))-1 syn1 = 2*np.random.random((30,1))-1 for i in xrange(60000): l0 = X l1 = nonlin(np.dot(l0,syn0)) l2 = nonlin(np.dot(l1,syn1)) l2_error = Y-l2 l2_delta = l2_error*nonlin(l2,deriv=True) l1_error = l2_delta.dot(syn1.T) l1_delta = l1_error*nonlin(l1,deriv=True) syn0 += l0.T.dot(l1_delta) syn1 += l1.T.dot(l2_delta) print l2 I have been messing with the neural network implementation at https://iamtrask.github.io/2015/07/12/basic-python-network/ This is the output of the code: [[1.85572928e-04] [9.99755942e-01] [5.21248255e-09] [9.99767481e-01] [9.99963580e-01] [2.07334909e-04]] I expect something like Y [[0] [1] [0] [1] [1] [0]] What could have possibly gone wrong here? AI: The results you are getting are the following: [[1.85572928e-04] = 0.000185572928 ~ 0 [9.99755942e-01] = 0.999755942 ~ 1 [5.21248255e-09] = 0.000000000521248255 ~ 0 [9.99767481e-01] = 0.999767481 ~ 1 [9.99963580e-01] = 0.999963580 ~ 1 [2.07334909e-04]] = 0.000207334909 ~ 0 These are indeed very close to your expected results. You are computing and predicting floating point numbers, and not binary zeros and ones. You could for example add a simply rule that will accepts values below a threshold to be zero and above the threshold to be one.
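If hard 0/1 labels are wanted rather than probabilities, the continuous outputs can simply be thresholded; a possible one-line addition to the script above (0.5 is an arbitrary but common cutoff):

    # convert the sigmoid outputs to hard class labels
    predictions = (l2 > 0.5).astype(int)
    print(predictions)  # matches Y for this training set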
H: Terminology - cross-validation, testing and validation set for classification task Confusion1) If k=10 then does this mean that 90% is for training and 10% for testing? So always we have k% for testing? Confusion2) In the following code I have used 10-fold cross-validation for training a Support Vector Machine (SVM). In general a data set will be split into (a) Training set, meas(trainIdx,:) (b) Testing set, meas(testIdx,:) c) Validation set. In the cross-validation approach I am building the SVM learner by training and validating inside the loop. Based on my understanding, the validation data must be completely different from the training and testing. But, in many online resources it is said that after cross-validation, one must re-train on the entire data set which in this example would be the meas(:,1:end). If so, then the learned model svmModel inside the cross-validation is lost. Have I misunderstood completely wrong? Can somebody please show what is the next step in the classification once the cross-validation is over? AI: Confusion 1) From wikipedia : k-fold cross validation is a technique for assessing how the results of a statistical analysis will generalize to an independent data set They also say : In k-fold cross-validation, the original sample is randomly partitioned into k equal sized subsamples. Of the k subsamples, a single subsample is retained as the validation data for testing the model, and the remaining k − 1 subsamples are used as training data. The cross-validation process is then repeated k times, with each of the k subsamples used exactly once as the validation data. The k results can then be averaged to produce a single estimation. The advantage of this method over repeated random sub-sampling (see below) is that all observations are used for both training and validation, and each observation is used for validation exactly once. 10-fold cross-validation is commonly used,[7] but in general k remains an unfixed parameter. So you now see that you don't have k% for the testing but you always use 1/k % of the your dataset as test set. Note : you can choose to keep 2/k or more but it will be a lot more complicated to code. Confusion 2) In the scikit learn they above all refer to this tool as an "evaluating performance tool" counter to what wikipedia authors may suggest. The point is that CV allows you to assess how robust and reliable your prediction will be according to your initial dataset. The final mean obtained at the end of the CV is a mean score on the k testing sets. It is often good to have a look at all the intermediary results to assess the variances of them that can be a good explanatory estimation in case of bad generalization capability of your model. Edit : why running another training after CV Cross validation can also be used as an optimization tool to find the best hyper-parameters of your model. In this case, you should take the better hyper-parameters (among the k different parameters; one for each fold) and use them to do prediction on the full set to see if the optimized (i.e. choosen from CV) hyper-parameters are good on the full dataset. The notion of "best" parameter can be seen as the hyper-parameters from the model who gave the best score in your CV process. Note you can still put aside a validation set from the dataset, on which you won't do CV. This validation set can be used as the last test of your model prediction's quality. 
See also here. Finally, you can use each k-fold model to make a prediction and then take the mean of them as the final prediction of your model, as the Wikipedia authors suggest, but this idea is closer to ensemble learning, or a kind of bootstrap method without replacement, than to CV. Hope it helps.
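As a concrete sketch of this workflow in Python/scikit-learn terms (an illustration only; X and y stand for your full feature matrix and labels):
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVC

clf = SVC(kernel="rbf", C=1.0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)  # 10 scores, one per held-out fold
print(scores.mean(), scores.std())          # average skill and its variability

clf.fit(X, y)                               # after CV, re-train on the entire data set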
H: How to scatter plot with each dimension having its own color I want to get insight for some values. I have two vectors of 300 dimensions, and want to compare them coordinate by coordinate. So I thought to plot each point with a different color, but the color for a dimension to be the same so that I would know which coordinates are lying where. I have the below code, which I modeled on some results at stackoverflow. colors = itertools.cycle(["r", "g", "b"]) for d_in in inp_samples: clr = next(colors) plt.scatter(len(inp_samples), d_in, c=clr) colors = itertools.cycle(["r", "g", "b"]) for d_out in out_samples: clr = next(colors) plt.scatter(len(out_samples), d_out, c=clr) But the plot I am getting is very weird. I was expecting it would be scattered but it is something like this: I also tried this: colors = cm.jet(np.linspace(0, 1, 300)) for d_in, d_out, c in zip(inp_samples, out_samples, colors): plt.scatter(len(inp_samples), d_in, c=c) plt.scatter(len(out_samples), d_out, c=c) Can anyone help in understanding what I am doing wrong? AI: In scatter in Matplotlib, the first two arguments must be the $x$ and $y$ values that you want to combine. In your code, the first value ($x$) is len(inp_samples) (or len(out_samples)) which is the number 300, so all your points line on the $x=300$ vertical line. In order to combine the inp_samples with the out_samples, I too would recommend using zip, as in your second sample of code. Just replace plt.scatter(len(inp_samples), d_in, c=c) plt.scatter(len(out_samples), d_out, c=c) with plt.scatter(d_in, d_out, c=c) (and see if it works). Edit So I thought to plot each point with a different color, but the color for a dimension to be same, so that i would know, which dimensional points are lying where. In a scatter plot, there is no need for any special colouring -- by two values forming one pair of $(x_i,y_i)$, it is already given which point belongs to which. You could also try to plot $(i,x_i)$ and $(i,y_i)$ in the same plot. In this case, points above each other belong together and you still don't need any special colouring. colors = itertools.cycle(["r", "g", "b"]) for i in range(300): clr = next(colors) plt.scatter(i, d_in[i], c=clr) plt.scatter(i, d_out[i], c=clr)
H: Difference between Sum of Squares and Maximum Likelihood Linear Regression I'm new in Machine Learning and one of the first arguments I'm studying is linear regression. I understood that , in few words , the idea to use the Linear Regression is to learn an hypothesis that can map a new input x in a good approximation of y. In order to do this , if my hypothesis is : h(x) = wx + w0 I have to update my parameters minimizing an error function like Least Squares and optimize the w vector with the help of an optimization algorithm like Gradient Descent. I understood how this works , but sometimes I see this "Maximum Likelihood Estimation" and I did not understand if it is another way to estimate the w parameters or something else. AI: Suppose you construct a probabilistic model, where $y_i$ are believed to be related to your $x_i$ under the formula; $$ y_i = w_1 x_i + w_0 + \epsilon_i $$ So $w_1$ and $w_0$ are your target parameters from before and $\epsilon_i$ is an error term which you expect follows some probability distibution, e.g. $\mathcal{N}(0,\sigma^2)$. It is important that its expectation is zero. Given your data you want to maximise the probability of returning every $y_i$ given your data $x_i$ subject to your model. The probability of getting $y_i$ from $x_i$ is equal to $f_{\epsilon}(y_i-w_1x_i-w_0)$, where $f_{\epsilon}$ is the probability density function of $\epsilon$, i.e. a normal distribution. And the likelihood of getting every $y_i$ is the product of them all, $$ L = \prod_i f_{\epsilon}(y_i-w_1x_i-w_0) $$ You want to maximise this value by tweaking $w_1$ and $w_0$, hence the name maximimum likelihood estimation. Note that this is equivalent to maximising the log likelihood, so; $$ \log L = \sum_i \log f_{\epsilon}(y_i-w_1x_i-w_0)$$ And if you look at the normal distribution density function you will see that (after ignoring some constants) this reduces to the problem of maximising.. $$ - \sum_i (y_i-w_1x_i-w_0)^2 $$ or in other words minimising the sum of squares akin to OLS. But like using a different distance function in OLS you can parametrise a different error distribution in MLE.
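A quick numerical check of this equivalence (a sketch on synthetic data using NumPy/SciPy; the optimiser maximises the Gaussian log-likelihood and should land on the same coefficients as the least-squares fit):
import numpy as np
from scipy.optimize import minimize

rng = np.random.RandomState(0)
x = rng.uniform(0, 10, 200)
y = 2.5 * x + 1.0 + rng.normal(0, 1, 200)      # true w1 = 2.5, w0 = 1.0

# Ordinary least squares via the normal equations
X = np.column_stack([x, np.ones_like(x)])
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Maximum likelihood with Gaussian errors: minimise the negative log-likelihood
def neg_log_lik(params):
    w1, w0, log_sigma = params
    sigma = np.exp(log_sigma)
    resid = y - w1 * x - w0
    return 0.5 * np.sum(resid ** 2) / sigma ** 2 + len(y) * log_sigma

w_mle = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0]).x

print(w_ols)      # approximately [2.5, 1.0]
print(w_mle[:2])  # the same w1, w0 up to numerical tolerance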
H: Last layers of YOLO I would like it if someone could explain something to me. The architecture in YOLO from Figure 3 in the YOLO paper https://pjreddie.com/media/files/papers/yolo.pdf is like this: (448,448,3), (112,112,192), (56,56,256), (28,28,512), (14,14,1024), (7,7,1024), (7,7,1024), Dense(4096), (7,7,30). I don't understand how to implement the last three parts, the bolded ones. If it is not a problem, I would appreciate it if you could help me understand that part. I use Keras and everything is OK for me to implement except those parts. I really don't know how to pass from (7,7,1024) to (7,7,1024) and also from Dense to (7,7,30). AI: You can use the Flatten and Reshape layers to go to Dense and back to HWC format. The last layers in Keras would look like this (note that a Python variable name cannot start with a digit, so the first (7,7,1024) tensor is named x_7_7_1024 here):
x_7_7_1024 = ...  # the first (7,7,1024) feature map
x = keras.layers.Conv2D(1024, 3, padding='same')(x_7_7_1024)
x = keras.layers.Flatten()(x)
x = keras.layers.Dense(4096)(x)
x = keras.layers.Dense(7 * 7 * 30)(x)
x = keras.layers.Reshape((7, 7, 30))(x)
H: 2D Pytorch tensor doesn't have independent random values I've written some Python to create a PyTorch tensor of random values, sampled from a Student's t distribution with 10 degrees of freedom:
t = torch.Tensor(())
def random_from(shape):
    return torch.distributions.StudentT(10, t.new_zeros(shape), t.new_ones(shape)).sample()
If shape is of the form $(n, m)$, all values in the resulting 2D tensor are identical. I don't understand why. I did try reading PyTorch's documentation, but I couldn't find anything that helped me understand what would be a better syntax. I suppose I could create separate samples and then concatenate them, but apart from the speed implications I'd like to know where my existing syntax goes wrong. AI: You don't use the parameters "loc" and "scale" the right way. They are not supposed to be tensors. Below is the right syntax:
dist = torch.distributions.StudentT(10, 0, 1)
dist = torch.distributions.StudentT(10)  # 0 and 1 are the default parameters
Then you can sample multiple values like this:
t = dist.rsample(torch.Size([n, m]))
t = dist.rsample(torch.Size([n]))
H: Is there any named entity recognition algorithm trained for the French language? I am trying to implement a utility for my mobile application to perform some actions based on user questions. I need an algorithm to extract named entities from a text string (French grammar). I have used nltk's interface to Stanford's NER models but it works only for English (a subset of other languages is supported but I can't find French). I have also used Polyglot but it seems that it doesn't do the job very well (maybe the models I am using are not very well trained). I don't know if there is any free REST API that can do NER for the French language, or any other algorithm, or even an already trained model for nltk/Stanford NER. AI: Yes, there is a French model free and ready to use via the spaCy package! There are small and medium sized models that should be ready to go; a summary of the training data behind them is given on the spaCy website.
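A minimal usage sketch (assuming the small French model has been downloaded, e.g. with python -m spacy download fr_core_news_sm):
import spacy

nlp = spacy.load("fr_core_news_sm")  # small French model
doc = nlp("Emmanuel Macron a rencontré Angela Merkel à Paris.")
for ent in doc.ents:
    print(ent.text, ent.label_)      # entity spans with labels such as PER and LOC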
H: Significance of comparing Receiver Operating Characteristic (ROC) curves An ROC curve plots the true positive rate (sensitivity) as a function of the false positive rate (100-specificity) for different cut-off points of a parameter. Each point on the ROC curve represents a sensitivity/specificity pair corresponding to a particular decision threshold. This would permit one to compare the goodness of the model when varying the given parameter (e.g. number of trees in a Random Forest) to help optimize that particular classifier's parameters. But should one be using ROC curves when comparing different types of classifiers (e.g. Random Forest vs Neural Net vs Logistic Regression)? I came across an instance of such an example on the second last slide here and was trying to understand the significance of these curves. As far as I am aware, the graph displayed appears not to be a necessarily fair comparison when comparing the area under the curve while tuning different parameters in different types of classifiers. AI: Should one be using ROC curves when comparing different types of classifiers (e.g. Random Forest vs Neural Net vs Logistic Regression)? Yes, because you can clearly see which model is performing best overall. The closer the curve gets to the top-left corner of the graph, the better your model is (the blue line in your example, Random Forest). If you compute the area under the ROC curve in the example, you'll get: AUC(RF) > AUC(MLP) > AUC(SVM). So AUC is a good index to show which curve is "higher". Now, sometimes you could be interested in a model that has a "steeper" ROC curve in the bottom left corner (higher chance of improving TPR, with low cost in terms of FPR), and maybe that model isn't the best in terms of AUC. (Not the case in your example.) In that case you would not choose AUC as the criterion for model choice, because it doesn't select the "best" model for you. Overall, plotting the ROC curves of different models helps you a lot, showing not only the performance over different thresholds but also comparing different solutions simultaneously. Edit: Each model gives a number for each observation that is the probability of belonging to one class (a number p between 0 and 1). Say you get p = 0.3 for one observation; with a threshold of 0.5 you will label that observation to one of the two classes (assuming there are only two labels). Your rule is: If p >= 0.5 (threshold), the observation is labeled class "one". If p < 0.5 (threshold), the observation is labeled class "zero". With an ROC curve, you basically have a "high" number of thresholds, say going from 0.01 to 0.99, for each model. So for each model, you will assign each observation to the respective class, first for t = 0.01, then t = 0.02, and so on up to t = 0.99 (t = threshold).
H: Why are only 3-4 features important in my random forest? I am running a random forest regression with Python's Scikit-Learn, code's below (X - features, y - to be predicted). # Splitting the dataset into the Training set and Test set from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 1) # Scaling from sklearn.preprocessing import StandardScaler sc_X = StandardScaler() X_train = sc_X.fit_transform(X_train) X_test = sc_X.transform(X_test) # Random forest from sklearn.ensemble import RandomForestRegressor rf =RandomForestRegressor(max_depth=2, n_estimators = 100, random_state=0) rf = rf.fit(X_train,y_train) pred_train = rf.predict(X_train) pred_test = rf.predict(X_test) I am running this code for randomly sampled 100k dataset, that has 60+ features. Each time when I check feature importance I get 3 to 4 variables as important (with one of them holding over 80% of importance), and others' importance is set to 0. It is not reasonable to me that only these are important for prediction and the rest is rubbish. var_num = X_train.shape[1] plt.barh(range(var_num), rf.feature_importances_, align='center') plt.yticks(np.arange(var_num), variable_names) plt.xlabel('Variable Importance') plt.ylabel('Variable') plt.show() Is it possible that I am missing something? That some other parameters needed to be defined? Could this be caused by a high correlation between variables themselves? Or is it really that the rest of my features are useless..? AI: RandomForestRegressor has a parameter called max_features, which is the number of features to consider when determining the optimal split. You haven't explicitly specified this, so Python will use the default (auto) and consider all features. Given that your trees are very shallow and you are considering all of the features to split, it would not surprise me that the strongest 3-4 are consistently popping up (the bagging process in random forests will cause some variation within this). Decreasing max_features and/or increasing max_depth may yield a greater variety of "important" features.
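For example, a sketch of those two changes (the exact values are illustrative, not tuned):
from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(
    n_estimators=100,
    max_depth=8,           # deeper trees let more features take part in splits
    max_features='sqrt',   # consider a random subset of features at each split
    random_state=0)
rf = rf.fit(X_train, y_train)
print(sorted(zip(rf.feature_importances_, variable_names), reverse=True)[:10])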
H: Padding in Keras with output half sized input Here is my Keras model I'm working on: model = Sequential() model.add(Conv2D(64, kernel_size=(7, 7), strides = 2, padding = 3, input_shape=input_shape)) # (224,224,64) model.add(MaxPooling2D(pool_size=(2, 2), strides = 2)) # (112,112,64) model.add(Conv2D(192, kernel_size = (3,3), padding = 1)) #(112,112,192) model.add(MaxPooling2D(pool_size = (2,2),strides = 2)) #(56,56,192) model.add(Conv2D(128, kernel_size = (1,1))) #(56,56,128) model.add(Conv2D(256, kernel_size = (3,3), padding = 1)) #(56,56,256) model.add(Conv2D(256, kernel_size = (1,1))) #(56,56,256) model.add(Conv2D(512, kernel_size = (3,3),padding = 1)) #(56,56,512) model.add(MaxPooling2D(pool_size = (2,2), strides = 2)) #(28,28,512) model.add(Conv2D(256, kernel_size = (1,1))) #(28,28,128) model.add(Conv2D(512, kernel_size = (3,3), padding = 1)) #(28,28,512) model.add(Conv2D(256, kernel_size = (1,1))) #(28,28,128) model.add(Conv2D(512, kernel_size = (3,3), padding = 1)) #(28,28,512) model.add(Conv2D(256, kernel_size = (1,1))) #(28,28,128) model.add(Conv2D(512, kernel_size = (3,3), padding = 1)) #(28,28,512) model.add(Conv2D(256, kernel_size = (1,1))) #(28,28,128) model.add(Conv2D(512, kernel_size = (3,3), padding = 1)) #(28,28,512) model.add(Conv2D(512, kernel_size = (1,1))) #(28,28,512) model.add(Conv2D(1024,kernel_size = (3,3), padding = 1)) #(28,28,1024) model.add(MaxPooling2D(pool_size = (2,2), strides = 2)) #(14,14,1024) model.add(Conv2D(512, kernel_size = (1,1))) #(14,14,512) model.add(Conv2D(1024,kernel_size = (3,3), padding = 1)) #(14,14,1024) model.add(Conv2D(512, kernel_size = (1,1))) #(14,14,512) model.add(Conv2D(1024,kernel_size = (3,3), padding = 1)) #(14,14,1024) model.add(Conv2D(1024, kernel_size = (3,3), padding = 1)) #(14,14,1024) model.add(Conv2D(1024, kernel_size = (3,3), strides = 2, padding = 3)) #(7,7,1024) model.add(Conv2D(1024,kernel_size = (3,3), padding = 1)) #(7,7,1024) model.add(Conv2D(1024, kernel_size = (3,3), padding = 1)) #(7,7,1024) model.add(Flatten()) model.add(Dense(4096)) model.add(Dense(7*7*30)) model.add(Reshape(7,7,30)) When I compile it, I got an error for padding because Keras knows only 'same', valid' and 'casual'. I understand these, but I really need padding somewhere to be equal to 3 because my output should be a half of input (we have strides equal to 2). I really don't know how to fix it. How to do padding if we want to half size the input with strides 2? AI: If you goal is simply to halve the size of your filters, you could think about using some different methods other than padding, such as dilated convolutions. Have a look at this paper for some ideas and nice explanations with pictures. Just thinking about your dimensions quickly, I am not sure you could go from 14 to 7 very easily. getting to 8 or 5 is simple enough though. One thing that isn't always obvious if you just started learning KEras, is that you can mix in Tensorflow operations directly from the Tensorflow library in with your Keras code. In TF there is a function called pad, which allows you to specify the padding manually on each side of a tensor. There are also options to say whether the padding is done with zeros, or if the values inside your original tensor are repeated/mirrored (using the mode argument). You could try using this to pad the layers. 
I can show you how to pad the tensors to get the effect you want: from keras.models import Sequential, Model from keras.layers import (Input, Conv2D, MaxPooling2D, Flatten, Dense, Reshape, Lambda) import tensorflow as tf input_shape = (448, 448, 3) batch_shape = (None,) + input_shape raw_input = tf.placeholder(dtype='float16', shape=batch_shape) paddings = tf.constant([[0, 0], # the batch size dimension [3, 3], # top and bottom of image [3, 3], # left and right [0, 0]]) # the channels dimension padded_input = tf.pad(raw_input, paddings, mode='CONSTANT', constant_values=0.0) # pads with 0 by default padded_shape = padded_input.shape # (454, 454, 3) because we add 2*3 padding input_layer = Input(padded_shape, batch_shape, tensor=padded_input) layer0 = Conv2D(192, kernel_size = (3,3), padding='valid')(input_layer) layer1 = MaxPooling2D(pool_size=(2, 2), strides = 2)(layer0) layer2 = Conv2D(192, kernel_size = (3,3), padding='valid')(layer1) layer3 = MaxPooling2D(pool_size = (2,2),strides = 2)(layer2) layer4 = Conv2D(512, kernel_size = (3,3), padding='valid')(layer3) layer5 = MaxPooling2D(pool_size = (2,2), strides = 2)(layer4) layer6 = Conv2D(256, kernel_size = (1,1), padding='valid')(layer5) # layer6.shape --> [Dimension(None), Dimension(55), Dimension(55), Dimension(256)] layer6_padded = tf.pad(layer6, paddings, mode='CONSTANT') layer6_output = Input(input_shape=layer6_padded.shape)(layer6_padded) # This will end up giving this error at compilation: # RuntimeError: Graph disconnected: ..., layer7 = Conv2D(1024, kernel_size=(3, 3), strides=2)(layer6_output) layer8 = Flatten()(layer7) layer9 = Dense(4096)(layer7) layer10 = Dense(7*7*30)(layer8) output_layer = Reshape((7, 7, 30))(layer10) # The following both fail to get the graph as we would like it model = Model(inputs=[input_layer], outputs=[output_layer]) #model = Model(inputs=[input_layer, layer6_output], outputs=[output_layer]) model.compile('adam', 'mse') model.summary() I have been unable to then bring this tensor back into the Keras model (as a layer, which is required) because the standard way of using the Input object forces it to be the entry point of the computational graph, but we want the padded tensors to form an intermediary layer. If you don't force the padded tensors into a Keras layer, attributes will be missing: # AttributeError: 'Tensor' object has no attribute '_keras_history' Which you can hack by just adding the attribute from the layer before we padded: #layer6_output._keras_history = layer6._keras_history Unfortunately, I still ran into other errors. Perhaps you can post a new question on StackOverflow asking how to to this, if you can find anything. I did have a quick try using the idea of creating two graphs and then joining them, but didn't succeed.
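As a simpler alternative not covered above, Keras also has a ZeroPadding2D layer that adds an explicit border before a 'valid' convolution; a minimal sketch of the first layer only (448 + 2*3 = 454, and a 7x7 convolution with stride 2 then yields 224):
from keras.models import Sequential
from keras.layers import ZeroPadding2D, Conv2D

model = Sequential()
model.add(ZeroPadding2D(padding=(3, 3), input_shape=(448, 448, 3)))     # explicit padding of 3
model.add(Conv2D(64, kernel_size=(7, 7), strides=2, padding='valid'))   # -> (224, 224, 64)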
H: Different number of images in classes I am working on a deep learning CNN project. The dataset contains more than 500 classes and the classes have different numbers of items (images). For example, some of the classes have 5 images and some of the classes have 10 and some of the classes have 20 images and some of the classes have more then 20 images. Can I use this dataset to create the CNN model? Should the number of the images in each class be the same number? Note: I will use VGG to train the model. AI: Frankly, even 50 images will not be sufficient if you are going to create and use a CNN model. If you think you want more images for you model training, then go for data augmentation. It is a process of transforming an image by a small amount (be it height, width, rotation etc or any combination of these). In this way, an image and its augmented image will differ slightly. You can find relevant article here- https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced To answer the part that should there be same number of images in each class, there should be approximately same number. This problem is a general problem while working on classification task and there are several ways to deal with it, including simulating the data (augmentation). I would suggest that first create a separate test set, then on the remaining train set, use data augmentation and finally create the model. EDIT Using a pretrained convnet is also an option, as stated in a deep learning book- A common and highly effective approach to deep learning on small image datasets is to use a pretrained network. A pretrained network is a saved network that was previously trained on a large dataset, typically on a large-scale image-classification task. If this original dataset is large enough and general enough, then the spatial hierarchy of features learned by the pretrained network can effectively act as a generic model of the visual world, and hence its features can prove useful for many different computer vision problems, even though these new problems may involve completely different classes than those of the original task. For instance, you might train a network on ImageNet (where classes are mostly animals and everyday objects) and then repurpose this trained network for something as remote as identifying furniture items in images.
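A small augmentation sketch with Keras (the parameter values and the 'data/train' folder are just plausible placeholders; the folder is expected to contain one sub-directory per class):
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=15,       # small random rotations
    width_shift_range=0.1,   # horizontal shifts
    height_shift_range=0.1,  # vertical shifts
    zoom_range=0.1,
    horizontal_flip=True)

train_gen = datagen.flow_from_directory('data/train', target_size=(224, 224), batch_size=32)
# pass train_gen to model.fit_generator(...) on your VGG-based model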
H: How to compare fast the value with the ones of neighbours in a 3D array Question: I want to compare every item in a 3D array with its first neighborhoods. It's really slow when I have a 500x500x500 (e.g. with only values of 0, 1, 2) ndarray. I post the principle lines here: import numpy as np # Create a list to stock all the neighbours' coordinations of the voxel wanted def check_neighbor(array, x, y, z): #top t = array[x, y , z + 1] #down d = array[x, y , z - 1] #left l = array[x, y - 1 , z] #right r = array[x, y + 1 , z] #front f = array[x - 1, y , z] #back b = array[x + 1, y , z] return [t, d, l, r, f, b] # Check the voxel with all its neighborhood def compare_neighbor(array, Value2match, Value2Bmatched): for index in np.argwhere(array==Value2match) output[index] = 1 if Value2BMatched in checkneighbor(array, index[0],index[1], index[2]) else 0 return output # Main array = np.random.randint(3, shape=(500, 500, 500)) output = compare_neighbor(array, 1, 2) This code take me hours only for the n=1 neighbour! Is there an efficient way which can also check the 2, 3... nearest neighbours? Can somebody help me? Solution 1: Based on jayprich's answer and n1k31t4's comments, I fused the both function into one and replace the argwhere() by where(). The advantage of this code is that we don't iterate voxel by voxel but do it in a vectorized way: import numpy as np # Build a helper function to SHIFT(not roll) a 3Darray def shift_helper(array, neib_value, shift=0, axis=0): #Roll the 3D array along axis with certain unity _array = np.roll(array == neib_value, shift=shift, axis=axis) # Cancel the last/first slice shifted to the first/last slice if axis == 0: if shift >= 0: _array[:1, :, :] = 0 else: _array[-1:, :, :] = 0 return _array elif axis == 1: if shift >= 0: _array[:, :1, :] = 0 else: _array[:, -1:, :] = 0 return _array elif axis == 2: if shift >= 0: _array[:, :, :1] = 0 else: _array[:, :, -1:] = 0 return _array def compneib(array, that_value, neib_value): _array = np.zeros(array.shape) _array[np.where((array == that_value) & (shift_helper(array, neib_value, shift=-1, axis=0) | shift_helper(array, neib_value, shift=1, axis=0) | shift_helper(array, neib_value, shift=-1, axis=1) | shift_helper(array, neib_value, shift=1, axis=1) | shift_helper(array, neib_value, shift=-1, axis=2) | shift_helper(array, neib_value, shift=1, axis=2) ))] = 1 return _array # Main array = np.random.randint(3, shape=(500, 500, 500)) output = compneib(array, 1, 2) Solution 2: If there would still be a faster way. AI: NumPy performs operations in a vectorised manner, you should try and operate on the whole array and avoid explicit loops. e.g. np.argwhere( (test == 1) & np.roll( test==2 , shift=-1 , axis=0 ) ) can be done for different shift and axis values and results combined
H: Scatter plot for binary class dataset with two features in python I have my dataset that has multiple features and based on that the dependent variable is defined to be 0 or 1. I want to get a scatter plot such that all my positive examples are marked with 'o' and negative ones with 'x'. I am using python and here is the code for the beginning. import numpy as np import matplotlib.pyplot as plt import pandas as pd # Importing the dataset dataset = pd.read_csv('/home/Dittu/Desktop/Project/creditcard.csv') now I know how to make scatter plots for two different classes. fig = plt.figure() ax1 = fig.add_subplot(111) ax1.scatter(x[:4], y[:4], s=10, c='b', marker="s", label='first') ax1.scatter(x[40:],y[40:], s=10, c='r', marker="o", label='second') plt.show() but how to segregate both class of examples and the plot them or plot them with distinct marks without separating? AI: Found the answer. Thank you @Aditya import seaborn as sns sns.lmplot('Time', 'Amount', dataset, hue='Class', fit_reg=False) fig = plt.gcf() fig.set_size_inches(15, 10) plt.show() where Time and Amount are the two features I needed to plot. Class is the column of the dataset that has the dependent binary class value. And this is the plot I got as required.
H: Conflicting directions of weights gradients in gradient descent? In a typical ANN backpropagation setting, we have multiple weights and we try to reduce the loss function by calculating the gradient of the function with respect to the weights let's say w1, w2, w3 to eventually update them. We calculate ∂Err/∂w1, ∂Err/∂w2, ∂Err/∂w3 and update the weights as wi = wi + ∂Err / ∂wi for each of the weights so that we move towards the direction where the loss function reduces in measure. The problem that I see is that there might be situations all the time when some of the weight delta directions conflict in terms of the loss function. That is, maybe Err reduces when w1 goes towards ∂Err/∂w1 alone, but it might well be the case that Err actually increases when w1 is updated along with w2, that is when we are taking steps together in direction of all these weights, we might actually not go down the Err. Isn't this the case? What am I missing? AI: That is, maybe Err reduces when w1 goes towards ∂Err/∂w1 alone, but it might well be the case that Err actually increases when w1 is updated along with w2, that is when we are taking steps together in direction of all these weights, we might actually not go down the Err. Isn't this the case? Not exactly. Using back propagation, the weight gradients may be calculated precisely (for the given training data). It doesn't matter that there are many of them updating at the same time. The gradients don't "conflict" as such, they are 100% compatible. But they are only valid locally to the current values of $w_i$ An update to the all the weights at once in the opposite direction to the gradient is guaranteed to reduce the error value for that training data, with an important caveat that it is only fully guaranteed to make an infinitesimal small improvement, when the step size is also infinitesimal. If you make an update step that is too large (for some value of "too large" which varies a lot depending on context), then the curve may not remain consistent over the step and your error value could increase. This problem is not directly related to updating multiple weights at once. It can occur even when you have a single weight. However, when there is a more complex function, with more weights all changing at once, there can be more places where the function does this. In practice, infinitesimal updates would take too long to train, so you need to find a larger value - but not so large it causes problems - as a compromise. In addition: The training data you have will usually allow you at best to create a rough approximation to the "true" function you are searching for. So you don't really want to find an actual global minimum in the error function for your training data set, as it would overfit. Typically, using mini-batches, the gradient is also only a rough approximation, so updates only go roughly in the right direction (this can sometimes be good as it can help escape from local minima and saddle points)
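A one-dimensional illustration of this step-size caveat, with the toy loss Err(w) = w^2 and its exact gradient 2w:
def err(w):  return w ** 2    # toy loss, minimum at w = 0
def grad(w): return 2 * w     # its exact gradient

for lr in (0.1, 1.1):         # a small and a too-large learning rate
    w = 1.0
    for _ in range(5):
        w = w - lr * grad(w)  # gradient descent step
    print(lr, err(w))         # lr=0.1 shrinks the error, lr=1.1 makes it blow up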
H: How (and why) are random forests able to represent both linear and non linear data From articles I've read, random forests are suited to representing both linear and non linear data quite well, but I can't find an explanation as to how and why they have this flexibility. Any information to clarify this would be great! AI: Decision Trees are able to create both linear and nonlinear boundaries, and so are Random Forests. This is because of how they cluster the data based on nested "if-else" statements. These statements draw vertical/horizontal lines between the samples and cluster them in rectangles. Consequently, rectangles of the same class can be far away from each other (with other-class rectangles in between) but still belong to the same class. This is how nonlinear relations are modeled.
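A tiny sketch that makes this concrete: a forest of axis-aligned splits approximating a sine wave (clearly nonlinear) with a piecewise-constant prediction:
import numpy as np
from sklearn.ensemble import RandomForestRegressor

X = np.linspace(0, 2 * np.pi, 300).reshape(-1, 1)
y = np.sin(X).ravel()

rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
print(np.abs(rf.predict(X) - y).mean())  # small error: the rectangles trace the curve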
H: How to train a machine learning model if there is a relationship between two different data points? How to train a machine learning model if there is a relationship between two different data points? I mean not relationship between two variables among the point but that between two different data examples or points, so that i want to classify similar such set of datapoints into one single class . Which algorithm is most suited in this case . I want to write the algorithm on my own. AI: This is a very broad question and really depends on the type of data you have and how it is distributed. There are many types of classifiers that you can use. This link, from scikit-learn shows you a comparison between many algorithms that could help you choose which one to pick. I understand you want to write the algorithm yourself, but I would recommend you looking at scikit-learn.org and trying out different algorithms. Once you have tried and wanted to implement yourself, you can have a look at blogs, like this one, which explains in details the intrinsic works of models, like k-nearest-neighbours for example.
H: Do we need the Preprocessing step on both Test and Train data sets? I have seen many people handle the missing or inconsistent data in both their test and train data sets. Sometimes they handle only the train data set, and sometimes they merge the train and test data sets and handle the missing values there. So, what is the best approach, and what is the difference between these two? Does either approach affect the predictive model, or is it just optional good practice to handle missing data in both data sets? AI: There are a few things that you need to be careful with here. You can do certain things when preprocessing data or performing data augmentation that can be applied across an entire dataset (train and validation). The main idea is not to allow the model to gain insight from the test data. Time-series example Missing data can be managed in many ways, such as simple imputation (filling the gaps). This is very common in time-series data. In your training data, you can fill the gaps using the previous value, the following value, the average of the data or something like the moving average. Where you must be careful is with violating the information flow through time. For example, in your test data, you should not fill gaps using a method that looks at data points in front of the empty time slot. This is because, at that point in time, you will not be able to do the same as you shouldn't know the future values. Image data example Looking instead at image data, there are data-preprocessing steps such as normalisation. This means just scaling the image pixel values to a range like $[-1, 1]$. To do this, you must compute the population mean and variance, which you then use to perform the scaling. When computing these two statistics, it is important not to include the test data. The reason is that you would be leaking information from the test set into the data that is then used to train a model. Your model technically knows things that it shouldn't; in this case, clues regarding the mean and variance of the target distribution. People might also consider "missing data" to include imbalanced datasets; i.e. there are cases that you know of, but they just don't appear in your dataset very often. There are some tricks to help with this, such as stratified sampling or cross-validation. The optimal solution would, of course, be to gather a dataset that more closely represents the problem at hand.
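In code, the safe pattern looks like this (scikit-learn shown for illustration; X_train and X_test are your already-split feature matrices):
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)  # statistics computed on the training data only
X_test = scaler.transform(X_test)        # the same statistics reused, never re-fitted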
H: Mini batch size and reset states My training data file is big, but I would like to reset the state after a mini-batch size of 50.
n_epoch=10000
n_batch=50
# create and fit the LSTM network
model = Sequential()
model.add(LSTM(3,batch_input_shape =(n_batch,trainX.shape[1], trainX.shape[2]),stateful=True))
model.add(Dense(1))
model.add(Activation("linear"))
model.compile(loss="mse", optimizer="adam")
model.summary()
#fitting model
for i in range(n_epoch):
    history=model.fit(trainX, trainY, epochs=1, batch_size=n_batch,verbose=2, shuffle=False)
    model.reset_states()
I am getting the error: ValueError: In a stateful network, you should only pass inputs with a number of samples that can be divided by the batch size. Found: 63648 samples. How can I train an LSTM with a mini-batch size of 50 when the number of samples in trainX is not divisible by 50? AI: You can simply trim your training data to a size that is divisible by 50.
In [1]: sample_size = 63648
In [2]: to_remove = sample_size % 50
In [3]: end_index = sample_size - to_remove
In [4]: model.fit(trainX[:end_index], trainY[:end_index], ...
What are the dimensions of trainX and trainY? You are not specifying a validation dataset, is that intentional? You say you want to reset the state after a mini-batch, but you are resetting the state after each epoch (one loop through your entire dataset). One more comment: n_epoch = 10000 seems excessive.
H: PCA - Error minimization and Variance Maximization I'm studying the PCA algorithm and the theory behind it. I think I understood how does it work and the idea of dimension reduction of the data in order to find a new feature (component) that maximizes the variance of the data and minimizes the error. My question is : in this algorithm , are the maximum variance and the minimum error reached at the same moment ? In this example the magenta/black line is the solution of my PCA. So I find the 1 dimension vector that reduces my 2 dimensions dataset. I found this vector because the error (length of the red lines) is minimized and the variance (distance among the red projected points) is maximized. So if I need to apply this algorithm , if I use either only the avg error minimization , will I reach the same result of the case in which I use the variance maximization principle ? Thanks. AI: Yes, the two formulations where you either capture the maximum amount of variance of the data while reducing the dimension or where you try to minimize the distance of the data from the selected subspace (i.e. reconstruction error) are equivalent.
H: Normalizing test data I have a problem with data normalization. I have data for which I need to create an SVM. I will be using the model for real-time predictions. I know that the test tuples should be normalized using the exact same values as per the training data. However, my test tuples can have values that exceed the maximum value of the data in the training set. For example, in the training set I have the following values for a given feature. Maximum: 20457 Minimum: 3 In the testing tuple, I sometimes get values like 35002. This is the case for most of the features. The problem would have been solved if I knew the maximum and minimum values for all features, but that is not possible. The maximum value can go up to any value. How do I do data normalization in a scenario like this? Can someone please help me with this? AI: Judging from your question you are probably using this formula for normalisation: (x - x_mean)/(x_max - x_min). This is just an approximation of the real normalization formula. The real one would be: $x' = \frac{x - \mu}{\sigma}$ where $\mu$ is the mean and $\sigma$ is the standard deviation. If your data follows the same trend throughout, then you can expect the mean and standard deviation to be approximately the same, which gives you a more uniform representation. Check the Wikipedia article, Feature scaling, which describes the schemes used in different ML techniques. Hope this helps!
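A short sketch of that standardisation, reusing the training statistics at prediction time (plain NumPy; X_train and X_test are arrays with the same columns):
import numpy as np

mu = X_train.mean(axis=0)
sigma = X_train.std(axis=0)

X_train_scaled = (X_train - mu) / sigma
X_test_scaled = (X_test - mu) / sigma  # same mu/sigma, even if test values exceed the training maximum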
H: 1d time series to time series approximation using deep learning I have a basic question about choosing the right architecture for my deep learning task. I have an input signal $X(t)$ as a function of time; when it is fed to the system, it generates an output $Y(t)$ which is also a function of time. I have a bunch of experiments performed with many input signals $X_1, X_2, X_3 ,...$ and many output signals $Y_1,Y_2,Y_3,...$. Instead of performing more experiments I would like to build a neural net where I can feed an input signal $X(t)$ and get the output $Y(t)$. I want to use the already measured data as training data. To me, this looks like a regression problem, but one that involves time series data. The neural network architectures that come to mind are RNNs (LSTM) or CNNs. People mainly use LSTMs for forecasting problems (knowing the history to predict the future), not regression. So can I use a CNN with 1D filters and then some pooling and fully connected layers? Will this work? What kind of setup should I use? AI: Yes, you can use one-dimensional convolutions, but you have to consider the fact that the size of your inputs should be the same. In Tensorflow you can exploit tf.nn.conv1d for your purpose. RNN architectures like LSTMs and GRUs can also be used. If your input has a pattern that may occur several times in a single signal, try to use conv1d. If you want to use RNNs, you have to figure out which kind of task you have, such as a one-to-one or one-to-many relation. For convolutional architectures, take a look here and here, and take a look here for RNNs.
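A minimal Keras sketch of such a signal-to-signal network (shapes and layer sizes are illustrative; it assumes every signal has the same fixed length T and the output has the same length):
from keras.models import Sequential
from keras.layers import Conv1D

T = 1024  # assumed fixed signal length
model = Sequential()
model.add(Conv1D(32, kernel_size=9, padding='same', activation='relu', input_shape=(T, 1)))
model.add(Conv1D(32, kernel_size=9, padding='same', activation='relu'))
model.add(Conv1D(1, kernel_size=1, padding='same'))  # one output value per time step
model.compile(loss='mse', optimizer='adam')
# X and Y have shape (num_experiments, T, 1)
# model.fit(X, Y, epochs=50, batch_size=16)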
H: Tflearn "nan" weight matrices I wanted to build a DQN. So I followed this code and watched some videos about the idea of DQN. My Code is this (mine is written in tflearn and his in keras): import tflearn as tfl import numpy as np import gym from collections import deque import random class DeepQ(): def __init__(self,game="SpaceInvaders-v0"): self.game=game self.env=gym.make(game) self.storage=deque() self.filter_size=[4,4] self.itertime=1000 self.random_move_prop=0.8 np.random.seed(1) self.minibatch_size=250 self.discounted_future_reward=0.9 def Q_Network(self,learning_rate=0.0000001,load=False,model_path=None,checkpoint_path="X://xxx//xxx//Documents//GitHub//Deeplearning_for_starters//Atari_modells//checkpoint.ckpt"): if load==False: net=tfl.layers.core.input_data(shape=[None,210,160,3])# rework this stuff net=tfl.layers.conv.conv_2d(net,nb_filter=3,filter_size=self.filter_size,activation='relu') net=tfl.layers.conv.conv_2d(net,nb_filter=3,filter_size=self.filter_size,activation="relu") #net=tfl.layers.fully_connected(net,20,activation="relu") net=tfl.layers.flatten(net) #net=tfl.layers.fully_connected(net,18,activation="relu") net=tfl.layers.fully_connected(net,10,activation='relu') net=tfl.layers.fully_connected(net,self.env.action_space.n,activation="linear") net=tfl.layers.estimator.regression(net,learning_rate=learning_rate) self.modell=tfl.DNN(net,checkpoint_path=checkpoint_path) else: net=tfl.layers.core.input_data(shape=[None,210,160,3]) net=tfl.layers.conv.conv_2d(net,nb_filter=3,filter_size=self.filter_size,activation='relu') net=tfl.layers.conv.conv_2d(net,nb_filter=3,filter_size=self.filter_size,activation="relu") #net=tfl.layers.fully_connected(net,20,activation="relu") net=tfl.layers.flatten(net) #net=tfl.layers.fully_connected(net,18,activation="relu") net=tfl.layers.fully_connected(net,10,activation='relu') net=tfl.layers.fully_connected(net,self.env.action_space.n,activation="linear") net=tfl.layers.estimator.regression(net,learning_rate=learning_rate) self.modell=tfl.DNN(net) self.modell.load(model_path,weights_only=True) def Q_Learning(self): observation=self.env.reset() for i in range(self.itertime): #self.env.render() observation=observation.reshape(1,210,160,3) if np.random.rand()<=self.random_move_prop: #print("Random step") action=np.random.randint(low=0,high=self.env.action_space.n) else: #print("Random prediction") #for debugging usefull action=self.modell.predict(observation) action=np.argmax(action) new_observation, reward, done, info=self.env.step(action) self.storage.append((observation,action,reward,new_observation,done)) observation=new_observation if done: self.env.reset() print("###############################################") print("Done with observing!") print("###############################################") minibatch=random.sample(self.storage,self.minibatch_size)# take random observations from our data x=np.zeros((self.minibatch_size,)+observation.shape) y=np.zeros((self.minibatch_size,self.env.action_space.n)) for i in range(0,self.minibatch_size): Observation=minibatch[i][0] Action=minibatch[i][1] Reward=minibatch[i][2] New_observation=minibatch[i][3] done=minibatch[i][4] print("Processing batch data... 
(step:"+str(i)+" from "+str(self.minibatch_size)+")") x[i:i+1]=Observation.reshape((1,)+observation.shape) y[i]=self.modell.predict(Observation) Q_sa=self.modell.predict(Observation) if done: y[i,action]=reward else: y[i,action]=reward+self.discounted_future_reward*np.max(Q_sa) self.modell.fit_batch(x,y) self.modell.save("X://xxx//xxx//xxx//SpaceInvaders1.tfl") print("") print("Modell fitting acomplished!") print("") def Q_predict(self,model_path="Your path here"): self.Q_Network(load=True,model_path=model_path) observation=self.env.reset() observation=observation.reshape((1,)+observation.shape) done=False total_reward=0.0 while not done: self.env.render() Q=self.modell.predict(observation) print(Q) action=np.argmax(Q) print(action) new_observation,reward,done,info=self.env.step(action) observation=new_observation observation=new_observation.reshape((1,)+observation.shape) total_reward+=reward print("Game ends with a score of: "+str(total_reward)) print("") The problem is that, if I run the predict function the network does nothing. I figured out that all weights are filled with nan. What I have read is that it can depend on the learning rate, so I have lowered the rate from 1e-3 to the actual one, but this changed nothing. AI: So, I figured it out. The problem was the loss function. I found a similar problem here. So because I am a noob in tflearn and I have no idea, if you can change the loss function to a custom one (I guess you can). I used mean_squared (Mean Squared Error) instead. This fixed my problem. I would appreciate if someone could explain the problem, so I can better understand it.
H: Recommending products to buy I have the following dataset where each row represents a customer, and each column is a product. So for example, the first customer (row 0) buys product 2 and product 7. The second customer (row 1) buys products 3,6 and 7. And so on... Basically I want to recommend customers who are already buying certain products, some other products. Here is an example: Let's say I choose all customers who are buying products 1 and 4. How would I recommend them the next best...let's say 3... products based on what other 1 and 4 customers are buying? I'd like to use keras neural networks or random forests for this kind of task. Thanks :) product 1.......product 2.........product 3....... product 4...... product 5........ product 6..... product 7 AI: This is a classic case of recommendation problem. There are a couple of steps in which a recommendation is made: Candidate Generation - What to show to a customer? Candidate Ranking - How to show the items to the customer? Personalisation - How to show the items to each customer? Step 1 and 2 are general that means they talk about the overall trend and customer base, meaning we perform these steps by saying in general what would happen. Step 3 is more personalised, where the ranking (order) in which the items are shown can be different for different customers. For starters you should experiment with Singular Value Decomposition and Matrix Factorisation for recommendation. If you have already tried your hand at these, then you should go with word2vec or doc2vec for building the step 1 and 2 and some kind of neural network at the end that ties to it in the end.
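As a starting point for the SVD/matrix-factorisation idea, here is a small sketch (scikit-learn; R stands for the customer-by-product 0/1 matrix described above, and the names are illustrative):
import numpy as np
from sklearn.decomposition import TruncatedSVD

# R: shape (n_customers, n_products), 1 = bought, 0 = not bought
svd = TruncatedSVD(n_components=3, random_state=0)
user_factors = svd.fit_transform(R)        # latent customer profiles
scores = user_factors @ svd.components_    # reconstructed affinity scores
scores[R == 1] = -np.inf                   # don't recommend what is already bought
top3 = np.argsort(-scores, axis=1)[:, :3]  # indices of the 3 best products per customer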
H: How to use multiple encoders(one-hot and numerical) together for PCA I want to implement PCA on a dataset(retail) but the data is categorical. One-hot encoding on some columns like Gender, Fabric, Brand makes sense but on other features like price range, size, I would like the encoded values to have some numeric significance, i.e. higher value actually means something. Any suggestions on implementing both these encodings together for PCA? AI: There is a method that can handle multiple types of data simulataneously called Generalized Low Rank Models - actually one paper that deals with it is called PCA on a Data Frame. GLRMs in Python (and not only Python) are implemented in H2O. Other than that you could try encoding your categorical data as numeric. There are multiple approaches to this. One example is mean encoding - see this answer for details. For implementation see Category Encoders. BTW if your task is totally unsupervised (you don't have any target) you can choose any continuous feature you have for mean encoding (so you can produce many columns from each categorical column).
H: Tools and Techniques for Analyzing German Automotive Discussion Forum Posts I work for a German online disussion forum around all things automotive, a bit like a “StackOverflow for cars”, if you will. We would like to train a model using TensorFlow with our high quality content, to be able to evaluate the quality of new content our users post on our platform. Our ultimate goal is to be able to put a link to the best answer to a question on our discussion forums. We (two backend Java developers and myself, a JavaScript frontend web developer) are very new to the field of data science and machine learning, currently going through the tutorials from Google and trying to figure out where to start. Which tools and technologies would you recommend to use for this project? How can we train a model to suit our needs? Are there any tutorials that demonstrate how to train a model to use German text as input? AI: You could go with different approaches: Lemme point few, You could just extract the keywords or tokenize out of the query using libraries like spacy or nltk, both support german languages. Then go for like page ranking approach, based on query optimization: Uber. You could go for Attention based seq2seq model, where you feed the inputs as questions and the answers as output. More like how you train chatbots and language translation models. This is popularly known as Neural Machine translation. Tensorflow has an open source implementation. nmt The second one is feasible because it has tonnes of examples out there. But try to use spacy and nltk for tokenizing according to German language and also try to use German word embeddings: example, which contains pre-trained weights on huge corpora representing each German word. Hope this helps.
H: panda grouping by month with transpose Based on the following dataframe, I am trying to create a grouping by month, type and text, I think I am close to what I want, however I am unable to group by month the way I want, so I have to use the column transdate. However, when I transpose this, I lose the order df = pd.DataFrame({'date':['6/2/2017','5/23/2017','5/20/2017','6/22/2017','4/21/2017','7/2/2017','5/23/2017','5/20/2017','8/22/2017','2/21/2017'],'rev':[100,200,300,400,500,-70,-250,-200,400,500],'text':['Car','House','Car','Truck','House','Car','House','Car','Truck','House']}) df['date'] = pd.to_datetime(df['date']) df = df.sort_values('date') #New Column df['transdate'] = pd.to_datetime(df['date']) df['transdate'] = df['transdate'].dt.strftime('%B - %Y') #second new column df['type'] = np.where(df['rev']>0, 'positive', 'negative') This give me this: Then, I create a pivot table that I am transposing df_pivot = df.pivot_table(index='transdate',columns=['type','text'],aggfunc=sum, fill_value=0).T df_pivot I am wondering how I can sort the first row, starting with feb 2017, then april 2017 and so on? Or, starting the other way around, aug 2017 then july 2017 ... but keeping the order of the months? Or, will be best to do the pivot table with index date and then, do the grouping? If this is the case, how can I do the grouping? AI: # make a month column to preserve the order df['month'] = pd.to_datetime(df['date']).dt.strftime('%m') # create the pivot table with this numeric month column df_pivot = df.pivot_table(index='month',columns=['type','text'],aggfunc=sum, fill_value=0).T # create a mapping between numeric months and the English version mapping = pd.Series(df.transdate.tolist(),index=df.month.values).drop_duplicates() # replace the columns according to the mapping df_pivot.columns = [i for i in map(mapping.get, df_pivot.columns)]
H: How handle the add of a new feature to the dataset? Let's say that you trained a model (eg. Random Forest) on a dataset with ten features (or columns). Now you add one or more features to the dataset. You need the information brought from these new features so you want to enrich your model. What happens to the data collected till now? How can I use it? Those data have just ten features. How can I use them to train the new model? AI: What you ask for is known as Transfer Learning in the Machine Learning framework, so you might want to look more into that direction. An interesting publication regarding Transfer Learning in Decision Trees is this.
H: Number of features of the model must match the input. Model n_features is `N` and input n_features is `X`. I am new to data science and trying get some results. I'm applying Decision Tree Classifier. When my train and test datasets' size are not equal I get an error `Number of features of the model must match the input. Model n_features is N (no. of entries in training datasets) and input n_features is X (no. of entries in test datasets). If I have 100 entries in my dataset and parameter for split is test_size=0.30 as: import pandas as pd from pandas import Series, DataFrame import numpy as np from sklearn import tree from sklearn.model_selection import train_test_split data=pd.read_csv("ndata.csv") X_train, X_test, y_train, y_test = train_test_split(data.dis, data.gen, test_size=0.30, random_state=42) c = tree.DecisionTreeClassifier() y_test_size = y_test.size y_train_size = y_train.size X_train = [X_train] y_train = [y_train] X_test = [X_test] y_test = [y_test] c.fit(X_train, y_train) accu_train = np.sum(c.predict(X_train) == y_train)/y_train_size accu_test = np.sum(c.predict(X_test) == y_test)/y_test_size print("Accuracy on Train: ", accu_train) print("Accuracy on Test: ", accu_test) And the error occurs as follows: ValueError Traceback (most recent call last) <ipython-input-33-f6cc77390526> in <module>() 24 25 accu_train = np.sum(c.predict(X_train) == y_train)/y_train_size ---> 26 accu_test = np.sum(c.predict(X_test) == y_test)/y_test_size 27 28 print("Accuracy on Train: ", accu_train) ~/anaconda3/lib/python3.6/site-packages/sklearn/tree/tree.py in predict(self, X, check_input) 410 """ 411 check_is_fitted(self, 'tree_') --> 412 X = self._validate_X_predict(X, check_input) 413 proba = self.tree_.predict(X) 414 n_samples = X.shape[0] ~/anaconda3/lib/python3.6/site-packages/sklearn/tree/tree.py in _validate_X_predict(self, X, check_input) 382 "match the input. Model n_features is %s and " 383 "input n_features is %s " --> 384 % (self.n_features_, n_features)) 385 386 return X ValueError: Number of features of the model must match the input. Model n_features is 70 and input n_features is 30 Link to data file: https://gist.github.com/mutafaf/7715ad67bc3cf4e08985afefcc0ce08a#file-ndata-csv Why do I'm getting this error. Is it necessary to have dataset size of both train and test equal? AI: You are supposed to pass numpy arrays and not lists as arguments to the DecisionTree, since your input was a list it gets trained as 70 features (1D list) and your test had list of 30 elements and the classifier sees it as 30 features. Nonetheless, you need to reshape your input numpy array and pass it as a matrix meaning: X_train.values.reshape(-1, 1) instead of X_train (it should be a numpy array not a list) this is the entire gist: data=pd.read_csv("ndata.csv") X_train, X_test, y_train, y_test = train_test_split(data.dis, data.gen, test_size=0.30, random_state=42) from sklearn import tree c = tree.DecisionTreeClassifier() c.fit(X_train.values.reshape(-1, 1), y_train) accu_train = np.sum(c.predict(X_train.values.reshape(-1, 1)) == y_train)/y_train_size accu_test = np.sum(c.predict(X_test.values.reshape(-1, 1)) == y_test)/y_test_size print("Accuracy on Train: ", accu_train) print("Accuracy on Test: ", accu_test) I'm getting the following output: Accuracy on Train: 0.8857142857142857 Accuracy on Test: 0.7333333333333333 Thanks for sharing the dataset. It was helpful for testing.
H: Same input size but cannot fit the model in keras I am trying to fit this model in keras but getting this error : Namespace(batch_size=32, epoch=10, num_classes=2) Start Train loading <class 'numpy.uint8'> Test loading <class 'numpy.ndarray'> Validation loading done_loading Traceback (most recent call last): File "main.py", line 67, in <module> validation_data=(x_val, Y_valHot) File "/tools/anaconda3/envs/py35/lib/python3.5/site-packages/keras/engine/training.py", line 1581, in fit batch_size=batch_size) File "/tools/anaconda3/envs/py35/lib/python3.5/site-packages/keras/engine/training.py", line 1414, in _standardize_user_data exception_prefix='input') File "/tools/anaconda3/envs/py35/lib/python3.5/site-packages/keras/engine/training.py", line 153, in _standardize_input_data str(array.shape)) ValueError: Error when checking input: expected data to have shape (None, 224, 224, 3) but got array with shape (9730, 244, 224, 3) The function used to fit the model is model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, shuffle=True, verbose=1, validation_data=(X_valid, Y_valid), ) Can someone tell me where I went wrong. AI: ValueError: Error when checking input: expected data to have shape (None, 224, 224, 3) but got array with shape (9730, 244, 224, 3) it clearly says that while preprocessing that either X_train or X_valid have (9730, **244**, 224, 3) instead of (9730, **224**, 224, 3)
H: High multi class classification with small data set I am working on a multi-class classifier for a data set with 240K samples and ~1880 classes; the most populated class is 4% of the dataset and a large number of classes hold less than 1% of the samples. I am trying to find a method to decide for which classes I have enough information to predict correctly, and when to reject a prediction because of the classifier's performance for that particular class. Also, are there any good methods to deal with this kind of situation, i.e. many classes with very few samples each? AI: Yes, there have been scenarios like that. ImageNet has more than 1000 classes, and the same could be said for certain CIFAR datasets. If you go the deep learning route, one-hot encode the labels and track something like multi-class AUC instead of relying on validation accuracy alone. A Keras DNN is a reasonable starting point (you can try more advanced architectures afterwards), with the last Dense layer having 1880 neurons, one per label, and categorical_crossentropy as the loss. Hope this helps.
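As a rough sketch of the kind of Keras model described above (the hidden layer sizes and the input dimensionality are placeholders; only the 1880-unit softmax output and the categorical_crossentropy loss follow directly from the problem):
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.utils import to_categorical

num_classes = 1880
num_features = 300               # assumed size of your feature vectors

model = Sequential()
model.add(Dense(512, activation='relu', input_dim=num_features))
model.add(Dropout(0.3))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(num_classes, activation='softmax'))   # one output unit per label

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# the labels should be one-hot encoded, e.g. y_onehot = to_categorical(y, num_classes)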
H: How to choose the best optimiser in Deep Learning algorithms? I have been training a CNN to classify 4 faulty (acoustic emission / 250 kHz) signals. I have no problem implementing the algorithm using tensorflow libraries, but I am confused about which optimizer to choose. I have been trying out different optimizers; the Adam optimizer and the plain gradient descent optimizer gave the same test classification accuracy (~84% to 85%) under k-fold cross-validation. Since the results are the same for both optimizers, is it because of the data or because of training on a large amount of data (64000 samples per fault condition)? Are there any specific guidelines for choosing a particular optimizer for a given kind of data? Or is it just trial and error? AI: I found this helpful. I usually stick with Adam, and rarely attempt SGD. In simple words: go with a recent optimiser that your framework already implements and that has been reported to work better than others on popular datasets. Saying that a specific optimiser is best for this specific kind of data is subjective, and I haven't read any relevant material on that. In the future, if anyone finds such material, please do comment or edit this post. Hope this helps.
H: How to solve online clustering problem Suppose we have a clustering problem where each data sample is multi-dimensional, with a mix of numeric and categorical attributes. If the problem is static, i.e. we have all the data up front, we can solve it with the K-prototypes algorithm (a variant of K-Means). But what if the data arrives dynamically? How can we solve the problem in that case? Possible constraints: data arrives dynamically; the number of clusters is not fixed (it will increase over time); if similarity(new_data_sample) < threshold for all existing clusters, then a new cluster should be created containing new_data_sample. AI: See the existing methods on data stream clustering: https://en.wikipedia.org/wiki/Data_stream_clustering
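As an illustration of the threshold rule in the question, here is a minimal sketch of a "leader"-style online clustering loop in pure numpy. It uses Euclidean distance and numeric features only; for mixed numeric/categorical data you would swap in a K-prototypes-style dissimilarity, and the threshold value is entirely problem-specific:
import numpy as np

threshold = 2.0
centroids, counts = [], []

def process(x):
    # assign x to the nearest existing cluster, or open a new one
    if centroids:
        d = [np.linalg.norm(x - c) for c in centroids]
        j = int(np.argmin(d))
        if d[j] < threshold:
            counts[j] += 1
            centroids[j] += (x - centroids[j]) / counts[j]   # running-mean update
            return j
    centroids.append(x.astype(float).copy())
    counts.append(1)
    return len(centroids) - 1

for x in np.random.randn(100, 5):     # stand-in for the incoming stream
    process(x)
print(len(centroids), 'clusters created')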
H: Rasa_Nlu SpaCy installing dependencies I'm trying to do some intent extraction/recognition. I've installed all dependancies (I believe) but it still gives me the error: File "C:\Users\user\.spyder-py3\chatbot\Outlook\rasa_nlu\components.py", line 65, in validate_requirements "Please install {}".format(", ".join(failed_imports))) Exception: Not all required packages are installed. To use this pipeline, you need to install the missing dependencies. Please install sklearn_crfsuite, spacy Can someone please share their knowledge on how I can resolve this? Here is my pip list: absl-py (0.1.13) asciitree (0.3.3) asn1crypto (0.24.0) astor (0.6.2) atomicwrites (1.1.5) attrs (18.1.0) Automat (0.6.0) beautifulsoup4 (4.6.0) bleach (1.5.0) boto (2.48.0) boto3 (1.5.20) botocore (1.8.50) bz2file (0.98) certifi (2018.4.16) cffi (1.11.5) chardet (3.0.4) cloudpickle (0.5.2) colorama (0.3.9) coloredlogs (9.0) conda (4.4.10) constantly (15.1.0) cryptography (2.1.4) cycler (0.10.0) cymem (1.31.2) Cython (0.27.2) cytoolz (0.8.2) decorator (4.2.1) dill (0.2.8.2) docutils (0.14) duckling (1.8.0) en-core-web-md (2.0.0) en-core-web-sm (2.0.0) et-xmlfile (1.0.1) ftfy (4.4.3) future (0.16.0) gast (0.2.0) gensim (3.4.0) gevent (1.2.2) gitdb2 (2.0.3) GitPython (2.1.9) greenlet (0.4.13) grpcio (1.10.0) html5lib (1.0.1) humanfriendly (4.12.1) hyperlink (17.3.1) idna (2.6) incremental (17.5.0) jdcal (1.3) jmespath (0.9.3) JPype1 (0.6.3) jsonschema (2.6.0) kiwisolver (1.0.1) klein (17.10.0) koala2 (0.0.17) lxml (4.2.2) Markdown (2.6.11) matplotlib (2.1.0) menuinst (1.4.11) mkl-fft (1.0.0) mkl-random (1.0.1) mock (2.0.0) more-itertools (4.2.0) msgpack-numpy (0.4.1) msgpack-python (0.5.4) murmurhash (0.28.0) networkx (1.9) nltk (3.2.5) numpy (1.14.0) oauthlib (2.0.7) openpyxl (2.4.9) packaging (17.1) pandas (0.22.0) pandas-datareader (0.6.0+21.gda18fbd) pathlib (1.0.1) pbr (4.0.4) pip (9.0.1) plac (0.9.6) pluggy (0.6.0) preshed (1.0.0) protobuf (3.5.2.post1) py (1.5.4) py4j (0.10.6) pyasn1 (0.4.3) pyasn1-modules (0.2.2) pycosat (0.6.3) pycparser (2.18) pynput (1.3.10) pyOpenSSL (17.5.0) pyparsing (2.2.0) pyreadline (2.1) PySocks (1.6.7) pyspark (2.3.0) pytest (3.6.2) python-crfsuite (0.9.5) python-dateutil (2.7.2) pytz (2018.4) pywin32 (223) PyYAML (3.12) rasa-nlu (0.13.0a2, c:\users\users\rasa_nlu) regex (2017.4.5) requests (2.18.4) requests-file (1.4.3) requests-ftp (0.3.1) requests-oauthlib (0.8.0) ruamel-yaml (0.15.35) s3transfer (0.1.13) scikit-learn (0.19.1) scipy (1.1.0) selenium (3.11.0) service-identity (17.0.0) setuptools (38.4.0) simplejson (3.13.2) six (1.11.0) sklearn-crfsuite (0.3.6) smart-open (1.5.7) smmap2 (2.0.3) spacy (2.0.11) tabulate (0.8.2) tensorboard (1.7.0) tensorflow (1.7.0) termcolor (1.1.0) textblob (0.15.1) thinc (6.10.2) toolz (0.9.0) tqdm (4.19.5) tweepy (3.6.0) Twisted (18.4.0) typing (3.6.2) ujson (1.35) urllib3 (1.22) wcwidth (0.1.7) webencodings (0.5.1) Werkzeug (0.14.1) wheel (0.30.0) win-inet-pton (1.0.1) wincertstore (0.2) wrapt (1.10.11) xlrd (1.1.0) XlsxWriter (1.0.2) zope.interface (4.5.0) AI: I found an answer that actually ended up helping me from GitHub. Unfortunately I just lost the link. But here is the code I used. It basically kept telling me which dependencies were still missing. I'm not sure how they weren't captured in the install...but I was able to get them and put them in the lib. import importlib package='spacy' importlib.import_module(package)
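A slightly more general version of that check, looping over the two packages named in the error message, looks like this; the fix is then to pip-install whichever names get printed (and, for spaCy, to download a language model as well):
import importlib

required = ['spacy', 'sklearn_crfsuite']   # taken from the error message
missing = []
for name in required:
    try:
        importlib.import_module(name)
    except ImportError:
        missing.append(name)

print('Missing packages:', missing or 'none')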
H: samples for different objects with unique labels Let's say I have samples taken from a veterinary hospital. One of my features is the type of animal, and there are other features such as fever, size, symptom, etc.; my labels are the medicine given to that animal. Each medicine is unique to one type of animal (Medicine A should be given to animal A, and no animal B ever receives Medicine A). What are the pros and cons of building one classifier for the whole data set vs. a separate classifier for each animal (since there won't be valid generalization between animals)? AI: We ran into this problem when deciding between patient-based deep learning models and models based on individual observations (the same patient could have come several times). In your case, that is analogous to per-animal models vs. a general one-for-all model trained on all the animals. Pros of per-animal models: a classifier built for one animal type will usually fit that animal's cases better than a single model trained on all animals together. Cons of per-animal models: if you don't have enough data for one particular animal, its model can't do much for that case. Note that this cuts both ways: I can't speak for veterinary data specifically, but a general one-for-all model may also misclassify such cases, precisely because of the lack of data for that particular animal. I would suggest you try both. There is nothing certain here, only hunches and guesses from past experience. Hope this helps.
H: Distance between very large discrete probability distributions I have 192 countries where each country has some value for 1 million attributes which sum up to 1 (a discrete probability distribution). For any one country most of the values for the attributes are 0. Now I am trying to find the distance/similarity between those countries using these attributes. I know we can use Jensen Shannon Divergence between two discrete probability distributions to get a distance measure, but the caveat is that all the values have to be non-zero. Given that there are zero valued attributes for the countries, is there any other suitable statistical distance measure that can help me to cluster these countries using these 1 million attributes? AI: Yes, plenty. Get the book "encyclopedia of distances". For example, you can use Histogram Intersection distance. Since your data is already normalized, that reduces to Manhattan distance, if I am not mistaken. Yes: this can be appropriate for distributions.
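Since the rows are already normalized, here is a hedged sketch of the pairwise computation with scikit-learn (the data here is random and much smaller than 1 million attributes, just to show the call; zero-valued attributes are not a problem for this metric):
import numpy as np
from sklearn.metrics import pairwise_distances

rng = np.random.RandomState(0)
X = rng.rand(192, 10000)                     # stand-in for the country x attribute table
X[X < 0.9] = 0.0                             # make most entries zero, as in the real data
X = X / X.sum(axis=1, keepdims=True)         # each row is a probability distribution

D = pairwise_distances(X, metric='manhattan')   # 192 x 192 distance matrix
print(D.shape, D[0, 1])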
H: Reading accuracy in Keras from the saved model Yesterday I trained the model in Keras all night, but when I tried to see the results, my computer was not working. When I turned it ON, then I saw 'model.h5' file built already, but I couldn't see the results of accuracy of my training and testing. I saved the model with 'model.save', but how can I now see the accuracy of the model only from that saved model? AI: According to the official Keras website, you have to use: keras.models.load_model(filepath) Example: model = load_model('my_model.h5') This will load your saved H5 model to 'model' and then you can try: model.evaluate(x,y) To return the loss value & metrics values for the model in test mode.
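Note that model.save() stores the architecture, weights and optimizer state, but not the training History object, so re-evaluating on held-out data is the way to get a score back. A small hedged example (x_test and y_test stand for whatever test arrays you still have):
from keras.models import load_model

model = load_model('model.h5')
# x_test, y_test: the same held-out arrays used during training
loss, acc = model.evaluate(x_test, y_test, batch_size=128)
print('Test loss: %.4f, test accuracy: %.4f' % (loss, acc))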
H: How does a Decision Tree handle an unknown category in a split? I have in mind a situation where a Decision Tree is trained with a dataset where one category has got just three possible values: A, B and C. So as I understand the node for this category will have three splits: A, B and C. What will happen in case an observation with a null for this feature arrives, or if in the testing set appears a value D for this category? Does Decision Tree leave one of the splits as default to handle this situations? AI: There are two perspectives to this question, from a mathematical or machine learning perspective and from a technical perspective. From a technical perspective this depends on the implementation of the decision tree. An unseen value is not that different from a missing value, and sklearn for example does not deal with unknown values well and will fail on unseen or unknown values. Other tree implementations might deal better with this. From a mathematical or ML perspective, without making some assumptions you cannot solve this problem of course. However an assumption you could make is that unknown or unseen values are average values. With that assumption you could follow both paths on the split where the unknown value occurs. Then you collect both results from the split and weigh them by the number of training samples that happened there. That way you can still make a prediction for these unknown values.
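On the technical side, one practical workaround in scikit-learn (version 0.20 or newer) is to one-hot encode the feature with handle_unknown='ignore', so that an unseen category simply becomes an all-zero vector instead of crashing at predict time; a small sketch:
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

X_train = np.array([['A'], ['B'], ['C'], ['A']])
y_train = [0, 1, 1, 0]

enc = OneHotEncoder(handle_unknown='ignore', sparse=False)
Xt = enc.fit_transform(X_train)

clf = DecisionTreeClassifier().fit(Xt, y_train)

# 'D' was never seen during training: it encodes to [0, 0, 0] and prediction still works
X_new = enc.transform(np.array([['D']]))
print(X_new, clf.predict(X_new))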
H: How to provide classified feature to a neural network Let's say I have a feature that may take one of 4 values: 1, 2, 3, 4. I want to provide it as an NN input; what is the proper way to do that? I can map it like 1 -> -1.0 | 2 -> -0.3 | 3 -> 0.3 | 4 -> 1.0, or something similar to get a mean of 0.0 and std near 1.0. But in this mapping 1 is much more different from 4 than 3 is from 4, and I don't want such discrimination, because 1 and 4 are just as different to me as 3 and 4. Another way is to have 4 features, each of which relates to one class 1, 2, 3, 4 and has value 1 if the original feature matches it and 0 if it doesn't. Like this: 1 -> [1,0,0,0], 2 -> [0,1,0,0], 3 -> [0,0,1,0], 4 -> [0,0,0,1]. But I don't like the fact that this feature then gets too much weight, especially if it has many classes. I was thinking of making a separate layer just for this feature; do you have a better solution? AI: In your problem, the feature is a categorical (nominal) variable: you cannot infer a relation between the categories just from the value itself, as you could with an ordinal variable (where the value indicates a relation/distance between categories). The solution that you propose, 1 -> [1,0,0,0], 2 -> [0,1,0,0], 3 -> [0,0,1,0], 4 -> [0,0,0,1], is called One-Hot Encoding. This is one of the most popular ways of encoding categories during preprocessing in order to feed them to a classifier, so I recommend doing exactly that. You mention that you are afraid the feature gets too much weight. If what you actually observe is that some categories dominate because of the number of samples they have in the dataset, that is called class imbalance; a way of circumventing it is to pre-weight your samples for your classifier, please take a look at this method implementation provided by the sklearn package.
H: "residual error" of LSTM during backprop vs usual "error" What does the residual error mean when we are talking about an LSTM? Taken from the middle of section 3 of this paper, where it says: "...of the residual error $\epsilon$" Where $s_0$ is the initial state of the RNN network. Question: how is a residual error different from a usual error? Why use such a term? AI: Residual errors are the errors that remain after a model has tried fitting to some data. It is the error which resides. People use that term interchangeably with just error or residuals, but after a model has been tested, it simply means how much of the data cannot be explained by the model. The letter $\epsilon$ is commonly used to denote the stochastic noise inherent in the (co)variates of a model, i.e. the noise/error that we cannot explain with the given data.
H: Binary Classification I wanted to start off by saying this is not an exact duplicate of the other question. I checked it and it didn't have what I urgently need. So here is the problem. I have a dataset with 30 features and 100000 rows of train data. I want to make a binary classification model which determines whether a person is eligible or not for membership at our club. I am a rookie data scientist, and binary classification is a first for me. So please help me and tell me which model would be the most accurate for this purpose. Also, the time taken by the model to train doesn't matter... Thank you so much update: Ok, I have used logistic regression and instead of marking some members as accepted (1) it is telling me that all members do not get the membership... So I thought there might be a mistake in choosing the model, so I am looking for other models... AI: As your data is highly imbalanced, and given your task, this is a case for anomaly detection. Anomaly detection applies when your data has one kind of example in very low numbers and the other in very high numbers, like your membership split here. Another example is detecting flaws in car engines: out of 10000 engines, only 40 are flawed. Similarly, members are very rare compared to non-members, so treat the people who are members as the anomalies. https://www.allerin.com/blog/machine-learning-for-anomaly-detection As you can check in the above link, there are both supervised and unsupervised methods available for these kinds of tasks. I suggest you try those methods. Also you can check this link for some more explanation- https://www.datascience.com/blog/python-anomaly-detection
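If you want to try the anomaly-detection route in scikit-learn, IsolationForest is one of the simpler starting points; a rough sketch, where the contamination value is just the approximate fraction of members in your data and the features are random stand-ins:
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
X = rng.randn(1000, 30)                  # stand-in for your 30 features

iso = IsolationForest(contamination=0.05, random_state=0)
iso.fit(X)

pred = iso.predict(X)                    # +1 = "normal" (non-member), -1 = "anomaly" (member)
print('flagged as anomalies:', np.sum(pred == -1))
Alternatively, if you want to stay with logistic regression, passing class_weight='balanced' to the sklearn estimator often stops the model from predicting the majority class for everyone.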
H: Implementing simple linear regression using a neural network I have been trying to implement simple linear regression using neural networks in Keras in hope of understanding how to work in the Keras library. Unfortunately, I am ending up with a very bad model. Here is the implementation: from pylab import * from keras.models import Sequential from keras.layers import Dense #Generate dummy data data = data = linspace(1,2,100).reshape(-1,1) y = data*5 #Define the model def baseline_model(): model = Sequential() model.add(Dense(1, activation = 'linear', input_dim = 1)) model.compile(optimizer = 'rmsprop', loss = 'mean_squared_error', metrics = ['accuracy']) return model #Use the model regr = baseline_model() regr.fit(data,y,epochs =200,batch_size = 32) plot(data, regr.predict(data), 'b', data,y, 'k.') The generated plot is as follows: Can somebody point out the flaw in the above definition of the model (which could ensure a better fit)? AI: Your code works perfectly. The only problem is that the learning of the parameters is not finished. If you try with 10000 epochs this will works but this is way too much for this problem. As a matter of fact you can see that the loss is diminishing very slowly. Solution : Increase the learning rate. I set the batch size to one because this penalize the convergence speed here. Increasing the batch size is useful when you need to avoid overfitting. Here you want to overfit your data. Moreover I choose to use a simpler update rule with the SGD optimizer. With those changes, you will see that only 4 epochs are necessary to fit perfectly your data. from pylab import * from keras.models import Sequential from keras.layers import Dense from keras import optimizers #Generate dummy data data = data = linspace(1,2,100).reshape(-1,1) y = data*5 #Define the model def baseline_model(): model = Sequential() model.add(Dense(1, activation = 'linear', input_dim = 1)) sgd = optimizers.SGD(lr=0.2) model.compile(optimizer = sgd, loss = 'mean_squared_error', metrics = ['accuracy']) return model #Use the model regr = baseline_model() regr.fit(data,y,epochs = 4,batch_size = 1) plot(data, regr.predict(data), 'b', data,y, 'k.')
H: My Neural network in Tensorflow does a bad job in comparison to the same Neural network in Keras I am trying to predict "sales" from this dataset: https://www.kaggle.com/c/rossmann-store-sales There are>1,000,000 rows, I use 10 features from the dataset to predict sales I merged two datasets into one in advance. I created a code in Keras to predict "sales". Firstly I created some new variables, threw away some unneeded data. Then I applied one hot encoding on categorical variables, split the dataset into train and test parts, scaled variables of X_train and X_test with StandardScaler. After that, I created a Keras model that looks like this: model = Sequential() model.add(Dense(units = 64, kernel_initializer = 'uniform', activation = 'relu', input_dim = 31)) model.add(Dropout(p = 0.1)) model.add(Dense(units = 64, kernel_initializer = 'uniform', activation = 'relu')) model.add(Dropout(p = 0.1)) model.add(Dense(units = 64, kernel_initializer = 'uniform', activation = 'relu')) model.add(Dropout(p = 0.1)) model.add(Dense(units = 64, kernel_initializer = 'uniform', activation = 'relu')) model.add(Dropout(p = 0.1)) model.add(Dense(units = 1, kernel_initializer = 'uniform', activation='linear')) model.compile(loss='mse', optimizer='adam', metrics=['mse', 'mae', 'mape']) history = model.fit(X_train, y_train, batch_size = 10000, epochs = 15) It is a pretty basic model: 4 layers, each has 64 neurons, small dropout to prevent overfitting, relu as an activator, mean squared error as loss function, adam as an optimizer, 15 epochs. The results of this model: R-squared: 0.86 MSE: 20841 MAE: 103 I suppose it is doing a good job, this is a comparison of real and predicted values y_test final_preds 0 1495.0 1737.393188 1 970.0 763.265747 2 660.0 696.281006 3 695.0 884.019226 4 802.0 620.464294 5 437.0 413.590912 6 599.0 564.844177 7 426.0 507.872650 8 1163.0 934.790405 9 563.0 591.833313 10 798.0 729.736572 11 507.0 422.795746 12 447.0 546.338440 13 437.0 437.536194 14 599.0 643.752441 15 607.0 667.271423 16 836.0 793.968872 17 568.0 599.968262 18 522.0 508.874084 19 350.0 395.198883 20 1160.0 1277.464111 I tried to "mimic" the same structure of neural network with the same configurations in Tensorflow by using DNNRegressor. The results were not even close to what Keras achieved. 
My code for TF is: Creating feature columns DayOfWeek_vocab = [4, 3, 1, 5, 6, 2, 7] DayOfWeek_column = tf.feature_column.categorical_column_with_vocabulary_list( key="DayOfWeek", vocabulary_list=DayOfWeek_vocab Open_vocab = [1] Open_column = tf.feature_column.categorical_column_with_vocabulary_list( key="Open", vocabulary_list=Open_vocab) Promo_vocab = [1,0] Promo_column = tf.feature_column.categorical_column_with_vocabulary_list( key="Promo", vocabulary_list=Promo_vocab) StateHoliday_vocab = ['0', 'b', 'a', 'c'] StateHoliday_column = tf.feature_column.categorical_column_with_vocabulary_list( key="StateHoliday", vocabulary_list=StateHoliday_vocab) SchoolHoliday_vocab = [1, 0] SchoolHoliday_column = tf.feature_column.categorical_column_with_vocabulary_list( key="SchoolHoliday", vocabulary_list=SchoolHoliday_vocab) StoreType_vocab = ['a', 'd', 'c', 'b'] StoreType_column = tf.feature_column.categorical_column_with_vocabulary_list( key="StoreType", vocabulary_list=StoreType_vocab) Assortment_vocab = ['a', 'c', 'b'] Assortment_column = tf.feature_column.categorical_column_with_vocabulary_list( key="Assortment", vocabulary_list=Assortment_vocab) month_vocab = [10, 3, 4, 2, 9, 6, 5, 7, 1, 8, 12, 11] month_column = tf.feature_column.categorical_column_with_vocabulary_list( key="month", vocabulary_list=month_vocab) Season_vocab = ['Autumn', 'Spring', 'Winter', 'Summer'] Season_column = tf.feature_column.categorical_column_with_vocabulary_list( key="Season", vocabulary_list=Season_vocab) feature_columns = [ tf.feature_column.indicator_column(DayOfWeek_column), tf.feature_column.indicator_column(Open_column), tf.feature_column.indicator_column(Promo_column), tf.feature_column.indicator_column(StateHoliday_column), tf.feature_column.indicator_column(SchoolHoliday_column), tf.feature_column.indicator_column(StoreType_column), tf.feature_column.indicator_column(Assortment_column), tf.feature_column.numeric_column('CompetitionDistance'), tf.feature_column.indicator_column(month_column), tf.feature_column.indicator_column(Season_column), ] The model itself input_func = tf.estimator.inputs.pandas_input_fn(x=X_train,y=y_train ,batch_size=10000,num_epochs=15, shuffle=True) model = tf.estimator.DNNRegressor(hidden_units=[64,64,64,64],feature_columns=feature_columns, optimizer=tf.train.AdamOptimizer(learning_rate=0.0001), activation_fn = tf.nn.relu) model.train(input_fn=input_func,steps=1000000) The structure is the same as in Keras, 4 layers, 64 neurons. relu, adam and mse as cost (it is a default for DNNRegressor), but tf does not work as good as Keras Results are a mess, MSE is 44303762026251.3, MAE is 3809120.3086946052, R-squared is even negative, -4598900.028032559 What did I do wrong here? Did I forget something in Tensorflow? Keras is using TF, so I suppose that results should be similar if the model is tuned in the same way. I randomly put numbers in layers, neurons, learning rate, epochs, but it does not work as well Thank you in advance! edit1 Thanks for your comments! I tried to apply what you recommended. I totally abanded DNNRegressor and tried to "manually" create everything with tf.layers.dense. I, again, copied the structure of keras (changed to glorot in keras as well). 
Thats how it looks now: import tensorflow as tf import numpy as np import uuid x = tf.placeholder(shape=[None, 30], dtype=tf.float32) y = tf.placeholder(shape=[None, 1], dtype=tf.float32) dense = tf.layers.dense(x, 30, activation = tf.nn.relu, bias_initializer = tf.zeros_initializer(), kernel_initializer = tf.glorot_uniform_initializer()) dropout = tf.layers.dropout(inputs = dense, rate = 0.1) dense = tf.layers.dense(dropout, 64, activation = tf.nn.relu, bias_initializer = tf.zeros_initializer(), kernel_initializer = tf.glorot_uniform_initializer()) dropout = tf.layers.dropout(inputs = dense, rate = 0.1) dense = tf.layers.dense(dropout, 64, activation = tf.nn.relu, bias_initializer = tf.zeros_initializer(), kernel_initializer = tf.glorot_uniform_initializer()) dropout = tf.layers.dropout(inputs = dense, rate = 0.1) dense = tf.layers.dense(dropout, 64, activation = tf.nn.relu, bias_initializer = tf.zeros_initializer(), kernel_initializer = tf.glorot_uniform_initializer()) dropout = tf.layers.dropout(inputs = dense, rate = 0.1) dense = tf.layers.dense(dropout, 64, activation = tf.nn.relu, bias_initializer = tf.zeros_initializer(), kernel_initializer = tf.glorot_uniform_initializer()) dropout = tf.layers.dropout(inputs = dense, rate = 0.1) output = tf.layers.dense(dropout, 1, activation = tf.nn.sigmoid) cost = tf.losses.absolute_difference(y, output) #mae optimizer = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost) init = tf.global_variables_initializer() tf.summary.scalar("cost", cost) merged_summary_op = tf.summary.merge_all() with tf.Session() as sess: sess.run(init) uniq_id = "/tmp/tensorboard-layers-api/" + uuid.uuid1().__str__()[:6] summary_writer = tf.summary.FileWriter(uniq_id, graph=tf.get_default_graph()) x_vals = X_train y_vals = y_train #for step in range(673764): for step in range(673764): _, val, summary = sess.run([optimizer, cost, merged_summary_op], feed_dict={x: x_vals, y: y_vals}) if step % 20 == 0: print("step: {}, value: {}".format(step, val)) summary_writer.add_summary(summary, step) TF model is slower, so I cannot check precisely the output, but first steps of TF are close to results of a first epoch of keras: Epoch 1/15 673764/673764 [==============================] - 13s 19us/step - loss: 57019592.1866 - mean_squared_error: 57019592.1866 - mean_absolute_error: 6883.4074 - mean_absolute_percentage_error: 2668499.3291 TF: step: 0, value: 6957.24365234375 step: 20, value: 6957.2373046875 step: 40, value: 6957.23583984375 step: 60, value: 6957.22998046875 So MAE of both models are close, around 6900. I suppose that the issue is solved now. I just have one question left, how to apply batches in this type of tensorflow? It is the first time I ever built tf like this and I haven't found an obvious solution online. Thanks! AI: You can try these: a) In the tensorflow implementation you have used learning rate of 0.0001 while in keras the default value of learning rate is 0.001. You have set different learning rates in the two implementations. Try the same learning rate and the results you will get should be close enough. b) The initializations you have used in keras is 'uniform' while in tensorflow the default initialization is glorot uniform. Use the same in both implementation. c) Try same dropout. In keras implementation you have used dropout after every hidden layer. Try the same in tensorflow implementation. Tip:- Try same random initializations for both the implementation. Edit 1: I have never implemented the procedure you have asked. 
A bit of googling has led me to some useful pointers. You can have a look at this link on SO for training the network in batches. You have to iterate and train the network over the samples in the last part of your code. For each iteration you have to train the model over a number of data points which is basically the batch size. creating an object that will feed in the number of data points at every iteration to the optimizer should do the trick, I guess.
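To make the batching concrete, here is a hedged sketch of the usual feed_dict pattern for the graph built in your edit (x, y, optimizer and cost are the placeholders/ops defined there, and X_train / y_train are numpy arrays; if y_train is a pandas Series, reshape it first with y_train.values.reshape(-1, 1)):
import numpy as np

batch_size = 10000
n_epochs = 15
n_samples = X_train.shape[0]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(n_epochs):
        idx = np.random.permutation(n_samples)          # reshuffle every epoch
        for start in range(0, n_samples, batch_size):
            batch = idx[start:start + batch_size]
            _, val = sess.run([optimizer, cost],
                              feed_dict={x: X_train[batch], y: y_train[batch]})
        print('epoch %d, last batch loss %.2f' % (epoch, val))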
H: sklearn .fit error I am trying to copy some code from a video to do a decision tree program, which will predict if a student will pass or not depending on 30 parameters given. I did exactly as written but get an error as below: import pandas as pd import numpy as np from sklearn import tree import graphviz #importing the data set d = pd.read_csv('student-por.csv', sep= ';') #Each 'G' grade is out of 20 #Setting the pass mark as 35 /60 d['pass'] = d.apply(lambda row: 1 if (row['G1']+ row['G2']+ row ['G3']) >= 35 else 0 , axis=1) d = d.drop(['G1', 'G2','G3'], axis=1 ) #shuffle rows d = d.sample(frac=1) #split traning and test d_train = d[:500] d_test = d[500:] # to be used in .fit d_train_att = d_train.drop(['pass'], axis=1) d_train_pass= d_train['pass'] #I don't know why he did this one d_test_att = d_test.drop(['pass'], axis=1) d_test_pass= d_test['pass'] d_att = d.drop(['pass'], axis=1) d_pass = d['pass'] #Calculating how many students passed print ('passing: %d out of %d (%.2f%%)'%(np.sum(d_pass), len(d_pass), 100*float(np.sum(d_pass)/len(d_pass))) ) t = tree.DecisionTreeClassifier(criterion ='entropy', max_depth = 5) t= t.fit (d_train_att, d_train_pass) # To visualize the decision tree dot_data = tree.export_graphviz(t,out_file = None, label ='all', imputiry=False, proportion= True, feature_names=list(d_train_att), class_names=['fail', 'pass'], filled = True, rounded=True) graph = graphviz.Source (dot_data) And the output is: Traceback (most recent call last): File "students.py", line 29, in <module> t= t.fit (d_train_att, d_train_pass) File "/home/mohamed/.virtualenvs/cv/lib/python3.5/site-packages/sklearn/tree/tree.py", line 790, in fit X_idx_sorted=X_idx_sorted) File "/home/mohamed/.virtualenvs/cv/lib/python3.5/site-packages/sklearn/tree/tree.py", line 116, in fit X = check_array(X, dtype=DTYPE, accept_sparse="csc") File "/home/mohamed/.virtualenvs/cv/lib/python3.5/site-packages/sklearn/utils/validation.py", line 433, in check_array array = np.array(array, dtype=dtype, order=order, copy=copy) ValueError: could not convert string to float: 'no' If possible, could you please explain to me how to fix it? edit: solved it after the explanation of pcko1. I used pd.get_dummies to get rid of floats. AI: I cannot see your data (included in student-por.csv) but I suspect that it includes strings (maybe for student names). You should either drop the string variables or convert them to categorical (one character for each different value of the variable). This means that if you have a column with subject names, you should convert "Maths" to "0", "History" to "1", "Biology" to "2" and so on. A very convenient way of doing this is with the sklearn.preprocessing.LabelEncoder, please check this. In the end you should either feed continuous or categorical values to your Decision Tree during fit, no strings. Hope it helps :)
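A short sketch of both encoding options mentioned in the answer (pd.get_dummies for one-hot columns, LabelEncoder for integer codes, which is usually enough for tree models):
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({'subject': ['Maths', 'History', 'Biology', 'Maths'],
                   'score':   [12, 15, 9, 18]})

# Option 1: integer codes
le = LabelEncoder()
df['subject_code'] = le.fit_transform(df['subject'])

# Option 2: one-hot columns
df_onehot = pd.get_dummies(df, columns=['subject'])
print(df_onehot.head())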
H: The Bias-Variance Trade-Off I'm reading "An Introduction to Statistical Learning: With Applications in R". In the Paragraph 2.2.2 The Bias-Variance Trade-Off, the authors say: I'm not able to understand why the bias tends to initially decrease faster than the variance increases. Can you help me ? Both rigorous and intuitive explanations are greatly appreciated AI: One way to look at this is through the idea of under-/overfitting First off, here is a sketch of the generally observed relationship between bias and variance, in the context of model size/comlpexity: Say you have a model which is learning quite well, but your test accuracy seems to be pretty low: 80%. The model is essentially not doing a great job of mapping input features to outputs. We have a high bias. But for a wide variety of input (assuming a good test set), we consistently obtain this 20% error; we have a low variance. We are underfitting Now we decide to use a bigger model (e.g. a deep neural network), which is able to capture more details of the feature space and so maps inputs to outputs more accurately. We now have an improved test accuracy: 95%. At the same time, we notice that several runs of the model produce different results; sometime we have 4% error, and sometimes 6%. We have introduced a higher amount of variance. We are perhaps somewhere around the optimum model complexity shown on the graph above. You say ok... let's create a monolithic neural network. It totally nails training and ends with a perfect accuracy: 100%. However, the test accuracy now drops to 90%! So we have zero bias, but a large variance. We are overfitting. The model is almost as good as a look-up table for training data, but doesn't generalise at all when it sees new samples. Intuitively, that 10% error corresponds to a difference in distribution between the training and test sets used $\rightarrow$ the model knows the training distribution in extreme detail, some of which do not apply to the test set (i.e. the reality). In summary: The bias tends to decrease faster than the variance increases, because you can likely still make a more competitive model for your dataset; the model is underfitting. It is like the low-hanging fruit that you can easily get - so an incremental improvement on the red curve above gives a big decrease in bias (increase in performance). Obviously that pattern cannot go on indefinitely, with each increment in model complexity, you get a lower increase in performance; i.e. you have diminishing returns. Furthermore, as you begin to overfit, the model becomes less able to generalise and so exhibits larger errors on unseen data; variance is creeping in. For some more intuition between bias/variance in machine learning, I'd recommend this talk by Andrew Ng. There is also a text summary of the talk, for a quicker overview. For a brief but more mathematical explaination, head over to this post of Cross-Validated. The second answer there is very recent and is perhaps better than the (old) accepted answer.
H: music recommender system I want to build a music recommender system. I have user_id, song, play_counts triplets as my data. I want to do it with collaborative filtering, in which I will have a matrix with users as rows and songs as columns. It is very similar to movie recommendation, but instead of ratings I now have play counts. So, can I use play count as my matrix values? I will then be using linear regression to predict features of both users and songs simultaneously. Will it work with play counts to recommend music? Please answer, I am confused! AI: Short answer: yes, you can use the counts as the target you are trying to predict. I would say that you should probably normalize them so that your target is between 0 and 1; this will help collaborative filtering converge faster. Long answer: given the input data you describe, you have to remember the concept of "garbage in, garbage out". Not that your input is garbage, far from it, but if all you know is how many times somebody played a song, that is also all you will be able to learn from and eventually predict. In your case, it's more a matter of interpreting the output. Let's imagine that your model works extremely well and is able to predict how many times someone will listen to a song. In that case it would indeed make sense to recommend them songs that they are predicted to listen to a lot, because the assumption is that you listen more often to songs you like. But of course, you have to remember that this might not be entirely true.
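As a way to prototype this, here is a hedged sketch using scikit-learn's NMF on a made-up play-count matrix; the 0-1 normalisation follows the suggestion above, and for real-sized data a dedicated implicit-feedback library would scale much better:
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.RandomState(0)
counts = rng.poisson(0.3, size=(50, 200)).astype(float)   # users x songs play counts
R = counts / counts.max()                                  # normalise to [0, 1]

model = NMF(n_components=10, init='random', random_state=0)
U = model.fit_transform(R)        # user factors, shape (50, 10)
S = model.components_             # song factors, shape (10, 200)

scores = U @ S                    # predicted affinity for every user/song pair
print(scores.shape)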
H: regex to remove repeating words in a sentence I am new to regex. I am working on a project where i need to replace repeating words with that word. for example: I need need to learn regex regex from scratch. I need to change it to: I need to learn regex from scratch. I can identify the repeating words using the following regex \b(\w+)\b[\s\r\n]*(\l[\s\r\n])+ For substituting it, I need the word in the repeated word phrase. pattern.sub(sentence, <what do i write here?>) AI: Since you were working with RegEx, I will ofer a RegEx solution. I will also show that you need to also take care to first remove punctuation. (I will not go down the rabbit-hole of re-sinserting the punctuation back where it was!) A RegEx solution: import re sentence = 'I need need to learn regex... regex from scratch!' # remove punctuation # the unicode flag makes it work for more letter types (non-ascii) no_punc = re.sub(r'[^\w\s]', '', sentence, re.UNICODE) print('No punctuation:', no_punc) # remove duplicates re_output = re.sub(r'\b(\w+)( \1\b)+', r'\1', no_punc) print('No duplicates:', re_output) Returns: No punctuation: I need need to learn regex regex from scratch No duplicates: I need to learn regex from scratch \b : matches word boundaries \w : any word character \1 : replaces the matches with the second word found - the group in the second set of parentheses The parts in parentheses are referred to as groups, and you can do things like name them and refer to them later in a regex. This pattern should recursively catch repeating words, so if there were 10 in a row, they get replaced with just the final occurence. Have a look here for more detailed definitions of the regex patterns. The more pythonic (looking) way It has to be said that the groupby method has a certain python-zen feel about it! Simple, easy to read, beautiful. Here I just show another way of removing the punctuation, making use of the string module, translating any punctuation characters into None (which removes them): from itertools import groupby import string sentence = 'I need need to learn regex... regex from scratch!' # Remove punctuation sent_map = sentence.maketrans(dict.fromkeys(string.punctuation)) sent_clean = sentence.translate(sent_map) print('Clean sentence:', sent_clean) no_dupes = ([k for k, v in groupby(sent_clean.split())]) print('No duplicates:', no_dupes) # Put the list back together into a sentence groupby_output = ' '.join(no_dupes) print('Final output:', groupby_output) # At least for this toy example, the outputs are identical: print('Identical output:', re_output == groupby_output) Returns: Clean sentence: I need need to learn regex regex from scratch No duplicates: ['I', 'need', 'to', 'learn', 'regex', 'from', 'scratch'] Final output: I need to learn regex from scratch Identical output: True Benchmarks Out of curiosity, I dumped the lines above into functions and ran a simple benchmark: RegEx: In [1]: %timeit remove_regex(sentence) 8.17 µs ± 88.6 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) groupby: In [2]: %timeit remove_groupby(sentence) 5.89 µs ± 527 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) I had read that regex would be faster these days (using Python3.6) - but it seems that sticking to beautiful code pays off in this case! Disclaimer: the example sentence was very short. This result might not scale to sentences with more/less repeated words and punctuation!
H: Local minima in Stochastic Gradient Descent I am running a logistic regression model using Keras on a dataset. However, I often run into a problem in which the model settles at a validation accuracy lower than a theoretical value. The hypothesis is supported by the fact that if I run the model again (without changing anything), the model settles at a different (often higher) validation value. I presume such a problem is due to gradient descent encountering a local minimum, and when I run it again, due to random fluctuations, I sometimes end up skipping the local minimum. Is there any method to prevent such a problem?
def base():
    model = Sequential()
    model.add(Dense(20, activation = 'linear', input_dim = 12288))
    model.add(Dense(7, activation = 'relu'))
    model.add(Dense(5, activation = 'linear'))
    model.add(Dense(1, activation = 'sigmoid'))
    model.compile(optimizer = optimizers.SGD(lr=0.0075), loss = 'binary_crossentropy', metrics= ['accuracy'])
    return model

model = base()
model.fit(train_x, train_y, epochs = 2400, batch_size = 10, validation_data = [test_x, test_y], verbose = 2)
AI: One of the major advantages of stochastic gradient descent (SGD) over batch descent is the ability to explore different local minima. But due to that stochastic nature, you might end up with different results every time you repeat the run. In any regression the most important task is to apply regularization to achieve a generalized fit and better cross-validation scores. Your issue might be one of overfitting rather than the stochastic nature of the optimizer. In your model.add calls, add an argument kernel_regularizer=L1L2(l1=0.0, l2=0.15), and try different values for the shrinkage of the L1 and L2 norms. Also, if you are looking for stability in accuracy, you can try batch/vanilla gradient descent. Finally, I would recommend trying different learning rates and other optimizers, such as Adam, that make use of adaptive learning rates.
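Concretely, the regularisation suggestion looks like this in Keras; the l2 value is only an example to tune, everything else mirrors the model in the question:
from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers
from keras.regularizers import L1L2

model = Sequential()
model.add(Dense(20, activation='linear', input_dim=12288,
                kernel_regularizer=L1L2(l1=0.0, l2=0.15)))
model.add(Dense(7, activation='relu', kernel_regularizer=L1L2(l1=0.0, l2=0.15)))
model.add(Dense(5, activation='linear', kernel_regularizer=L1L2(l1=0.0, l2=0.15)))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer=optimizers.SGD(lr=0.0075),
              loss='binary_crossentropy', metrics=['accuracy'])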
H: sklearn.neighbors.NearestNeighbors - knn for unsupervised learning? From basic theory I know that knn is a supervised algorithm, while for example k-means is an unsupervised algorithm. However, sklearn has an implementation of KNN for unsupervised learning (http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.NearestNeighbors.html#sklearn.neighbors.NearestNeighbors). What exactly is this unsupervised version of knn in sklearn? Is this a knn algorithm? If yes, how is it unsupervised, since by definition knn is supervised? If not, what is it then? AI: The unsupervised version simply implements different algorithms to find the nearest neighbor(s) for each sample. The kNN algorithm consists of two steps: (1) compute and store the k nearest neighbors for each sample in the training set ("training"); (2) for an unlabeled sample, retrieve the k nearest neighbors from the dataset and predict the label through majority vote / interpolation (or similar) among the k nearest neighbors ("prediction/querying"). The unsupervised version is basically only step 1, the training phase of the kNN algorithm. (This is useful because if your dataset is large, a pairwise comparison for all samples (algorithm='brute') is often infeasible. Therefore two alternative algorithms for the training stage are implemented that make use of previous comparisons to reduce the number of distance calculations. See the documentation here.)
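A tiny usage example that makes the "training is just step 1" point concrete; no labels are involved anywhere:
import numpy as np
from sklearn.neighbors import NearestNeighbors

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [5.0, 5.0]])

nn = NearestNeighbors(n_neighbors=2).fit(X)     # step 1: index the samples
dist, idx = nn.kneighbors([[0.1, 0.2]])         # query: which samples are closest?
print(idx, dist)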
H: Dueling DQN - can't understand its mechanism I am trying to understand the purpose of Dueling DQN. According to this blogpost: our reinforcement learning agent may not need to care about both value and advantage at any given time - this seems to be what I can't understand. Let's assume we are in state $S_t$ and we select an action which has the highest score. This is the promised total reward that we will get in the future if we take the action. Notice, we don't yet know the V or A of the future state $S_{t+1}$ (where we will end up after taking this best action), so decoupling V and A in any state, including $S_{t+1}$, seems unnecessary. Additionally, once we do get to work with them we still seem to recombine them during $S_{t+1}$ into a single Q-value, just as was noted in the blog post. So, to complete my thought: V and A seem to be a "hidden intermediate step" that still gets combined into Q, so we never know it's even there. Even if the network somehow benefits from one or the other, how does it help if both streams still end up as Q? A slightly unrelated thought: 'V' is the score of the current state only. 'A' is the total future expected Advantage, for a particular action, right? Can someone provide a different example to the sunset? Edit after answer was accepted: Found a friendly explanation about this architecture here. Also, if someone struggles to understand what V, Q and A are, read this answer, and my comment under it. AI: A slightly unrelated thought: 'V' is the score of the current state only. 'A' is the total future expected Advantage, for a particular action, right? Not quite. $V$ is the total (discounted) future expected reward, assuming starting in state $s$ and following the current policy (in a control problem, usually the best guess so far at the optimal policy) into the future. That includes selecting the current action, $a$, according to the policy being assessed. The advantage function $A(s,a)$ (the blog post has the arguments wrong for $A$) is the difference in value between selecting an action according to the current policy and selecting the specific action $a$. The value of $A$ also sums all future rewards, assuming that the current policy is then followed into the future after this, maybe different, selection. Note that when the policy is the optimal one, and $V(s)$ is accurate, then $A(s,a)$ should always be zero or negative; optimal actions score zero, non-optimal ones will be negative. Even if the network somehow benefits from one or the other, how does it help if both streams still end up as Q? ... Can someone provide a different example to the sunset? The more prosaic explanation is that the function decomposition is always technically correct (for an MDP). Coding the network like this incorporates known structure of the problem into the network, which it would otherwise have to spend resources on learning. So it's a way of injecting the designer's knowledge of reinforcement learning problems into the architecture of the network. Conceptually this is similar to designing CNNs for computer vision with local receptive fields, because we know edges and textures can be detected this way in images. Although CNNs have more than just that benefit, one of the positive aspects of the design for vision tasks is that they structurally match known traits of the problem being solved.
Value-based RL control methods (as opposed to policy-gradient methods) work due to "generalised policy iteration", where the agent is constantly assessing the current values of a policy, then using those value estimates in order to make improvements. The split between $V$ and $A$ functions fits very well with that conceptually. The $V$ function is generally being adjusted to assess the current policy as accurately as possible, whilst positive values in $A$ function identify likely changes to the policy.
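For reference, this is roughly how the two streams get recombined in code. The aggregation Q = V + A - mean(A) is the one used in the dueling-architecture paper; the layer sizes, input shape and number of actions below are placeholders:
from keras.layers import Input, Dense, Lambda
from keras.models import Model
import keras.backend as K

n_actions = 4
inp = Input(shape=(8,))                       # placeholder state size
h = Dense(64, activation='relu')(inp)

v = Dense(1)(h)                               # state-value stream V(s)
a = Dense(n_actions)(h)                       # advantage stream A(s, a)

# Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
q = Lambda(lambda x: x[0] + x[1] - K.mean(x[1], axis=1, keepdims=True))([v, a])

model = Model(inp, q)
model.compile(optimizer='adam', loss='mse')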
H: Graph & Network Mining: clustering/community detection/ classification I am working on graphs/networks where nodes and edges have some attributes. I want to know what algorithm exist for: 1) clustering a graph to k groups: depend only on the structure (edge attribute only) 2) Community Detection: ( same as graph clustering) but the number of communities is unknown. 3) Classification: a supervised method where I have labels and I want to classify the nodes based on their attributes and their connections (edges). 4) Page Rank: detecting the most important nodes in a group (community, cluster) based on their connection thank you very much. AI: Well ... Some points. Networked data is modeled with graphs. When you have different attributes you have Property Graph. For clustering, you can extract the topology of the subgraph based on desired attributes and then use any Modularity-based algorithm (most recommended is Blondel algorithm). In Blondel algorithm you don't need to know the number of communities in advance. Have a look at Network Science book by Barabasi to get more insight to networks. For classification you may extract features from graphs and use common classification algorithms or use graph kernels and feed it to kernel methods such as SVM. Follow this. Page rank is one of the methods for ranking but you have simpler choices according to your problem. See Centrality measures from the book above. There you can also see details of different ranking algorithms. If you need more info you may drop a comment here. Hope it helped. Good Luck!
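A small sketch of points 2 and 4 with networkx; greedy modularity maximisation stands in for the Louvain/Blondel algorithm here (for Louvain itself the python-louvain package is the usual choice), and the karate club graph is just a stand-in for your network:
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()

# community detection: the number of communities is not fixed in advance
communities = list(greedy_modularity_communities(G))
print(len(communities), 'communities found')

# ranking the most important nodes
pr = nx.pagerank(G)
print(sorted(pr, key=pr.get, reverse=True)[:5])   # top-5 nodes by PageRank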
H: cross_val_score meaning I'm studying the following code, which cross_val_score_ was used as well as .mean() and .std(). I read many documentation of the meanings, but didn't get what each of the above does. import pandas as pd import numpy as np from sklearn import tree import graphviz from sklearn.model_selection import cross_val_score #importing the dataset d = pd.read_csv('student-por.csv', sep= ';') d['pass'] = d.apply(lambda row: 1 if (row['G1']+ row['G2']+ row ['G3']) >= 35 else 0 , axis=1) d = d.drop(['G1', 'G2','G3'], axis=1 ) #Doing one-hot encoding d=pd.get_dummies(d, columns =['sex','activities','school', 'address', 'famsize','Pstatus','Mjob','Fjob','reason','guardian','schoolsup','famsup','paid','nursery','higher','internet','romantic']) #shuffle rows d = d.sample(frac=1) #split traning and test d_train = d[:500] d_test = d[500:] d_train_att = d_train.drop(['pass'], axis=1) d_train_pass= d_train['pass'] d_test_att = d_test.drop(['pass'], axis=1) d_test_pass= d_test['pass'] d_att = d.drop(['pass'], axis=1) d_pass = d['pass'] t = tree.DecisionTreeClassifier(criterion ='entropy', max_depth = 5) t= t.fit (d_train_att, d_train_pass) #to export the tree dot_data = tree.export_graphviz(t,out_file = 'students-tree.png', label ='all', impurity=False, proportion= True, feature_names=list(d_train_att), class_names=['fail', 'pass'], filled = True, rounded=True) t.score (d_test_att, d_test_pass) scores = cross_val_score(t, d_att,d_pass, cv=5) print ('Acuracy %0.2f (+/- %0.2f)' % (scores.mean(), scores.std() *2)) in short this is what I need to know: scores = cross_val_score(t, d_att,d_pass, cv=5) print ('Acuracy %0.2f (+/- %0.2f)' % (scores.mean(), scores.std() *2)) one more thing, am I suppose to get the same score as in the original code publisher? because I didn't. AI: The source, around line 274 is where the default scoring for cross_validation_score gets set, if you pass in None for the scorer argument. For classifiers, the usual default score is accuracy. For regression, it's rmse, IIRC. So, since you're applying a decision tree classifier, cross_val_score splits the data into 5 equalish sized pieces, trains on each combination of 4 and gives back the accuracy of the estimator on the 5th. The mean and std of these accuracies presumably tells one something about the performance of the family of decision tree classifiers on your dataset, but I would take it with a grain of salt.
H: Can a single-layer ANN get XOR wrong? I'm still pretty new to artificial neural networks. While I've played around with TensorFlow, I'm now trying to get the basics straight. Since I've stumbled upon a course which explains how to implement an ANN with back propagation in Unity, with C#, I did just that. While test-running the ANN with one hidden layer containing 2 neurons, I noticed, that it doesn't always get XOR right. No matter how many epochs it runs or how the learning rate was set. With some settings it happens more often than with other setting. Usually I get something like this: +---+---+------+ | 0 | 0 | 0.01 | +---+---+------+ | 0 | 1 | 0.99 | +---+---+------+ | 1 | 0 | 0.99 | +---+---+------+ | 1 | 1 | 0.01 | +---+---+------+ But in other occasions it looks more like this: +---+---+------+ +---+---+------+ +---+---+------+ | 0 | 0 | 0.33 | | 0 | 0 | 0.01 | | 0 | 0 | 0.33 | +---+---+------+ +---+---+------+ +---+---+------+ | 0 | 1 | 0.99 | | 0 | 1 | 0.99 | | 0 | 1 | 0.33 | +---+---+------+ or +---+---+------+ or +---+---+------+ | 1 | 0 | 0.66 | | 1 | 0 | 0.50 | | 1 | 0 | 0.99 | +---+---+------+ +---+---+------+ +---+---+------+ | 1 | 1 | 0.01 | | 1 | 1 | 0.50 | | 1 | 1 | 0.33 | +---+---+------+ +---+---+------+ +---+---+------+ I've noticed that in every case, the sum of the outputs is ~2. It also doesn't happen most of the time but still quite often. Depending on what settings I use it happens every two or three runs, or it happens only after 10 or 20 runs. For me it seems more like a mathematical quirk in the stochastic nature of neural networks. But I'm not good enough with math to actually figure this one out by myself. The question is: Assuming the implementation is as simple as possible, with no advanced concepts, is it likely for something like this to happen or is it definitely an error in the implementation? If it's not an error in the implementation, what is going on here? Is it because of the very symmetrical nature of an XOR? Which is the reason a single neuron can't handle it, as far as I understood. I know I could post the source code as well, but I already double and triple checked everything, since I had a mistake in it with the bias calculation. Back then the values were completely off all the time. Now I'm just wondering if this sort of thing could actually happen with a correct implemented neural network. AI: Assuming the implementation is as simple as possible, with no advanced concepts, is it likely for something like this to happen or is it definitely an error in the implementation? In my experience, using the simplest possible network, and simplest gradient descent algorithm, then yes this happens relatively frequently. It is an accident of the starting weight values, and technically a local minimum of the cost function, which is why it is so stable when it happens. In the basic implementation you have then there are only 6 starting weights. If they are selected randomly, the chances of a "special" pattern (such as the weights to hidden layer being all positive or all negative) are relatively high (1 in 8 for all positive or all negative weights between input and first hidden layer). This is also why the values sum to 2 - given the the network is stuck on the wrong part of the error surface, it will still minimise the cost function as best it can given that constraint, and this will usually end up with compromise values that still meet statistical means overall in the predictions. If you doubled up some, but not all of the input/output pairs (e.g. 
a training set of 6 inputs $\{(0,0:0), (0,1:1), (1,0:1), (1,1:0), (0,1:1), (1,0:1)\}$, then the network may converge to different wrong mean value when it failed. Is it because of the very symmetrical nature of an XOR, which makes it impossible to handle for a single neuron? You don't have a single neuron here. Unless you mean in the output layer? In which case, no, this is not to do with having a single neuron in the output layer. Pretty much any more advanced NN feature, or simply more randomness, will stop this problem happening. E.g. make the middle layer have 4 neurons instead of 2, use momentum terms, a larger dataset with random "mini-batches" sampled from it. In general this kind problem does not seem to happen on larger, more complex datasets and larger more complex networks. These can have other problems, but getting stuck in a local minimum far away from the global minimum tends not to happen. In addition for those scenarios, you typically don't want to converge fully into a global minimum for your dataset and error function, but are looking for some form of generalised model (that can predict from input values that you have not seen before). On a practical note, if you want to add an automated test showing your NN implementation can solve XOR, then either use fixed starting weights or a RNG seed that you know works. Then your test will be reliable, even if the NN is not in all cases.
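As an illustration of that practical note, here is an XOR check in Keras with a fixed seed; the seed value and layer size are arbitrary, and with an unlucky seed a 2-unit hidden layer can still get stuck, which is exactly the point made above:
import numpy as np
np.random.seed(42)                       # fix the weight initialisation (at least partially)

from keras.models import Sequential
from keras.layers import Dense

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

model = Sequential()
model.add(Dense(4, activation='tanh', input_dim=2))   # 4 hidden units: more robust than 2
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')

model.fit(X, y, epochs=2000, verbose=0)
print(model.predict(X).round(3))         # should be close to [0, 1, 1, 0]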
H: How to factorize the Matrix in TensorFlow? (Recommender System) Given a user ratings matrix which is $n \times p$, where $n$ users rate $p$ movies, I already have a row matrix $n \times 10$ which characterises the users. I ideally wanted to use the TF WALS method for optimisation, https://www.tensorflow.org/versions/master/api_docs/python/tf/contrib/factorization/WALSMatrixFactorization but it looks like it creates the row matrix itself. What I need is to create the column matrix, which is $10 \times p$ (not both), containing the relationship between the hidden characteristics (10) and the movies (p). How can I do this in TF? AI: If R is the rating matrix, U is the user matrix and M is the movie matrix, then note that there is almost certainly no matrix M that satisfies $R = UM$. U and M are too low rank. However, you should be able to find the matrix M that minimizes $|R - UM|$. There is no need to use an optimizer, though you could I guess, because this is a convex problem. You're just solving a large linear system. This is indeed just what ALS does repeatedly. You've found an ALS solver, and if you just need to solve one step and have the user matrix already, I think you just supply it as row_init and run one iteration? I haven't used it, but conceptually that's all you are doing. You don't need weights either.
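Outside of TF, the "just solve a linear system" observation looks like this in numpy; the shapes follow the question (n users, p movies, 10 latent features), and in practice you would add regularisation, which is essentially what one WALS half-step does:
import numpy as np

n, p, k = 100, 50, 10
rng = np.random.RandomState(0)
R = rng.rand(n, p)            # ratings matrix, n x p
U = rng.rand(n, k)            # known user matrix, n x 10

# find M (10 x p) minimising ||R - U M||_F: an ordinary least-squares problem
M, *_ = np.linalg.lstsq(U, R, rcond=None)
print(M.shape)                # (10, 50)
print(np.linalg.norm(R - U @ M))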
H: How is error back-propagated in a multi-layer RNN Let's say I have a 2 layer LSTM cell, and I'm using this network to perform regression for input sequences of length 10 along the time axis. From what I understand, when this network is 'unfolded', it will consist of 20 LSTM cells, 10 for each layer. So the 10 cells corresponding to the first layer receive the network input for t = 1 to 10, whereas the 10 cells corresponding to the second layer receive the first layer's output for t = 1 to 10. In other words, the output from the cell in layer 1 corresponding to t = 1 goes to (1) the 'next' cell in layer 1 corresponding to t = 2, and (2) the cell in layer 2 corresponding to t = 1. So when the error is back-propagated, will there not be two derivatives coming into each cell in layer 1? If so, how is weight update performed? Is the sum or mean of both derivatives used or is there something else going on? AI: As you can see here, derivatives are propagated by the chain rule even though the layers are stacked. There are indeed two main paths into each layer-1 cell: one is backpropagation through time (from the cell at t+1 in the same layer), and the other is the backpropagation coming from the output of each unrolled cell, which can be connected directly to the output or to the stacked unrolled cells above it. By the multivariable chain rule, the gradients arriving along these paths are summed (not averaged), and that summed gradient is what drives the weight update. Also, take a look here.
H: Training Encoder-Decoder using Decoder Outputs I am trying to build an encoder-decoder model for a text style transfer problem. The problem is I don't have parallel data between the two styles, so I need to train the model in an unsupervised setting. Some papers I have seen use an auto-encoder to train the encoder and decoder components separately. By setting the problem up as an auto-encoder, they can train the decoder by passing the target sequence (equal to the input sequence) into the decoder. (Here are some examples: https://arxiv.org/pdf/1711.06861.pdf, https://arxiv.org/pdf/1804.04003.pdf) Instead of an auto-encoder, I would like to know if it's possible to train a decoder by feeding its predictions at time step t-1 into the input at time step t. I would pass the generated output into a classifier to check the style and to obtain a training signal. Is this sensible and what are the pros / cons of doing so? Thanks. AI: I would like to know if it's possible to train a decoder by feeding its predictions at time step t-1 into the input at time step t. Yes, it is possible to do it. But I don't see why you would do it. You will have accumulated error propagated and amplified with every new prediction, making your predictions diverge from the ground truth sooner or later.
H: Mean error (not squared) in scikit-learn cross_val_score I need to know if the values generated by each fold of cross_val_score have a distribution which is centered on zero. Something as simple as the median or mean of y_true - y_predicted would suffice. All I see in the available options are absolute and squared. I've looked into make_scorer but can't see how to code the simple mean error and then call it as the scoring argument in cross_val_score. AI:

import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.metrics import make_scorer
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def mean_error(y, y_pred):
    # assuming y and y_pred are numpy arrays
    return np.mean(y_pred - y)

X, y = load_diabetes(return_X_y=True)

# greater_is_better=False makes cross_val_score report the negated value,
# so flip the sign back if you want the raw (signed) mean error
mean_error_scorer = make_scorer(mean_error, greater_is_better=False)

regr = LinearRegression()
cross_val_score(regr, X, y, scoring=mean_error_scorer)
H: Properly using activation functions of neural network I'm trying to understand the hidden layers of neural networks. The Input Layer section covers the steps that I use before passing information to the hidden layer, where my concerns appear. Input Layer: From my understanding the first step in the neural network is to weight inputs by picking the "best" linear function. Methods used for finding optimal weights (may not be relevant to the problem): For finding optimal weights I usually solve a quadratic minimization problem, perhaps by finding the global minimum of the convex function (the quadratic error $||Ax-b||^2$ in this case, $A$ being the basis matrix and $b$ the ideal vector; every critical point is a global minimum there), or via the orthogonal projection equations (since the error is orthogonal to the column space of the basis, their dot product gives us the equation $A^T(Ax-b) = 0$, and solving for $x$: $x = (A^TA)^{-1}A^Tb$). Once the best linear coefficients are found (say $m$ as slope and $c$ as bias), the next step is to weight the inputs by evaluating them in the function $f(x)=mx+c$. Hidden Layer: After all inputs are weighted, they must be summed up, which gives us a constant number. I understand that this constant number must then be fed into some activation function (perhaps sigmoid: $f(x)=\frac{1}{(1+e^{-x})}$ or ReLU: $f(x) = max(0, x)$). Representation of a simple neural network with a single hidden layer having sigmoid as activation function: Picture reference Problem: But constants that are evaluated in these activation functions obviously output constants again, so how can they be utilized for data prediction? For example, say the sum of all weighted inputs is $15$, then its evaluation in the sigmoid function will be $\frac{1}{(1+e^{-15})} = 0.99999969409$. That constant number can't be utilized for data prediction (classification/regression), so what are the next steps to take? If the activation function returns a constant number, how can the data be predicted for different input variables? Do I have an incorrect perspective on activation functions? AI: How is data predicted from activation functions? (considering that they return constants on the weighted input sum). Keep in mind that the value you compute is the output for one particular input; a different input produces a different output. The label of your input data is predicted by the network as a whole, and the outputs of the network usually represent the probability of belonging to each class. The label is not predicted by the activation function, it is predicted by the network. Take a network as a mapper which can map the inputs to outputs. The activations are used to add non-linearity in order to approximate complicated non-linear mappings. What's the point of multiple hidden layers? As you can read here, the purpose of adding those is that you can learn different complicated regions. Another interpretation is that by adding more layers you can learn more complicated mappings which have non-linear behaviour. About the global minimum: the cost function is the error as a function of the weights, and because it has so many dimensions it cannot be visualised or solved directly, so you use gradient-descent based algorithms.
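To make the "constants" point concrete, here is a tiny NumPy forward pass with made-up weights: each input row produces its own output value, and that value is read as a class probability and thresholded into a prediction, so the output is only "constant" for one particular input.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# three different inputs (rows), two features each -- arbitrary toy numbers
X = np.array([[0.2, 0.9], [1.5, -0.3], [-2.0, 0.4]])
W1 = np.array([[0.5, -1.0], [0.8, 0.3]]); b1 = np.array([0.1, -0.2])   # hidden layer
W2 = np.array([[1.2], [-0.7]]);           b2 = np.array([0.05])        # output layer

hidden = sigmoid(X @ W1 + b1)        # different inputs give different hidden values
probs = sigmoid(hidden @ W2 + b2)    # one probability per input row
labels = (probs > 0.5).astype(int)   # threshold turns probabilities into class labels
print(probs.ravel(), labels.ravel())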
H: Linear Regression Optimization I am learning linear regression right now. In most of the examples of the implementation of this method that I found, gradient descent is used. Is there a better way to optimize linear regression than gradient descent? AI: Is there a better way to optimize linear regression than gradient descent? If by better you mean finding a better fit, no, you can't, due to the fact that the cost function for linear regression is convex, which means there is just one optimal point. If you want to optimize using different algorithms, there are different kinds of solutions. Gradient-based algorithms like Adam and RMSProp are among those. You can also use the normal equation, which gives the exact solution in closed form.
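For example, a minimal NumPy sketch of the normal equation on made-up data (solve the linear system rather than explicitly inverting the matrix):

import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 3))])  # intercept + 3 features
true_w = np.array([1.0, 2.0, -3.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

# normal equation: w = (X^T X)^{-1} X^T y
w = np.linalg.solve(X.T @ X, X.T @ y)
print(w)   # close to [1, 2, -3, 0.5], no gradient descent needed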
H: Predicting object by features probabilities I have the definition of an object provided as feature probabilities. Each object has its own feature importances and probabilities. For example, for object "X", I have a "color" feature (with a weight of 0.8) - the object can be blue in 80% of cases and black in 20% of cases. And a "shape" feature (with a weight of 20%) - square in 30% and round in 70%. I'm trying to create a "predictor", so if I'm observing something blue and round - (0.8 x 0.8) x (0.2 x 0.7) - probability for object X. Does it make any sense mathematically? If this method sounds reasonable enough, how should I handle really small numbers (I can have a really long vector of features, so the final number will be really small)? AI: What you are attempting to do is very much like Bayes decision theory; you can find the math behind it here. In probability theory and statistics, Bayes' theorem (alternatively Bayes' law or Bayes' rule, also written as Bayes's theorem) describes the probability of an event, based on prior knowledge of conditions that might be related to the event. For example, if cancer is related to age, then, using Bayes' theorem, a person's age can be used to more accurately assess the probability that they have cancer, compared to the assessment of the probability of cancer made without knowledge of the person's age. One of the many applications of Bayes' theorem is Bayesian inference, a particular approach to statistical inference. When applied, the probabilities involved in Bayes' theorem may have different probability interpretations. With the Bayesian probability interpretation the theorem expresses how a subjective degree of belief should rationally change to account for availability of related evidence. Bayesian inference is fundamental to Bayesian statistics. Take a look at here and here too.
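On the "really small numbers" part of the question: the usual fix (used in most naive-Bayes-style implementations) is to sum log-probabilities instead of multiplying probabilities; this avoids floating-point underflow and preserves the ranking between candidate objects. A tiny sketch with the numbers from the question:

import numpy as np

# per-feature terms for object X given the observation (blue, round),
# each weighted by the feature importance -- numbers from the question
factors = np.array([0.8 * 0.8, 0.2 * 0.7])

log_score = np.sum(np.log(factors))   # safe even for thousands of features
print(log_score)                      # compare these log-scores across objects
print(np.exp(log_score))              # 0.0896, same as the direct product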
H: 1 - Nearest Neighbor, dealing with same distance I know that as a question it may seem stupid, but in the case of applying k-NN with k = 1, if I have two neighbors at the same distance, what is the best approach to carry out the classification? AI: The best approach really depends on your application and what is important to you. However, things you can try include:
- Increase K until there is no tie anymore - if you increase to 2, you will likely have another tie, since they are at the same distance already. So, 3 or higher should do the trick.
- Include another feature in your classifier - adding another dimension to your space could solve the problem if the values for both these data points are different.
- Choose another distance metric - you could have a preferred way of measuring distance, but for ties, choose another metric that will break them.
- Establish a rule for breaking ties, e.g. pick the class with the most observed data points, or randomly assign a class.
More information on your application would help to choose the best approach.
H: Loss function for an RNN used for binary classification I'm using an RNN consisting of GRU cells to compare two bounding box trajectories and determine whether they belong to the same agent or not. In other words, I am only interested in a single final probability score at the final time step. What I'm unsure about is how to formulate the loss function in this case. I see two options: 1) force the network to output the correct label at every time step, i.e. if I am providing a positive training sample whose output should be 1, then my loss function would be a vector of ones minus the network's output at each time step; 2) only check the output at the final time step and use only that in my loss function. Intuitively, the second option makes more sense, but I'm sure there are other factors that come into play too. AI: The second option is the better one. You can select the last output that corresponds to a non-padded input and use it for your loss. Or you can express that explicitly: in Keras, set the flag return_sequences to False and your RNN layer will only give you that last output. If I were you I would put a dense layer between the RNN layer and the final output. Don't forget to use a sigmoid activation for the single output unit (or softmax over two units) in order to get a probability.
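A minimal Keras sketch of option 2, with placeholder sizes (the single sigmoid output unit here is the binary counterpart of the two-unit softmax mentioned above):

import tensorflow as tf

timesteps, features = 30, 8   # placeholder shape of the paired-trajectory input

model = tf.keras.models.Sequential([
    # return_sequences=False keeps only the output of the final time step
    tf.keras.layers.GRU(64, return_sequences=False, input_shape=(timesteps, features)),
    tf.keras.layers.Dense(32, activation='relu'),    # dense layer before the output
    tf.keras.layers.Dense(1, activation='sigmoid')   # probability of "same agent"
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])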
H: Appending to numpy array for creating dataset I want to create a dataset from three numpy arrays - train1 = (204,), train2 = (204,) and train3 = (204,). All sets are of the same length. I am applying a sliding window function (of width 4) on each, so each set becomes of shape (201, 4). I want a new array in which all these values are appended row-wise: first train1, then train2, then train3, so the final output set is of size (603, 4). This is the sliding window function which converts an array of shape (204,) to (201, 4):

def moving_window(x, length, step=1):
    streams = it.tee(x, length)
    return zip(*[it.islice(stream, i, None, step) for stream, i in zip(streams, it.count(step=step))])

The create_dataset function is:

def create_dataset(dataset1, dataset2):
    dataX = []
    x = list(moving_window(dataset1, 4))
    x = np.asarray(x)
    dataX.append(x)
    y = list(moving_window(dataset2, 4))
    y = np.asarray(y)
    dataX.append(y)
    return np.array(dataX)

data_new = create_dataset(train1, train2)

It is returning a dataset of shape (2, 201, 4). I think this is appending differently, but I want row-wise appending, so that the new dataset is of shape (402, 4) with two sets and (603, 4) with three sets. I also want to generalize this, for example to 10 or 20 training sets. How can I do that? AI: I think it's because of the way you are appending the datasets to a list and then converting it to a numpy array. Solution 1: one quick solution is to reshape your array:

data_new = data_new.reshape(data_new.shape[0] * data_new.shape[1], data_new.shape[2])

So your data of shape (2, 201, 4) will become (2*201, 4) = (402, 4). Solution 2: another solution is to append the arrays inside the function you have defined; instead of returning np.array(dataX), use

return np.append(x, y, axis=0)

so you don't have to use dataX at all.
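To generalise to any number of training sets (10, 20, ...), a small sketch that reuses the moving_window function from the question and stacks every windowed set row-wise:

import numpy as np

def create_dataset(*datasets, length=4):
    windows = [np.asarray(list(moving_window(d, length))) for d in datasets]
    return np.vstack(windows)   # shape: (sum of (len(d) - length + 1), length)

# data_new = create_dataset(train1, train2, train3)   # -> (603, 4)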
H: Confusion regarding classification accuracy calculation and result The total number of data points for which the following result is obtained = 1500, out of which I have 1473 labelled as 0 and the remaining 27 as 1. As can be seen from the confusion matrix, out of 27 data points belonging to class 1, I got only 1 data point misclassified as 0. So, I calculated the accuracy for individual classes and got: accuracy for the class labelled as 0 = 98.2% and for the other as 1.7333%. Is this calculation correct? I am not sure... I did get a pretty good classification for the class labelled as 1, so why is the accuracy for it so low? The individual class accuracies should have been 100% for class 0 and around 98% for class 1. Does one misclassification reduce the accuracy of class 1 by so much? This is how I calculated the individual class accuracies in MATLAB:

cmMatrix = [1473 0; 1 26];
acc_class0 = 100*(cmMatrix(1,1))/1500;
acc_class1 = 100*(cmMatrix(2,2))/1500;

AI: You might want to take a look at the confusion matrix wiki page. Accuracy = (TP + TN) / (P + N) = (26 + 1473)/1500 = 99.9%. If you want the breakdown by class, it would have been 1473/1473 and 26/27 (i.e. per-class recall), but you have used 1500 as the denominator for both classes. That said, conventionally most people report accuracy for the entire model rather than for a particular class.
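The same calculation in Python/NumPy, as a cross-check:

import numpy as np

cm = np.array([[1473, 0],
               [1, 26]])

overall_acc = np.trace(cm) / cm.sum()            # (1473 + 26) / 1500 ≈ 0.999
per_class_recall = np.diag(cm) / cm.sum(axis=1)  # [1473/1473, 26/27] ≈ [1.0, 0.963]
print(overall_acc, per_class_recall)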
H: Why is this Random Forest perfect? I'm learning Random Forest Classifier from a video, where the instructor got a score of 0.44, while I'm getting 0.9985 (so it's practically perfect). Did I overfit it? If so, what is the next step? Shouldn't it 'forget' and 'relearn' every time I run the code again? Please check the code below:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

bird = pd.read_csv('image_attribute_labels.txt', sep='\s+', header=None, error_bad_lines=False, warn_bad_lines=False, usecols=[0, 1, 2], names=['imgid', 'attid', 'present'])
bird2 = bird.pivot(index='imgid', columns='attid', values='present')
imglbl = pd.read_csv('image_class_labels.txt', sep=' ', header=None, names=['imgid', 'label'])
imglbl = imglbl.set_index('imgid')

# merging the two arrays
df = bird2.join(imglbl)
# shuffling the array
df = df.sample(frac=1)

df_att = df.iloc[:, :312]
df_lbl = df.iloc[:, 312:]

# choosing the train and test data
df_train_att = df_att[:8000]
df_train_lbl = df_lbl[:8000]
df_test_att = df_att[8000:]
df_test_lbl = df_lbl[8000:]

df_train_lbl = df_train_lbl['label']
df_test_lbl = df_test_lbl['label']

clf = RandomForestClassifier(max_features=50, random_state=0, n_estimators=100)
clf = clf.fit(df_train_att, df_train_lbl)

print(df_train_lbl.head())
print(clf.predict(df_train_att.head()))
score = clf.score(df_train_att, df_train_lbl)
print(score)

The output for three runs was as follows:

1:
imgid          #]
4499     78    #|
7442    127    #|
3200     56    #|  The actual data
7271    125    #|
2601     46    #]
Name: label, dtype: int64
[ 78 127 56 125 46]    # The predicted ones
0.99875                # The score

2:
imgid
10982    187
11632    198
1536      28
5449      94
8503     145
Name: label, dtype: int64
[187 198 28 94 145]
0.9985

3:
imgid
6782     116
10906    186
2465      43
6660     114
9257     158
Name: label, dtype: int64
[116 186 43 114 158]
0.9985

AI: Instead of this (which predicts and scores on the very data the model was trained on):

print(clf.predict(df_train_att.head()))
score = clf.score(df_train_att, df_train_lbl)

better try this, scoring on the held-out test set:

from sklearn.metrics import accuracy_score

predictions = clf.predict(df_test_att)
score = accuracy_score(df_test_lbl, predictions)

accuracy_score returns the fraction of correct predictions by default, which is the number comparable to the 0.44 from the video.
H: Street address clustering? I have a huge dataset of addresses. I have another data stream that contains addresses that I need to match against those in the original dataset. As all the addresses are user-provided, matching them is not trivial. For example, 10 John Smith Square could be represented as 10 J.S. Square, J. Smith Sq 10, possibly even as only John Smith (which obviously misses the number and on top of that can collide with John Smith Street). I am wondering if there are some known ways of reducing this ambiguity. My intuition is that some clustering should be possible, allowing me to match incoming addresses from the second dataset to the clusters of the first. I want to get to clusters that separate different street numbers, rather than put all addresses on a single street together. I've recently been considering employing some sort of (possibly sparse) autoencoders, but was not able to find any literature to suggest that anyone else has tried a similar approach. So my question is, how can I employ autoencoders to cluster street address labels, if anyone has tried anything of the sort? Otherwise, what alternative approaches would you suggest? I already have a dictionary to reduce the vocabulary by trivial substitutions (e.g. Square to Sq), but it remains to be tested whether this actually improves results or not, depending on what further analysis is applied after that. My current representation of the address strings is bag-of-words, as I don't think word order changes much. AI: No, clustering will not help you much here. Handling the ambiguity of short strings requires a carefully supervised approach. Don't expect unsupervised approaches to "magically" do what you want them to do... Anything unsupervised will cause undesired merges. For example, the words "fog" and "dog" are highly similar in an unsupervised sense, but humans will consider them to be very different, so "Fog Road" and "Dog Road" are not likely to be confused by a person. But as you noted, "John Smith Square" and "John Smith Street" may well end up being confused by users, yet to an unsupervised measure like Levenshtein distance they look more dissimilar than Dog vs. Fog, which differ by only a single letter.
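As a quick, rough illustration with Python's standard-library SequenceMatcher (the exact numbers depend on the measure chosen, but the ordering makes the point):

from difflib import SequenceMatcher

def sim(a, b):
    return SequenceMatcher(None, a, b).ratio()

print(sim("Dog Road", "Fog Road"))                    # ~0.88, although these are different streets
print(sim("John Smith Square", "John Smith Street"))  # ~0.82, rated less similar than Dog/Fog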
H: Using an autoencoder for anomaly detection on categorical data Say a dataset has 0.5% of its features continuous and 99.5% categorical (binary) with ~2400 features in total. In this dataset, each observation is 1 of 2 classes - Fraud (1) or Not Fraud (0). Furthermore, there is a large class imbalance, with only 2.6% of examples being Fraud and the other ~97% of examples being Not Fraud. Say we want to predict whether a given example is Fraud or Not Fraud, and we take an anomaly detection approach using autoencoders. Given the mixed data types in the dataset, in general, will an autoencoder, trained on only the Not Fraud examples, perform well in predicting Fraud examples? Is there any literature to suggest what architectures work best / whether some preprocessing should be performed beforehand (scaling and PCA)? I ask because I feel an autoencoder may be hard to train with binary features. AI: In general, an autoencoder should perform well when it comes to detecting fraud examples: fraud examples should, in theory, have a much higher reconstruction error. When it comes to training the autoencoder on binary data, I agree with you that it can be quite challenging. I suggest taking a look at this blog: https://blog.evjang.com/2016/11/tutorial-categorical-variational.html
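As a starting point, here is a minimal Keras sketch (layer sizes and the thresholding strategy are assumptions, not tuned values; the few continuous features are assumed to be scaled into [0, 1] so that a sigmoid output and a binary cross-entropy reconstruction loss make sense for every column):

import numpy as np
import tensorflow as tf

n_features = 2400

autoencoder = tf.keras.models.Sequential([
    tf.keras.layers.Dense(256, activation='relu', input_shape=(n_features,)),
    tf.keras.layers.Dense(32, activation='relu'),              # bottleneck
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(n_features, activation='sigmoid')    # suits 0/1 features
])
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# train on Not-Fraud rows only:
# autoencoder.fit(X_not_fraud, X_not_fraud, epochs=..., batch_size=...)

def reconstruction_error(model, X, eps=1e-7):
    """Per-row binary cross-entropy between input and reconstruction."""
    X_hat = model.predict(X)
    return -np.mean(X * np.log(X_hat + eps) + (1 - X) * np.log(1 - X_hat + eps), axis=1)

# flag rows whose error exceeds a high quantile of the Not-Fraud errors as anomalies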
H: Machine Learning: Classify Array of Numbers based on Patterns I have experimented with various regression/classifier libraries; they accept training input like "5 is bad" and "10 is good" so they can tell you 7 is bad and 8 is good. Now imagine a more complicated example: I have heart rate data and there are certain heart rate patterns that are good and other patterns that are bad. In this case it's nonsense to just say "heart rate 120" is good or bad and so forth. How do you tackle a problem like this? Are there any known ML/AI algorithms that can intelligently recognize patterns? A good start could be a classifier that I can at least train with a whole array of numbers instead of a single number. AI: What you actually want is to classify time series. Going from there you have basically two options:
- Build features from the time series (e.g. RMS, peak values, etc.) and classify them with "classical" predictors like SVMs or Random Forests.
- Use neural networks - more specifically LSTMs - which let you feed in your "array of numbers" (the time series) directly; a minimal sketch of this option follows.
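A minimal Keras sketch of the second option (the window length and layer sizes are placeholders):

import tensorflow as tf

window_len = 120   # e.g. 120 heart-rate samples per training example

model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window_len, 1)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')   # good vs. bad pattern
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# model.fit(X, y, ...) with X of shape (n_examples, window_len, 1) and y in {0, 1}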
H: Feature selection Is it possible that out of several attributes $p$, only one attribute is selected by a model in the feature selection and training phase? Then basically we are fitting a line. Basically, I was thinking that one should select a model with a smaller feature set over a model with a larger number of features. If one feature ($p = 1$) gives better performance, should I not select only that particular feature? AI: You could have a look at an R package called mboost (documentation), which performs standard boosting (fitting a linear model using some features you give it) and then performs a coefficient update for only the feature that contributes the largest reduction in error. All coefficients start at zero, so after many iterations this results in some coefficients with large values, some with small values, and normally some with coefficients equal to zero... those features were not selected at all. This means you have inherent feature selection during training. Check out the tutorial paper, which is very helpful in getting started. Here is an image which shows the coefficient development during training: it shows the names of the features on the right, and you can see that some values are still at zero once training has finished. The package has built-in functionality for cross-validation, plotting and so on. EDIT: You can think of the training process as follows:
1. run a regression on the data
2. measure which feature was best able to fit (e.g. had the smallest error)
3. this feature "won the round" and gets its coefficient in the final equation increased by a small amount (e.g. 0.001)
4. repeat steps 1-3 until a threshold/criterion is met
All features that didn't win a single round can be removed. How many features do you have? If it isn't too many, you can simply run the model many times, adding/removing a single feature each time. You could also try using metrics such as BIC (Bayesian Information Criterion) to decide which model explains the data best with the given features.
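The package above is in R, but the loop in steps 1-4 is tiny; here is a hedged NumPy sketch of component-wise boosting for intuition (it assumes standardized features and a centered target). Features that never "win a round" keep a coefficient of exactly zero, which is the inherent feature selection described above:

import numpy as np

def componentwise_boost(X, y, n_iter=1000, step=0.01):
    """X: (n, p) standardized features, y: centered target."""
    n, p = X.shape
    coef = np.zeros(p)
    for _ in range(n_iter):
        resid = y - X @ coef
        # univariate least-squares fit of the residual on every single feature
        betas = (X * resid[:, None]).sum(axis=0) / (X ** 2).sum(axis=0)
        errors = ((resid[:, None] - X * betas) ** 2).sum(axis=0)
        winner = np.argmin(errors)            # the feature that reduces the error most
        coef[winner] += step * betas[winner]  # only the winner gets a small update
    return coef

# features with coef == 0 after training were never selected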
H: How to arrange the image dataset in CNN? How do I arrange the image dataset for a CNN? Should I put each image category in a separate folder? Or all of them in the same folder? Should the image name be the category name? I would like to see an example for an image dataset (other than MNIST). Thank you. AI: A directory structure like dogscats/ below (at least I kept it this way):

dogscats
    |-- train
        |-- cats
            |-- catpic0, catpic1, …
        |-- dogs
            |-- dogpic0, dogpic1, …
    |-- valid
        |-- cats
            |-- catpic0+x, catpic1+x, …
        |-- dogs
            |-- dogpic0+x, dogpic1+x, …
    |-- test
        |-- catpic0+x+y, catpic1+x+y, dogpic0+x+y, dogpic1+x+y

Be careful with the naming of the files as well. Also note that you may need a mapping of image names and classes, like in a CSV or something. There is no globally accepted directory structure; it completely depends on the API you will use. What is a good train/validation/test split? (depends on your dataset size)
can do $80/20$ (train/validation)
if you have or are creating a 'test' split, use for (train/validation/test):
can do $80/15/5$
can do $70/20/10$
can do $60/20/20$
Remember that the sole aim is eventually to generalize to the test set...
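If you go with Keras, ImageDataGenerator.flow_from_directory reads the labels straight from a folder layout like the one above (the sizes below are placeholders):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(rescale=1./255).flow_from_directory(
    'dogscats/train', target_size=(224, 224), batch_size=32, class_mode='binary')
valid_gen = ImageDataGenerator(rescale=1./255).flow_from_directory(
    'dogscats/valid', target_size=(224, 224), batch_size=32, class_mode='binary')

print(train_gen.class_indices)   # mapping from folder name to class index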
H: Interpretation of variable or feature importance in Random Forest I'm currently using Random Forest to train some models and interpret the obtained results. One of the features I want to analyze further is variable importance. The thing is I am not familiar with how to do a proper analysis of the results I got. Let's say I have this table:

| Predictor | Importance |
|-----------|------------|
| var_1     | num_1      |
| ...       | ...        |
| var_n     | num_n      |

What is a proper analysis that can be conducted on the values obtained from the table, in addition to saying which variable is more important than another? I was suggested something like variable ranking or using a cumulative density function, but I am not sure how to begin with that. AI: I would be reluctant to do too much analysis on the table alone as variable importances can be misleading, but there is something you can do. The idea is to learn the statistical properties of the feature importances through simulation, and then determine how "significant" the observed importances are for each feature. That is, could a large importance for a feature have arisen purely by chance, or is that feature legitimately predictive? To do this you take the target of your algorithm $y$ and shuffle its values, so that there is no way to do genuine prediction and all of your features are effectively noise. Then fit your chosen model $m$ times, observe the importances of your features for every iteration, and record the "null distribution" for each. This is the distribution of the feature's importance when that feature has no predictive power. Having obtained these distributions, you can compare the importances that you actually observed (without shuffling $y$) against them and start to make meaningful statements about which features are genuinely predictive and which are not. That is, did the importance for a given feature fall into a large quantile (say the 99th percentile) of its null distribution? In that case you can conclude that it contains genuine information about $y$. If on the other hand the importance was somewhere in the middle of the distribution, then you can start to assume that the feature is not useful and perhaps start to do feature selection on these grounds. Here is a simulation you can do in Python to try this idea out. First we generate data under a linear regression model where only 3 of the 50 features are predictive, and then fit a random forest model to the data. Now that we have our feature importances we fit 100 more models on permutations of $y$ and record the results. Then all we have to do is compare the actual importances we saw to their null distributions using the helper function dist_func, which calculates what proportion of the null importances are less than the observed one. These numbers are essentially $p$-values in the classical statistical sense (only inverted so higher means better) and are much easier to interpret than the importance metrics reported by RandomForestRegressor. Or, you can simply plot the null distributions and see where the actual importance values fall. In this case it becomes very obvious that only the first three features matter, where it may not have been obvious from the raw importances themselves.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# number of samples
n = 100
# number of features
p = 50
# monte carlo sample size
m = 100

# simulate data under a linear regression model
# the first three coefficients are one and the rest zero
beta = np.ones(p)
beta[3:] = 0
X = pd.DataFrame(np.random.normal(size=(n, p)),
                 columns=["x" + str(i+1) for i in range(p)])
y = np.dot(X, beta) + np.random.randn(n)

# fit a random forest regression to the data
reg = RandomForestRegressor()
reg.fit(X, y)

# get the importances
var_imp = (pd.DataFrame({"feature": X.columns,
                         "beta": beta,
                         "importance": reg.feature_importances_}).
           sort_values(by="importance", ascending=False).
           reset_index(drop=True))

# fit many regressions on shuffled versions of y
sim_imp = pd.DataFrame({c: np.empty(m) for c in X.columns})
for i in range(m):
    reg.fit(X, np.random.permutation(y))
    sim_imp.iloc[i] = reg.feature_importances_

# null distribution function
def dist_func(var, x):
    return np.mean(sim_imp[var] < x)
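For example, to attach those proportions to the observed importances (the column name here is just illustrative):

# proportion of null importances below each observed importance ("inverted p-values")
var_imp["pct_null_below"] = [
    dist_func(f, imp) for f, imp in zip(var_imp["feature"], var_imp["importance"])
]
print(var_imp.head(10))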
H: What is the most straightforward way to discover clusters in data? I'm planning on extracting a number of word vector distances from a data set, and I want to be able to detect clusters within that set, with an undefined number of clusters that are dynamically defined based on a distance variable. In general terms, what options can I look into? AI: You can try the k-means algorithm. All you need to tune there is the distance function and the number of clusters - note that the number of clusters k has to be chosen up front rather than emerging from a distance threshold. It is pretty simple to understand too.
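For example, with scikit-learn (the embedding matrix here is a random placeholder standing in for your word vectors):

import numpy as np
from sklearn.cluster import KMeans

word_vectors = np.random.rand(200, 50)   # placeholder: 200 items, 50-dimensional vectors

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(word_vectors)
labels = kmeans.labels_                  # cluster id for every vector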
H: why always predict same class for all input

import numpy as np
from sklearn import datasets as ds

iris = ds.load_iris()
x = iris.data
y1 = iris.target
x = x / x.max()
y1 = np.matrix(y1)
np.random.seed(1)
y = np.zeros((y1.size, y1.max() + 1))
y[np.arange(y1.size), y1] = 1

class NeuralNetwork(object):
    def __init__(self):
        self.inputSize = 4
        self.outputSize = 3
        self.hiddenSize = 5
        self.W1 = np.random.randn(self.inputSize, self.hiddenSize) * 0.01
        self.W2 = np.random.randn(self.hiddenSize, self.outputSize) * 0.01
        self.b1 = np.random.randn(1, self.hiddenSize)
        self.b2 = np.random.randn(1, self.outputSize)

    def forward(self, x):
        self.z = np.dot(x, self.W1) + self.b1
        self.z2 = self.sigmoid(self.z)
        self.z3 = np.dot(self.z2, self.W2) + self.b2
        o = self.sigmoid(self.z3)
        return o

    def sigmoid(self, s):
        return 1 / (1 + np.exp(-s))

    def sigmoidPrime(self, s):
        return s * (1 - s)

    def backward(self, x, y, o):
        l = 0.2
        self.dz2 = y - o
        self.dw2 = (1 / 150 * self.dz2.T.dot(self.z)).T
        self.db2 = 1 / 150 * np.sum(self.dz2, axis=0).reshape(1, 3)
        self.dz1 = 1 / 150 * self.W2.dot(self.dz2.T).T * self.sigmoidPrime(self.z)
        self.dw1 = (1 / 150 * self.dz1.T.dot(x)).T
        self.db1 = 1 / 150 * np.sum(self.dz1, axis=0).reshape(1, 5)
        self.W1 = self.W1 - l * self.dw1
        self.W2 = self.W2 - l * self.dw2
        self.b1 = self.b1 - l * self.db1
        self.b2 = self.b2 - l * self.db2

    def train(self, x, y):
        o = self.forward(x)
        self.backward(x, y, o)

When I run this code it always predicts the same class, but when I change the backward function to this one:

    def backward(self, x, y, o):
        self.o_error = y - o
        self.o_delta = self.o_error * self.sigmoidPrime(o)
        self.z2_error = self.o_delta.dot(self.W2.T)
        self.z2_delta = self.z2_error * self.sigmoidPrime(self.z2)
        self.W1 += x.T.dot(self.z2_delta)
        self.W2 += self.z2.T.dot(self.o_delta)

it predicts correctly. Why doesn't the first backward function work correctly? AI: I solved this problem by adjusting the learning rate and adding regularization and some optimization.