H: What's the difference between the symbolic translation approach and statistical translation?
Based on what I know, symbolic computing is based on hard rules in the code. Statistical translation is based on statistics and probabilities. If so, could you give an easy to understand example to demonstrate this difference? If it is not correct, please give a correct explanation of the differences.
AI: By symbolic translation, I assume that you meant rule-based translation. As a simple example, consider translating the English sentence "I eat apples" to Spanish.
Rule-based translation works by first tagging the words with their part of speech ("I" as noun/subject, "eat" as verb, "apples" as noun/object). Then, create an English parse tree and transform that into a Spanish parse tree (this requires knowledge of how English phrases map to Spanish phrases). Lastly, translate each English word into its corresponding Spanish word, and use the Spanish parse tree to determine the structure of the sentence.
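To make the rule-based idea concrete, here is a deliberately tiny Python sketch; the lexicon, the part-of-speech tags and the (trivial) reordering rule are all invented for illustration and stand in for a real grammar:
# Toy rule-based "translator": tag words, apply a structural rule, then dictionary lookup.
lexicon = {"I": "yo", "eat": "como", "apples": "manzanas"}
pos_tags = {"I": "SUBJ", "eat": "VERB", "apples": "OBJ"}

def translate(sentence):
    words = sentence.split()
    tagged = [(w, pos_tags[w]) for w in words]     # step 1: part-of-speech tagging
    # step 2: structural transfer -- Spanish keeps SUBJ-VERB-OBJ order for this sentence,
    # so no reordering is needed; a real system would transform the parse tree here.
    reordered = tagged
    return " ".join(lexicon[w] for w, _ in reordered)   # step 3: word-by-word lookup

print(translate("I eat apples"))   # -> "yo como manzanas"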
The simplest statistical machine translation systems are n-gram models based on n-gram frequencies. For an explanation of the mathematics behind statistical translation, see this paper: http://www.mitpressjournals.org/doi/pdf/10.1162/coli.2006.32.4.527. |
H: Does bias have multiple meanings in Data Science?
What are the meanings of Bias?
And is underfitting, a term used in machine learning contexts, the same as "bias"?
I have faced biased data in sampling in statistics, but it seems this is a different thing from bias in learning concepts.
I have heard that some data sets are biased, and I have also heard that a model (for example, a neural network) has low bias or a 'high bias' problem. Are these uses of bias different?
AI: Bias can mean different things in statistics:
If your model is biased, it's likely your model is under-fitting.
A data set can be biased in sample collection. For instance, if you assume your sample responses are independent, but somehow they are not, this is a bias in your data set. If you want to sample everybody in the country, but you skip some cities for no reason, this causes bias in your data set.
Your estimators could be biased - the expectation of your estimator is not equal to the true value in the population.
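For the estimator sense of the word, a quick simulation makes this concrete - a minimal numpy sketch comparing the biased (divide by n) and unbiased (divide by n-1) variance estimators:
import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0        # variance of N(0, 2^2)
n = 5                 # small samples make the bias visible

biased, unbiased = [], []
for _ in range(100000):
    x = rng.normal(0, 2, size=n)
    biased.append(x.var(ddof=0))     # divides by n     -> biased estimator
    unbiased.append(x.var(ddof=1))   # divides by n - 1 -> unbiased estimator

print(np.mean(biased))     # ~3.2, expectation is systematically below the true variance of 4
print(np.mean(unbiased))   # ~4.0, expectation matches the true value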
"Bias" is also used to describe an learnable offset parameter when using transfer functions, e.g. in neural networks when calculating activation of an artificial neuron. |
H: Gradient boosting vs logistic regression, for boolean features
I have a binary classification task where all of my features are boolean (0 or 1). I have been considering two possible supervised learning algorithms:
Logistic regression
Gradient boosting with decision stumps (e.g., xgboost) and cross-entropy loss
If I understand how they work, it seems like these two might be equivalent. Are they in fact equivalent? Are there any reasons to choose one over the other?
In particular, here's why I'm thinking they are equivalent. A single gradient boosting decision stump is very simple: it is equivalent to adding a constant $a_i$ if feature $i$ is 1, or adding the constant $b_i$ if feature $i$ is 0. This can be equivalently expressed as $(a_i-b_i)x_i + b_i$, where $x_i$ is the value of feature $i$. Each stump branches on a single feature, so contributes a term of the form $(a_i-b_i)x_i + b_i$ to the total sum. Thus the total sum of the gradient boosted stumps can be expressed in the form
$$S = \sum_{i=1}^n \left[ (a_i-b_i) x_i + b_i \right],$$
or equivalently, in the form
$$S = c_0 + \sum_{i=1}^n c_i x_i.$$
That's exactly the form of a final logit for a logistic regression model. That would suggest to me that fitting a gradient boosting model using the cross-entropy loss (which is equivalent to the logistic loss for binary classification) should be equivalent to fitting a logistic regression model, at least in the case where the number of stumps in gradient boosting is sufficiently large.
AI: You are right that the models are equivalent in terms of the functions they can express, so with infinite training data and a function where the input variables don't interact with each other in any way they will both probably asymptotically approach the underlying joint probability distribution. This would definitely not be true if your features were not all binary.
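A quick empirical check of that equivalence is easy to set up - a minimal scikit-learn sketch on synthetic boolean data (the data-generating process below is invented purely for illustration):
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5000, 10))          # 10 boolean features
logit = X @ rng.normal(size=10) - 2.0            # purely additive ground truth
y = rng.random(5000) < 1.0 / (1.0 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

lr = LogisticRegression().fit(X_tr, y_tr)
gb = GradientBoostingClassifier(max_depth=1, n_estimators=500).fit(X_tr, y_tr)  # depth-1 trees = stumps

print(log_loss(y_te, lr.predict_proba(X_te)[:, 1]))
print(log_loss(y_te, gb.predict_proba(X_te)[:, 1]))  # typically very close to the logistic regression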
Gradient boosted stumps adds extra machinery that sounds like it is irrelevant to your task. Logistic regression will efficiently compute a maximum likelihood estimate assuming that all the inputs are independent. I would go with logistic regression. |
H: Correct order of operations involved into Dropout
Suppose we have a CNN with a hidden layer with activation followed by a dropout layer. What is the correct precedence of the activation and dropout operations if the dropout implementation is inverted dropout and the CNN is in training mode? Do I need to compute the activation first and then apply dropout with division by the retain probability p, or do I need to apply the activation to the result of the division?
Say we have the following keras code
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
If I understand correctly, dropout with division by p will be applied to the activated result:
result = [survive_mask] * relu(output)/p
Is this correct? Wouldn't it be more natural to have
result = [survive_mask] * relu(output/p)
because otherwise dropout operation breaks activation value normalization (i.e. to [0, 1]) ?
AI: The usual processing for your suggested layers:
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
would be (reading left to right)
dense output -> relu -> apply dropout mask -> apply "inverse dropout" divide by p
The precise combination may vary depending upon optimisations, and can in theory be changed a little without affecting the result (it doesn't matter to the end result numerically if we scale then mask or mask then scale for instance). However when dealing with vectorised optimisations (like those found in TensorFlow and Theano), it is normal to accept a percentage of "wasted" processing, and just have that naive left-to-right processing happen. It is often harder to parallelise decision branches than to simply process all items, even repeated multiplying by and adding zeroes for a significant fraction of each array.
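As a minimal numpy sketch of that left-to-right order during training (inverted dropout with keep probability p):
import numpy as np

rng = np.random.default_rng(0)
p = 0.5                                  # keep (retain) probability
dense_out = rng.normal(size=(4, 128))    # Dense layer output for a batch of 4

relu_out = np.maximum(dense_out, 0.0)            # 1. activation
mask = rng.random(relu_out.shape) < p            # 2. survive mask
dropped = relu_out * mask                        # 3. zero out the dropped units
train_out = dropped / p                          # 4. "inverted dropout" scaling by 1/p

test_out = relu_out                              # at test time nothing is masked or scaled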
There is no "normalization" implied by activation functions, so this is not a concern. From your comments, it seems you are worried that dividing by p could mean that the output of a neuron that would be between 0 and 1 (because you were using sigmoid activation for example) would now be between 0 and 1/p - i.e. larger. That is true. Is this a problem? No it is not, and in fact it is required. The impact of the larger values is fully compensated for by the weights learned in the connections between layers. If you used "vanilla" dropout then the weights would be correspondingly larger, but you would need to scale the outputs down (new range would be 0 to p) during testing/prediction. |
H: Which functions neural net can't approximate
I read somewhere on the StackExchange that a neural network can't approximate the number Pi as a function of circles' lengths and radii. Maybe this is an incorrect example or wrong information. Help me to understand.
What about the sum or multiplication of any arbitrary numbers? Are there any other specific functions neural networks can't approximate or not?
AI: A neural network can approximate any continuous function, provided it has at least one hidden layer and uses non-linear activations there. This has been proven by the universal approximation theorem.
So, there are no exceptions for specific functions. You ask:
I read somewhere on the StackExchange that a neural network can't approximate the number Pi as a function of circles' lengths and radii.
A neural network to approximate $\pi$ is very easy. Possibly what you read is that a neural network cannot generate new digits of $\pi$ that it has not already been shown. More on that later . . .
What about the sum or multiplication of any arbitrary numbers?
Yes, a neural network can approximate that.
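For instance, a small network can learn the product of two numbers over a bounded range - a rough scikit-learn sketch (the architecture and the input range are arbitrary choices):
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20000, 2))   # pairs (a, b) drawn from a bounded range
y = X[:, 0] * X[:, 1]                     # target: their product

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X, y)

print(net.predict([[0.5, -0.4]]))   # close to -0.2, inside the training range
print(net.predict([[8.0, 9.0]]))    # far outside the range: do not expect 72 (see the caveats below)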
Are there any other specific functions neural networks can't approximate or not?
No, there are no specific functions that a neural network cannot approximate.
However, there are some important caveats:
Neural networks do not encode the actual functions, only numeric approximations. This means there are practical limits on the ranges of inputs for which you can achieve a good approximation.
A neural network being able to approximate a function in theory is not the same thing as you or I being able to construct a neural network that approximates that function. There is no known method to construct a neural network by analysis of a function alone (it can be done for specific simple functions such as xor).
The usual way to achieve approximation is to train a neural network by giving example data. The network will approximate to data it has been shown. There is no guarantee that this will generalise to new inputs that it has not been trained on and approximate the correct outputs. In fact for certain types of input/output it cannot possibly do so. For instance, it will not learn how to generate the 4th digit of $\pi$ if it has been shown digits 1,2,3,5,6,7,8,9. The best generalisation results occur for functions that have smooth transitions between the training examples.
Neural networks do not extrapolate well to inputs outside of the data they have used for training. They "fit" to the training data (imagine a rubber sheet draped over all the points in the training set).
Neural networks do not learn to copy algorithms, only functions. So if you take a complex algorithm, such as AES encryption, and attempt to train a neural network to perform this given lots of input examples, it has no real chance of working. Now, AES encryption can be considered a function e.g. $output = encrypt( input, key )$. So the NN can approximate it. But it will only do so for the specific inputs and outputs it has been shown. In addition, AES does not respond well to approximation - a single wrong bit will make it a bad encryption. So you won't see NNs used to encrypt or decrypt in cryptography.
The capability of a neural network to approximate is limited by the number of neurons and connections it has. More complex functions require larger networks. In order to train a larger network on a more complex function takes more time and more training data. You could in theory train a neural network to learn a random number generator function. However, that would take an impossible amount of resources - memory to store the network, and time to train it against the whole output of the RNG. |
H: Why does a randomly initialised convolution kernel correspond to an edge detector?
In this nice tutorial about CNNs, the authors build a single-layer CNN. The initial convolution weights are set randomly, according to a uniform distribution.
By the end of this section, the authors note that the randomly initialised kernel behaves very similarly to an edge detector and give the following input and output as an example.
Why does the randomly initialised kernel behave like an edge detector?
AI: Take this with a grain of salt, but I think this is simply not true. You can evaluate it with the code I just wrote:
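(The original snippet is not reproduced in this dump; a minimal sketch of that kind of experiment, applying 25 randomly initialised 3x3 kernels to an image, might look like this - the file name is a placeholder.)
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
img = plt.imread("some_image.png").mean(axis=-1)   # hypothetical RGB image file, averaged to grayscale

fig, axes = plt.subplots(5, 5, figsize=(10, 10))
for ax in axes.ravel():
    kernel = rng.uniform(-1, 1, size=(3, 3))       # randomly initialised kernel
    ax.imshow(convolve2d(img, kernel, mode="valid"), cmap="gray")
    ax.axis("off")
plt.show()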
Only 2, probably 3 of the 25 look like edge filters to me. The result of an edge filter looks like this: |
H: Feature importance with high-cardinality categorical features for regression (numerical dependent variable)
I was trying to use feature importances from Random Forests to perform some empirical feature selection for a regression problem where all the features are categorical and a lot of them have many levels (on the order of 100-1000). Given that one-hot encoding creates a dummy variable for each level, the feature importances are for each level and not each feature (column). What is a good way to aggregate these feature importances?
I thought about summing or getting the average importance for all levels of a feature (probably the former will be biased towards those features with more levels). Are there any references on this issue?
What else can one do to decrease the number of features? I am aware of group lasso, could not find anything easy to use for scikit-learn.
AI: It depends on how you're one-hot encoding them. Many automated solutions for that will name all the converted booleans with a pattern so that a categorical variable called "letter" with values A-Z would end up like:
letter_A, letter_B, letter_C, letter_D,....
If after you've figured out feature importance you've got an array of features and their associated weights/importances, I would analyze the array and perhaps sum up the feature importance weights for anything starting with "letter%". |
H: Why not train the final model on the entire data after doing hyper-parameter tuning based on test data and model selection based on validation data?
By entire data I mean train + test + validation
Once I have fixed my hyperparameters using the validation data and chosen the model using the test data, wouldn't it be better to have a model trained on the entire data, so that the parameters are better trained, rather than having the model trained on just the training data?
AI: The question is under a wrong assumption. Many people do what you say they "cannot" do.
In fact, the grid search implementation in the widely deployed sklearn package does just that. Unless refit=False, it will retrain the final model using the entire data.
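For example, a minimal sketch of that default behaviour (the data and grid here are placeholders):
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5)   # refit=True is the default
search.fit(X_train, y_train)

# best_estimator_ has already been re-fit on all of X_train using the best C,
# not just on the individual cross-validation folds.
print(search.best_params_)
print(search.best_estimator_.score(X_test, y_test))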
I think for some hyperparameters this might not be very desirable, because they are relative to the volume of data. For instance, consider the min_samples_leaf pre-pruning tactic for a decision tree. If you have more data, the pre-pruning may not perform as you want.
But again, most people do in fact retrain using the entire data after cross-validation, so that they end up with the best model possible.
Addendum: @NeilSlater says below that some people perform hold-out on top of CV. In other words, they have a train-test split and then perform model selection on the training set. According to him, they re-train using the original training set split, but not the testing set. The testing set is then used to perform a final model estimation. Personally, I see three flaws in this: (a) it does not solve the problem I mentioned with some hyperparameters being dependent on the volume of training since you are re-training anyway, (b) when testing many models, I prefer more sophisticated methods such as nested cross validation so that no data goes to waste, and (c) hold-out is an awful method to infer how a model will generalize when you have little data. |
H: How to (better) discretize continuous data in decision trees?
Standard decision tree algorithms, such as ID3 and C4.5, have a brute force approach for choosing the cut point in a continuous feature. Every single value is tested as a possible cut point. (By tested I mean that e.g. the Information gain is calculated at every possible value.)
With many continuous features and a lot of data (hence many values for each feature) this approach seems very inefficient!
I'm assuming finding a better way to do this is a hot topic in Machine Learning. In fact my Google Scholar search revealed some alternative approaches. Such as discretizing with k-means. Then there seem to be a lot of papers that tackle specific problems in specific domains.
But is there a recent review paper, blog post or book that gives an overview of common approaches for discretization? I couldn't find one...
Or else, maybe one of you is an expert on the topic and willing to write up a small overview. That would be tremendously helpful!
AI: No, you probably don't want to try all possible cut points in a serious implementation. That's how we describe it in simple introductions to ID3, because it's easier to understand, but it's typically not how it is actually implemented, because it is slow. In particular, if there are $n$ data points, then you'll need to test $n-1$ candidate thresholds; using the naive algorithm to calculate the information gain of each of those candidate thresholds takes $O(n)$ time per candidate, for a total of $O(n^2)$ time.
In practice, there are optimizations that speed this up significantly:
Don't try all possible thresholds. Instead, pick a random sample of 1000 candidate thresholds (chosen uniformly at random out of the set of $n-1$ candidate thresholds), calculate the information gain for each, and choose the best one.
Use dynamic programming to efficiently compute the information gain of all $n-1$ splits, in a total of $O(n)$ time, by reusing computation. The algorithm is pretty straightforward to derive. |
H: Semi-gradient TD(0) Choosing an Action
I am trying to write an optimal control agent for a simple game that looks like this:
The agent can only move along the x-axis, and has three actions available to it: left, right, and do nothing. A random number of falling rocks are spawned at arbitrary positions along the top row. The goal is to survive as long as possible by avoiding collision on each time step.
Here's what I've done so far:
1) I use a feature vector $Φ$ with $Φ_0(s)$ being the current x-coordinate of the agent and $Φ_1(s)...Φ_n(s)$ taking on a 0 or 1 (1 indicating the presence of a rock).
2) The corresponding weight vector $θ$ is initialized to 0 for all weights. So I have the linear function approximation $$\hat v(s,θ)=\sum_{i=0}^nΦ_i(s)θ_i$$
3) The reward on each time step is 1, and 0 upon collision.
I'm actually trying to implement the algorithm below from Sutton and Barto's 2017 draft.
Semi-gradient TD(0) for estimating $\hat{v} \approx v_{\pi}$
Input: the policy ${\pi}$ to be evaluated
Input: a differentiable function $\hat{v} : \mathbf{S^+} \times \mathbb{R}^n \rightarrow \mathbb{R}$ such that $\hat{v}(terminal,·) = 0$
Initialize value-function weights $\theta$ arbitrarily (e.g., $\theta = 0$)
Repeat (for each episode):
$\qquad$Initialize $S$
$\qquad$Repeat (for each step of episode):
$\qquad\qquad$Choose $A \sim \pi(·|S)\qquad$ # Not sure how to choose action here
$\qquad\qquad$Take action $A$, observe $R, S'$
$\qquad\qquad\theta \leftarrow \theta + \alpha[R + \gamma \hat{v}(S',\theta) − \hat{v}(S,\theta)]\nabla\hat{v}(S,\theta)$
$\qquad\qquad S \leftarrow S'$
$\qquad$until $S'$ is terminal
My problem is this:
1) Is this even an appropriate algorithm to apply to this sort of task? It feels like a policy gradient approach would be more suitable.
2) If yes to 1), how do I choose an action? In the algorithm above, this appears as "Choose $A \sim \pi(·|S)$" Since the policy is implicit, I compute the approximate value of the state as shown in 2) above and then greedify over the three actions (no ε-greedy, though), but the only thing that changes is $Φ_0(s)$. I'm definitely missing something.
AI: 1) Is this even an appropriate algorithm to apply to this sort of task?
No, you have selected an evaluation algorithm from chapter 9 of the book. None of the algorithms in chapter 9 are control algorithms. They are all designed to estimate the value function for a policy supplied as input. The corresponding control algorithms are discussed in chapter 10.
The current draft of the book does not give the corresponding TD(0) control algorithm with linear estimator. However, that algorithm does exist and might be suitable (with caveats). In fact in your case it could even have benefits over action-value based methods, because you reduce the scope of estimates needed by a factor of 3. This is something that you can take advantage of only if you have a full model of the environment, so can look ahead one time step to determine the best action. Without a model of the environment, or if you don't want to use the model in your agent, then you must use an action value based algorithm like Monte Carlo, SARSA or Q Learning.
2) If yes to 1), how do I choose an action?
Well it was a no, but you could use the control version of TD(0). Then you have the problem of using your state value function to figure out the policy. The rule here is that to use state values you need to use a model of the environment. Specifically you need to be able to predict the next state and immediate reward given the current state and action. In the book, this is usually represented by the transition function $p(s′,r|s,a)$ which gives the probability of each possible successor state and reward. In a fully deterministic game, the probability is just 1.0 for one target state and reward caused by each action. In your case you have new rocks appearing randomly at the top. To be complete you would probably have to model this in detail (which would be painfully slow). However, given the very low influence of this top row, and how little planning the agent can do to deal with it, I'd be tempted to just sample it.
Assuming you want to choose the greedy action, then you can find a policy by taking $argmax_a$ over the next step. When you have a state value function, then you have to run the step forward in simulation to figure out the expectation over each action. This is the same calculation for greedy policy as used in dynamic programming (back in chapter 4 of the book):
$\pi(s) = argmax_a \sum_{s'} p(s',r|s,a) (r + \gamma \hat{v}(s',\theta))$
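In code, that greedy one-step lookahead might be sketched as below; env.step_preview, phi and theta are hypothetical names standing in for your own environment model, feature map and learned weights, and sampling a few rollouts per action stands in for summing over $p(s',r|s,a)$ because the rock spawns are random:
import numpy as np

def v_hat(state, theta, phi):
    # linear state-value estimate: v(s) = phi(s) . theta
    return np.dot(phi(state), theta)

def greedy_action(state, actions, theta, phi, env, gamma=0.99, n_samples=5):
    # pick argmax_a of the sampled expectation of r + gamma * v(s')
    best_action, best_value = None, -np.inf
    for a in actions:
        returns = []
        for _ in range(n_samples):
            next_state, reward = env.step_preview(state, a)   # hypothetical model call
            returns.append(reward + gamma * v_hat(next_state, theta, phi))
        value = np.mean(returns)
        if value > best_value:
            best_action, best_value = a, value
    return best_action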
Of course this is a lot simpler if you used action values instead (e.g. in SARSA):
$\pi(s) = argmax_a \hat{q}(s,a,\theta)$
. . . so despite the fact this is less efficient, you might want to use action values for less effort in this part.
One additional thing you are likely to have problems with: Your choice of state representation does not have a good linear relationship with the true value function. The linear estimator is quite limited in what it can do. A better estimator here would be a neural network with at least one hidden layer.
However, you can probably tweak the representation slightly to get something that will work a little bit with a linear estimator - the trick is to have the rocks part of the state represented relative to the agent - i.e. don't have a grid of absolute positions of rocks, but make the grid relative to the agent. That way, a rock directly above the agent will always contribute the same to the current state value, which is important. Without this tweak, and using a linear approximator, your agent will not learn optimal control, it will instead learn a kind of fatalistic "with these many rocks, I have roughly this long to live" value function and probably just take random actions (if the distribution of rocks is not even it might learn to move to a particular column . . .) |
H: Clustering Multiple Networks
I'm looking for methods of Community Detection in networks. For example if I have a network of 100 people (each node is a person), how can I cluster nodes? What would be the best approach for grouping these People? I know this question is rather open, but I'm just looking for a nudge in the right direction.
AI: Your question is not clear, in the sense that there are two different graph clustering problems. One is when you have a dataset of different graphs and you would like to cluster similar graphs (in this case each object is a graph), and the other is when you have a single graph (e.g. a social network) and you would like to group similar nodes inside that graph (here each object is a node).
The first problem needs Graph Embedding, which means transferring graphs into an n-dimensional manifold; from there on, you have a classical clustering problem. For this case have a look at what Horst Bunke has done.
A more simplified approach would be feature extraction from networks and apply classical clustering methods. These features are statistical and topological network measures such as Clustering Coefficient, Assortativity Index, Average Shortest Path Length, Density, Diameter, Average Centrality, etc. This approach is more about the characteristics of real-world networks e.g. biological networks or social networks as it takes into account those characteristics which determine the real-world phenomena.
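For the feature-extraction route, networkx exposes most of those measures directly - a rough sketch that summarises one graph as a fixed-length feature vector (the karate club graph is just a stand-in for your own networks):
import networkx as nx

def graph_features(G):
    # statistical / topological summary of one network, usable by any classical clustering method
    return [
        nx.average_clustering(G),                  # clustering coefficient
        nx.degree_assortativity_coefficient(G),    # assortativity index
        nx.density(G),                             # density
        nx.average_shortest_path_length(G),        # assumes G is connected
        nx.diameter(G),                            # assumes G is connected
    ]

print(graph_features(nx.karate_club_graph()))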
The second problem is usually called Community Detection in the literature. There are several methods for doing that based on Modularity Score, Information Theory, Topological Structure or Spectral Graph Theory. |
H: Adding document vector doc2ec to seq2seq model
In general we use word2vec for word embedding in a seq2seq model. Is it possible to add the document vector from Doc2vec with the input words? I mean using the tag of the document as a word and its vector for embedding; the picture will explain my point of view.
default seq2seq
My opinion
The doc_tag is followed by its original words, so will that improve my seq2seq model (logically)?
AI: The purpose of the encoder (green part) is to determine this document vector, so if you want to provide it yourself use the decoder and feed the embedding as the initial state.
I suppose you could use your embedding as a prior for the encoder output. One alternative to your suggestion, which might work, is to use the embeddings as an MSE regularizer to the encoder output. |
H: Item Similarity with Location Feature
I'm currently learning about Collaborative Learning and Content-based Recommendation.
One of the main things that is discussed in both methods is calculating the similarity between two users or two items. Commonly, the similarity between two entities (each represented by a vector) can be calculated using cosine similarity or other similarity coefficients.
One question that I haven't found the answer is about calculating similarity between two entities in which each has a feature that explains each entity's location.
Let's say the entities in my data are restaurants and each restaurant has several attributes such as
food price average
user rating/review score
location in $(\text{lat}, \text{long})$
etc.
For food price average or user rating, maybe it's still relevant to include them in cosine similarity function, but how about location–which is described in a tuple of real numbers? What's the best way to include distance between entities as a component in the similarity function?
Thanks!
AI: Perhaps you could transform the latitude and longitude into spherical coordinates. In this coordinate system, the cosine of the angle between the vectors has a natural geometric interpretation. |
H: BI vs Data Science. Looking for a difference in definitions
Can someone please tell me the difference between a BI trendline and a linear/exponential regression?
When explaining this to a hardcore BI person, what can be used to mark the difference? Thanks.
AI: Any difference in regression models can be reduced to differences in the latent model (e.g., linear vs. exponential), regularizer (e.g., $L^p$ norm), and loss function. So you can have subtle differences by keeping some of these three parameters fixed while modifying the rest.
My understanding of a BI trend line is that it assumes an affine latent model without saying anything about the regularizer or loss function (though I'd assume it's the MSE unless stated otherwise). In the data science world, you should also state what loss function and regularizer you used if you want to be clear. |
H: Detecting patterns from a collection of data
I have a collection of data for a multiplayer game (2000 games, 10 players each). I would like to create clusters from this data, each containing the ids of 3 players that had played against each other.
AI: You can use the Python networkx module to find all 3-cliques:
import networkx as nx
G = nx.Graph() # The clique locator does not work with digraphs
G.add_edges_from([('A','B'),('A','D'),('A','C'),('B','D'),('C','D'),('D','E')])
[clique for clique in nx.enumerate_all_cliques(G) if len(clique)==3]
#[['B', 'A', 'D'], ['A', 'D', 'C']]
Finding all cliques may take a lot of time and memory. Luckily, nx.enumerate_all_cliques is a generator that produces smaller cliques first, so you can stop retrieving cliques after you get a clique with more than 3 nodes:
cliques = []
for c in nx.enumerate_all_cliques(G):
    if len(c) < 3: continue
    if len(c) > 3: break
    cliques.append(c)
print(cliques) |
H: How ANN is used for classification?
I am reading about artificial neural networks and it is said that ANN is used for prediction after training with training data. It is also given that ANN is used for classification.
Say I have data consisting of input as ($\theta$, sin($\theta$)) and output as -1 if it is in the upper half of the sine wave and +1 if it is in the lower half. Here, is it guaranteed that a trained neural network will always produce output as +1 or -1 (i.e. classify as +1 or -1)? If not, then how is ANN used for classification?
AI: In your question, you have a binary classification problem. I understand what you're asking - you want to know how exactly a network classifies the inputs. Does the network only give (-1, +1) or something else?
Most neural networks don't just say "this is upper or this is lower". The networks would give you a probability distribution [0..1] for each possible class. The most common implementation is the popular softmax layer. You'd just choose the class with the highest probability as the prediction.
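For example, mapping the network's class probabilities to the -1/+1 labels in your question is just a post-processing step - a minimal numpy sketch with made-up probabilities:
import numpy as np

probs = np.array([[0.91, 0.09],    # per-example output: [P(upper half), P(lower half)]
                  [0.30, 0.70],
                  [0.55, 0.45]])

class_index = probs.argmax(axis=1)            # 0 = upper half, 1 = lower half
labels = np.where(class_index == 0, -1, 1)    # map to the -1 / +1 encoding
print(labels)                                 # [-1  1 -1]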
It's also possible to encode your network such that it outputs +1 or -1. In fact, we can always add a layer just after the softmax layer to do exactly that.
If you haven't, I recommend you study logistic regression before tackling neural networks. |
H: Regular expression in python -
I want to extract the values of the below text
Pafient Name : Thomas Joseph MRNO : DQ026151?
Doctor : Haneef M An : 513! Gandar : Male
Admission Data : 19-Feb-2V'3‘¥T12:2'$ PM Bill No : IDOGIII.-H-17
Discharge Date : 22-Feb-20$? 1D:5‘F AM Bill Dale : E2-Feb-2017
I want to extract only the values of the fields, for example,
Thomas Joseph from the field name Patient Name, and similarly for the other field names, then save the output to Excel.
Python code for the above
My attempt -
text = pt.image_to_string(img1)
print(text)
s = re.findall(r'\s:\s(\w+)', text)
print (s)
AI: It may not be perfect, but it almost does the job.
import re
re.findall(r'(?<=: )\w{2}-\w{3}-\d{4}|(?<=: )\d{2}-\w{3}-\w{2}|(?<=: )\s?\w+\s?\w+\s?\w+',data)
#['Thomas Joseph MRNO','DQ026151','Haneef M An','513','Male','19-Feb-2V','IDOGIII','22-Feb-20','E2-Feb-2017'] |
H: Self adjusting CNN network
I am currently trying to build a self-adjusting network, such that given any number of inputs, it should always provide an output of shape (15,145).
The network structure is pretty simple and looks like this:
inputs = 36
list_of_input = [Input(shape = (45,5,3)) for i in range(inputs)]
list_of_conv_output = []
list_of_max_out = []
for i in range(inputs):
    list_of_conv_output.append(Conv2D(filters = 145, kernel_size = (30,3))(list_of_input[i]))
    list_of_max_out.append(MaxPooling2D(pool_size=(3,2))(list_of_conv_output[i]))
merge = keras.layers.concatenate(list_of_max_out)
#reshape = Reshape((merge.shape[0],merge.shape[3]))(merge)
dense1 = Dense(units = 1000, activation = 'relu', name = "dense_1")(merge)
dense2 = Dense(units = 1000, activation = 'relu', name = "dense_2")(dense1)
dense3 = Dense(units = 145 , activation = 'softmax', name = "dense_3")(dense2)
model = Model(inputs = list_of_input , outputs = dense3)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
print model.summary()
raw_input("SDasd")
hist_current = model.fit(x = [train_input[i] for i in range(100)],
                         y = labels_train_data,
                         shuffle=False,
                         validation_data=([test_input[i] for i in range(10)], labels_test_data),
                         validation_split=0.1,
                         epochs=150000,
                         batch_size = 15,
                         verbose=1)
It has been adjusted for having 36 inputs, which would give it an output shape of (15,1,145) - but how can I determine the number of filters, kernel size and pooling size that would give me the desired output size? The network is supposed to be used for classification, and the output is a vector of length 15 with classes for each third entry in the first axis (45 = 15*3). The total number of classes is 145, hence the output dimension (15,145).
AI: Spatial pyramid pooling layers (https://arxiv.org/pdf/1406.4729.pdf) should solve this problem for you. These layers allow you to use input images of any dimension, instead of being restricted to 224x224 images, for example. |
H: Is GridSearchCV computing SVC with rbf kernel and different degrees?
I'm running a GridSearchCV with a OneVsRestClasssifer using SVC as an estimator. This is the aspect of my Pipeline and GridSearchCV parameters:
pipeline = Pipeline([
('clf', OneVsRestClassifier(SVC(verbose=True), n_jobs=1)),
])
parameters = {
"clf__estimator__C": [0.1, 1],
"clf__estimator__kernel": ['poly', 'rbf'],
"clf__estimator__degree": [2, 3],
}
grid_search_tune = GridSearchCV(pipeline, parameters, cv=2, n_jobs=8, verbose=10)
grid_search_tune.fit(train_x, train_y)
According to the documentation of SVC the degree parameter is only used by the poly kernel:
http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html
degree : int, optional (default=3)
Degree of the polynomial kernel
function (‘poly’). Ignored by all other kernels.
but when I see the output of my GridSearchCV it seems it's computing a different run for each SVC configuration with a rbf kernel and different values for the degree parameter.
[CV] clf__estimator__kernel=poly, clf__estimator__C=0.1, clf__estimator__degree=2
[CV] clf__estimator__kernel=poly, clf__estimator__C=0.1, clf__estimator__degree=2
[CV] clf__estimator__kernel=rbf, clf__estimator__C=0.1, clf__estimator__degree=2
[CV] clf__estimator__kernel=rbf, clf__estimator__C=0.1, clf__estimator__degree=2
[CV] clf__estimator__kernel=poly, clf__estimator__C=0.1, clf__estimator__degree=3
[CV] clf__estimator__kernel=poly, clf__estimator__C=0.1, clf__estimator__degree=3
[CV] clf__estimator__kernel=rbf, clf__estimator__C=0.1, clf__estimator__degree=3
[CV] clf__estimator__kernel=rbf, clf__estimator__C=0.1, clf__estimator__degree=3
Shouldn't all values of degree be ignored, when the kernel is set to rbf?
AI: From what I understand, you'll be able to pass different values of degree even when you're using kernels that are not the polynomial kernel, just that it will not be used. I believe the score will come out to be similar for the other kernels even with different degrees. Will you be able to confirm this?
To avoid additional computation time due to redundant searches, you can fine tune the GridSearchCV by specifying two grids. Try out the below code and passing it into the param_grid argument.
param_grid = [
{'C': [1, 10, 100, 1000], 'kernel': ['linear']},
{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
You can explore the GridSearchCV documentation for the specific examples. Take a look at the example here (look at 3.2.1) |
H: Limitations of Perceptron
If you are allowed to choose the features by hand and if you use enough features, you can do almost anything. For binary input vectors, we can have a separate feature unit for each of the exponentially many binary vectors and so we can make any possible discrimination on binary input vectors. This type of table look-up won't generalize. But once the hand-coded features have been determined, there are very strong limitations on what a perceptron can learn.
This is what Hinton explains in his Neural Networks course, but I don't get the binary input example, why it is a table look-up type problem and why it won't generalize. What does he mean by hand generated features? I understand that perceptrons cannot classify non-linear data but I cannot relate this to his slide (slide 26). It would be nice if anybody could explain this with a proper example.
AI: The slide explains a limitation which applies to any linear model. It would equally apply to linear regression for example.
What does he mean by hand generated features?
This means any features generated by analysis of the problem. For instance if you wanted to categorise a building you might have its height and width. A hand generated feature could be deciding to multiply height by width to get floor area, because it looked like a good match to the problem.
I don't get the binary input example and why it is a table look-up type problem and why it won't generalize?
A table look-up solution is just the logical extreme of this approach. If you have a really complex classification, and your raw features don't relate directly (as a linear multiple of the target), you can craft very specific manipulations of them that give just the right answer for each input example. Essentially this is the same as marking each example in your training data with the correct answer, which has the same structure, conceptually, as a table of input: desired output with one entry per example.
In fact this might generalize, but only exactly as well as the crafted features do. In practice, when you have a complex problem and sample data that only partially explains your target variable (i.e. in most data science scenarios), then generating derived features until you find some that explain the data is strongly related to overfitting.
From your comment:
In his video lecture, he says "Suppose for example we have binary input vectors. And we create a separate feature unit that gets activated by exactly one of those binary input vectors. We'll need exponentially many feature units. But now we can make any possible discrimination on binary input vectors. So for binary input vectors, there's no limitation if you're willing to make enough feature units." 1.What feature? 2.Why are we creating this feature? And why adding exponential such features we can discriminate these vectors?
Here is an example of the scheme that Geoffrey Hinton describes. Say you have 4 binary features, associated with one target value and see the following data:
data 0 1 1 0 -> class 1
data 1 1 1 0 -> class 2
data 0 1 0 1 -> class 1
data 1 1 1 0 -> class 2
data 0 1 1 1 -> class 2
data 0 1 0 0 -> class 1
It is possible to get a perceptron to predict the correct output values by crafting features as follows:
data 0 1 1 0 -> features 1 0 0 0 0 -> class 1
data 1 1 1 0 -> features 0 1 0 0 0 -> class 2
data 0 1 0 1 -> features 0 0 1 0 0 -> class 1
data 1 1 1 0 -> features 0 1 0 0 0 -> class 2
data 0 1 1 1 -> features 0 0 0 1 0 -> class 2
data 0 1 0 0 -> features 0 0 0 0 1 -> class 1
Each unique set of original data gets a new one-hot-encoded category assigned. It is clear that ultimately if you had $n$ original features, you would need $2^n$ such derived categories - which is an exponential relationship to $n$.
Working like this, there is no generalisation possible, because any pattern you had not turned into a derived feature and learned the correct value for would not have any effect on the perceptron, it would just be encoded as all zeroes. However, it would learn to fit the training data very well, it could just associate each unique vector with a weight equal to the training output - this is effectively a table lookup.
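A minimal Python sketch of why this amounts to a lookup table, using the data above (class 1 encoded as +1 and class 2 as -1, one "feature unit" per unique input vector):
train_x = [(0,1,1,0), (1,1,1,0), (0,1,0,1), (0,1,1,1), (0,1,0,0)]
train_y = [ 1,        -1,         1,        -1,         1       ]

uniques = sorted(set(train_x))                 # one hand-crafted feature per unique input vector
weights = {u: 0.0 for u in uniques}
for x, y in zip(train_x, train_y):
    weights[x] = y                             # weight = desired output: literally a lookup table

def predict(x):
    activations = {u: float(u == x) for u in uniques}   # one-hot "feature unit" activations
    return sum(weights[u] * activations[u] for u in uniques)

print(predict((0,1,1,0)))   # +1, perfect on the training data
print(predict((1,0,0,1)))   # 0.0, an unseen vector activates nothing: no generalisation at all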
The whole point of this description is to show that hand-crafted features to "fix" perceptrons are not a good strategy. Even though they can be made to work for training data, ultimately you would be fooling yourself. |
H: The danger of "amplifying noise"
A data scientist cautioned me against "just amplifying noise" in a data analysis. What did he mean? Can you explain and/or point to resources explaining the problem?
AI: There is very little information in this question. I will try to answer this in the most generic sense. Let's start by defining Noise. Noise here as you probably know is unwanted data. Any data which you are not looking for while evaluating a problem or scenario can be considered as noise.
Examples for amplifying noise:
Amplifying noise might occur when there is a small data set and you are trying to supersample the dataset, or, as another example, when working with waveforms and trying to detect weaker signals.
Disadvantages of Amplifying noise
The biggest disadvantage of amplifying noise from a data science perspective is that the model used to perform various operations on the data, such as regression, classification etc., will be less efficient. For example, noise introduced by supersampling may affect a classification model. If we were to use decision trees for classification, you might create a bias in the algorithm which just pertains to noise while training. So your accuracy for classification also decreases. Similarly, in regression, when you train with noise you might choose a wrong model because the noise alters the goodness of the fit. |
H: I have n dimensional data and I want to check integrity, can I downgrade to 2 dimensional feature space via PCA and do so?
Say I have n-dimensional data samples. I want to check the integrity of the features, i.e. whether they are a good representation of the respective classes - whether these features are good or not.
My plan is:
I use PCA and convert this to 2 dim data. Plot this data. See if they're separable enough.
Does the above plan sound okay for testing to see if features are any good?
AI: This was going to be a comment but it grew to an answer. I think there should be some clarification because the question itself is not specifically about visualization but checking the "integrity" of the features.
PCA works well for getting a generalization of the dataset as a whole. It is a very standard starting point for exploring data. If the first 2 components do show a clear separation, this is a pretty solid indication that at least some projection of your data can represent your classes well. So, the short answer is yes. If the first 2 components don't show separation, that does not mean that the features are necessarily bad, it just means that the first two components do not explain the majority of the variability in the dataset.
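A minimal sketch of that first pass (the iris data here is only a stand-in for your own feature matrix X and class labels y):
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)          # replace with your n-dimensional data and labels
X_2d = PCA(n_components=2).fit_transform(X)

plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y)
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.show()                                 # clear separation suggests the features carry class signal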
Feature selection is used to check the integrity/importance/value of individual features and how they affect the model. You can use random forest Gini importance to rank your features, or lasso regularization with cross-validation to find out how individual features are weighted in a logistic regression model (this does require a little more work as the weightings are not necessarily an exact measurement of variable importance). Feature selection and cross-validation are the most direct ways of determining feature integrity. PCA is mostly a good first pass and helpful visualization. |
H: Which machine learning technique for product ranking/scoring
I am trying to identify a ML technique to score products based on the number of times the product was "viewed", "clicked" and knowing the "cost per click" for each product. Given the product ID and category ID, how can I proceed to score each product?
I am sure I have to coarse classify them (some have no clicks, but views, some have both, some have none)? Since there are 1000s of products... Any tip?
I guess the technique is also used in e-commerce to design recommender systems, like based on popularity of a product. Any one can shed some light?
*Edit: Though the suggestions here are interesting, I still couldn't figure out the best way to do this.
AI: So I am assuming you just want to be pushed in the right direction. There are 2 different ways you can go about this.
Netflix up until very recently did all its recommendations using classical algorithms and setups, see paper on their architecture.
For this type of light recommendation problem I would recommend using something from PredictionIO. It is very versatile and can be used to classify using a variety of inputs. It's also not very hard to learn.
You can also solve this problem using neural nets, it can be viewed as a recommendation by classification. Deep learning is all the jazz now and you can utilize these breakthroughs in the recommender space.
Youtube is the big one when it comes to deep neural nets applied to recommendations, see this paper.
They split their system into 2 separate neural net models. One for candidate generation, and then another for producing the actual recommendations.
Spotify also did some awesome stuff applying Convolutional Neural Nets to the actual audio streams with some equally interesting results:
http://benanne.github.io/2014/08/05/spotify-cnns.html
As far as implementing something like that goes I would look for examples and build in python using either tensorflow or theano and keras. It wouldn't have to be too 'deep'.
Hope that helped! |
H: How to deal with time series of multi-source energy for classification?
I would like to do classification of multi-source energy (wind/solar/teg) represented in time series data.
My questions are :
1- What are the most relevant features that I should choose to do the classification - statistical ones (kurtosis/mean/variance...) on each sliding window (for experimental purposes) or spectral ones (DWT/FFT) - and which feature selection/extraction method is the best in this case?
2- What is the best classification method I should choose?
Thank you
AI: OK, let us say you have 24 data points per day; in a year you would have ~8700 data points. As an initial analysis you would like to classify the data points into 2 classes, 'summer energy' and 'winter energy'. Notice I am referring to data points and not time series. For this analysis, one way is not to take a time series view but a collection-of-data-points view. Once you do that, it becomes an unsupervised classification problem. You could use K-Means, where the value of K is 2. You could also use neural network based models like Adaptive Resonance Theory networks. As for feature selection, take all the features that you receive from the sensor. It could additionally help to center and scale the data.
In my opinion, if you want to classify the data into 2 classes, timeseries analysis is not immediately required, you could employ the method mentioned above and see how the results are.
EDIT 1:
As I understand (and assume), you have 4 sensors placed at 4 points of a single entity, e.g. an office. Each sensor's output per hour is one of your dimensions. Hour of the day is also a data dimension. As an example, if you have 4 sensors, let us call them A, B, C, D, then your data point will look like:
Datapoint 1::
Vector position 0: value of A sensor, 1.0
Vector position 1: value of B sensor, 1.5
Vector position 2: value of C sensor, 0.5
Vector position 3: value of D sensor, 3.5
Vector position 4: hour of day (24), 14
You will have 720 such data points for one month. You should apply any clustering algorithm on the data points.
Few suggested clustering algorithms (there are many more):
1) K-Means with a K value of 2 (a minimal sketch is shown after this list)
2) ANN based approach
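A minimal sketch of the K-Means option, with all dimensions normalised as suggested above (the random sensor readings stand in for your real measurements):
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
# 720 hourly data points: [sensor_A, sensor_B, sensor_C, sensor_D, hour_of_day]
sensors = rng.uniform(0, 5, size=(720, 4))
hours = np.tile(np.arange(24), 30).reshape(-1, 1)
X = np.hstack([sensors, hours])

X_scaled = MinMaxScaler().fit_transform(X)    # normalise every dimension to [0, 1]
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)
print(np.bincount(labels))                    # sizes of the two clusters (e.g. "summer" vs "winter")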
EDIT 2:
You could explore the ANN approach: you could make a 3-layer feed-forward network with 4 inputs and 2 outputs. The inputs would be the data point I mentioned above; however, it would be best to normalize the values (here are some techniques) between 0 and 1. The output in your case will be 2 neurons, one representing summer and another winter; the input vector would be the datapoint vector and the output a vector of length 2, e.g. [0,1] as the ideal value for winter and [1,0] as the ideal value for summer.
In another approach you could use K-Nearest Neighbours; in this case data normalization may not be strictly (technically) required.
For ANN library, you could use R/Python/Java, reference to one Java library is here.
For KNN, there are many library options, one reference is here |
H: ML technique to predict next online
I want a direction for an ML technique to predict the next time a user will be online in a chat app.
My table contains: user id, timestamp & status
Status is a categorical variable (online, composing, offline). I'm saving a row for each status change, which means a user comes online, maybe writes something to me (composing) and afterwards goes offline, so I can extract the time spent online (currently I don't see a reason for this feature).
I want to export the model to PMML
What is the best approach if I want a service to get the next timestamp at which the user will be online?
AI: That sounds like time series forecasting. Here's a related post:
https://stats.stackexchange.com/questions/212912/forecast-time-series-data-with-external-variables |
H: Is there a way to calculate value from a radar (spider) chart?
So I am currently working on a project for "sales people". Essentially, the user is giving us a plethora of information related to deals they are trying to win. We have data related to the price of the deal, margin of the deal, number of people working on the deal, caliber of the people working on the deal, and the risk of the deal. I want to assign a "score" to these deals based on the given metrics and visualize how the score is calculated in a simple way.
It would be very easy to visualize this all as a radar (spider) chart (I'm aware that a traditional radar chart has limited power since the ordering of metrics is arbitrary and the area formed by the spider chart doesn't hold much value) - but why can't I take the results from a correlation matrix or regression output to weight the metrics in some way and/or order them based on importance?
An example - let's say I've built a radar chart for NBA players based on Points per game, assists per game, player efficiency, etc. If I had a logistic regression output to determine the MVP, I would use the output to order and weight the variables (based on p-values?) to build the radar chart. If I then took statistics from the most recent NBA season and plotted each NBA player, wouldn't the player with the biggest area be one of the most valuable?
I can't seem to find much information on building weighted radar charts - although I have seen that percentiles can be used for the actual metrics. Does anybody have any advice or alternative solutions?
AI: You basically want to display the importance of each feature in the model as it relates to your final score. Rather than starting with the visualization, I would figure out the data first and then see which visualization fits. Here, they use a simple bar chart:
http://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_importances.html |
H: How to build a model which can predict the probability of an event based on a set of timeseries data?
I am trying to build a deep learning neural model using keras and tensorflow which can predict if a certain event will occur based on a set of timeseries data and some fixed data together . For example:
For a given set of entities, say A and B, their price behavior is correlated due to a rumor of some event that is going to happen in the future.
1) A & B's fixed data like type, group etc.
2) Their price during a certain period of time, 1 Jan 2015 - 30 Mar 2015.
Data that I have is
INPUT : Name of entity, Type Of entity , Size , Country, Specific Attributes and time series stock data from 1 Jan 2015 - 30 Mar 2015
OUTPUT: Y/N. Boolean output indicating whether the event happened or not.
Now my question is how do I build this, since I have some fixed data which doesn't change over time and some time series data which changes over time.
Options that I thought of are
1) LSTM - But not sure if I should feed in fixed static data.
2) CNN - Not sure if it is the right approach ?
Please let me know what should be my approach to handle such a problem.
AI: Since you have features that would be handled best with a recurrent neural net, AND some features that would be handled best with a feedforward net, what you can actually do is both and feed them into a main Dense layer which has a softmax output to give you the probability distribution.
This would be rather hard to do by hand, but luckily you are using Keras, which allows for this kind of modeling rather easily!
In the Keras functional API guide https://keras.io/getting-started/functional-api-guide/, there is a model actually very similar to what you are looking for, where the "Main" information is an LSTM layer (which you'd do for the stock prices), and the "Auxiliary" information would be (Name of entity, Type Of entity , Size , Country, Specific Attributes) etc...
The model looks like this:
The example model actually uses 2 loss functions (2 outputs), but you can easily build it to only have the one output. The code is all there so will be easy to replicate.
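A stripped-down sketch of that idea in the Keras functional API, adapted to the single yes/no output in the question; the sequence length and layer sizes below are placeholders, not recommendations:
from keras.layers import Input, LSTM, Dense, concatenate
from keras.models import Model

price_input = Input(shape=(90, 1), name="prices")        # e.g. 90 daily prices, 1 value per step
lstm_out = LSTM(32)(price_input)                         # recurrent branch for the time series

static_input = Input(shape=(10,), name="static")         # encoded type, size, country, etc.

x = concatenate([lstm_out, static_input])                # merge sequential and fixed features
x = Dense(64, activation="relu")(x)
event_prob = Dense(1, activation="sigmoid", name="event_probability")(x)

model = Model(inputs=[price_input, static_input], outputs=event_prob)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()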
I basically use this kind of model for almost everything now and get great results, vs just LSTM alone. |
H: Abbreviation Classification using machine learning
I would like to classify abbreviations using machine learning. For example:
I have "watermel." and I ask the user what "watermel." is (my application context is about food). Then he classifies it as watermelon.
Another time, if another user inserts "waterme.", is there a way to infer that "waterme" is the same as watermelon using machine learning techniques?
AI: Look into this package for Python:
https://pypi.python.org/pypi/Distance/
You can use this to generate a numeric value representing the similarity between words.
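As a quick alternative sketch using only the standard library (difflib instead of the Distance package), matching a truncated form against a known food vocabulary could look like this - the vocabulary is made up:
import difflib

vocabulary = ["watermelon", "watercress", "melon", "apple"]

def expand_abbreviation(abbrev, vocab, cutoff=0.6):
    # return the vocabulary word most similar to the (possibly truncated) input, if any
    matches = difflib.get_close_matches(abbrev.rstrip("."), vocab, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(expand_abbreviation("watermel.", vocabulary))   # watermelon
print(expand_abbreviation("waterme.", vocabulary))    # watermelon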
Here is a similar post that should help:
https://stats.stackexchange.com/questions/123060/clustering-a-long-list-of-strings-words-into-similarity-groups
Additionally, a level up in complexity would be to use t-SNE on an array generated using word2vec (this is word embedding). Examples and resources for this are:
https://www.codeproject.com/tips/788739/visualization-of-high-dimensional-data-using-t-sne
http://sebastianruder.com/word-embeddings-1/ |
H: Deriving backpropagation equations "natively" in tensor form
Image shows a typical layer somewhere in a feed forward network:
$a_i^{(k)}$ is the activation value of the $i^{th}$ neuron in the $k^{th}$ layer.
$W_{ij}^{(k)}$ is the weight connecting $i^{th}$ neuron in the $k^{th}$ layer to the $j^{th}$ neuron in the $(k+1)^{th}$ layer.
$z_j^{(k+1)}$ is the pre-activation function value for the $j^{th}$ neuron in the $(k+1)^{th}$ layer. Sometimes this is called the "logit", when used with logistic functions.
The feed forward equations are as follows:
$z_j^{(k+1)} = \sum_i W_{ij}^{(k)}a_i^{(k)}$
$a_j^{(k+1)} = f(z_j^{(k+1)})$
For simplicity, bias is included as a dummy activation of 1, and implicitly used in iterations over $i$.
I can derive the equations for back propagation on a feed-forward neural network, using chain rule and identifying individual scalar values in the network (in fact I often do this as a paper exercise just for practice):
Given $\nabla a_j^{(k+1)} = \frac{\partial E}{\partial a_j^{(k+1)}}$ as gradient of error function with respect to a neuron output.
1. $\nabla z_j^{(k+1)} = \frac{\partial E}{\partial z_j^{(k+1)}} = \frac{\partial E}{\partial a_j^{(k+1)}} \frac{\partial a_j^{(k+1)}}{\partial z_j^{(k+1)}} = \nabla a_j^{(k+1)} f'(z_j^{(k+1)})$
2. $\nabla a_i^{(k)} = \frac{\partial E}{\partial a_i^{(k)}} = \sum_j \frac{\partial E}{\partial z_j^{(k+1)}} \frac{\partial z_j^{(k+1)}}{\partial a_i^{(k)}} = \sum_j \nabla z_j^{(k+1)} W_{ij}^{(k)}$
3. $\nabla W_{ij}^{(k)} = \frac{\partial E}{\partial W_{ij}^{(k)}} = \frac{\partial E}{\partial z_j^{(k+1)}} \frac{\partial z_j^{(k+1)}}{\partial W_{ij}^{(k)}} = \nabla z_j^{(k+1)} a_{i}^{(k)}$
So far, so good. However, it is often better to recall these equations using matrices and vectors to represent the elements. I can do that, but I am not able to figure out the "native" representation of the equivalent logic in the middle of the derivations. I can figure out what the end forms should be by referring back to the scalar version and checking that the multiplications have correct dimensions, but I have no idea why I should put the equations in those forms.
Is there actually a way of expressing the tensor-based derivation of back propagation, using only vector and matrix operations, or is it a matter of "fitting" it to the above derivation?
Using column vectors $\mathbf{a}^{(k)}$, $\mathbf{z}^{(k+1)}$, $\mathbf{a}^{(k+1)}$ and weight matrix $\mathbf{W}^{(k)}$ plus bias vector $\mathbf{b}^{(k)}$, then the feed-forward operations are:
$\mathbf{z}^{(k+1)} = \mathbf{W}^{(k)}\mathbf{a}^{(k)} + \mathbf{b}^{(k)}$
$\mathbf{a}^{(k+1)} = f(\mathbf{z}^{(k+1)})$
Then my attempt at derivation looks like this:
1. $\nabla \mathbf{z}^{(k+1)} = \frac{\partial E}{\partial \mathbf{z}^{(k+1)}} = ??? = \nabla \mathbf{a}^{(k+1)} \odot f'(\mathbf{z}^{(k+1)})$
2. $\nabla \mathbf{a}^{(k)} = \frac{\partial E}{\partial \mathbf{a}^{(k)}} = ??? = {\mathbf{W}^{(k)}}^{T} \nabla \mathbf{z}^{(k+1)}$
3. $\nabla \mathbf{W}^{(k)} = \frac{\partial E}{\partial \mathbf{W}^{(k)}} = ??? = \nabla\mathbf{z}^{(k+1)} {\mathbf{a}^{(k)}}^T $
Where $\odot$ represents element-wise multiplication. I've not bothered showing equation for bias.
Where I have put ??? I am not sure of the correct way to go from the feed-forward operations and knowledge of linear differential equations to establish the correct form of the equations? I could just write out some partial derivative terms, but have no clue as to why some should use element-wise multiplication, others matrix multiplication, and why multiplication order has to be as shown, other than clearly that gives the correct result in the end.
I am not even sure if there is a purely tensor derivation, or whether it is all just a "vectorisation" of the first set of equations. But my algebra is not that good, and I'm interested to find out for certain either way. I feel it might do me some good comprehending work in e.g. TensorFlow if I had a better native understanding of these operations by thinking more with tensor algebra.
Sorry about ad-hoc/wrong notation. I understand now that $\nabla a_j^{(k+1)}$ is more properly written $\nabla_{a_j^{(k+1)}}E$ thanks to Ehsan's answer. What I really wanted there is a short reference variable to substitute into the equations, as opposed to the verbose partial derivatives.
AI: Notation matters! The problem starts from:
Given $\nabla a_j^{(k+1)} = \frac{\partial E}{\partial a_j^{(k+1)}}$
I don't like your notation! In fact it is not standard mathematical notation. The correct notation is
$$\nabla_{a_j^{(k+1)}} E = \frac{\partial E}{\partial a_j^{(k+1)}}$$
Then, gradient of the error $E$ w.r.t a vector ${\mathbf{a}^{(k)}}$ is defined as
$$\nabla_{\mathbf{a}^{(k)}} E = \left( \frac{\partial E}{\partial a_1^{(k)}} , \cdots, \frac{\partial E}{\partial a_n^{(k)}}\right)^T \;\;\;\; (\star)$$
(side note: We transpose because of the convention that we represent vectors as column vectors, if you'd like to represent as row vectors then the equations you want to prove will change up a transpose!)
therefore with chain rule,
$$\frac{\partial E}{\partial a_i^{(k)}}= \sum_j \frac{\partial E}{\partial z_j^{(k+1)}} \frac{\partial z_j^{(k+1)}}{\partial a_i^{(k)}}=\sum_j \frac{\partial E}{\partial z_j^{(k+1)}}W_{ij}^{(k)}$$
because of $z_j^{(k+1)} = \sum_i W_{ij}^{(k)}a_i^{(k)}.$ Now, you can express the above as vector (inner) product
$$\frac{\partial E}{\partial a_i^{(k)}} = (W_{:,i}^{(k)})^T \nabla_{\mathbf{z}^{(k+1)}} E$$ and stacking them in $(\star),$ we can express $\nabla_{\mathbf{a}^{(k)}} E $ as matrix-vector product
$$\nabla_{\mathbf{a}^{(k)}} E = (\mathbf{W}^{(k)})^T\nabla_{\mathbf{z}^{(k+1)}} E.$$
I'll leave the rest to you :)
More vector calculusy!
Let's use the convention of vectors as column-vectors. Then $\mathbf{z}^{(k+1)} = (\mathbf{W}^{(k)})^T \mathbf{a}^{(k)} + \mathbf{b}^{(k)}$ and
$$\nabla_{\mathbf{a}^{(k)}} E = \frac{\partial E}{\partial \mathbf{a}^{(k)}} = \frac{\partial \mathbf{z^{(k+1)}}}{\partial \mathbf{a}^{(k)}} \frac{\partial E}{\partial \mathbf{z}^{(k+1)}}= \mathbf{W}^{(k)} \frac{\partial E}{\partial \mathbf{z}^{(k+1)}}$$
because
$$\frac{\partial \mathbf{z^{(k+1)}}}{\partial \mathbf{a}^{(k)}} = \dfrac{\partial\left((\mathbf{W}^{(k)})^T \mathbf{a}^{(k)} + \mathbf{b}^{(k)}\right)}{\partial \mathbf{a}^{(k)}}=\dfrac{\partial\left((\mathbf{W}^{(k)})^T \mathbf{a}^{(k)}\right)}{\partial \mathbf{a}^{(k)}} + \dfrac{\partial\mathbf{b}^{(k)}}{\partial \mathbf{a}^{(k)}}$$
and $\dfrac{\partial\mathbf{b}^{(k)}}{\partial \mathbf{a}^{(k)}}=0$ since $\mathbf{b}^{(k)}$ doesn't depend on $\mathbf{a}^{(k)}.$
Thus
$$\dfrac{\partial\left((\mathbf{W}^{(k)})^T \mathbf{a}^{(k)}\right)}{\partial \mathbf{a}^{(k)}} = \dfrac{\partial \mathbf{a}^{(k)}}{\partial \mathbf{a}^{(k)}} \mathbf{W}^{(k)} = \mathbf{W}^{(k)}.$$
by the standard vector-by-vector differentiation identities (the eighth and seventh row, last column, of the identity table, respectively). |
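For anyone who wants a numerical sanity check of the three vectorised equations, here is a small numpy sketch using the question's convention $\mathbf{z}^{(k+1)} = \mathbf{W}^{(k)}\mathbf{a}^{(k)} + \mathbf{b}^{(k)}$; the sigmoid activation, squared-error loss and layer sizes are arbitrary choices made only for the check:

import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 3
W = rng.normal(size=(n_out, n_in))
b = rng.normal(size=(n_out, 1))
a_in = rng.normal(size=(n_in, 1))
target = rng.normal(size=(n_out, 1))

f = lambda z: 1.0 / (1.0 + np.exp(-z))            # sigmoid
df = lambda z: f(z) * (1.0 - f(z))

def forward(W, b, a_in):
    z = W @ a_in + b
    a = f(z)
    return z, a, 0.5 * np.sum((a - target) ** 2)  # squared-error E

z, a, E = forward(W, b, a_in)

grad_a = a - target                 # dE/da for this particular loss
grad_z = grad_a * df(z)             # equation 1: element-wise product
grad_a_prev = W.T @ grad_z          # equation 2: matrix-vector product
grad_W = grad_z @ a_in.T            # equation 3: outer product

eps = 1e-6
W_pert = W.copy(); W_pert[0, 0] += eps
print(grad_W[0, 0], (forward(W_pert, b, a_in)[2] - E) / eps)   # the two numbers should agree closely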
H: Improving classifier performances in R for imbalanced dataset
I have used an "adabag"(boosting + bagging) model on an imbalanced dataset (6% positive), I have tried to maximized the sensitivity while keeping the accuracy above 70% and the best results I got were:
ROC= 0.711
SENS=0.94
SPEC=0.21
The results aren't great, especially the bad specificity.
Any suggestion on how to improve the result? Can the optimization be improved, or would the addition of a penalty term help?
This is the code:
ctrl <- trainControl(method = "cv",
number = 5,
repeats = 2,
p = 0.80,
search = "grid",
initialWindow = NULL,
horizon = 1,
fixedWindow = TRUE,
skip = 0,
verboseIter = FALSE,
returnData = TRUE,
returnResamp = "final",
savePredictions = "all",
classProbs = TRUE,
summaryFunction = twoClassSummary,
preProcOptions = list(thresh = 0.80, ICAcomp = 3, k = 7, freqCut = 90/10,uniqueCut = 10, cutoff = 0.2),
sampling = "smote",
selectionFunction = "best",
index = NULL,
indexOut = NULL,
indexFinal = NULL,
timingSamps = 0,
predictionBounds = rep(FALSE, 2),
seeds = NA,
adaptive = list(min = 5,alpha = 0.05, method = "gls", complete = TRUE),
trim = FALSE,
allowParallel = TRUE)
grid <- expand.grid(maxdepth = 25, mfinal = 4000)
classifier <- train(x = training_set[,-1],y = training_set[,1], method = 'AdaBag',trControl = ctrl,metric = "ROC",tuneGrid = grid)
prediction <- predict(classifier, newdata= test_set,'prob')
plot from classifierplots package:
I tried xgboost as well.
Here is the code:
gbmGrid <- expand.grid(nrounds = 50, eta = 0.3,max_depth = 3,gamma = 0,colsample_bytree=0.6,min_child_weight=1,subsample=0.75)
ctrl <- trainControl(method = "cv",
number = 10,
repeats = 2,
p = 0.80,
search = "grid",
initialWindow = NULL,
horizon = 1,
fixedWindow = TRUE,
skip = 0,
verboseIter = FALSE,
returnData = TRUE,
returnResamp = "final",
savePredictions = "all",
classProbs = TRUE,
summaryFunction = twoClassSummary,
sampling = "smote",
selectionFunction = "best",
index = NULL,
indexOut = NULL,
indexFinal = NULL,
timingSamps = 0,
predictionBounds = rep(FALSE, 2),
seeds = NA,
adaptive = list(min = 5,alpha = 0.05, method = "gls", complete = TRUE),
trim = FALSE,
allowParallel = TRUE)
classifier <- train(x = training_set[,-1],y = training_set[,1], method = 'xgbTree',metric = "ROC",trControl = ctrl,tuneGrid = gbmGrid)
prediction <- predict(classifier, newdata= test_set[,-1],'prob')
plot from classifierplots package:
Update:
I tried asymmetric adaboost, this is the code:
model_weights <- ifelse(training_set$readmmited == "yes",
(1/table(training_set$readmmited)[1]) * 0.4,
(1/table(training_set$readmmited)[2]) * 0.6)
ctrl <- trainControl(method = "repeatedcv",
number = 5,
repeats = 2,
search = "grid",
returnData = TRUE,
returnResamp = "final",
savePredictions = "all",
classProbs = TRUE,
summaryFunction = twoClassSummary,
selectionFunction = "best",
allowParallel = TRUE)
classifier <- train(x = training_set[,-1],y = training_set[,1], method = 'ada',trControl = ctrl,metric = "ROC",weights = model_weights)
but the specificity is zero, what am I doing wrong?
AI: You should try compensating for the imbalanced data, and then you can try a lot of different classifiers. Either balance it out, use SMOTE to interpolate (this always struck me as too magical), or assign weights.
Here's a nice article walking through it with caret, which is what it appears you're using:
http://dpmartin42.github.io/blogposts/r/imbalanced-classes-part-1 |
H: Why use both validation set and test set?
Consider a neural network:
For a given set of data, we divide it into training, validation and test set. Suppose we do it in the classic 60:20:20 ratio, then we prevent overfitting by validating the network by checking it on validation set. Then what is the need to test it on the test set to check its performance?
Won't the error on the test set be about the same as on the validation set, since for the network it is unseen data just like the validation set, and both of them are the same size?
Instead can't we increase the training set by merging the test set to it so that we have more training data and the network trains better and then use validation set to prevent overfitting?
Why don't we do this?
AI: Let's assume that you are training a model whose performance depends on a set of hyperparameters. In the case of a neural network, these parameters may be for instance the learning rate or the number of training iterations.
Given a choice of hyperparameter values, you use the training set to train the model. But, how do you set the values for the hyperparameters? That's what the validation set is for. You can use it to evaluate the performance of your model for different combinations of hyperparameter values (e.g. by means of a grid search process) and keep the best trained model.
But, how does your selected model compare to other different models? Is your neural network performing better than, let's say, a random forest trained with the same combination of training/test data? You cannot compare based on the validation set, because that validation set was part of the fitting of your model. You used it to select the hyperparameter values!
The test set allows you to compare different models in an unbiased way, by basing your comparisons on data that were not used in any part of your training/hyperparameter selection process. |
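For illustration, a minimal scikit-learn sketch of this workflow (the toy data, the model and the hyperparameter grid are arbitrary choices): the training set is used for fitting, the validation set for choosing hyperparameters, and the test set only once at the very end.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, random_state=0)

# 60/20/20 split: first carve out the test set, then split the rest into train/validation
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

best_score, best_model = -1.0, None
for lr in [1e-1, 1e-2, 1e-3]:   # hyperparameter search scored on the validation set
    model = MLPClassifier(learning_rate_init=lr, max_iter=500, random_state=0).fit(X_train, y_train)
    score = model.score(X_val, y_val)
    if score > best_score:
        best_score, best_model = score, model

# the test set is touched exactly once, for the final unbiased estimate
print("validation score of the chosen model:", best_score)
print("test score:", best_model.score(X_test, y_test))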
H: Eliminate input in gradient by clever choosing of cost function in neural networks
In http://neuralnetworksanddeeplearning.com/chap3.html
The author explains that, for a single neuron in a neural net, choosing the cross-entropy as the cost function eliminates the derivative of the activation function from the gradient term, if the activation function is chosen to be the sigmoid function.
In the problem, he asks why we cannot eliminate the input-to-neuron term $x$ from the gradient of the cost function with respect to the weights.
I had the following reasoning: in order to compute the gradient of the cost, we use the chain rule, differentiating the cost with respect to the activation, the activation with respect to $(w \cdot x + b)$, and that sum with respect to the weight.
For a weight $W_i$, the derivative of $(\sum_i W_i x_i + b)$ with respect to $W_i$ is always $x_i$, and the derivative of the activation function cannot cancel that factor, so it can never eliminate $x_i$ unless it is zero.
Or is there any other subtle reasoning?
AI: The main reason is the chain rule hidden in your justification: because the inner function is linear with respect to the weights, the derivative with respect to a weight always produces the input $x$ as a factor, so $x$ will always appear somewhere in $\dfrac{\partial C}{\partial w}.$ |
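To spell out the single-neuron case from the book (sigmoid activation, cross-entropy cost), the chain rule gives

$$z = \sum_j w_j x_j + b, \qquad a = \sigma(z), \qquad \frac{\partial C}{\partial w_j} = \frac{\partial C}{\partial a}\,\sigma'(z)\,\frac{\partial z}{\partial w_j} = (\sigma(z) - y)\,x_j.$$

The cross-entropy choice is what cancels the $\sigma'(z)$ factor, but the last factor $\partial z / \partial w_j = x_j$ comes from the linear inner function itself, so no choice of cost function can remove it.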
H: Predicting probability from scikit-learn SVC decision_function with decision_function_shape='ovo'
I have a multiclass SVM classifier with labels 'A', 'B', 'C', 'D'.
This is the code I'm running:
>>>print clf.predict([predict_this])
['A']
>>>print clf.decision_function([predict_this])
[[ 185.23220833 43.62763596 180.83305074 -93.58628288 62.51448055 173.43335293]]
How can I use the output of decision_function to predict the class (A/B/C/D) with the highest probability and, if possible, its value? I have visited https://stackoverflow.com/a/20114601/7760998 but it is for binary classifiers, and I could not find a good resource which explains the output of decision_function for multiclass classifiers with shape ovo (one-vs-one).
Edit:
The above example is for class 'A'. For another input the classifier predicted 'C' and gave the following result in decision_function
[[ 96.42193513 -11.13296606 111.47424538 -88.5356536 44.29272494 141.0069203 ]]
Another different input, which the classifier also predicted as 'C', gave the following result from decision_function:
[[ 290.54180354 -133.93467605 116.37068951 -392.32251314 -130.84421412 284.87653043]]
Had it been ovr (one-vs-rest), it would become easier by selecting the one with higher value, but in ovo (one-vs-one) there are (n * (n - 1)) / 2 values in the resulting list.
How to deduce which class would be selected based on the decision function?
AI: Your link has sufficient resources, so let's go through:
When you call decision_function(), you get the output from each of the pairwise classifiers (n*(n-1)/2 numbers total). See pages 127 and 128 of "Support Vector Machines for Pattern Classification".
Click on the "page 127 and 128" link (not shown here, but in the Stackoverflow answer). You should see:
Python's SVM implementation uses one-vs-one. That's exactly what the book is talking about.
For each pairwise comparison, we measure the decision function
The decision function is the just the regular binary SVM decision boundary
What does that to do with your question?
clf.decision_function() will give you the $D$ for each pairwise comparison
The class with the most votes wins
For instance,
[[ 96.42193513 -11.13296606 111.47424538 -88.5356536 44.29272494 141.0069203 ]]
is comparing:
[AB, AC, AD, BC, BD, CD]
We label each of them by the sign. We get:
[A, C, A, C, B, C]
For instance, 96.42193513 is positive and thus A is the label for AB.
Now we have three votes for C, so C would be your prediction. If you repeat my procedure for the other two examples, you will get Python's prediction. Try it! |
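A small numpy sketch of this vote counting, following the sign convention described above and using the second example's decision values (as far as I understand, scikit-learn additionally uses the decision values to break ties, so treat this only as an illustration of the voting idea):

from itertools import combinations
import numpy as np

classes = ['A', 'B', 'C', 'D']
pairs = list(combinations(range(len(classes)), 2))   # [AB, AC, AD, BC, BD, CD]

decision = np.array([96.42193513, -11.13296606, 111.47424538,
                     -88.5356536, 44.29272494, 141.0069203])

votes = np.zeros(len(classes), dtype=int)
for (i, j), d in zip(pairs, decision):
    votes[i if d > 0 else j] += 1     # positive -> first class of the pair, negative -> second

print(dict(zip(classes, votes)))       # {'A': 2, 'B': 1, 'C': 3, 'D': 0}
print(classes[int(np.argmax(votes))])  # 'C'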
H: How are the positions of the output nodes determined in the Kohonen - Self Organizing Maps algorithm?
In the Cooperative stage of Kohonen's SOM, a neighborhood is defined around the winning neuron (output node). In most cases, the neighborhood function happens to be the Gaussian function.
For example,
$$h_{j,i} = \exp\left(-\frac{d_{j,i}^2}{2\sigma^2}\right)$$
where $h_{j,i}$ is the window function ($i$ is the index of the winning neuron and $j$ is the index of the encompassing neuron), and $d_{j,i}$ is the lateral distance between the winning neuron $i$ and the excited neuron $j$.
Now the user who is going to implement the SOM has the input data and the randomly initialized weight vectors. How is $d_{j,i}$ determined? Is it randomly initialized too? Also what is the relation between the weight vectors of the output nodes and their positions?
The figure below is just to give a visual representation of the organization of the input and the output nodes
AI: The distance is calculated according to a distance function (Euclidean, Manhattan, Mahalanobis, and so on). As this is an unsupervised model (hence the self-organizing part in the name) there are no output neurons. Would you clarify the term?
I'll just recap the training a bit:
Initializing: We have a grid (let's assume 2d) of neurons $n_i = (w_i, k_i)$, where $w_i$ is a randomly initialized weight and $k_i$ the position on the grid.
We'll now pick a training sample $x_j$ randomly. For this instance $x_j$ we pick the neuron $n_m$ ($m$ for minimum) where the distance between the weight vector and the instance is minimal given a distance function $d$, so $n_m = \text{argmin}_{n_i} \; d(x_j, W(n_i))$ where $W(\cdot)$ gives the weight of the respective neuron.
We now pick a set of neurons for which we will adapt the weight vector, given a neighbourhood function (for example your Gaussian, or a cone). The neurons that should have their weights updated are given by $N^{+t} = \{ n_i = (w_i,k_i) \mid d_A(k_m,k_i) \leq \delta^t \}$, where $t$ is the step in time (I'll mention that later), $\delta^t$ is the "reach" the neighbourhood should have, and $k_m$ is of course the position on the grid of the winning neuron.
We then update the weights of the neurons in the neighbourhood of the winning neuron $n_m$ according to some updating rule. This can be interpreted as moving the weights closer to the input we currently look at (because we chose the neuron with the closest weight vector). A possible update rule for a neuron $n_i$ in the neighbourhood could be: $w_i^{t+1} = w_i^t + \epsilon^t \cdot h_{mi}^t \cdot (x_j - w_i^t)$, where $\epsilon^t$ is a time-dependent learning rate and $h_{mi}^t$ weights the distance from the winning neuron to the neuron we are changing right now, also time-dependent.
In order to somewhat guarantee a topological mapping two things should be considered now:
The learning rate, which is part of the update rule, has to be decreased over time, starting with a relatively high value.
The neighbourhood "radius" has to be decreased as well, also starting from a relatively high value.
Let's visualize this on this picture real quick. The red nodes are inputs. The dark grey is the winning neuron for the input that is considered right now (the picture does not make it clear that we look at one example/input at a time!). The light grey nodes are the ones that are within the neighbourhood of the winning neuron and hence are updated slightly according to some updating rule.
Disclaimer: this was from the top of my head right now, please check any formulas or hypotheses I propose here before using it in any kind of work. (Which you should always do!)
See also wikipedia, they also have some explanation of the algorithm that might better suit you.
Image source
Additional source (German Wikipedia) |
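If it helps to see the whole loop in one place, here is a minimal numpy sketch of the procedure described above; the grid size, the decay schedules and the toy data are arbitrary choices, so check them before reusing anything:

import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 10, 10, 3                      # 10x10 grid of neurons, 3-dimensional inputs
weights = rng.random((grid_h, grid_w, dim))          # random weight initialisation
positions = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing='ij'), axis=-1)

X = rng.random((500, dim))                           # toy training data

n_steps = 2000
for t in range(n_steps):
    x = X[rng.integers(len(X))]                      # pick a random training sample
    # winning neuron: minimal Euclidean distance between weight vector and sample
    dists = np.linalg.norm(weights - x, axis=-1)
    win = np.unravel_index(np.argmin(dists), dists.shape)
    # time-decayed learning rate and neighbourhood radius
    lr = 0.5 * np.exp(-t / n_steps)
    sigma = max(grid_h, grid_w) / 2 * np.exp(-t / n_steps)
    # Gaussian neighbourhood weighting based on the grid distance to the winner
    grid_dist2 = np.sum((positions - np.array(win)) ** 2, axis=-1)
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))
    # move the weights of neighbouring neurons towards the current sample
    weights += lr * h[..., None] * (x - weights)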
H: Are there any rules for choosing the size of a mini-batch?
When training neural networks, one hyperparameter is the size of a minibatch. Common choices are 32, 64, and 128 elements per mini batch.
Are there any rules/guidelines on how big a mini-batch should be? Or any publications which investigate the effect on the training?
AI: In On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima there are a couple of intersting statements:
It has been observed in practice that
when using a larger batch there is a degradation in the quality of the model, as measured by its ability to generalize [...]
large-batch methods tend to converge to sharp minimizers of the training and testing functions—and as is well known, sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation.
From my master's thesis: the choice of the mini-batch size influences:
Training time until convergence: There seems to be a sweet spot. If the batch size is very small (e.g. 8), this time goes up. If the batch size is huge, it is also higher than the minimum.
Training time per epoch: Bigger computes faster (is efficient)
Resulting model quality: The smaller the batch, the better the resulting quality, due to better generalization (?)
It is important to note hyper-parameter interactions: Batch size may interact with other hyper-parameters, most notably learning rate. In some experiments this interaction may make it hard to isolate the effect of batch size alone on model quality. Another strong interaction is with early stopping for regularisation.
See also
this nice answer / related question
Efficient Mini-batch Training for Stochastic Optimization
this RNN study |
H: Algorithm for finding best juice combinations
I am making a fun experiment in which a machine will mix different percentages of three juices – orange, apple and grape. After each mix is dispensed, a participant will taste the juice and rate it on a numeric scale, a score from 1 to 7.
Using the data collected, I would like to try and find the optimal juice mix programmatically.
For this I wish to implement a machine learning algorithm that will both generate new ratios of juices to try and, using the responses, will try to find the optimal juice mixture percentages.
What algorithm would you recommend me to use?
Notes:
I am planning on displaying this at an event, and estimate around 200 people to taste, at least.
I am aware that due to the fact that different people have different taste preferences, I aim for the best tasting mixture that will fit most people.
AI: As described, you have no data describing individual people (such as age, sex, shoe size), but are searching for an optimum value of the mix for the whole population. So what you want is a mix with the maximum expected rating, if you chose a random person to rate it from the population. In principle, this expected rating is a function taking two parameters e.g. $f(n_{apple}, n_{orange})$ - the amount of the third juice type is not a free choice, so you only have two dimensions.
You can break down your problem into two distinct parts:
Taking samples from your population in order to find approximation to the function $f(n_{apple}, n_{orange})$
Using the approximation as it evolves to guide the search for an optimum value.
For a simple approach, you could ignore the second bullet point and just randomly sample different mixes throughout the event. Then train a regression ML on the ratings (any algorithm would do, although you'll probably want something nonlinear, otherwise you'll just predict one of the pure juices as favourite) - finally graph its predictions and find the maximum rating at the end. This would probably be fine when pitched as a fun experiment.
However, there is a more sophisticated approach that is well-studied and used to make decisions when you want to optimise an expected value of an action whilst exploring options - it is usually called multi-armed bandit. In your case, you would need variants of it that consider an "arm space" or parametric choice, as opposed to a finite number of choices that represent selecting between actions. This is important to you, since splitting your mix parameters up into e.g. 5% steps will give you too many options to explore given the number of samples you need to make. Instead, you will need to make an assumption that the expected rating function is relatively smooth - the expected rating for 35% Apple, 10% Orange, 55% Grape is correlated with the rating for 37% Apple, 9% Orange, 54% Grape . . . that seems at least reasonable to me, but you should make clear in any write-up that this is an assumption and/or find something published that supports it. If you make this assumption, you can then use a function approximator such as a neural network, a program like xgboost or maybe some Gaussian kernels to predict expected rating from mix percentages.
In brief for a multi-armed bandit problem, you will use data collected as your experiment progresses to estimate the expected value for each choice, and on each step will make a new choice of mix. The choice itself will be guided by your current best approximation. However, you don't always sample the current top-rated value, you need to explore other mixes in order to refine your estimated function. You have choices here too - you could use $\epsilon$-greedy where e.g. 10% of the time you choose completely randomly to get other sample points. However, you might need something more sophisticated that explores more to start with and still converges quickly, such as Gibbs sampling.
One thing you don't say is at what level you are pitching this experiment. Studying the multi-armed bandit problem by yourself referring to blogs, tutorials and papers could be a bit too much work if this is for school science fair. If this all seems a bit too vague and a lot of work to study, then you can probably stick with a simple regression model from the data of a random experiment.
I suggest whichever approach you take, that you run some simulations of input data and see whether your approach works. Obviously there is a lot of guess work here. But the principle is:
Create a "true" model function - e.g. pick an imaginary favourite mix and make it score higher. Make it a simple and probably quite subtle function - e.g. score 5 for best result, and take away euclidean distance in "juice space" times a small factor (maybe 1.5) from it.
Create a noisy sampler that imitates someone in your experiment giving a rating to a specific mix. Ensure that the mean value from this matches the "true" function.
Try out your sampling and learning strategies, see how well they find the favourite mix.
I highly recommend this kind of dry run before putting your system to real use, otherwise you will have no confidence that your ML/approximator is working.
One more piece of advice about your estimator: You are expecting a large amount of variance in your data, and will not have a lot of samples. So to avoid over-fitting you will want to have a relatively simple ML model. For a neural network for example, you will probably want only one hidden layer with very few neurons in it (e.g. 4 or 5 might be enough). Finding a model sophisticated enough to predict a curve, but simple enough that it doesn't overfit given very noisy target outputs might take a few tries - this is the main reason why I suggest performing trial runs with simulated data. |
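To make the dry run concrete, here is a minimal Python sketch of the simple approach (random sampling plus a deliberately small regression model) run against a simulated population; the "true" favourite mix, the noise level and the model choice are all invented for the illustration:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# a made-up "true" expected rating: 5 minus 1.5 times the distance from an imaginary favourite mix
def true_rating(apple, orange):
    return 5.0 - 1.5 * np.sqrt((apple - 0.5) ** 2 + (orange - 0.3) ** 2)

def noisy_taster(apple, orange):
    # one simulated participant: noisy rating, clipped to the 1-7 scale
    return float(np.clip(true_rating(apple, orange) + rng.normal(0, 1.5), 1, 7))

# simulate ~200 random mixes tasted at the event (apple + orange <= 1, grape is the remainder)
mixes = []
while len(mixes) < 200:
    a, o = rng.random(2)
    if a + o <= 1:
        mixes.append((a, o))
X = np.array(mixes)
y = np.array([noisy_taster(a, o) for a, o in mixes])

# fit a deliberately simple regression model and search a grid for the predicted best mix
model = RandomForestRegressor(n_estimators=200, max_depth=3, random_state=0).fit(X, y)
grid = np.array([(a, o) for a in np.linspace(0, 1, 21) for o in np.linspace(0, 1, 21) if a + o <= 1])
best = grid[int(np.argmax(model.predict(grid)))]
print("predicted best mix (apple, orange):", best)

If the estimate lands reasonably near the favourite you invented, the sampling plan and model are at least plausible; if not, that is exactly the warning the dry run is meant to give.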
H: Elastic Regression fitting good mean and bad variance
So, I'm kinda new to machine learning and I was trying to predict the monthly sales of a business using a set of features and using a sliding window of the past sales of 12 months.
I used some algorithms to do it, including linear/polynomial regression, lasso/elastic and SVR. I got the best results with elastic regression, which gave the following result:
As it shows, the model fits the mean of the curve somewhat well, but I would like it to fit the variance as well. So, I've been searching for a technique or feature that could better fit my data, but I still have found nothing precise.
Would someone know what I could do to take the variance of the system into account?
Thanks in advance!
AI: Your model actually looks pretty good. What it sounds like you are asking to do is to overfit your model. I would not recommend that you do that. You can do that by finding more variables that you can input into the model, fitting extra polynomial terms or anything else like that; fitting a neural network will potentially do it for you too. However, you generally want to smooth your predictions out, like what you have.
One thing that you could try is to add an autocorrelation term. That might cause your model to behave as you intend. With negative autocorrelation your predicted values will have a tendency to bounce back and forth around the mean. But I wouldn't recommend doing that, your performance will probably suffer, just judging by the graph that you provided. |
H: Classification followed by regression to handle response variable that is usually zero
I have a data set consisting of a bunch of predictors (mostly unbounded or positive real numbers) and a single response variable that I wish to predict. The response is typically exactly zero -- around 90% of the time. I have tried modelling this using standard Gaussian process methods as well as random forests. However, in both cases (although moreso when using random forests) the model seems to handle the data poorly, usually predicting a non-zero response. Now, if the predicted responses were in fact very close to zero I could just set a cut-off below which the values would be rounded to zero, but they are significantly non-zero in many cases.
My idea for a solution is to train two models: a classification model trained on the entire training-set that predicts whether a variable is zero or non-zero, and a regression model trained only on the rows in the training set with a non-zero response. I would then first use the classification model to predict which observations have a response that is exactly zero, and subsequently use the regression model to predict the value of the non-zero responses.
Is this a sound way to solve the described problem? Does this sort of model have a name? Are there better ways to do this?
AI: This sounds entirely reasonable, and the usual name I have heard for this structure is just "pipeline", which also applies to other system-feeds-next-system structures - it might also be called a "machine learning pipeline" or "data processing pipeline".
There are ways to assess performance of a ML pipeline:
You can of course compare the final accuracy or loss value, with the simpler model. Has turning the model into a more complex multi-stage one actually improved things? Sadly nothing is guaranteed, although I would be hopeful in your case initially - in part because you could apply adjustments available to classifier models used to deal with class imbalance issues.
You can decide which part of the pipeline will gain you the most benefit by switching between pipeline-so-far input to each unit and perfect input from the training data. Then you can see how much incremental difference is possible by perfecting that unit in the pipeline.
In your case you have a two stage pipeline, so you can check whether it is worth focusing more effort on the classifier or regression parts by comparing the incremental improvements between:
The unadjusted output of the whole pipeline run end-to-end.
The output of the regression (or zero) assuming that the classifier was perfect.
A perfect score.
Whichever of the two differences gives you the largest difference (2) - (1), or (3) - (2) points at work being most rewarded for working on the classifier or regression stage respectively.
You can see a worked example of this per-stage analysis in Advice for Applying Machine Learning (slides 21, 22), amongst other places. |
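A minimal scikit-learn sketch of such a two-stage pipeline (the model choices are arbitrary, and X / y are assumed to be numpy arrays shaped as in your problem):

import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def fit_two_stage(X, y):
    is_nonzero = y != 0
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, is_nonzero)
    reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[is_nonzero], y[is_nonzero])
    return clf, reg

def predict_two_stage(clf, reg, X_new):
    pred = np.zeros(len(X_new))
    nonzero = clf.predict(X_new).astype(bool)
    if nonzero.any():
        pred[nonzero] = reg.predict(X_new[nonzero])   # regression only where the classifier says non-zero
    return pred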
H: Python package for machine-learning aided data labelling
In a lot of cases unlabelled data needs to be transformed into labelled data. The best solution is to use (multiple) human classifiers. However, going through all the data by hand (i.e. in text-mining or image-processing) is often a daunting task. Is there software that can combine human classifiers and machine-learning techniques in real time? I am especially interested in python packages.
To illustrate, classifying images from video streams is very repetitive. After 100 images (from different streams) a machine-learning algorithm could be used to predict the labels given by the human classifier. The machine classifier might be very confident about some (un)seen samples and very uncertain about others. The human classifier can then focus on the uncertain samples, helping the machine classifier to learn better what it does not yet know.
AI: It sounds like you are looking for active learning. In active learning, the classifier learns which samples would be most useful to have labelled by a human.
There are many techniques for active learning, and many ways to adapt an existing (standard) learning algorithm to the active learning setting. The particular approach you mentioned is called "uncertainty sampling", and can be applied to any standard classifier that outputs confidence/certainty scores. There are other selection methods as well, which may perform better in some settings.
You can also apply unsupervised methods to cluster the samples, then label one or a few samples from each cluster. |
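A tiny uncertainty-sampling sketch with scikit-learn, on made-up data standing in for the labelled/unlabelled pools (the model and the "10 least certain" budget are arbitrary choices):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# toy stand-ins: a small labelled pool and a larger unlabelled pool
X, y = make_classification(n_samples=2000, random_state=0)
X_labelled, y_labelled = X[:100], y[:100]
X_unlabelled = X[100:]

clf = LogisticRegression(max_iter=1000).fit(X_labelled, y_labelled)

# uncertainty sampling: send the samples the model is least sure about to the human
confidence = clf.predict_proba(X_unlabelled).max(axis=1)
query_idx = np.argsort(confidence)[:10]
print("indices to ask the human to label next:", query_idx)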
H: How to Choose a Sample for Multiply Classifiers
I've got a dataset of 1.5 million and am looking to train 7 different classifiers -- for each classifier I have up to 10 classes to predict. The total sample has 20K text features (more if I include bigrams). Like most distributions of text features, only 20% of them account for 80% of occurrences in the sample. I am going to manually label 10K for each prediction category, and use that to predict against the remaining 1.5 million as well as new documents that come through.
My question is, how would I choose the subsample based on the features and distribution? Should I just choose a random sample (i.e. try to match the distribution)? Or should I try to find the 10K that maximizes the number of features represented in the sample? What's the benefit and drawback of each?
I have only one shot to label these 10K so I want to make sure I choose the right sample that maximizes my accuracy for each of the prediction categories!
AI: Ideally you'll end up with a dataset where each class is fairly represented, with enough data for each class to enable suitable predictive performance. As you currently have no labels, any kind of stratified sampling is unavailable to you.
Random sampling will get you the 10K samples you need, but there's no guarantee you'll get fair representation for each of your classes. Assuming you were to use this approach it would make sense to continue to label the samples until you have decent representation for all classes. There is no guarantee regarding balance and also no guarantee that it will cover the majority of the variance of your dataset in feature space either.
An alternative sampling approach that is able to capture a majority of the variance of the data in feature space should hold up with better performance/generalisation. There are a few variations of how this could be done but you could try a clustering approach.
Cluster your points in feature space and then instead of sampling randomly, sample from each cluster in turn. So draw a random point from the first cluster, then second, then third etc until the last cluster, and repeat if necessary. Label them in this order and keep an eye on the counts for each class as there is always the chance a class is not well represented by this approach. Another potential drawback in this approach is selecting the number of clusters. You could aim for the number of classes in the dataset or go with anything up to 10K clusters.
An additional variant on this would be the strategy for selecting a sample from the clusters: instead of choosing a random point, choose the sample closest to the centroid; or if choosing multiple points from a cluster, order the samples via distance from centroid and sample uniformly from this.
Active learning is a semi-supervised approach that may be the most useful choice in this instance. When labelling, once you've got some decent coverage for each of your classes, try an active learning approach to selecting which samples to label next. |
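A rough sketch of the cluster-then-round-robin sampling with scikit-learn (the stand-in data, the number of clusters and the 10K budget are all assumptions):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 50))          # stand-in for the real (much larger) feature matrix

k = 100                                   # the number of clusters is a free choice
labels = KMeans(n_clusters=k, random_state=0, n_init=10).fit_predict(X)

# shuffle each cluster's members, then draw them in round-robin order until we have 10K to label
queues = [rng.permutation(np.where(labels == c)[0]).tolist() for c in range(k)]
to_label = []
while len(to_label) < 10000 and any(queues):
    for q in queues:
        if q and len(to_label) < 10000:
            to_label.append(q.pop())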
H: Understanding how distributed PCA works
As part of a big data analysis project I'm working on,
I need to perform PCA on some data, using a cloud computing system.
In my case, I'm using Amazon EMR for the job and Spark in particular.
Leaving the "How-to-perform-PCA-in-spark" question aside, I want to get an understanding of how things work behind the scenes when it comes to calculating PCs on cloud-based architecture.
For example, one of the means to determine the PCs of the data is to calculate the covariance matrix of the features.
When using HDFS based architecture for example, the original data is distributed across multiple nodes, I'm guessing each node receives X records.
How then is the covariance matrix calculated in such a case, when each node has only partial data?
This is just an example. I'm trying to find some paper or documentation explaining all this behind-the-scenes voodoo, and couldn't find anything good enough for my needs (probably my poor google skills).
So I can basically summarize my question(s) / needs as the following:
1. How distributed PCA on cloud architecture works
Preferably some academic paper or other sorts of explanation which also contains some visuals
2. Spark implementation of D-PCA
How does Spark do it? Do they have any 'twist' in their architecture to do it more efficiently, or how does the usage of RDD objects contribute to improving the efficiency? etc.
A presentation or even an online lesson regarding it would be great.
Thanks in advance to anyone who can provide some reading material.
AI: The question is more related to the Apache Spark architecture and map reduce; there is more than one question here, however, the central piece of your question is perhaps
For example, one of the means to determine PCs of a data is to calculate covariance matrix of the features.
When using HDFS based architecture for example, the original data is distributed across multiple nodes, I'm guessing each node receives X records.
How then is the covariance matrix calculated in such case when each node have only partial data?
I shall address that, which hopefully will clear the matter to a degree.
Let us look at a common form of covariance calculation, $\frac{1}{n}\sum(x-\bar{x})(y-\bar{y})$
This requires you to calculate the following, in a distributed manner:
$\bar{x}$
$\bar{y}$
$x-\bar{x}$ and $y-\bar{y}$
The product of $(x-\bar{x})$ and $(y-\bar{y})$
The rest is simple. Let us say I have 100 data points (x, y), which are distributed to 10 Apache Spark workers, each getting 10 data points.
Calculating the $\bar{x}$ and $\bar{y}$: Each worker will add the $x/y$ values of its 10 data points and divide this by 10 to arrive at a partial mean of $x/y$ (this is the map function). Then the Spark master will run the aggregation step (in the Spark DAG of the job) where the partial means from all 10 workers are taken and again added, then divided by 10 to arrive at the final $\bar{x}$ or $\bar{y}$ (the aggregate/reduce operation)
Calculating the $(x-\bar{x}) \cdot (y-\bar{y})$: In the same way, distribute the data points, broadcast the $\bar{x}$ and $\bar{y}$ values to all the workers, and then calculate the partial $(x-\bar{x}) \cdot (y-\bar{y})$; again run the aggregation to get $\sum (x-\bar{x})(y-\bar{y})$
The above method is used for the distributed calculation; you get the covariance, and for multi-dimensional data you can build the covariance matrix in the same way.
The point is to distribute the calculation for stages that can be distributed and then centralize the calculation stages that cannot be distributed. That is in effect one of the important aspects of the Spark architecture.
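A hypothetical PySpark sketch of those two stages (the toy data and app name are made up, and a real job would of course read from HDFS rather than parallelizing a Python list):

import random
from pyspark import SparkContext

sc = SparkContext(appName="distributed-covariance")

# toy data: 100 (x, y) pairs, split into 10 partitions across the workers
data = [(float(i), 2.0 * i + random.gauss(0, 1)) for i in range(100)]
points = sc.parallelize(data, 10)
n = points.count()

# stage 1 (map/reduce): partial sums on the workers, aggregated at the driver, then divided by n
sum_x, sum_y = points.reduce(lambda a, b: (a[0] + b[0], a[1] + b[1]))
mean_x, mean_y = sum_x / n, sum_y / n

# stage 2: broadcast the means, compute the partial products on the workers, aggregate again
bx, by = sc.broadcast(mean_x), sc.broadcast(mean_y)
cov = points.map(lambda p: (p[0] - bx.value) * (p[1] - by.value)).reduce(lambda a, b: a + b) / n
print("covariance:", cov)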
Hope this helps. |
H: Neural networks - adjusting weights
My question would be about backpropagation and understanding the terms feedforward NN vs backpropagation. I have two questions really:
If I understand correctly, even a feedforward network updates its weights (via a delta rule for example). Isn't this also backpropagation? You have some random weights, run the data through the network, then cross-validate it with the desired output, then update the rules. This is backpropagation, right? So what's the difference between FFW NN and RNN? If you can't backpropagate on the other hand, how do you update the weights in a FFW NN?
I've seen NN architectures looking like this:
Basically all the neurons are being fed the same data, right? OK, the weights are randomized and thus different in the beginning, but how do you make sure Temp. Value #1 will be different from Temp. Value #2, if you use the same update rule?
Thank you!
AI: I'm not an expert on the backpropagation algorithm, however I can explain something. Every neural network can update its weights. It may do this in different ways, but it can. This is called backpropagation, regardless of the network architecture.
A feed forward network is a regular network, as seen in your picture. A value is received by a neuron, then passed on to the next one.
A recurrent neural network is almost the same as a FFN, the difference being that the RNN has some connections that point 'backwards'. E.g. a neuron is connected to a neuron that has already done its 'job' during backpropagation. Because of this, the activations of the previous output have an effect on the new output.
On question #2
Interesting question. This has to do with weight initialization. Yes, you're right, each neuron in the hidden layer accepts the same connections. However, during the initialization process, they have received a random weight. Depending on your NN library, the neurons might also have been initialized with a random bias.
So even though the same rule is applied, each neuron has a different outcome, as all its connections have different weights than the other neurons' weights.
On your comment: just because all the neurons happen to have the same backpropagation function, doesn't mean they will end up with the same weights.
As they are initialized with random weights, each neuron's error is different. Thus they have a different gradient, and will get new weights.
You also have to keep in mind that for a certain output to be reached, there are multiple solutions (due to non-linearity). So due to initialized random weights, one neuron might be close to a certain solution while another neuron is closer to the other.
Additionally, as was stated in the comments, a network works as a whole. The output neuron is also non-linear, and for most test cases, the output should be non-linear and the output neuron most likely requires that the hidden neurons activate at different input values. |
H: Can I create a word cloud of crowdfunding donors using word cloud?
I have a table like this:
FirstName SecondName Amount
Lorenzo Perone 100
Mario Rossi 25
... ... ...
I'd like to create a "word cloud" using "Amount" as weight, is it possible using the "word cloud" tool?
Thanks.
AI: I will give you a simple solution using R which requires the wordcloud package. Of course there are many other solutions which do not require any programming skills.
The solution is a slight variant of this R-Bloggers tutorial. Feel free to have a look there for further formatting.
library(wordcloud)
words = c('Paolo Gentiloni', 'Matteo Renzi',
'Enrico Letta', 'Mario Monti',
'Silvio Berlusconi', 'Romano Prodi')
freq = c(100, 25, 50, 70, 95, 20)
wordcloud(words = words, freq = freq, min.freq = 1,
max.words=200, random.order=FALSE, rot.per=0.35,
colors=brewer.pal(8, "Dark2"))
It produces the following output:
Btw. @Lorenzo Perone: Were the ones listed Italian names? I was not sure about that. |
H: How to train model to predict events 30 minutes prior, from multi-dimensionnal timeseries
Experts in my field are capable of predicting the likelihood of an event (binary spike in yellow) 30 minutes before it occurs. The frequency here is 1 sec, this view represents a few hours worth of data, and I have circled in black where the "malicious" pattern should be.
Interactions between the dimensions exist, therefore dimensions cannot be studied individually (or can they?)
I'm trying to build a supervised ML model using Scikit Learn which learns a normal rhythm, and detects when symptoms might lead to a spike. I am lost for which direction to take. I have tried anomaly detection, but it only works for on-the-spot detection, not prior.
How could I detect "malicious" patterns prior to those events (taking them as target variables) ?
I welcome any advice on which algorithms or data processing pipeline might help, thank you :)
AI: This is a fun problem. This is a time series and from this time series you want to identify the trigger of a certain event. So it is a binary classification problem. Based on the information from the specified window will a spike occur? Yes or No.
The first step is to set up your database. What you will have is a set of instances (which can have some overlap but to avoid bias it is best for them to be independently drawn) and then for each instance a human needs to label if there was a spike or if there was not a spike.
Then you need to identify the time window you want to use for your time series analysis. You have done this and decided 30 minutes is a good start.
Now, you have 6 waveforms in a 30 minute window from which you can extract data to get information about your classification. You can use the raw data samples as your features, but this is WAY TOO many features and will lead to poor results. Thus you need some feature extraction, dimensionality reduction, techniques.
There are a million ways you can extract data from these waveforms. First, ask yourself, as a human what are the telltale signs that these other waveforms should have which would mean a spike would arise. For example, in seismic data, if you see agitation in a waveform from a neighboring town then you should expect to see agitation in your town soon.
In general, I like to extract all the basic statistics from my waveforms. Get the mean, standard deviation, fluctuation index, etc. Get whatever you think might help. Check how these statistics correlate with your labels. The more correlation the better they might be. Then there are some very good techniques for extracting time and frequency information from your time-series. Look into envelope mode decomposition and empirical mode decomposition. I have used empirical mode decomposition successfully on some time series data and obtained far better results than I expected.
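For instance, a minimal sketch of the basic-statistics idea (the array layout, the toy data and the particular statistics are assumptions; 30 minutes at 1 Hz gives 1800 samples per waveform):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# toy stand-in: 500 labelled instances, 6 waveforms, 30 minutes at 1 Hz = 1800 samples each
windows = rng.normal(size=(500, 6, 1800))
labels = rng.integers(0, 2, size=500)        # 1 if a spike followed the window, else 0

def basic_stats(w):
    feats = [w.mean(axis=2), w.std(axis=2), w.min(axis=2), w.max(axis=2),
             np.abs(np.diff(w, axis=2)).mean(axis=2)]   # last one: a crude "fluctuation" measure
    return np.concatenate(feats, axis=1)                # shape (n_instances, 6 waveforms * 5 stats)

X = basic_stats(windows)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, labels)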
Now even though you have your reduced feature space you can do better! You can apply some dimensionality reduction techniques such as PCA or LDA to get a lower dimensional space which may better represent your data. This might help, no guarantees.
Now you have a small dataset with instances that are a Frankenstein concoction which represents your 6 waveforms across the 30 minute window. Now you are all set to select your classifier. You will want a binary classification algorithm, luckily that is the most common. There are many to choose from. How to choose?
How many instances do you have?
$\# instances > 100* \#features$?
Then you are all set to use a deep learning technique such as neural networks, 1D convolutional neural networks, stacked autoencoders, etc...
Less than that!!!!
Then you should stick with shallow methods. Check out kernel support vector machines, random forests, k-nearest neighbors, etc.
Common misconception: A shallow method CAN and WILL perform better than a deep learning technique if you have properly selected your features. Feature extraction is the most important aspect of a machine learning architecture.
I want to use anomaly detection!
This would also work and there are some good techniques that would do this. However, the nature of anomaly detection is to learn the distribution of the nominal case. So you would feed your algorithm all the instances in your dataset that did not result in a spike. Then from this your algorithm would be able to identify when a novel instance is significantly different from this nominal distribution and it will flag it as an anomaly. This would mean that a spike will occur in your context.
Check out:
Learning Minimum Volume Sets
Anomaly Detection with Score functions based on Nearest Neighbor Graphs
New statistic in P-value estimation for anomaly detection
You can also use more rudimentary anomaly detection techniques such as a generalized likelihood ratio test. But, this is kind of old-school. |
H: Simple Time Series Prediction
I have a data set like this. Here the first column is the date, the second column is temperature, the third one is humidity, and the fourth and fifth columns are two other boolean values. I have 6 years of data like this.
2010-01-01,25.6,59,0,1
2010-01-02,25.6,60,0,1
2010-01-03,24.2,45,1,1
2010-01-04,26.3,20,0,1
2010-01-05,26.2,17,0,1
2010-01-06,24.3,65,0,0
2010-01-07,23.1,50,0,1
2010-01-08,26.3,25,1,0
2010-01-09,26.6,23,0,1
2010-01-10,24.3,60,0,1
And the label (The variable I want to predict) for this data set is: (boolean)
0,0,0,1,1,0,0,0,1,0
Now I want to implement Machine Learning to predict from this data set for a future time frame. I have almost no knowledge on machine learning. I want to use python to do it. Which library or methodology will be best and easiest for this? And can I have a simple sample code?
AI: Your problem looks to me more like a classification problem than a time series problem. My suggestion: split the date into several sub-variables (year, month, day, and maybe day-of-week). Then just use these and the other values as input for a classification algorithm. Ideally you try several. I can recommend sklearn for this (http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html)
One piece of advice: depending on the classification algorithm, you need to first normalize your data. I find the following blog useful for entry-level problems (with code examples -- if you want to approach this from a time-series perspective, he also talks about this) and documentation: http://machinelearningmastery.com/blog/.
Hope that helps. |
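A minimal sketch of this suggestion on the ten sample rows from the question (the feature column names and the choice of random forest are arbitrary, and with the real 6 years of data you would of course also hold out a test set):

from io import StringIO
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rows = """2010-01-01,25.6,59,0,1
2010-01-02,25.6,60,0,1
2010-01-03,24.2,45,1,1
2010-01-04,26.3,20,0,1
2010-01-05,26.2,17,0,1
2010-01-06,24.3,65,0,0
2010-01-07,23.1,50,0,1
2010-01-08,26.3,25,1,0
2010-01-09,26.6,23,0,1
2010-01-10,24.3,60,0,1"""
df = pd.read_csv(StringIO(rows), header=None,
                 names=["date", "temperature", "humidity", "flag1", "flag2"], parse_dates=["date"])
df["label"] = [0, 0, 0, 1, 1, 0, 0, 0, 1, 0]          # the labels given in the question

# split the date into sub-variables, as suggested above
df["year"], df["month"], df["day"] = df["date"].dt.year, df["date"].dt.month, df["date"].dt.day
df["dayofweek"] = df["date"].dt.dayofweek

X = df.drop(columns=["date", "label"])
y = df["label"]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)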
H: Creating an Artifical Neural Network that produces a set of possible outputs
I'm new to machine learning, and was working on creating an ANN which would classify each observation to a certain value. I have worked primarily with the sigmoid function up to this point to get the probability of an observation (true/false or binary output).
In this instance, I want each observation to be classified to one of 5 values: 0, 1, 2, 3, 4. I am using the sklearn library's StandardScaler function to scale the input data. What would be recommended in terms of:
Best way to scale the output data, and
Appropriate activation function to use for the output layer.
Thanks!
AI: You don't need to scale the output data. For classification with an ANN, the best activation is the softmax function:
$$f(x)_i = \frac{e^{x_i}}{\sum_j e^{x_j}}$$
Which normalizes an input vector by applying the exponential function element wise, then dividing by the sum. This produces a discrete probability distribution.
Then your output layer should have 5 output neurons, apply softmax, and you will get a discrete probability distribution over the set [0, 1, 2, 3, 4]. Then you don't need to normalize the data, just do a one-hot encoding. Note that this might differ depending on the framework used. |
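A minimal Keras sketch of such an output layer, assuming a TF2/Keras setup (the hidden-layer size, the toy data and the training settings are arbitrary):

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.utils import to_categorical

# toy stand-ins: 1000 scaled observations with 10 features and integer targets in {0,...,4}
rng = np.random.default_rng(0)
X_scaled = rng.normal(size=(1000, 10))
y = rng.integers(0, 5, size=1000)
y_onehot = to_categorical(y, num_classes=5)          # one-hot encode the targets, as suggested above

model = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(X_scaled.shape[1],)),
    layers.Dense(5, activation="softmax"),           # 5 output neurons + softmax
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X_scaled, y_onehot, epochs=10, batch_size=32, verbose=0)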
H: Does the SVM require lots of features most of the time?
So I know about the curse of dimensionality (too many features, too little data).
Say I have a 3000 sample dataset; would 3 features be too few?
AI: So I'll post an answer to my own question. For anyone who comes across this post during the feature selection / "more features or fewer" process, I don't know what you can do (well, except if you're on Python, then mork's answer has a good way to do feature selection there), but I can tell you what NOT to do.
Do not under any circumstances ever "try" determining the best features by training+testing the SVM / statistical model - that is, concluding "oh, this feature works" because it gives more classification accuracy than another one. NO. Not unless that is the only way left; don't do it. That is a way, you are free to do it, but if you can try something else, please do. Don't listen to anyone who tells you to do that.
How many features your problem requires depends on how many optimal features you can find. I'll leave it at that. How to find them? That is the million dollar question.
Edit:
People are getting confused. When you don't know about the accuracy of your features, it is bad practice to "train" on the data to see how many features your SVM needs. For that, it is better to select features on the basis of some criteria set by your problem. If you want, after that you may try feature selection techniques. But remember, reducing too many dimensions may also decrease accuracy sometimes. |
H: Using random forest to select important variables & then putting into logistic regression?
I was wondering, does it make sense to use a random forest to select the most important variables and then put them into a logistic regression for prediction? I think that it might not make sense, because what's important for the random forest might not be important for the logistic regression.
AI: There are many factors underlying the 'importances' of features obtained from a random forest.
For instance, features with a larger number of categories (unique values, if it is a numerical feature) would be more likely to find splits, making them appear to be more important features.
Having all features with the same number of categories is not a common scenario.
So, even though the features identified are likely to be the best predictors (in logistic regressions etc.), one needs to take the inherent biases in the random forest algorithm into consideration while utilizing the 'important' features. Therefore, whether it would make sense to use the important features from the random forest model in a logistic regression would depend on whether, and by how much, the importances are biased.
Source for the information and further useful details on the biases in estimation of feature importances by the random forest algorithm: Bias in random forest variable importance measures: Illustrations, sources and a solution |
H: How can I identify the most predictive factors?
I've been playing around with bagged trees and random forests. How can I tell what factors most influenced the categorization? Will scikitlearn just spit it out, or is it trickier than that?
AI: It basically lies within the fitted object.
from sklearn.ensemble import RandomForestClassifier  # bagged trees / random forests expose this attribute
model = RandomForestClassifier().fit(X, y)
importances = model.feature_importances_
The importances are in the same order as the columns of features ($X$). |
H: Is there a way to measure the "sharpness" of a decision boundary of a CNN?
It is commonly seen as something bad if the decision boundary of a neural network is too sharp, meaning if slight changes in the input change the class prediction completely.
Given a trained CNN, is it possible to measure / calculate the "sharpness" of its decision boundaries? Did somebody do that already?
AI: You might enjoy looking into the literature on "adversarial examples". Given an instance $x$ with a label $y$, an adversarial example is a (typically carefully constructed) instance $x'$ that is very similar to $x$, but whose label differs from $y$. The research literature suggests that it is often possible to find adversarial examples that are very close to the original $x$. You could use the distance $d(x,x')$ as a measure of sharpness of the decision boundary near $x$; or you could average this over many $x$ to get a global measure of sharpness.
There are many methods for finding adversarial examples. A standard simple one is the gradient sign method, originally described in the following paper:
Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy. Explaining and harnessing adversarial examples. arXiv:1412.6572, 2014.
Since then there have been numerous improvements that find even closer adversarial examples, e.g., iterative gradient sign (arxiv:1607.02533), Deepfool (arxiv:1511.04599), and others. You might also be interested in Cleverhans, a software library to assist with finding adversarial examples. |
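As a rough illustration of using the gradient sign method to probe the boundary near a single input, here is a sketch assuming a trained TF2/Keras classifier called model, an input x and its integer label y (none of which come from the question, and the epsilon grid is an arbitrary choice):

import numpy as np
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def sharpness_at(model, x, y, eps_grid=np.linspace(0.001, 0.1, 50)):
    x = tf.convert_to_tensor(x[None, ...], dtype=tf.float32)   # add a batch dimension
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn([y], model(x))
    direction = tf.sign(tape.gradient(loss, x))                # gradient-sign direction
    for eps in eps_grid:
        x_adv = x + eps * direction
        if int(tf.argmax(model(x_adv), axis=1)[0]) != y:
            return eps     # smallest step (in the L-infinity sense) that flips the prediction
    return None            # the boundary is farther away than the grid explored

Averaging this distance over many inputs gives one possible global sharpness score (smaller means sharper); the more careful attacks from the papers above would give tighter estimates.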
H: Machine learning learn to work well on future data distribution?
This is based on my limited machine learning scope and experience, so correct me if I'm wrong. Many of the currently used machine learning models (SVMs, boosted trees, DNNs) work under the assumption that the training, validation and test data sets share the same distribution. They can work to some extent if the distributions differ, but not by a lot. Here "can work" means that they work sub-optimally (i.e. they could work better if the distributions were the same), not that the theory behind them is supposed to deal with the distribution difference and can handle it like "nailing it".
Hence my question: is there work on predicting based on the assumption that the data sets are actually moving through a series of distribution changes? A crazy thought would be to observe the distribution difference between training and validation sets, and assume that the same diff will exist between validation and test sets and learn to predict well on test set. This will work great on time series where the nature of the data might change over time.
AI: It has been studied under various names like Domain Adaptation, Sample Selection Bias, and Covariate Shift.
Please go through this survey paper on Transfer Learning. It covers all the possible combinations, like:
1) The same distribution for train and test data
2) A gradual change between the train and test distributions
3) Different but related distributions for train and test
It'll also give you all the necessary resources required to study further on this topic. |
H: how to do the imputation for categorical feature with a missing rate?
I have a dataset containing a categorical feature with a missing rate of 95%. What value can replace the missing cells? Or should I drop this feature?
AI: You can turn it into a one-hot encoded feature with an added class of 'Missing', depending on the cardinality (how many categories are there). If the cardinality is too high, you will need to use other techniques for high cardinality features but you can still have 'Missing' as an additional category. |
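A tiny pandas illustration of that idea ('cat_feature' is a hypothetical column name):

import numpy as np
import pandas as pd

df = pd.DataFrame({"cat_feature": ["a", np.nan, np.nan, "b", np.nan]})
df["cat_feature"] = df["cat_feature"].fillna("Missing")           # missingness becomes its own class
encoded = pd.get_dummies(df["cat_feature"], prefix="cat_feature")
df = pd.concat([df.drop(columns="cat_feature"), encoded], axis=1)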
H: Same TF-IDF Vectorizer for 2 data inputs
I am trying to work on the dataset released by Quora, to identify if Question1 has a similar intent to Question2.
The dataset looks like:
id|question1|question2|is_duplicate
0|What is the step by step guide to invest in share market in india|What is the step by step guide to invest in share market?|0
I am trying to refer to Abhishek Thakur's feature to get started.
It says:
As per my understanding the python code for sklearn would be:
from sklearn.feature_extraction.text import TfidfVectorizer
import pandas as pd
tfidf_vectorizer = TfidfVectorizer()
data['tf_idf_q1'] = tfidf_vectorizer.fit_transform(data.question1)
data['tf_idf_q2'] = tfidf_vectorizer.fit_transform(data.question2)
data['tf_idf_q1'] and data['tf_idf_q2'] will refer to 2 models, one for each question, as in the 1st part of the image.
I am not sure how I would achieve the second part. Do I fit_transform the vectorizer with the first question and then transform the second question?
Or do I merge 2 questions and then get a vectorizer?
Something like below:
merged_questions = pd.DataFrame(data['question1'].map(str) + data['question2'].map(str))
data['tf_idf_q1_q2'] = tfidf_vectorizer.fit_transform(merged_questions)
Any inputs are greatly appreciated.
AI: You can use something like this
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from scipy.sparse import hstack
import pandas as pd

# data is a DataFrame with columns 'id', 'is_identical', 'q1', 'q2'
tfidf_q1 = TfidfVectorizer()
tfidf_q2 = TfidfVectorizer()
tf_idf_q1 = tfidf_q1.fit_transform(data['q1'])   # a separate tf-idf model per question column
tf_idf_q2 = tfidf_q2.fit_transform(data['q2'])

X = hstack([tf_idf_q1, tf_idf_q2])               # combine the two sparse feature blocks
Y = data['is_identical'].values

model = LogisticRegression()
model.fit(X, Y)
Combined model - here you actually learn the transformation for all questions together. Then transform each question separately to create features for your model training.
from sklearn.feature_extraction.text import TfidfVectorizer
import pandas as pd
tfidf_vectorizer = TfidfVectorizer()
raw_data = pd.DataFrame(*raw_data, columns = ['id', 'is_identical', 'q1', 'q2'])
tf_train_data = pd.concat([data['q1'], data['q2']])
trained_tf_idf_transformer = tfidf_vectorizer.fit_transform(tf_train_data)
data['tf_idf_q1'] = trained_tf_idf_transformer.transform(data['q1'])
data['tf_idf_q2'] = trained_tf_idf_transformer.transform(data['q2'])
data_for_model = data[['tf_idf_q1', 'tf_idf_q2', 'is_identical']]
X = data_for_model[['tf_idf_q1', 'tf_idf_q2']].as_matrix()
Y = data_for_model['is_identica'l].as_matrix()
model = Sklearn.LogisticRegression()
model.fit(X, Y) |
H: What is the difference between LeakyReLU and PReLU?
I thought both, PReLU and Leaky ReLU are
$$f(x) = \max(x, \alpha x) \qquad \text{ with } \alpha \in (0, 1)$$
Keras, however, has both functions in the docs.
Leaky ReLU
Source of LeakyReLU:
return K.relu(inputs, alpha=self.alpha)
Hence (see relu code)
$$f_1(x) = \max(0, x) - \alpha \max(0, -x)$$
PReLU
Source of PReLU:
def call(self, inputs, mask=None):
pos = K.relu(inputs)
if K.backend() == 'theano':
neg = (K.pattern_broadcast(self.alpha, self.param_broadcast) *
(inputs - K.abs(inputs)) * 0.5)
else:
neg = -self.alpha * K.relu(-inputs)
return pos + neg
Hence
$$f_2(x) = \max(0, x) - \alpha \max(0, -x)$$
Question
Did I get something wrong? Aren't $f_1$ and $f_2$ equivalent to $f$ (assuming $\alpha \in (0, 1)$)?
AI: Straight from wikipedia:
Leaky ReLUs allow a small, non-zero gradient when the unit is not active.
Parametric ReLUs take this idea further by making the coefficient of leakage into a parameter that is learned along with the other neural network parameters.
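To make the practical difference concrete, here is a minimal Keras sketch (in older Keras versions these layers live in keras.layers.advanced_activations): with LeakyReLU you fix alpha yourself, while PReLU adds alpha to the trainable parameters.
from keras.models import Sequential
from keras.layers import Dense, LeakyReLU, PReLU

model = Sequential()
model.add(Dense(64, input_dim=20))
model.add(LeakyReLU(alpha=0.3))   # alpha is a fixed hyperparameter you choose
model.add(Dense(64))
model.add(PReLU())                # alpha is a weight learned by backprop
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()  # the PReLU layer reports extra trainable parameters; LeakyReLU does not
So the formulas are indeed the same; the difference is only whether $\alpha$ is a constant or a learned parameter. |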
H: Does the input data representation matter while training CNN for speech recognition?
I am currently doing pattern recognition on spectrograms of audio files using convolutional neural networks.
The spectrograms are made using matplotlib's cm.jet colormap. The problem with this colormap is that it auto-ranges its colors based on the min and max values of the input it is given.
so an example:
Spectrograms of two different versions of the audio file: one with static filter outputs and the other with delta filter outputs.
The RGB values show no difference in range between the delta and static versions, but the dB scale shows a big difference.
My input consists of one column of the static features and one column of the delta features, i.e. a matrix of shape (40, 2, 3), but since these ranges are very different I suspect that this would not work very well.
Am I right or wrong?
AI: Data representation does matter because this is all the information that you pass to a learning algorithm.
It is normal for static and delta (or delta-delta) features to have different ranges (I have worked with MFCCs). They represent different information.
Static features can be small yet change rapidly, making the deltas large, or vice versa. The blue regions in the first spectrogram (low magnitude) become red in the second (high magnitude).
As long as all the input are processed in the same manner (static followed by delta followed by delta-delta or any order), it won't be a problem. |
H: using classification when there is no dataset + guidance
I am working on my final year project, which is a social network. Based on user interest, I have to add each user to groups based on the area they live in, age group, interest type, gender and some other features. I have to use machine learning to predict which group I should place a user in. I am thinking of using classification, but I need a dataset to train on, which I don't have at all, nor is there any dataset related to this problem. Could any of you guide me in the right direction on how to do this?
AI: As far as I know you shouldn't use ML at this stage. There are two problems:
You cannot get enough data related to your task.
Even if you somehow manage to get some data, it will be either too general or from some other domain, so relying on the results would be difficult.
Instead, you can write rule-based or weighted-score solutions, e.g. rules that decide which age group should be part of which type of group. Regular expressions can extract keywords from interest tags and other text features, and these can be used to decide the probability of a person falling into a group.
Once you have collected enough labelled data this way, you can start trying ML algorithms on it and they will give results. |
H: Anomaly detection method selection
I need to decide between SVM (One-Class Support Vector Machine) and PCA (PCA-Based Anomaly Detection) as anomaly detection methods. Azure ML is used and provides SVM and PCA as methods - hence the choice of 2 possible methods.
Does anyone have suggestions or a defined process for method selection? (Similar to cheat sheets you get for selecting a regression method).
The use case is to detect anomalies in high-frequency network traffic data (from firewalls, routers & switches).
AI: Without putting in the time to look through Azure's documentation, my guess is that their PCA method is really just a way to do feature reduction, then use some algorithm they have to classify. The best thing to do is to try both methods, cross-validate, and compare performance. https://gallery.azure.ai/Experiment/1219e87f8fb84e88a2e1b54256808bb3
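If you want to prototype the comparison outside Azure first, a rough scikit-learn sketch (with synthetic placeholders for your traffic features) might look like the following; the PCA variant scores points by reconstruction error from the top components, which is the usual idea behind PCA-based anomaly detection.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X_train = rng.normal(size=(1000, 10))   # stand-in for "normal" traffic features
X_test = rng.normal(size=(100, 10))

# One-class SVM: predict() returns -1 for anomalies and 1 for normal points
ocsvm = OneClassSVM(nu=0.05, kernel='rbf', gamma='auto').fit(X_train)
svm_flags = ocsvm.predict(X_test)

# PCA-based: a large reconstruction error means the point is poorly explained
# by the structure of normal traffic
pca = PCA(n_components=5).fit(X_train)
reconstructed = pca.inverse_transform(pca.transform(X_test))
recon_error = np.sum((X_test - reconstructed) ** 2, axis=1)
pca_flags = recon_error > np.percentile(recon_error, 95)  # threshold is a design choice
Evaluating both with the same labelled validation slice (if you have any labelled anomalies at all) is the most direct way to pick between them. |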
H: Why do predicted probabilities from this binary classifier not sum up to 1?
I have a C5.0 model that is trained to predict a binary class (C1/C2) on a dataset with 20 features. The model is configured to perform boosting (10 trials) and it has a misclassification cost function (100:1, where 100 is the cost for misclassifying a negative sample as positive and 1 is the cost for the opposite error).
Looking at predicted probabilities generated by the model, I can see that it ranges from 0 to 1 for each class. i.e, I have instances where the predicted class (C1) has a probability lower than 0.5 ( for example: predicted class=C1 and predicted probability=0.1 ) - This is where the question arises: if P(C1) < 50% why is it classified as C1 (there are only two classes, C1 and C2)
Based on my understanding of decision trees, the predicted probabilities are often generated based on the percentage of test cases on the leaf node that were labeled in each class divided by the total number of instances hitting that leaf node. This method will dictate the probability for two classes must sum up to 1.
My question is: why does the model classify an instance as class C1 if it only has 0.1 confidence in it? Does a predicted probability of 0.1 for class C1 mean that there is 0.9 confidence in it belonging to class C2? If so, why does it classify an instance as class C1 if it has less than 0.5 confidence in it?
My own theory is that this might be due to boosting and the misclassification cost, and the way they influence the predicted class.
AI: You are absolutely right to be skeptical about the results you are getting. I've been using boosted trees (not necessarily C5.0) for over 5 years, and I've never encountered results like what you've described here. Excluding the possibility that you simply had a typo in your code, I can think of two reasons:
Low decision threshold: As is often the case in class-imbalance problems (e.g., fraud detection, marketing response, bankruptcy, etc.), a low probability could still indicate the minority class if the decision threshold is also low. In your case, if the model automatically decided to pick a threshold of 0.05, then 0.1 would be flagged as C1.
Cost function: This is what you mentioned -- if your model has a high cost associated with missing true positives then 0.1 could be classified as C1. |
H: Unstable accuracy of CNN - When should I stop training?
I'm using caffenet for fine-tuning.
I'm doing cross validation (15 vs. all) with a very small data set of about 250 images. I'm testing every 10 iterations (~2 epochs). My batch size is 50. With some sets I'm getting very unstable accuracy - it can jump from 70% to 90%, back to 70%, and so on back and forth. My question is: let's say I hit 90% accuracy after 40 iterations (~8 epochs) - does this mean that the net has reached an optimal state, or could it be that it just had a lucky guess on the validation set? My final question is: should I stop training and save the net? Thanks.
AI: If accuracy regresses something is wrong in either the network, or (more likely here) the meta-parameters (probably learning rate.)
It can be difficult to tell when a model converges. I'd recommend looking at diagnostic graphs (typically training loss, training/validation accuracy, and ratio of weights:updates) over epochs. Typically convergence is considered when loss and accuracy level out and show diminishing returns beyond some threshold (your tolerance for 1.0e-x% improvements.) So, stop training/validation when it's improving less than what you care about. |
H: Regression yields much smaller standard deviation and the mean is off, what could be wrong?
I'm modeling a regression problem. An initial attempt yields the following:
labels.mean(): 0.00018132978443886167
labels.std(): 0.013450786078937208
predictions.mean(): 0.0005549060297198594
predictions.std(): 0.00430255476385355
As you can see, the mean is off, and the standard deviation is totally different. I wonder what does it indicate?
My guess: does it mean that my features are not discriminative enough, so that the model sees examples with positive and negative labels alike, hence the small variance in the output?
I'm running the regression using XGBRegressor, with early-stopping. I have 1M training examples, 100K validation examples (for early-stopping), and another 100K for testing purpose (for which the mean and the standard deviation are shown above).
I also checked that the label distributions of the three sets are basically the same.
AI: The difference in standard deviation is nothing suspicious. It is only to be expected, if you have a weak correlation.
Suppose the regression is y ~ x, i.e., y = ax + b. Suppose that x only explains a small fraction of the variability in y, i.e., y is scattered all over the place and the least-squares regression line only weakly fits the data. Suppose also that the line is nearly horizontal (i.e., a is small). Then the standard deviation of the predicted y-values will be small (since the line is nearly horizontal) but the standard deviation of the actual y-values might be large.
But really, the way to figure out what is going on is to visualize the data. You should always start by visualizing the data. Plot a scatterplot, and superimpose the least-squares fit line on top of it. I bet you'll immediately have a better sense of what might be going on.
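A minimal sketch of that visualization in Python (matplotlib/numpy), with synthetic data standing in for one of your features and the label, could be:
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.RandomState(0)
x = rng.normal(size=500)                               # stand-in for one feature
labels = 0.01 * x + rng.normal(scale=0.013, size=500)  # weak relationship, like the question

slope, intercept = np.polyfit(x, labels, 1)  # least-squares fit
plt.scatter(x, labels, s=5, alpha=0.5)
plt.plot(np.sort(x), slope * np.sort(x) + intercept, color='red')
plt.xlabel('feature'); plt.ylabel('label')
plt.show()
If the fitted line is nearly flat while the points scatter widely around it, you will see exactly the pattern in your numbers: predictions with a much smaller standard deviation than the labels. |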
H: What tools do you use to clean corrupt data?
Customers often send corrupt data for analysis. I spend a lot of time cleaning the data or waiting for a correct dataset.
Can you recommend a tool that can handle the most common corruptions (e.g. incorrectly set quotes)?
AI: Weka has built-in preprocessing techniques.
You may also want to check the tool named dataPreparator, as it provides a variety of techniques for data cleaning, transformation, and exploration.
It supports chaining preprocessing operators into a flow graph (operator tree) and can handle large volumes of data (since data sets are not stored in the computer's memory), all through a user-friendly graphical user interface. |
H: Number of outputs exceeds the number of classes in the training set
I have to build a classifier that assigns samples to one of thirteen classes, but the training dataset I have contains only 10 classes (the dataset is not balanced and some classes do not have any samples).
Is it right to build a Neural network classifier that has thirteen outputs although I don't have thirteen classes in my training set?
Would it affect the accuracy to have more classes in the output than there are classes in the training set?
I was considering to put the full number of classes just in case in the future I can retrain the model again with better dataset so I don't need to change the code of the classifier.
Thank you,
AI: In theory, this should not do any harm to the accuracy of the trained network on the data that you have. However, of course the network will have no ability to predict the three classes it has not been shown. If it is well trained, it should predict close to zero probability for all three unseen classes.
It might be a reasonable structure to have all thirteen classes defined for a case where you continue to train online, as data arrives - although the network might have unknown problems adjusting to new classes when they appear, you will have no measure of its ability to do this. Also, if your goal is to produce a model for someone else's use and then leave it with some simple notes on how to re-train it, then it could make sense to have known future requirements already coded.
However, if you intend to re-train from scratch with new data, or if you will be building any new model when there is new data, then it is not strictly necessary to add unseen classes in advance. Depending on training time for your network, you might view any hyper-parameter in the code as something you can adjust quickly. If you write the code with a constant defining the number of classes, and refer that in all code that needs to know this number, then it should only take seconds to adapt your model - that's no time at all compared to time spent on other parts of the problem. |
H: How to use an existing model as in input into a new model
We have a click model which is currently being used for search ranking in production, and I want to create a new model which takes the old model's click probability as one input and adds some other variables too. The problem is that the training data will be positionally biased by the fact that the probability of a click is correlated with the old model's prediction.
My plan is to introduce a penalty factor on the original model's prediction to ensure that it doesn't dominate the new model (eyeballing the results to decide on an appropriate penalty factor). Is this approach valid or would there be a better way to approach this?
Note that I don't want to rebuild the old model with the new variables because
The existing model takes a long time (days) to build
The new model and old model will be deployed separately, i.e. the old model will be scored offline/batch whereas the new model will be scored real-time
AI: The problem is that training data will positionally biased by the fact that the probability of a click is correlated to the old model's prediction.
It should be the case that many of your input variables are correlated in some way with the output, otherwise your model could not work. The main difference here is you are expecting a strong correlation from a single feature. This is not a problem - you could think of it as a complex form of feature engineering.
You are essentially stacking the old model with some new variables which you hope are predictive. You should probably in this case include all the existing/old variables so that the new model can more easily spot mistakes made by the old model.
My plan is to introduce a penalty factor on the original model's prediction to ensure that it doesn't dominate the new model
I doubt this would be useful. However the correct way to assess this plan is to try it and measure the performance compared to the simpler version without any penalty. |
H: Any Machine Learning algorithm to know a yes/no answer?
Is there any machine learning algorithm that can extract the meaning of sentences? Specifically, I have sentences like "we do not allow managers to trade derivatives", "We do not have policy on hedging", "our policy do not permit trading on stock", "trading is not allowed"... The actual sentences are longer. Is there an algorithm with which I can programmatically determine whether trading on stock is allowed or not? Thank you for your help!
AI: You could do sentiment analysis for this task. Since the output is binary (yes/no) and there are obviously key words that will let you know whether a stock is available or not.
You could get very good results using an LSTM for the sentences that feeds into a final logistic regression layer to answer whether the answer is yes "1" or no "0". So you'd end up with an output probability of the question being answered "yes" or "no".
This guy did the exact same model (with a different dataset basically) using theano, I highly recommend looking at this tutorial (code is provided as well):
http://deeplearning.net/tutorial/lstm.html
All you need to do is build a dataset and get your hands a little dirty with python & theano.
If you want to go deeper, look into word embeddings.
Alternatively, if that is too deep, you could just use a Naive Bayes algorithm. https://en.wikipedia.org/wiki/Naive_Bayes_spam_filtering
It gets good results on spam detection and might well be able to help with your task too!
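A small sketch of the Naive Bayes route in scikit-learn (the example sentences and labels below are made up purely for illustration):
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = [
    "we do not allow managers to trade derivatives",
    "our policy does not permit trading on stock",
    "employees may trade company stock freely",
    "trading on stock is allowed for all staff",
]
labels = [0, 0, 1, 1]  # 0 = not allowed, 1 = allowed

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(sentences, labels)
print(clf.predict(["trading is not allowed"]))  # predicted label for a new policy sentence
Using word bigrams helps a bag-of-words model pick up simple negations like "not allowed"; with a realistic amount of labelled sentences this baseline is worth trying before an LSTM. |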
H: Feature selection in R too large dataset
I'm doing credit risk modelling and the data have a large number of features. I am using the Boruta package for feature selection. The package is too computationally expensive; I cannot run it on the complete training dataset. What I'm trying to do is take a subset of the training data (say about 20-30%), run the Boruta package on that subset, and get the important features. But when I use a random forest to train the model, I have to use the full dataset. My question is: is it right to select features on only a part of the training data and then build the model on the whole training data?
AI: Technically, it will be alright to use a sample of the data. One important assumption is that most of the features don't have outlier. In such case your sampling might miss out on those.
But, I don't see a necessity of sampling with the size of data that you have given. 150mb is not large data at all, given that you have 8 GB ram at disposal. Rather than sampling the data, first check if there is alternate way of selecting the features. Since there are only 18 columns, look at the summaries for each columns, how do the features correlate with your outcome variable, which variables add very little information. If you have not done more exploratory analysis of the 18 variables, I'll suggest you do that first.
Secondly, why not build your random forest directly with 18 features? If you had, say, 180 features, it would have made sense to do automated feature selection first using Boruta etc. If there are features that do not add information, their variable importance in forest model will be low. You can drop them in subsequent analysis |
H: Classify ciphertext vs. plaintext
I'm attempting a rather simple exercise in machine learning and trying to classify samples of text as either plaintext or ciphertext (encrypted).
Here are two samples:
Plaintext: This is a sentence in plaintext which any human person can read
Ciphertext: 5oXbLiEZbMUgOOdYy+q4+rsDaqUngBrrUbpVeuu2ggvP6hHObC4GgTLhq
The specific encryption used doesn't have any special attributes I can use to classify (for example, the ciphertext isn't guaranteed to be significantly longer than plaintext), so the task is all about figuring out what text is indistinguishable from random characters and what text is readable plaintext.
My current heuristic involves counting whitespace and assuming anything with a whitespace ratio above a certain threshold is plaintext, but I'm trying to find a more robust algorithm.
AI: Simply counting the frequency of characters ought to easily distinguish between English language and ciphertext, because they're so obviously different.
You can just count the frequency of characters in a big corpus of English, and a big corpus of ciphertext, and apply a chi-squared test to each to figure out which one matches the counts in a new chunk of text.
Or if you can assume ciphertext has a roughly uniform distribution over characters, that alone lets you construct a good test for whether new text is unlikely to be ciphertext.
I did a short blog post on something similar. https://blog.cloudera.com/blog/2016/09/solving-real-life-mysteries-with-big-data-and-apache-spark/
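A rough pure-Python sketch of the character-frequency idea; the english_reference string is only a stand-in, and in practice you would estimate the reference letter frequencies from a large English corpus.
from string import ascii_lowercase

english_reference = (
    "this is a longer piece of ordinary english text used only to estimate "
    "rough letter frequencies for the reference distribution"
)

def letter_freqs(text):
    letters = [c for c in text.lower() if c in ascii_lowercase]
    n = max(len(letters), 1)
    return {c: letters.count(c) / n for c in ascii_lowercase}

def chi_squared_score(sample, reference_freqs):
    # Lower scores mean the sample's letter counts look more like the reference
    letters = [c for c in sample.lower() if c in ascii_lowercase]
    n = max(len(letters), 1)
    score = 0.0
    for c in ascii_lowercase:
        expected = max(reference_freqs[c] * n, 1e-9)
        observed = letters.count(c)
        score += (observed - expected) ** 2 / expected
    return score

freqs = letter_freqs(english_reference)
print(chi_squared_score("This is a sentence in plaintext which any human person can read", freqs))
print(chi_squared_score("5oXbLiEZbMUgOOdYy+q4+rsDaqUngBrrUbpVeuu2ggvP6hHObC4GgTLhq", freqs))
You then pick a score threshold (calibrated on held-out samples of each kind) instead of relying on the whitespace ratio alone. |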
H: What are the differences between IBM BlueMix and IBM Data Science Experience?
This may seem like a silly question, but as I am going through the documentation for both services it is difficult to disentangle what each does, specifically. From what I've gathered
BlueMix is essentially an all-in-one cloud platform for accessing various IBM Analytics APIs and you can code in various languages, while
Data Science Experience is sort of an RStudio on steroids (it even allows you to use RStudio), where I can process massive data sets using IBM's resources instead of my own, but I can also code things in Python if I want.
For my needs, I will be using several different types of data, including time-series physiological data and natural language text (sounds like I'll likely need Watson). I would like to be able to use TensorFlow for my work as well.
AI: If you are comfortable with advanced coding and familiar with modelling techniques in R and Python, I would recommend using the Data Science Experience, since you will do all the coding yourself.
But if you want ready-to-use models and APIs, and don't want to be bothered tweaking or modifying advanced parameters, Bluemix is a good choice for you :) |
H: Does MLP always find local minimum
In linear regression we use the following cost function, which is a convex function:
$$J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2$$
We use the following cost function in logistic regression
$$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log h_\theta(x^{(i)}) + \left(1-y^{(i)}\right)\log\left(1-h_\theta(x^{(i)})\right)\right]$$
because the preceding cost function is not convex when the hypothesis $h$ is the logistic function. We changed the equation of the cost function to a convex one so that we can find its global minimum (the only minimum that exists). There is a fact that I cannot understand: I have often read that multi-layer perceptron ANNs can get stuck in local minima. Why is that? We have used this cost function for each perceptron and derived the rules for updating the weights in the backpropagation algorithm; so why do we get stuck?
AI: The loss functions are only simple convex functions with respect to the weight parameters (and specific data) when there is a single layer. More exactly, they can proven to be always convex with respect to the weights in the simple models (linear or logistic regression), but not with respect to weights of deeper networks.
You can prove that there must be more than one minimum in a network with 2 or more layers - and thus the loss function cannot be convex - by considering swapping the weights around when you have found a minimum value. Unlike with a single layer network, it is possible to swap the weights around that feed into the hidden layer whilst maintaining the same output. For example, you can swap the weights between input and hidden layer so that values of neuron output 1 and neuron output 2 are reversed. Then you can also swap the weights feeding out of those neurons to the output so that the network still outputs the same value. The network would have a different set of weights, but generate the same outputs, and so this new permutation of weights is also at a minimum for the loss function. It is the "same" network, but the weight matrices are different. It is clear that there must be very many fully equivalent solutions all at the true minimum.
Here's a worked example. If you have a network with 2 inputs, 2 neurons in the hidden layer, and a single output, and you found that the following weight matrices were a minimum:
$W^{(1)} = \begin{bmatrix} -1.5 & 2.0 \\ 1.7 & 0.4 \end{bmatrix}$
$W^{(2)} = \begin{bmatrix} 2.3 & 0.8 \end{bmatrix}$
Then the following matrices provide the same solution (the network outputs the same values for all inputs):
$W^{(1)} = \begin{bmatrix} 1.7 & 0.4 \\ -1.5 & 2.0 \end{bmatrix}$
$W^{(2)} = \begin{bmatrix} 0.8 & 2.3 \end{bmatrix}$
Since we said the first set of 6 parameters was a solution/minimum, the second set of 6 parameters must also be a solution (because it produces the same outputs). The loss function therefore has 2 minima with respect to the weights. In general, for an MLP with one hidden layer containing $n$ neurons, there are $n!$ permutations of the weights that produce identical outputs. That means there are at least $n!$ minima.
Although this does not prove that there are worse local minima, it definitely shows that the loss surface must be much more complex than a simple convex function.
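A quick numerical check of the permutation argument (numpy, using the matrices above): swapping the hidden units' incoming rows and outgoing columns leaves the network output unchanged.
import numpy as np

def forward(x, W1, W2):
    h = np.tanh(W1 @ x)   # any elementwise activation works for this argument
    return W2 @ h

W1 = np.array([[-1.5, 2.0], [1.7, 0.4]])
W2 = np.array([[2.3, 0.8]])

W1_perm = W1[[1, 0], :]   # swap the two hidden neurons' incoming weights
W2_perm = W2[:, [1, 0]]   # and their outgoing weights

x = np.array([0.3, -0.7])
print(forward(x, W1, W2), forward(x, W1_perm, W2_perm))  # identical outputs
Both weight settings sit at the same loss value, so the loss cannot be a convex function of the weights. |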
H: What can i do after a PCA with the results?
After performing a PCA and studying the procedure, I ask myself what the result is good for in the next step. From the PCA I learned how to visualize the dataset by lowering the dimension, I got handy new vectors that describe the members of the population more efficiently, and I learned which original predictors correlate with each other and contribute more than others.
I ask myself questions like: is there more to learn from PCA, is it a good idea to feed the PCA output to a learning algorithm so it performs better, and is it possible to respect nominal or ordinal predictors in the PCA somehow?
AI: Well, PCA, as suggested above by @CarltonBanks, does help you remove the features with the least correlation and mashes the features together such that they have the highest correlation.
To answer your question about how to visualize higher dimensions using PCA:
Transform the feature matrix of your data set down to 2 or 3 components.
This ensures you can represent your dataset in 2 or 3 dimensions. To see the result, just plot this transformed matrix as a 2D or 3D plot respectively.
This helps you visualize higher-dimensional data as a 2D or 3D entity, so while using regression or some predictive modelling technique you can assess the trend of the data.
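A minimal sketch of that visualization step with scikit-learn and matplotlib, using the iris data as a stand-in for your own feature matrix:
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)          # 4-dimensional features
X_2d = PCA(n_components=2).fit_transform(X)

plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y)
plt.xlabel('PC 1'); plt.ylabel('PC 2')
plt.show()
The same idea works with n_components=3 and a 3D scatter plot.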
Should we use PCA in machine learning algorithms more often?
Well, that depends. Using PCA can reduce the accuracy of your model, so it mainly makes sense when you need to save space caused by a lot of weakly correlated features and the loss in overall accuracy doesn't matter much. If your machine learning scenario is similar to this, then it is OK to proceed.
However, the most common use of PCA is, as you asked before, visualizing higher-dimensional data to determine the data's trend and to check which model fits best. |
H: sklearn select N best using classifier
Pretty simple question here, but I just can't seem to find the answer in the normally great documentation for sklearn.
I am working with binary classifiers, but we can just assume I am using LogisticRegression, and I was wondering if there is a general way to have the classifier select, say, only the 10 best (most confident) data points?
For example, say I train a set with 500K data points, and my test set has 10K lines, and out of the 10K, I just want to choose 10 that have the highest chance of being true positives. Does this make sense?
I have read about, and have been playing with, the class_weight attribute, which works well for giving more/less weight to each of the binary outcome classes, but it's not quite what I want in that it always gives a different number of positive predictions, and I can't really tell how sure the classifier is about each one of those.
AI: class_weight is used during model training to train (i.e. fit) a better model (let us call it clf).
Your question is about choosing the most confident predictions.
You just need to predict the probability (for binary classification this will be the probability of positive class)
y_test_predicted_probability = clf.predict_proba(X_test)
and then choose 10 points with the highest y_test_predicted_probability
# some code to do this
import numpy as np

top_picks_indexes = y_test_predicted_probability[:, 1].argsort()[-10:]  # indexes of the 10 highest probabilities for class 1
# create a vector, Y_top_picks, with all zeros except ones for the selected top probabilities
Y_top_picks = np.zeros(len(X_test))
Y_top_picks[top_picks_indexes] = 1 |
H: Number of features vs. number of samples : if small sample size is sufficient, why take large number of samples?
As a newbie, I am a little confused. I have a dataset for binary classification with 11 features and 102 samples. I have seen in most places (e.g., Kaggle competitions) that a dataset may have hundreds of thousands of samples for tens of features. On the other hand, this paper says (at least for the LDA classifier) that the optimal number of features is n-1 for a sample size of n. My question is: if a small number of samples is enough (or even optimal), why care about larger samples? What am I missing here?
AI: Bounds on the needed amount of samples are very common in PAC learning.
When you define a concept class you can compute a minimal sample size that will enable learning.
However,
More samples will allow improving accuracy
More samples will enable learning more complex concepts, that might fit your data better.
As @Emre wrote, real-life data sets are usually not as clean as in PAC learning. The concept class is not given to you, the data has noise, and a given distribution is not guaranteed.
Showing that a classifier can be learnt with a small amount of data is great.
It is a big advantage of the learner.
However, more data usually helps, and if it helps more than expected for such a classifier, it is possible that the classifier's requirements don't hold. |
H: What is a LB score in machine learning?
I was going through an article on the Kaggle blog. Repeatedly, the author mentions 'LB score' and 'LB fit' as a metric for the effectiveness of machine learning (along with the cross-validation (CV) score).
I spent quite a bit of time researching the meaning of 'LB', and I realised that people generally just refer to it as LB without much background.
So my question is - What is a 'LB'?
AI: In the context of Kaggle, it means LeaderBoard (emphasis mine). |
H: How will a rotation matrix affect contestants in machine learning contests?
Machine learning contests like Kaggle usually lay out the machine learning task in a human-understandable way. E.g. they might tell you the meaning of the input (features). But what if a machine learning contest doesn't want to expose the meaning of its input data? One approach I can think of is to apply a (random) rotation matrix to the features, so that each resulting feature doesn't have an obvious meaning.
A rotation of the input space shouldn't change a model's ability to separate the positives from the negatives (using binary classification as an example) -- after all, the same hyperplane (with the same rotation applied) can be used to separate the examples. What could be changed by the rotation is the distribution of each individual feature (i.e. when looking at a single feature's values across all examples), if a contestant cares about that. However, PCA is invariant to such a rotation, so if a contestant decides to work on the PCA-ed version of the input then the rotation doesn't change anything there.
How much do contestants rely on statistical analysis of the (raw, i.e. non-PCA-ed) input features? Is there anything (else) I should be aware of that a rotation can change for a contestant during such a contest?
AI: Kaggle competitions with clean, anonymised and opaque numerical features are often popular. My opinion is they are popular because they are more universally accessible - all you need is to have studied at least one ML supervised learning approach, and maybe have a starter script that loads the data, and it is very easy to make a submission. The competitions become very focused on optimising parameters, picking best model implementations and ensembling techniques. The more advanced competitors will also refine and check their CV approaches very carefully, trying to squeeze the last iota of confidence out of them in order to beat the crowd climbing the public leaderboard.
Examples of historic Kaggle competitions with obfuscated data might be Otto Group Product Classification or BNP Paribas Cardif Claims Management. For some of these competitions the data is adjusted for anonymity of the users who might otherwise be identified from the records. In other cases it is less clear what the sponsor's motivation is.
However, there are negative consequences (you will find these complained about in the same competitions):
Use of insight from domain knowledge, or exploration/study of the underlying principles from the subject being predicted are effectively blocked. It is hard to assess the impact of this, but it is possible that the sponsors miss out on potentially better models.
Doing "just" the machine learning side can be a bit too mechanical and boring for some competitors, who may not not try as hard.
How much do contestants reply on statistical analysis on the (raw, i.e. non-PCA-ed) input features?
There are always data explorations and views of data published in forums (and Kaggle's scripts - called kernels), and many people view, upvote and presumably use the insights from them. I recall at least one competition forum thread where there was a lot of discussion about weird patterns appearing in data, which were probably an artefact of obfuscation (sorry I cannot find the thread now).
With obfuscated data, there can be attempts to de-obfuscate, and they have sometimes been partially successful. |
H: R, xgboost: eval_metric for count:poisson
I wonder what the recommended eval_metric is for the count:poisson objective in xgboost in R?
AI: This is from the XGBoost documentation:
“poisson-nloglik”: negative log-likelihood for Poisson regression.
You might want to go and read more about the Poisson negative log-likelihood.
Here is a link for you to get started:
https://onlinecourses.science.psu.edu/stat504/node/31
Hope this is helpful.
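A small sketch using the Python interface on synthetic count data; as far as I know the same objective and metric strings are passed via the params list in the R interface as well.
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(0)
X = rng.normal(size=(1000, 5))
y = rng.poisson(lam=np.exp(0.3 * X[:, 0]))   # synthetic counts

dtrain = xgb.DMatrix(X, label=y)
params = {'objective': 'count:poisson', 'eval_metric': 'poisson-nloglik'}
bst = xgb.train(params, dtrain, num_boost_round=50,
                evals=[(dtrain, 'train')], verbose_eval=10)
If your downstream goal is ranking or error in the original units, you can also monitor 'rmse' or 'mae' alongside it, but 'poisson-nloglik' matches the count:poisson objective most closely. |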
H: How to remove columns in Transformer function in Pipeline?
I already used a custom transformation function in a scikit-learn pipeline. In this function I only added features to my data frame. It works great.
Below is a working example:
import pandas as pd
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.datasets import make_blobs
x, y = make_blobs(n_samples=300, n_features=2, centers=1)
x_train = pd.DataFrame(x[:150,:], columns=['x1','x2'])
x_test = pd.DataFrame(x[150:,:], columns=['x1','x2'])
class myTransformation(object) :
def __init__(self, colname):
self.colname = colname
def transform(self, x) :
dat = x.copy()
squared = dat.loc[:,self.colname]**2
squared.name = "%s_sqre"%self.colname
dat.loc[:,squared.name] = squared
dat.loc[:, self.colname+'_2'] = dat[self.colname]
return dat
def fit(self, dat, y=None) :
return self
makePipe = Pipeline([('makeTransfo', myTransformation(colname="x2"))])
fittedPipe = makePipe.fit(x_train)
x_1 = fittedPipe.transform(x_train)
x_2 = fittedPipe.transform(x_test)
Now I would like to add the ability to remove duplicate (identical) columns from the data frames.
For now, I have the following function:
def delSameCols(df) :
cols = []
for i in range(df.shape[1]) :
for j in range(i+1, df.shape[1]) :
if (df.iloc[:,i].dtype!='O') | (df.iloc[:,j].dtype!='O') :
if np.array_equal(df.iloc[:,i],df.iloc[:,j]) :
cols.append(df.columns[j])
cols = list(set(cols))
print( u' -%s features removed'%len(cols) )
return df.drop(cols, axis=1), cols
I have no idea how to deal with this/how to add a new function in the pipeline or directly in the existing function?
Does anyone have any idea?
AI: I succeeded in getting a satisfying solution. I posted the entire working script below. What do you think about it? Especially the creation of an attribute (self.lstRemCols) that is not initialized in the __init__ function?
import pandas as pd
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.datasets import make_blobs
class myTransfo(object) :
def __init__(self, colname):
self.colname = colname
def transform(self, x) :
dat = x.copy()
squared = dat.loc[:,self.colname]**2
squared.name = "%s_sqre"%self.colname
dat.loc[:,squared.name] = squared
dat.loc[:, self.colname+'_2'] = dat[self.colname]
return dat
def fit(self, dat, y=None) :
return self
class removeSameCols(object) :
def __init__(self) :
pass
def _delSameCols(self, df) :
cols = []
for i in range(df.shape[1]) :
for j in range(i+1, df.shape[1]) :
if (df.iloc[:,i].dtype!='O') | (df.iloc[:,j].dtype!='O') :
if np.array_equal(df.iloc[:,i],df.iloc[:,j]) :
cols.append(df.columns[j])
cols = list(set(cols))
print( u' - %s features to be removed'%len(cols) )
return cols
def transform(self, x) :
dat = x.copy()
lstcols = list(set(dat.columns) - set(self.lstRemCols))
return dat.loc[:, lstcols]
def fit(self, x, y=None) :
dat = x.copy()
self.lstRemCols = self._delSameCols(dat)
return self
x, y = make_blobs(n_samples=300, n_features=5)
x_train = pd.DataFrame(x[:150,:], columns=['x1','x2','x3','x4','x5'])
x_test = pd.DataFrame(x[150:,:], columns=['x1','x2','x3','x4','x5'])
makePipe2 = Pipeline([('makeCols', myTransfo(colname="x2")),
('remCols', removeSameCols())])
makePipe2.fit(x_train)
x_1 = makePipe2.transform(x_train)
# test if only same columns in x_train are removed.
x_test.x4 = x_test.x5
x_2 = makePipe2.transform(x_test) |
H: how to train tensorflow chat application on updated dataset everyday
I have created a chatbot on the Cornell movie dataset and it's working fine; I have trained the chatbot application up to global step 330000. I am using the TensorFlow library and the ||source|| chatbot, and I am receiving output fine. Then I created a new dataset file by modifying the original dataset, and I wish to train the chatbot application with the updated files. Should I delete the previously saved checkpoints and saved data and start training from zero, or should I continue training from step 330000 onwards without worrying about the changes in the dataset?
I wish to modify the dataset on a daily basis and train on the modified dataset.
Please let me know if you have a suggestion for me, as I am stuck on this issue. I would really appreciate your help.
AI: The distribution of your data won't change much over a short period of time, so retraining from scratch seems like a waste. Just add the new data to your set and do one or more epochs on all your data, not just the new examples. This can readjust the weights a little for changes in the distribution, which is exactly what you want. Make sure you are not over-reliant on early stopping as a regularization method against overfitting, because you will be training continuously like this; use some other regularizer. I would suggest periodically retraining from scratch, but this doesn't have to happen very often, maybe once every two months. |
H: What does images per second mean when benchmarking Deep Learning GPU?
I've been reviewing the performance of several NVIDIA GPUs and I see that results are typically presented in terms of the number of "images per second" that can be processed. Experiments are typically performed on classical network architectures such as AlexNet or GoogLeNet.
I'm wondering if a given number of images per second, say 15000, means that 15000 images can be processed per iteration, or that the network can be fully trained with that amount of images. I suppose that if I have 15000 images and want to calculate how fast a given GPU will train the network, I would have to multiply by certain values of my specific configuration (for example the number of iterations). In case this is not true, is there a default configuration being used for these tests?
Here is an example of such a benchmark: Deep Learning Inference on P40 GPUs (mirror)
AI: I'm wondering if a given number of images per second, say 15000, means that 15000 images can be processed per iteration, or that the network can be fully trained with that amount of images.
Typically they specify somewhere whether they talk about the forward (a.k.a. inference, a.k.a. test) time or the full training (forward + backward) time; the page you mention in your question is explicitly about inference on P40 GPUs, so its images/s figure describes how fast an already-trained network can score images, not how fast it can be trained.
Another example is https://github.com/soumith/convnet-benchmarks (mirror), which reports forward and backward timings separately.
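As a rough worked example of turning such a number into training time (the figures here are hypothetical): if a benchmark reports 15,000 images/s for the training pass (forward + backward) and you train on 1.2M images for 90 epochs, you need roughly 1,200,000 × 90 / 15,000 ≈ 7,200 s, i.e. about two hours of pure compute, ignoring data loading and other overhead. If the quoted figure is an inference benchmark instead, it only tells you how fast the trained network can score images, not how long training will take. |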
H: Pull data from R and set lines to ignore
I am trying to pull data from my .csv file. I am using this command:
mydata[mydata$Model_Data>3.0,c("Model_Data","Date")]
As you can see from the results below, it is returning ALL matching instances; however, I wish to ignore the next 64 lines after the first matching instance. How would you suggest I go about adding this "ignore n lines" behaviour?
Snippet of data here:
Model_Data Date
1 8.788927 19670103
2 5.625603 19670104
3 4.853577 19670105
4 4.558040 19670106
5 4.322114 19670109
6 3.011257 19670110
7 6.234991 19670111
8 3.970446 19670112
9 3.144710 19670113
11 3.121524 19670117
15 3.659759 19670123
314 5.034324 19680401
316 4.395672 19680403
320 4.042018 19680410
485 3.647299 19690113
750 3.632671 19700203
785 4.167759 19700325
809 4.520325 19700429
829 4.116661 19700527
1138 7.950606 19710816
1139 3.260332 19710817
1493 3.929633 19730111
1502 3.094216 19730124
1515 3.728929 19730213
1570 3.369889 19730503
1934 3.254718 19741010
2008 3.845721 19750127
2021 3.039563 19750213
2714 4.134147 19771110
2820 6.223156 19780414
2821 7.745218 19780417
2827 4.743293 19780425
2828 3.033731 19780426
2896 4.192446 19780802
2897 4.422611 19780803
2958 4.189009 19781030
2960 4.180385 19781101
3183 3.427686 19790920
3196 4.392758 19791009
3197 6.126659 19791010
3259 3.585480 19800109
3264 3.165421 19800116
3275 3.521842 19800131
3314 3.699859 19800327
3468 5.436180 19801105
3510 4.302425 19810107
3917 3.917657 19820817
3918 4.391777 19820818
3920 3.173933 19820820
3921 3.354431 19820823
3924 3.382543 19820826
3930 3.257510 19820903
3953 3.201376 19821007
3955 3.558906 19821011
3957 3.060809 19821013
3972 3.596346 19821103
4414 4.728832 19840802
4415 6.362526 19840803
4416 3.995445 19840806
4419 3.081986 19840809
4420 3.271267 19840810
4468 3.220568 19841018
4510 3.585172 19841218
4759 3.046736 19851213
4775 5.164241 19860108
4818 3.460404 19860311
4899 3.051578 19860707
4946 5.638514 19860911
4947 3.806834 19860912
5039 6.066431 19870123
5095 3.536616 19870414
5224 5.730824 19871016
5225 11.180750 19871019
5226 6.897399 19871020
5227 4.537756 19871021
5228 3.229374 19871022
5582 3.155970 19890317
5728 6.600688 19891013
5729 6.704284 19891016
5931 4.177266 19900803
6046 4.380562 19910117
6257 3.193782 19911115
6480 3.983774 19921005
6533 3.139862 19921218
6572 4.023227 19930216
6574 3.056474 19930218
6605 3.035592 19930402
6637 3.505759 19930519
6819 3.674848 19940204
6857 3.853518 19940331
6858 3.048781 19940404
7150 3.038139 19950531
7184 5.645942 19950719
7242 3.910097 19951010
7301 3.255512 19960104
7305 3.574967 19960110
7346 5.189287 19960308
7435 5.112708 19960716
7568 3.238173 19970123
7577 3.007577 19970205
7760 5.930298 19971027
7761 7.836272 19971028
7953 3.613173 19980804
7970 3.899849 19980827
7972 5.255132 19980831
7973 5.310272 19980901
7999 3.806758 19981008
8130 3.407432 19990419
8265 3.204466 19991028
8311 3.483312 20000104
8314 3.101274 20000107
8324 3.273686 20000124
8361 3.905785 20000316
8374 5.991309 20000404
8382 4.642381 20000414
8563 4.287756 20010103
8635 3.294676 20010418
8736 3.260459 20010917
8738 3.009326 20010919
8947 3.005049 20020719
8950 4.684101 20020724
9830 3.024372 20060120
9926 3.253976 20060608
10106 6.153445 20070227
10108 3.018874 20070301
10208 3.016551 20070724
10210 5.810003 20070726
10213 3.048170 20070731
10214 3.093512 20070801
10216 3.203759 20070803
10217 3.358139 20070806
10220 3.897666 20070809
10225 4.290941 20070816
10333 3.479401 20080122
10334 4.699377 20080123
10497 3.575420 20080915
10498 3.417303 20080916
10499 3.578305 20080917
10500 4.763519 20080918
10501 3.362684 20080919
10507 4.354753 20080929
10512 3.611450 20081006
10515 3.786056 20081009
10516 4.791725 20081010
10517 3.406449 20081013
10910 7.382690 20100506
10911 3.423236 20100507
10912 3.022318 20100510
11225 4.906076 20110804
11226 4.635050 20110805
11227 6.636959 20110808
11228 5.464225 20110809
11229 4.077466 20110810
11230 3.956520 20110811
11235 3.315989 20110818
11852 3.357452 20140203
12029 4.794625 20141015
12243 3.784646 20150821
12244 6.559546 20150824
12245 4.181079 20150825
12246 3.869772 20150826
12344 3.089881 20160115
12346 3.519323 20160120
12455 3.676909 20160624
AI: You almost had everything.
Taking an example from iris:
n <- 10
iris[iris$Sepal.Length>5.5, c("Sepal.Length", "Species")][1:n,] |
H: What are stovepipes?
I am reading the book The Data Warehouse Lifecycle Toolkit by Ralph Kimball, and I come across the term "stovepipes" fairly often. After doing some research I read that stovepipes are what you get when you don't have conformed dimensions to link data marts. This short description was all I could find; every other resource I looked at simply said that stovepipe models are bad.
What is meant by "conformed dimensions"? What exactly are these stovepipes, and can somebody provide me with an example?
AI: It seems "stovepiping" is a term that Ralph Kimball, the legendary author on the subject of data warehousing, adopted, and it looks like it actually originates from the intelligence community.
From Wikipedia:
Stovepiping (also stove piping) is a metaphorical term which recalls a stovepipe's function as an isolated vertical conduit, and has been used, in the context of intelligence, to describe several ways in which raw intelligence information may be presented without proper context. It is a system created to solve a specific problem. The lack of context may be due to the specialized nature, or security requirements, of a particular intelligence collection technology. It also has limited focus and data within is not easily shared.
From RationalWiki:
Stovepiping is a term originating in the intelligence community to describe a process by which raw data is funneled directly to high-ranking officials or the media.
Also, there is an interesting article in PhraseFinder.
Even better explanation from Relational Solutions Blog:
Most data marts are designed as one off reporting solutions, designed to solve an immediate problem that the business users need to solve. When designed as a stand-alone, they are often referred to as “stove pipes” or “silos” of information.
Analysts new to the space think these are new terms, but Data warehousing consultants have used these terms since the 90’s. They’re used to describe stand-alone reporting solutions. Typically these stand-alone solutions are developed by individual teams or departments.
These groups develop “silo’s” or “stove pipe” reporting databases to achieve a specific goal that they were unable to get financial approval for. If they have a need for something that you can't get approval for, you resort to building something on your own. It happens in every company and every department. |
H: Confidence intervals for binary classification probabilities
When evaluating a trained binary classification model we often evaluate the misclassification rates, precision-recall, and AUC.
However, one useful feature of classification algorithms are the probability estimates they give, which support the label predictions made by the model.
These probabilities can be useful for a variety of reasons depending on the use case. When using these probabilities it would be useful to have a confidence interval rather than a single point estimate.
So, how can we estimate a probability confidence interval given that the misclassification error may not always serve as a proxy for the error between the estimated probability and the actual probability (which is often unknown)?
I've considered using brier score but I'm sure there is a better way. Can anyone point me in the right direction or offer your own insight?
For example, If I have classes [C0, C1] and my probabilities for a given instance $(x^{(i)}, y^{(i)})$
are {C0: 80, C1:20} then I will classify this instance as C0. Let's suppose that C0 is the correct class label, at this point the model has done it's job and made the correct classification.
I want to go another step further and use the probabilities {C0:80, C1:20} which could be useful for a variety of reasons.
Let's say C0 and C1 respectively represent a customer keeping and closing their account with a bank.
If we wanted to create an expected value $EV$ of dollars at risk of leaving the bank we could calculate $EV$ as $P(C1) * account \space balance$. This would give us a point estimate of $EV$, which is fine but may not tell the whole story given our uncertainty in the model.
So, how can we provide a lower limit and upper limit of the probability this instance is of class C1 with 95% confidence?
AI: I don't think there is a good way to do this for all models, however for a lot of models it's possible to get a sense of uncertainty (this is the keyword you are looking for) in your predictions. I'll list a few:
Bayesian logistic regression gives a probability distribution over probabilities. MCMC can sample weights from your logistic regression (or more sophisticated) model which in turn predict different probabilities. If the variance in these probabilities are high you are less certain about the predictions, you could empirically take the 5% quantile or something.
With neural networks you could train them with dropout (not a bad idea in general) and then instead of testing without the dropout, you do multiple forward passes per prediction and this way you sample from different models. If the variance is high, again you are more uncertain. Variational Inference is another way to sample networks and sample from these different networks to get a measure of uncertainty.
I don't know off the top of my head, but I'm sure you could do something with random forests using the variance between the different leaf nodes where your data points end up, assuming the trees are not too deep - but this is just something I thought of.
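A simpler, model-agnostic option worth naming explicitly is the bootstrap: refit the model on resampled training sets and look at the spread of the predicted probabilities. A rough scikit-learn sketch (X_train, y_train, X_test are placeholders, and it assumes each resample contains both classes):
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

def bootstrap_probability_interval(X_train, y_train, X_test, n_boot=200, alpha=0.05):
    probs = []
    for b in range(n_boot):
        Xb, yb = resample(X_train, y_train, random_state=b)  # bootstrap resample
        clf = LogisticRegression().fit(Xb, yb)
        probs.append(clf.predict_proba(X_test)[:, 1])
    probs = np.array(probs)                                   # shape: (n_boot, n_test)
    lower = np.percentile(probs, 100 * alpha / 2, axis=0)
    upper = np.percentile(probs, 100 * (1 - alpha / 2), axis=0)
    return lower, upper                                       # per-instance interval
For the churn example in the question, multiplying the lower and upper probability bounds by the account balance then gives an interval for the dollars at risk rather than a single point estimate. |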
H: Advice on the learning resources for deep learning
Which is better for a beginner in machine learning: the deep learning book written by Yoshua Bengio or the videos and notes from Stanford's CS231n?
AI: http://neuralnetworksanddeeplearning.com/
http://www.deeplearningbook.org/
They are the two very popular online free books. The first link even has working deep learning code. |
H: Feature Extraction from Convolutional neural network (CNN) and using this feature to other classification algorithm
As in this, the author is using a CNN to extract features from the images, and then using an SVM for further analysis. My question is: how do I extract the features from a CNN?
E.g., here is a CNN code I'm using:
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
# Convolutional Layer 1.
filter_size1 = 5 # Convolution filters are 5 x 5 pixels.
num_filters1 = 16 # There are 16 of these filters.
# Convolutional Layer 2.
filter_size2 = 5 # Convolution filters are 5 x 5 pixels.
num_filters2 = 36 # There are 36 of these filters.
# Fully-connected layer.
fc_size = 128 # Number of neurons in fully-connected layer.
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
data.test.cls = np.argmax(data.test.labels, axis=1)
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
def new_weights(shape):
return tf.Variable(tf.truncated_normal(shape, stddev=0.05))
def new_biases(length):
return tf.Variable(tf.constant(0.05, shape=[length]))
def new_conv_layer(input, # The previous layer.
num_input_channels, # Num. channels in prev. layer.
filter_size, # Width and height of each filter.
num_filters, # Number of filters.
use_pooling=True): # Use 2x2 max-pooling.
# Shape of the filter-weights for the convolution.
# This format is determined by the TensorFlow API.
shape = [filter_size, filter_size, num_input_channels, num_filters]
# Create new weights aka. filters with the given shape.
weights = new_weights(shape=shape)
# Create new biases, one for each filter.
biases = new_biases(length=num_filters)
# Create the TensorFlow operation for convolution.
# Note the strides are set to 1 in all dimensions.
# The first and last stride must always be 1,
# because the first is for the image-number and
# the last is for the input-channel.
# But e.g. strides=[1, 2, 2, 1] would mean that the filter
# is moved 2 pixels across the x- and y-axis of the image.
# The padding is set to 'SAME' which means the input image
# is padded with zeroes so the size of the output is the same.
layer = tf.nn.conv2d(input=input,
filter=weights,
strides=[1, 1, 1, 1],
padding='SAME')
# Add the biases to the results of the convolution.
# A bias-value is added to each filter-channel.
layer += biases
# Use pooling to down-sample the image resolution?
if use_pooling:
# This is 2x2 max-pooling, which means that we
# consider 2x2 windows and select the largest value
# in each window. Then we move 2 pixels to the next window.
layer = tf.nn.max_pool(value=layer,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# Rectified Linear Unit (ReLU).
# It calculates max(x, 0) for each input pixel x.
# This adds some non-linearity to the formula and allows us
# to learn more complicated functions.
layer = tf.nn.relu(layer)
# Note that ReLU is normally executed before the pooling,
# but since relu(max_pool(x)) == max_pool(relu(x)) we can
# save 75% of the relu-operations by max-pooling first.
# We return both the resulting layer and the filter-weights
# because we will plot the weights later.
return layer, weights
def flatten_layer(layer):
# Get the shape of the input layer.
layer_shape = layer.get_shape()
# The shape of the input layer is assumed to be:
# layer_shape == [num_images, img_height, img_width, num_channels]
# The number of features is: img_height * img_width * num_channels
# We can use a function from TensorFlow to calculate this.
num_features = layer_shape[1:4].num_elements()
# Reshape the layer to [num_images, num_features].
# Note that we just set the size of the second dimension
# to num_features and the size of the first dimension to -1
# which means the size in that dimension is calculated
# so the total size of the tensor is unchanged from the reshaping.
layer_flat = tf.reshape(layer, [-1, num_features])
# The shape of the flattened layer is now:
# [num_images, img_height * img_width * num_channels]
# Return both the flattened layer and the number of features.
return layer_flat, num_features
def new_fc_layer(input, # The previous layer.
num_inputs, # Num. inputs from prev. layer.
num_outputs, # Num. outputs.
use_relu=True): # Use Rectified Linear Unit (ReLU)?
# Create new weights and biases.
weights = new_weights(shape=[num_inputs, num_outputs])
biases = new_biases(length=num_outputs)
# Calculate the layer as the matrix multiplication of
# the input and weights, and then add the bias-values.
layer = tf.matmul(input, weights) + biases
# Use ReLU?
if use_relu:
layer = tf.nn.relu(layer)
return layer
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')
y_true_cls = tf.argmax(y_true, dimension=1)
layer_conv1, weights_conv1 = \
new_conv_layer(input=x_image,
num_input_channels=num_channels,
filter_size=filter_size1,
num_filters=num_filters1,
use_pooling=True)
layer_conv2, weights_conv2 = \
new_conv_layer(input=layer_conv1,
num_input_channels=num_filters1,
filter_size=filter_size2,
num_filters=num_filters2,
use_pooling=True)
# Flatten the output of the second convolutional layer before the fully-connected
# layers; this call was missing although layer_flat and num_features are used below.
layer_flat, num_features = flatten_layer(layer_conv2)
layer_fc1 = new_fc_layer(input=layer_flat,
num_inputs=num_features,
num_outputs=fc_size,
use_relu=True)
layer_fc2 = new_fc_layer(input=layer_fc1,
num_inputs=fc_size,
num_outputs=num_classes,
use_relu=False)
y_pred = tf.nn.softmax(layer_fc2)
y_pred_cls = tf.argmax(y_pred, dimension=1)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2,
labels=y_true)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
session = tf.Session()
session.run(tf.global_variables_initializer())
train_batch_size = 64
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variable rather than a local copy.
global total_iterations
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(total_iterations,
total_iterations + num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations.
if i % 100 == 0:
# Calculate the accuracy on the training-set.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Message for printing.
msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i + 1, acc))
# Update the total number of iterations performed.
total_iterations += num_iterations
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
optimize(num_iterations=900)
print_test_accuracy(show_example_errors=True)
In this case, how do I extract the features? Also, are the extracted features the different filters that we used, or the final updated weights that we get after the CNN finishes training?
AI: Before trying to extract features, you need to define your network. Suppose your network has an architecture like this:
Conv1 layer
Conv2 layer
Conv3 layer
Dense1 layer
Dense2 layer
Now you can extract features for each input for any layer (say for Conv2) in the following way:
# Tensor names end with an output index, e.g. 'Conv2:0' for the first
# output of the op named 'Conv2' (the exact name depends on your graph).
conv2_tensor = sess.graph.get_tensor_by_name('Conv2:0')
conv_val = sess.run(conv2_tensor,
                    feed_dict={'x:0': image_data}) |
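If you are unsure what the tensor is actually called in your graph, you can list the available operation names (the exact names depend on how the graph was built, so treat this as a lookup aid rather than a fixed recipe):
for op in sess.graph.get_operations():
    print(op.name)
sess.run then returns a NumPy array with that layer's activations for the images you fed in; those activations are the extracted features. The learned filters are the weight variables themselves (e.g. weights_conv2 in your code), which you can also fetch with sess.run if you want to inspect them.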
H: Issue with backpropagation using a 2 layer network and softmax
I have a simple neural network with one hidden layer and softmax as the activation function for the output layer. The hidden layer uses various activation functions since I am testing and implementing as many of them as I can.
For training and testing for the moment I am using the MNIST dataset of handwritten digits so my input data is a matrix that in each row has a different image and in each column a pixel of that image that has been reshaped as a vector.
When I use a sigmoid activation function for both layers the computed gradients and analytical gradients seem to agree but when I try something else like tanh or softplus for the hidden layer and softmax for the output layer there are big differences as can be seen from the data below (Left->Numerical Gradient, Right->Analytical Gradient)
(1)sigmoid (2)softmax
-9.4049e-04 -6.4143e-04
-6.2623e-05 -2.5895e-05
1.0676e-03 6.9474e-04
-2.0473e-03 -1.3471e-03
2.9846e-03 1.9716e-03
4.0945e-05 2.7627e-05
-2.5102e-05 -1.7017e-05
8.8054e-06 6.0967e-06
7.8509e-06 5.0682e-06
-2.4561e-05 -1.6270e-05
5.6108e-05 3.8449e-05
2.0690e-05 1.2590e-05
-9.7665e-05 -6.3771e-05
1.7235e-04 1.1345e-04
-2.4335e-04 -1.6071e-04
(1)tanh (2)softmax
-3.9826e-03 -2.7402e-03
4.6667e-05 1.1115e-04
3.9368e-03 2.5504e-03
-7.7824e-03 -5.1228e-03
1.1451e-02 7.5781e-03
1.5897e-04 1.0734e-04
-9.6886e-05 -6.5701e-05
3.3560e-05 2.3153e-05
3.3344e-05 2.1786e-05
-1.0282e-04 -6.8409e-05
2.1185e-04 1.4774e-04
9.0293e-05 5.3752e-05
-4.0012e-04 -2.6047e-04
6.9648e-04 4.5839e-04
-9.7518e-04 -6.4468e-04
(1)sigmoid (2)sigmoid
-9.2783e-03 -9.2783e-03
8.8991e-03 8.8991e-03
-8.3601e-03 -8.3601e-03
7.6281e-03 7.6281e-03
-6.7480e-03 -6.7480e-03
-3.0498e-06 -3.0498e-06
1.4287e-05 1.4287e-05
-2.5938e-05 -2.5938e-05
3.6988e-05 3.6988e-05
-4.6876e-05 -4.6876e-05
-1.7506e-04 -1.7506e-04
2.3315e-04 2.3315e-04
-2.8747e-04 -2.8747e-04
3.3532e-04 3.3532e-04
-3.7622e-04 -3.7622e-04
-9.6266e-05 -9.6266e-05
The way I implement backpropagation is as follows:
Variables->
Theta1, Theta2: tables with the various weights for the first and second layer.
m: size of my training set
y: a vector with the correct category for every input sample
Y: a matrix with the one hot encoding for the category for every input sample
X: a matrix with input data, each row is a different training sample
% Feedforward
a1 = [ones(m, 1) X];
z2 = a1*Theta1';
a2 = [ones(m, 1) activation(z2, activation_type)];
z3 = a2*Theta2';
a3 = activation(z3, 'softmax');
h = a3;
% Calculate J
J = sum(sum((-Y).*log(h) - (1-Y).*log(1-h), 2))/m + lambda*p/(2*m); # sigmoid
%J = -(sum(sum((Y).*log(h))) + lambda*p/(2*m)); # softmax
% Calculate sigmas
sigma3 = a3.-Y;
sigma2 = (sigma3*Theta2).*activationGradient([ones(m, 1) z2], 'sigmoid');
sigma2 = sigma2(:, 2:end);
% Accumulate gradients
delta_1 = (sigma2'*a1);
delta_2 = (sigma3'*a2);
The first cost calculation J was computed for sigmoid and the one below it for softmax (see comments) so I switch between the two.
Have I missed something during backpropagation? Why is it working as expected with sigmoid but not with softmax?
AI: I think it might be a relatively trivial bug in your cost function for softmax:
J = -(sum(sum((Y).*log(h))) + lambda*p/(2*m))
should be
J = -sum(sum((Y).*log(h)))/m + lambda*p/(2*m)
I.e. for softmax only, you have effectively subtracted the regularisation term from the cost function instead of adding it. Also, you forgot to divide the error term by the number of examples in the batch (and you are taking this average when calculating the gradients)
Your back propagation calculations look correct to me if you correct this miscalculation for J. |
H: Different number of features in train vs test
I'm doing the titanic exercise on kaggle and there is a categorical Cabin attribute that has a lot of different strings: C41, C11, B20 etc. (about 100).
To be able to train my model I'm converting it to numerical attributes (using pandas get_dummies()). So in the end I get 100+ attributes.
On the test dataset however, there are less cabins, so I'll end up with fewer attributes.
I did something like this to make them equal (create columns that are in the training set and delete those that aren't):
for column in X.columns:
if column not in X_test.columns:
X_test[column] = 0
for column in X_test.columns:
if column not in X.columns:
X_test.drop([column], axis=1, inplace=True)
but I know it is not a good thing. So how else should I approach it?
I tried removing the cabin column altogether but my model performs better on test data with that column.
AI: You could concatenate your train and test datasets, create the dummy variables on the combined data, and then split it back into train and test.
Something like this:
import copy
import pandas as pd

train_objs_num = len(train)
dataset = pd.concat(objs=[train, test], axis=0)
dataset = pd.get_dummies(dataset)
train = copy.copy(dataset[:train_objs_num])
test = copy.copy(dataset[train_objs_num:]) |
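An alternative that keeps the test data out of the encoding step is to build the dummies separately and then align the test frame to the training columns with pandas' reindex (a minimal sketch, assuming X and X_test are your raw feature frames):
import pandas as pd

X_train_enc = pd.get_dummies(X)
X_test_enc = pd.get_dummies(X_test)

# Keep exactly the training columns: dummies for categories unseen in
# training are dropped, and columns missing in the test set are filled with 0.
X_test_enc = X_test_enc.reindex(columns=X_train_enc.columns, fill_value=0)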
H: Best way for data preparation to have accurate prediction
I'm trying to predict whether an opportunity will be won or lost, using Azure Machine Learning Studio. However, I am still at the data preparation stage.
In my database I have an opportunity table and a products table.
One opportunity can have multiple products. Should I deal with the many products by putting them all in one record?
Will it affect the prediction if we have duplicate records for an opportunity, as in (a), or is it better to have one record per opportunity to feed into ML Studio? And if so, which would be the better approach, (b) or (c)?
Approach a
Approach b
Approach c
oppid | first product | first technology | 2nd product | 2nd technology
1     | out-services  | active directory | TRN-items   | Adobe Acrobat
AI: Simplifying the data may make the model more stable, but it also removes its ability to use more specific input criteria. For example, in moving from Approach A to Approach B you are aggregating specific products into product categories. This means that if your model is successful with Approach A, it will be able to predict based on specific products. On the other hand, if your model trains successfully with Approach B, it will only be able to predict on product category (and you will have to convert products into their categories before supplying them to the model).
So to answer your questions, the number of data samples you have determines how much you have to aggregate and simplify your data. The data in its most detailed form could also fail to train the model properly, in which case the aggregation taken in Approach B is the best next step. |
H: What happens when you have highly correlated columns in a dataset?
I am building a regression model, and I was wondering what the consequences would be of having two or more highly correlated columns in the dataset. Is that something that can decrease the accuracy of the model?
Answering this question would help decide how to deal with it. Would PCA be the best option here?
AI: Having highly correlated features is a form of redundancy in the feature set. And yes, it affects a regression model if you have highly correlated features. A very nice explanation is given here.
PCA is a nice choice when it comes to dimensionality reduction. |
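As a quick illustration, you can inspect pairwise correlations and drop one column of each highly correlated pair, or decorrelate the features with PCA; a minimal sketch with pandas and scikit-learn (the 0.95 threshold is just an example choice):
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# X is a DataFrame of numeric features.
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape), k=1).astype(bool))
to_drop = [col for col in upper.columns if (upper[col] > 0.95).any()]
X_reduced = X.drop(columns=to_drop)

# Or: project onto principal components, keeping 95% of the variance.
X_pca = PCA(n_components=0.95).fit_transform(X)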
H: Why CNN doesn't give higher accuracy over simple MLP network? [From Keras examples]
I'm still new to machine learning and just came across the powerful deep learning library Keras.
I've read the Keras documentation and tried a few Keras examples from GitHub here. I've also studied some basic concepts of deep learning from several sources, but I still don't have a solid understanding of CNNs and RNNs, which look to be very powerful networks.
So, to test my assumption, I downloaded the reuters_mlp.py example from the Keras GitHub, which originally uses a simple MLP network as the model. I combined the CNN idea I got from the imdb_cnn.py example with the reuters_mlp.py example and then observed the result.
Surprisingly, the result didn't come out as I expected: the CNN performed worse than the simple MLP network. Can someone please explain why the accuracy of the CNN is lower than that of the simple MLP network?
Here are the outputs (Tensorflow as backend)
8982 train sequences, 2246 test sequences, 46 classes, num_words=1000
MLP (sequences_to_matrix, mode=binary):
Epoch 1/5
8982/8982 [==============================] - 3s - loss: 1.3236 - acc: 0.6984
Epoch 2/5
8982/8982 [==============================] - 2s - loss: 0.7182 - acc: 0.8250
Epoch 3/5
8982/8982 [==============================] - 2s - loss: 0.4544 - acc: 0.8864
Epoch 4/5
8982/8982 [==============================] - 2s - loss: 0.3197 - acc: 0.9192
Epoch 5/5
8982/8982 [==============================] - 2s - loss: 0.2511 - acc: 0.9356
1920/2246 [========================>.....] - ETA: 0s
Test loss: 1.05213204963 Test accuracy: 0.785396260071
CNN (pad_sequences):
Epoch 1/5
8982/8982 [==============================] - 81s - loss: 1.9794 - acc: 0.5181
Epoch 2/5
8982/8982 [==============================] - 78s - loss: 1.4289 - acc: 0.6591
Epoch 3/5
8982/8982 [==============================] - 79s - loss: 1.1546 - acc: 0.7175
Epoch 4/5
8982/8982 [==============================] - 78s - loss: 0.9639 - acc: 0.7663
Epoch 5/5
8982/8982 [==============================] - 77s - loss: 0.8378 - acc: 0.7935
2240/2246 [============================>.] - ETA: 0s
Test loss: 0.960687935512, Test accuracy: 0.764470169243
AI: CNN (and RNN) models are not general improvements to the MLP design. They are specific choices that match certain types of problem. The CNN design works best when there is some local pattern in the data (which may repeat in other locations), and this is often the case when the inputs are images, audio or other similar signals.
The reuters example looks like a "bag of words" input. There is no local pattern or repeating relationships in that data that a CNN can take advantage of.
Your results with a CNN on this data set look reasonable to me. You have not made a mistake, but learned how a CNN really works on this data. |
H: Are deep-learning toolkits targeted at certain areas, or are they all-purpose toolkits?
Are any of the open deep learning toolkits targeted at certain areas, or are all toolkits all-purpose, meaning each is a black box for deep learning?
My question comes with regard to Microsoft's CNTK, which seems to contain examples of speech and text classification, whereas others usually just have MNIST or CIFAR...
AI: Yes, different toolkits are suited to different purposes, given that they contain different algorithms and example models. This is due to the lack of a general AI (an AI that is intelligent in all respects); modern AI relies on different algorithms that are better suited to different tasks. For example, a CNN is much better suited to images than a plain fully connected network or an autoencoder would be, while an LSTM is better suited to capturing temporal dependencies in the data.
However, if two frameworks implement the same algorithms, they should behave roughly the same. Differences will arise because the underlying code is different, as can be seen between Keras and TensorFlow, but this effect should be limited.
Worry more about the algorithm you are choosing for your task than the framework. Coding up most machine learning algorithms is not that hard if you want to customize it. |
H: Using the cosine activation function seems to perform very badly
I have created a neural network to classify the MNIST handwritten numbers dataset. It is using softmax as the activation function for the output layer and various other functions for the hidden layer.
My implementation with the help of this question seems to be passing the gradient checks for all activation functions but when it comes to the actual run with my training data for an exemplary run of 10 iterations I get an accuracy of about 87% if I use sigmoid or tanh as the activation function for the hidden layer, but if I use cosine it returns an accuracy of 9%. Training the network with more iterations (100, 200, 500) does not have any effect either and in fact my minimization function does not manage to move below 2.18xxx for the cost function no matter how many epochs pass.
Is there some pre-processing step that I need to perform before using cosine? If not, why does this activation function work so badly?
AI: Cosine is not a commonly used activation function.
Looking at the Wikipedia page describing common activation functions, it is not listed.
And one of the desirable properties of activation functions described on that page is:
Approximates identity near the origin: When activation functions have this property, the neural network will learn efficiently when its weights are initialized with small random values. When the activation function does not approximate identity near the origin, special care must be used when initializing the weights.
$cos(0) = 1$, a basic cosine function does not have this property. Combined with its periodic nature, this makes it look like it could be particularly tricky to get correct starting conditions and other hyper-parameters in order to have a network learn whilst using it.
In addition, cosine is not monotonic, which means that error surface is likely to be more complex than for e.g. sigmoid.
I suggest trying with a low learning rate, and initialising all the bias values to $-\frac{\pi}{2}$. Maybe reduce the variance in initial weights a little too, just to start off with things close to zero. Essentially this is starting with $sin()$. Caveat: not that I have tried this myself, just an educated guess, so I would be interested to know if that helps at all with stability. |
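If you want to experiment with this in spite of the caveats, here is a minimal Keras-style sketch (layer sizes are placeholders) that starts the hidden layer off close to $\sin()$ by initialising its biases to $-\pi/2$:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras import backend as K
from keras.initializers import Constant

model = Sequential()
# Cosine activation; biases start at -pi/2 so each unit initially computes sin(x).
model.add(Dense(64, input_dim=784,
                activation=lambda x: K.cos(x),
                bias_initializer=Constant(-np.pi / 2)))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='sgd')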
H: Where to find statistically relevant documentation of common Python packages?
I am trying to entice my lab to transition from Matlab and R to Python. The main objection at this point seems to be that Python's analytical libraries are not sufficiently well documented. Given how prolific Python is, I suspect that sufficiently detailed documentation exists and we simply cannot find it.
Example
Recently, I needed to upsample a signal (1D vector) and found a Python function called resample. In contrast to Matlab and R, pandas documentation doesn't tell me what kind of interpolation resample uses (linear, cubic splines, pchip?).
Is there any place where I could find such information about this and other libraries, preferably with equations or references to papers? I understand that I could analyze the source code, but this isn't the most time-efficient approach. If you know R, I am basically looking for a Python equivalent of CRAN (PDF warning).
Thanks!
AI: Switching to Python from Matlab is going to be somewhat dependent on the field you are in (for example, if you are in image processing/imaging, Matlab is pretty solid, so switching will be more difficult) and the stubbornness of your coworkers/boss. While documentation for Matlab is easy to find, it's also all you get, whereas Python packages generally have decent documentation as well as countless posts on SE in which someone has likely already answered your question.
One of the greatest aspects of Python is that it is open source, but it can make finding the right tool slightly more time-consuming. Your example is about resampling. Pandas resample is designed for time-series (see pandas doc). All the arguments are described, but I can also look at the source code if I really need to understand anything more in depth. Or I search for some examples online: http://benalexkeen.com/resampling-time-series-data-with-pandas/
But it seems like you are actually interested in interpolation functions. If we search for "Python interpolation" we will discover a few methods. Looks like Scipy has an extensive package for interpolating: https://docs.scipy.org/doc/scipy/reference/interpolate.html
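For example, upsampling a 1D signal with an explicitly chosen interpolation method takes only a few lines with scipy.interpolate (a minimal sketch; the signal here is synthetic):
import numpy as np
from scipy.interpolate import interp1d

x = np.linspace(0, 1, 50)          # original sample points
y = np.sin(2 * np.pi * x)          # original signal
x_new = np.linspace(0, 1, 500)     # upsampled grid

f = interp1d(x, y, kind='cubic')   # 'linear', 'quadratic', 'cubic', ...
y_new = f(x_new)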
It takes some time to figure out how to search for what you need in Python, but once you get the hang of it, there are a lot of great examples online of all sorts of application. Plus, plotting in Python (and R) is way beyond Matlab (c'mon mathworks). In the end, you can do what I did which is flat out refuse to use Matlab unless it is actually necessary and hope they don't fire you! |
H: Updating One-Hot Encoding to account for new categories
My question is about how to appropriately update an encoded feature set when a new category is introduced by the test data. I use the data in a logistic regression, and I know it is not a 'live' model (i.e. gradient descent is not re-run whenever new data is introduced), but do I have to retrain the model to account for added features, or do I just add them to subsequent test-set values?
To exemplify the problem consider a TV Show training set where each show has a 'networks' feature set that includes one or more of the following:
["abc","cbs","nbc"]
Then, in the testing set there is a TV Show with the feature set:
["abc", "hulu"]
Would I have to add the new feature retroactively to the training data and retrain the model even though it will never occur there? Wouldn't this introduce look-ahead bias?
How do I account for the added feature in the encoder going forward?
AI: I think you have two options:
1. Automate your train/test pipeline so that one-hot encoding is part of it. If new categorical values are introduced, they can be featured in the training dataset even if not very prevalent. This would introduce some bias if the nature of the TV show distribution has changed over time (e.g. 20 years ago there weren't as many options), but I don't necessarily think it is a show stopper.
2. If new possibilities are introduced over time but for whatever reason you can't retrain, then you should omit the new value. This has its own disadvantages because, in your example, it would mean a TV show with no network. |
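If you are using scikit-learn, its OneHotEncoder handles this case directly: fit it on the training categories and tell it to ignore unknown ones at transform time, so a new network like "hulu" simply encodes as all zeros without retraining. A minimal sketch (this assumes one network per show; for multi-valued network sets you would need a multi-label encoding instead, and depending on your scikit-learn version the dense-output flag is sparse or sparse_output):
from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder(handle_unknown='ignore', sparse=False)
enc.fit([['abc'], ['cbs'], ['nbc']])          # categories seen in training

print(enc.transform([['abc'], ['hulu']]))
# [[1. 0. 0.]
#  [0. 0. 0.]]   <- the unseen network encodes as all zeros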
H: clustering multivariate time-series datasets
I am new to clustering. I have data from quality testing at an automobile manufacturing company.
I have 100,000 datasets. Each dataset has 4 variables: force, voltage, current and distance. Each variable is a continuous time series with about 8000 data points (1 to 17000 milliseconds). The length of the time series differs from one dataset to another, and all the variables in one dataset have to be compared with those of another dataset.
I have to find clusters in the 100,000 datasets based on similarities in the shape of each variable in a dataset.
Which similarity measure is best suited, in this case, to finding similarity in the shape of the time series?
AI: For most clustering approaches, first you need to choose a similarity measure. Some common default ones for raw time series are Euclidean distance and Dynamic Time Warping (DTW).
When you have computed the similarity measure for every pair of time series, then you can apply hierarchical clustering, k-medoids or any other clustering algorithm that is appropriate for time series (not k-means!, see this).
Update: if the number of time series (along with their size) makes it computationally not acceptable to compute pairwise distances, then one option can be to extract features from each time series, and then use such features as proxies for the time series in the clustering process. Some examples of such features are maximum value, number of peaks, mean value. There are libraries like tsfresh in Python that are meant to easily extract such kind of features from time series. With these features, then any clustering approach like k-means can be applied. |
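As a small illustration of that pipeline, here is a sketch that computes pairwise DTW distances with a plain NumPy implementation and feeds them to SciPy's hierarchical clustering. It assumes series is a list of 1-D arrays (e.g. the force signal of each dataset); with 100,000 datasets the quadratic number of pairwise distances makes this impractical as written, so you would use an optimised DTW library or the feature-based shortcut above:
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw(a, b):
    """Basic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# series: list of 1-D numpy arrays, possibly of different lengths
k = len(series)
dist = np.zeros((k, k))
for i in range(k):
    for j in range(i + 1, k):
        dist[i, j] = dist[j, i] = dtw(series[i], series[j])

# Hierarchical clustering on the condensed distance matrix.
Z = linkage(squareform(dist), method='average')
labels = fcluster(Z, t=5, criterion='maxclust')   # e.g. 5 clusters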
H: Using word2vec with mixed language data
I scraped my Facebook chat history and wanted to try out some basic machine learning stuff with word2vec. However, the data has all sorts of stuff - links, emoji, Cyrillic alphabet, etc. Even if I manage to clear some of those, would it be possible to process the Cyrillic alphabet with word2vec?
AI: Word2vec treats each word as an opaque token (the CBOW variant is literally a "continuous bag of words" over a context window), which means you can use whatever alphabet you like. Building the vocabulary just consists of assigning one entry per distinct word in your corpus, and this works with the Cyrillic alphabet just as with any other. The main work is the tokenisation and cleaning (links, emoji, etc.), not the script itself. |
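A minimal gensim sketch, assuming tokenized_chats is a list of token lists from your cleaned chat history (in gensim versions before 4.0 the vector_size argument is called size):
from gensim.models import Word2Vec

# Each "sentence" is a list of tokens; Cyrillic tokens are fine.
tokenized_chats = [
    ['привет', 'как', 'дела'],
    ['hello', 'how', 'are', 'you'],
]

model = Word2Vec(tokenized_chats, vector_size=100, window=5, min_count=1)
print(model.wv['привет'])   # the learned vector for a Cyrillic word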
H: How can you map the exceedance of a threshold into an activation function of a Neural Network?
I am totally new to Artificial Neural Networks. Let’s say that the model you are trying to turn into an artificial neural network has an output that is triggered only by the exceedance of a threshold: $y \geq y_{1}$. Therefore, you need to find a way to use this inequality as an activation function. Is this feasible?
AI: This is feasible. This is also called a binary/step activation function. You should only use this activation function on the output neurons.
The step function rounds an output lower than 0.5 down to 0 and an output higher than 0.5 up to 1. However, please note that you do not need to use a binary activation function to output 1 - I advise you to just use tanh or sigmoid and backpropagate for a good number of iterations.
However, in other comments you mentioned that you know what $y_1$ is. That is not important: the network acts as a black box and will figure out a threshold itself. Don't set up your own activation function just to force the right output - that defeats the whole point of backpropagation. |
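For illustration, a step output is simply a thresholded sigmoid; a minimal NumPy sketch of what such an output unit computes (the 0.5 threshold is the usual convention):
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step_output(z, threshold=0.5):
    # Smooth activation during training, hard 0/1 decision at prediction time.
    return (sigmoid(z) >= threshold).astype(int)

print(step_output(np.array([-2.0, 0.1, 3.0])))   # -> [0 1 1]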
H: Predicting car failures with machine learning
I want to start with machine learning on a small prediction problem, but I'm not sure I have chosen the right approach. I want to make a program that takes data on mechanical failures of cars (manufacture time, failure time, reason, and various characteristics of the car). Then I would give it the data of new cars about to be released to the market and try to predict when they would fail.
I have read that the best approach is survival analysis with R, but since I'm not really familiar with this technique, I was wondering if there is any other approach.
AI: I'm also just a beginner in ML (and not familiar with survival analysis in R), but I have tackled a couple of ML projects. Based on my knowledge, you could use supervised learning.
Store data, preferably in CSV format, (one column about the duration between buying the car and the car's mechanical breakdown), and the rest about the car's data/characteristics.
Next, you can run a neural network through your data, and use your NN's library's predict() method to predict the duration before breakdown based on your data.
You could then theoretically (assuming that there is a logical correlation between the data) see which characteristics are most prone to make a car break down.
As for implementing your program, I use Python with the Keras library, which is simple enough for any programmer to use, but there exist many other great ML libraries, notably TensorFlow.
Do note that I am also just a beginner, and that my approach might be erroneous, yet I do wish you good luck on your future ML projects! |
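As a rough illustration of that approach, here is a minimal Keras regression sketch (the file name, column names and layer sizes are placeholder assumptions; real data would also need scaling and a proper train/test split):
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense

# Hypothetical CSV: one row per car, numeric characteristics plus
# 'time_to_failure' (e.g. days between manufacture and first failure).
df = pd.read_csv('car_failures.csv')
X = df.drop(columns=['time_to_failure']).values
y = df['time_to_failure'].values

model = Sequential()
model.add(Dense(32, input_dim=X.shape[1], activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(1))                       # single regression output
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X, y, epochs=50, batch_size=32, verbose=0)

predictions = model.predict(X)            # or the characteristics of new cars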
H: Count the frequency of words in a cell of a column in a series
I want to calculate the frequency of the words in obama['text'] (obama is the variable where I have stored this series) as a dictionary and store it in another column. Without using the Counter library, how do I do that? The data is in this format:
URI | name | text
<http://dbpedia.org/resource/Barack_Obama> Barack Obama barack hussein obama ii brk husen bm born august 4 1961 is the 44th and current president of the united states and the first african american to hold the office born in honolulu hawaii obama is a graduate of columbia university and harvard law school where he served as president of the harvard law review he was a community organizer in chicago before earning his law degree he worked as a civil rights attorney and taught constitutional law at the university of chicago law school from 1992 to 2004 he served three terms representing the 13th district in the illinois senate from 1997 to 2004 running unsuccessfully for the united states house of representatives in 2000in 2004 obama received national attention during his campaign to represent illinois in the united states senate with his victory in the march democratic party primary his keynote address at the democratic national convention in july and his election to the senate in november he began his presidential campaign in 2007 and after a close primary campaign against hillary rodham clinton in 2008 he won sufficient delegates in the democratic party primaries to receive the presidential nomination he then defeated republican nominee john mccain in the general election and was inaugurated as president on january 20 2009 nine months after his election obama was named the 2009 nobel peace prize laureateduring his first two years in office obama signed into law economic stimulus legislation in response to the great recession in the form of the american recovery and reinvestment act of 2009 and the tax relief
The output should go in a new column obama['word count'], in a format like:
{2009: 4, the: 40, chicago: 10, ...}
AI: Why shouldn't you use the Counter class? It's exactly what you need.
from pandas import Series
from collections import Counter
text="barack hussein obama ii brk husen bm born august 4 1961 is the 44th and current president of the united states and the first african american to hold the office born in honolulu hawaii obama is a graduate of columbia university and harvard law school where he served as president of the harvard law review he was a community organizer in chicago before earning his law degree he worked as a civil rights attorney and taught constitutional law at the university of chicago law school from 1992 to 2004 he served three terms representing the 13th district in the illinois senate from 1997 to 2004 running unsuccessfully for the united states house of representatives in 2000in 2004 obama received national attention during his campaign to represent illinois in the united states senate with his victory in the march democratic party primary his keynote address at the democratic national convention in july and his election to the senate in november he began his presidential campaign in 2007 and after a close primary campaign against hillary rodham clinton in 2008 he won sufficient delegates in the democratic party primaries to receive the presidential nomination he then defeated republican nominee john mccain in the general election and was inaugurated as president on january 20 2009 nine months after his election obama was named the 2009 nobel peace prize laureate during his first two years in office obama signed into law economic stimulus legislation in response to the great recession in the form of the american recovery and reinvestment act of 2009 and the tax relief"
df = Series(text).to_frame()
newdf = df.assign(word_count=lambda x: x[0].str.split(' ').apply(Counter))
newdf['word_count']
0 {'44th': 1, 'born': 2, 'november': 1, 'running... |
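Since the question asks how to do it without Counter, here is a plain-dictionary version of the same idea, applied row-wise (assuming the column is called 'text'):
def word_count(text):
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

obama['word count'] = obama['text'].apply(word_count)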
H: h2o, different stopping metric leads to different optimal for hyperparameters
I want to choose the "optimal" hyperparameters for gbm. So I run the following code using the h2o package
# create hyperparameter grid
hyper_params = list(ntrees = c(10,20,50,100,200,500),
max_depth = c(5,10,20,30),
min_rows = c(5,10,20,30),
learn_rate = c(0.01,0.05,0.08,0.1),
balance_classes=c(T,F))
# random subset of hyperparameters
search_criteria = list(strategy = "RandomDiscrete", max_models = 192, seed = 42,
stopping_rounds = 15, stopping_tolerance = 1e-3, stopping_metric = "mean_per_class_error")
# run grid
gbm.grid <- h2o.grid("gbm", grid_id = "gbm.grid", x = names(td.train.h2o)[!names(td.train.h2o)%like%"s_sd_segment"], y = "s_sd_segment",
seed = 42, distribution = "multinomial",
training_frame = td.train.hyper.h2o, nfolds = 3,
hyper_params = hyper_params, search_criteria = search_criteria)
# sort results
gbm.sorted.grid <- h2o.getGrid(grid_id = "gbm.grid", sort_by = "mean_per_class_error", decreasing=FALSE)
hyperparameters_gbm <- as.data.table(gbm.sorted.grid@summary_table)
This gives the following optimal combination of hyperparameters:
learn_rate max_depth min_rows ntrees
0.08 10 5 200
Then I try to do the same but with a different stopping_metric. Above I use mean_per_class_error, and in the following I use logloss, so I run the following code:
# create hyperparameter grid
hyper_params = list(ntrees = c(10,20,50,100,200,500),
max_depth = c(5,10,20,30),
min_rows = c(5,10,20,30),
learn_rate = c(0.01,0.05,0.08,0.1),
balance_classes=c(T,F))
# random subset of hyperparameters
search_criteria = list(strategy = "RandomDiscrete", max_models = 192, seed = 42,
stopping_rounds = 15, stopping_tolerance = 1e-3, stopping_metric = "logloss")
# run grid
gbm.grid <- h2o.grid("gbm", grid_id = "gbm.grid", x = names(td.train.h2o)[!names(td.train.h2o)%like%"s_sd_segment"], y = "s_sd_segment",
seed = 42, distribution = "multinomial",
training_frame = td.train.hyper.h2o, nfolds = 3,
hyper_params = hyper_params, search_criteria = search_criteria)
# sort results
gbm.sorted.grid <- h2o.getGrid(grid_id = "gbm.grid", sort_by = "logloss", decreasing=FALSE)
hyperparameters_gbm <- as.data.table(gbm.sorted.grid@summary_table)
which gives the following optimal combination of hyperparameters:
learn_rate max_depth min_rows ntrees
0.1 20 5 500
I know that I use strategy = "RandomDiscrete" as an argument, but still, for instance, the optimal combination for the GBM using stopping_metric = "mean_per_class_error" is only the 50th-best combination for the GBM using stopping_metric = "logloss", and the 2nd-best combination for the GBM using stopping_metric = "logloss" is only the 14th-best combination for the GBM using stopping_metric = "mean_per_class_error".
Why could that happen?
AI: A few things are going on here.
First, you are using different metrics to determine how well you are doing, so it is not surprising that different metrics find different hyperparameter settings that work better.
Second, some hyperparameters might not matter for the problem you are solving, which means all the signal you are getting from those hyperparameters is noise.
Third, most machine learning algorithms are stochastic - there is randomness involved in training them and sometimes in evaluating them - so even restarting the same grid or random search can lead to different hyperparameters. That said, the probability of that is only high if the real performances are close to each other. |
H: What is the purpose of setting an initial weight on deep learning model?
I'm now learning about deep learning with Keras, and to implement a deep learning model in Keras you specify an initializer that sets its initial weights.
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(12, input_dim=8, kernel_initializer="random_uniform"))
The kernel_initializer can take other values, such as random_normal, which uses a Gaussian rather than a uniform distribution, and zeros, which literally sets all weights to 0.
However, I don't understand why you would want different initializers. Specifically, what advantage does random initialization have over setting all initial weights to 0, which sounds more natural to novices like me?
Also, should the initial weights, if needed, always be set to a tiny value (e.g. 0.05)?
AI: This is greatly addressed in the Stanford CS class CS231n:
Pitfall: all zero initialization. Lets start with what we should not do. Note that we do not know what the final value of every weight should be in the trained network, but with proper data normalization it is reasonable to assume that approximately half of the weights will be positive and half of them will be negative. A reasonable-sounding idea then might be to set all the initial weights to zero, which we expect to be the “best guess” in expectation. This turns out to be a mistake, because if every neuron in the network computes the same output, then they will also all compute the same gradients during backpropagation and undergo the exact same parameter updates. In other words, there is no source of asymmetry between neurons if their weights are initialized to be the same.
There are several weight initialization strategies; each one is best suited for a type of activation function. For instance, Glorot's initialization aims at not saturating sigmoid activations, while He's initialization is meant for Rectified Linear Units (ReLUs). |
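In Keras this just means picking an appropriate initializer string for each layer; a small sketch matching the advice above (Glorot for sigmoid/tanh layers, He for ReLU layers):
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
# He initialization pairs well with ReLU activations.
model.add(Dense(12, input_dim=8, activation='relu',
                kernel_initializer='he_normal'))
# Glorot (Xavier) initialization pairs well with sigmoid/tanh.
model.add(Dense(1, activation='sigmoid',
                kernel_initializer='glorot_uniform'))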
H: Always overestimate
I have a regression use case where I am supposed to estimate a value based on 3-4 features. Using a random forest, I was able to get ~20% error. However, I now have a constraint: I can overestimate but not underestimate. So, even at the cost of a worse overall error, I am allowed to overestimate. What is the right approach to handling this constraint? Is it okay to just go with something like 1.2x the estimate provided by the model?
AI: Use an asymmetric loss function with a cliff at your margin; e.g.,
$$\mathcal{L}(x) \equiv \begin{cases}
x^2 & x > 0\\
c\,x^2 & -m < x < 0 \\
d\,x^{2p} & x < -m
\end{cases}$$
where $c>1$, $p>1$, $d\equiv cm^{2-2p}$. The idea is to encode your constraints in your loss function. |
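A minimal NumPy sketch of this loss (here x is the residual y_pred - y_true, so negative values are underestimates; the values of c, m and p are arbitrary example choices):
import numpy as np

def asymmetric_loss(y_pred, y_true, c=5.0, m=1.0, p=2.0):
    """Penalise underestimates (y_pred < y_true) more than overestimates."""
    x = y_pred - y_true
    d = c * m ** (2 - 2 * p)            # keeps the loss continuous at x = -m
    loss = np.where(x > 0, x ** 2,
                    np.where(x > -m, c * x ** 2, d * np.abs(x) ** (2 * p)))
    return loss.mean()

# Overestimating by 1 costs 1; underestimating by 1 costs c = 5.
print(asymmetric_loss(np.array([3.0]), np.array([2.0])))   # 1.0
print(asymmetric_loss(np.array([1.0]), np.array([2.0])))   # 5.0
Note that scikit-learn's random forest optimises its own split criterion, so the simplest way to use a loss like this is with a model that accepts a custom objective (e.g. gradient boosting), or just to compare candidate models and safety margins with it.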
H: Calculating mean of data frame inside a series object
I have a data frame with following structure:
df.columns
Index(['first_post_date', 'followers_count', 'friends_count', 'last_post_date','min_retweet', 'retweet_count', 'screen_name', 'tweet_count', 'tweet_with_max_retweet', 'tweets', 'uid'],
dtype='object')
Inside the tweets series, each cell is another data frame containing all the tweets of a user.
df.tweets[0].columns
Index(['created_at', 'id', 'retweet_count', 'text'], dtype='object')
I want to perform calculations on the tweets of each user, for example finding the average number of retweets per user, the tweet with the maximum number of retweets, etc.
How can I do these?
Edit
link to sample data
AI: In the absence of a MultiIndex (the Right Way$^\mathrm{TM}$), the apply method can do what you want; e.g.
df.assign(
    max_retweet=df.tweets.apply(lambda x: x.retweet_count.idxmax()),
    avg_retweet=df.tweets.apply(lambda x: x.retweet_count.mean())
)
avg_retweet max_retweet
26662 0.045476 187
32316 0.821538 427
25879 0.633681 583
43411 0.112465 421
28840 0.472222 8
Also, don't share data with pickle; it's dangerous. |
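For the MultiIndex / long-format route alluded to above, you can stack the nested frames into one flat table and aggregate with groupby (a sketch, assuming each nested frame has the columns shown in the question):
import pandas as pd

# Stack every user's tweet DataFrame into one long frame keyed by uid.
tweets_long = pd.concat(
    {uid: tweets for uid, tweets in zip(df.uid, df.tweets)},
    names=['uid', 'tweet_idx'])

per_user = tweets_long.groupby(level='uid').retweet_count.agg(['mean', 'max'])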