H: suggest ingredients based on recipe title I would like to construct a system that suggests ingredients to the user once they input the title of a recipe. I think this is a task for machine learning or AI, but I am pretty new to ML and AI development in general and I feel somewhat lost, since I don't know where or with what to start. So far, I've managed to crawl data from a web page with recipe titles and the ingredients used for those recipes. My thought is that this problem may be solved with a recommender system, but on the other hand classification / clustering algorithms may be used to divide recipes into clusters / categories, and once the input title is associated with a cluster, ingredients may be generated from that cluster. I don't know which is the best solution and I wonder if there may be other ones. Thanks in advance! I'm posting my temporary solution, which may help others solve similar problems; it is pretty basic, but works nicely: import all data into a database (postgres in my case), where the Recipe table has only a name and the Ingredient table has a name and a ForeignKey to recipe; once the user inputs a recipe name (rname), I run a query using postgres' Trigram Similarity module to detect similarity between rname and the recipe table names, then filter out recipes whose similarity is greater than .1 (for example); after that, I get all ingredients linked to the filtered recipes, annotate usage for each individual ingredient (calculate the ingredient usage ratio) and order by usage ratio DESC; finally, I return the 10 most used ingredients. I will keep looking through the proposed solutions and posting updates. AI: If the user inputs a title, then you could construct a system which finds the most similar titles in the corpus and outputs the ingredients of the retrieved recipes. Some ideas below: 1) Represent the titles in a common vector space using a vocabulary-based vectorization and use Jaccard or Cosine similarity to find the most similar titles. examples of similarity measures 2) Convert the words in the title to their respective word embeddings using FastText, GloVe, or Word2Vec (publicly available), and then use cosine distance or word mover's distance to find the most similar titles. example of publicly available GloVe word embeddings 3) Create sentence embeddings for the title by taking the average of the word embedding vectors or using InferSent embeddings, and then use a distance function to find the most similar titles. InferSent repository link spaCy is a good Python package for performing these types of NLP tasks link
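As a minimal, hedged sketch of idea 1 (vocabulary-based vectorization plus cosine similarity): the titles, ingredient lists and the suggest() helper below are made-up placeholders, not part of any existing system.
# Minimal sketch of idea 1: vocabulary-based vectorization + cosine similarity.
# The recipe titles and ingredient lists below are made-up placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

titles = ["chicken noodle soup", "beef noodle soup", "chocolate chip cookies"]
ingredients = [["chicken", "noodles", "broth"],
               ["beef", "noodles", "broth"],
               ["flour", "sugar", "chocolate chips"]]

vectorizer = TfidfVectorizer()
title_vectors = vectorizer.fit_transform(titles)

def suggest(query, top_k=2):
    query_vec = vectorizer.transform([query])
    sims = cosine_similarity(query_vec, title_vectors).ravel()
    best = sims.argsort()[::-1][:top_k]          # indices of the most similar titles
    suggested = [ing for i in best if sims[i] > 0 for ing in ingredients[i]]
    return list(dict.fromkeys(suggested))        # de-duplicate, keep order

print(suggest("spicy chicken soup"))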
H: Dueling DQN what does a' mean? what does $a'$ mean in the "combining" equation in Dueling DQN? (top of page 5) $$Q(s,a; \theta, \alpha, \beta) = V(s; \theta, \beta) + \biggl( A(s, a; \theta, \alpha) - \frac{1}{N}\sum_{a'}^{N}A(s, a'; \theta, \alpha) \biggr)$$ Where there are $N$ actions to choose from; $s$ is the incoming state (the input vector) $a$ is the action taken? (the chosen action) $a'$ I don't know what it represents in this context $\theta$ represents the weights of the convolutional layers $\alpha$ are the weights of the "Advantage stream" which outputs a vector $\beta$ are the weights of the Value stream (which outputs a scalar) Why not simply use $a$ everywhere? Why is $a'$ used in the average? AI: It is just a type of namespacing, because $a$ is already assigned to the chosen action. There are two contexts of action being considered in the equation, so there needs to be a symbol for each context. Using $a'$ is an obvious choice as the letter $a$ is implicitly linked to representing an action already. The sum over $a'$ is a sum over all possible actions in state $s$, irrespective of the chosen action $a$. So both $a$ and $a'$ represent actions. $a$ is the current action, supplied on the LHS of the equation. $a'$ represents the iterator of a sum over all actions $[\forall a' \in \mathcal{A}(s)]$, only used in the calculation on the RHS. Sometimes you will see a completely different letter chosen, or some subscripting or other way to show that these represent different actions. It is also quite common to see $a$ representing the current action, and $a'$ representing the next action (taken in state $s'$). But that is not what is happening here.
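A small numeric sketch of the combining step may make the role of $a'$ concrete; the scalar V, the advantage vector A and the number of actions N below are made-up numbers, not values from the paper.
# Sketch of the combining step, assuming the two streams have already produced
# a scalar V(s) and a length-N advantage vector A(s, .) for one state.
import numpy as np

N = 4                                   # number of actions
V = -1.2                                # output of the value stream (scalar)
A = np.array([0.5, -0.3, 0.1, 0.2])     # output of the advantage stream, one entry per action

# Q(s,a) = V(s) + ( A(s,a) - mean over a' of A(s,a') )
Q = V + (A - A.mean())                  # the mean is the sum over a' divided by N

print(Q)   # one Q-value per action a; the chosen action a just indexes into this vector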
H: Reframing action recognition as a reinforcement learning problem Given the significant advancements in reinforcement learning, I wanted to know whether it is possible to recast problems such as action recognition, object tracking, or image classification into reinforcement learning problems. AI: Given the significant advancements in reinforcement learning Worth noting that many of the recent advancements are due to improvements in neural networks used as function approximators, and to understanding how to integrate them with reinforcement learning (RL) to help solve RL challenges involving vision or other complex non-linear mappings from state to best action. So at least some of the current improvements in RL were due to researchers asking the opposite question: "Given the significant advancements in neural networks . . ." I wanted to know whether it is possible to recast problems such as action recognition, object tracking, or image classification into reinforcement learning problems. Generally, at the top level, the answer is Yes, but it offers no benefit, and could perform a lot worse. That is because in typical supervised learning scenarios there is nothing to match the concept of actions that modify state. In the example classifiers you can have the equivalent of a state (the input to be classified), an action (the choice of category) and the reward (whether or not the choice matches the label). But taking an action does not lead to another state, and rewards are not sparse or cumulative across multiple actions within a specific environment or "episode". There are no time steps. RL algorithms are generic MDP solvers - they can learn about relationships between state, action and likely next state, and optimise long-term goals over many time steps when given a current state. That also makes them less efficient learners when those relationships are not valid or important in the problem you want to solve. If you trained e.g. Q-learning on a typical image classification dataset, and added time steps, it would spend a lot of time/resources establishing that its choice of action had no influence on which images it was subsequently presented with, or on how easy it was to get a reward from later images as opposed to earlier ones depending on how the state varied. If you really did allow the choice of action to determine the next image, then you would be training the RL agent to do something other than classification. You could frame the classifiers as contextual bandits, which is perhaps a closer match. However, that still throws away knowledge that you have about the classification problem, replacing it with a generic reward system. For instance, a contextual bandit solver would deliberately guess wrong classes to check whether sometimes there was a small chance of a high reward for doing that. If you were very careful about how you represented actions and rewards, and set other hyper-parameters, then you might be able to re-create a similar gradient setup to normal supervised learning, and only lose a little bit of efficiency through using an RL or contextual bandit framing of your problem. However, you would still have added some unnecessary complexity. If you search, you may find some ways to combine RL with supervised learning; for instance, in this paper the authors propose using RL to refine a generative RNN. However, these currently seem niche and are not aimed at improving upon or replacing supervised learning. 
Finally, in theory you could allow RL to control a video camera pan/zoom as part of activity recognition (or any other video or multi-image classification task). That would be a full RL problem, because the agent's actions really would be influencing the later states and hopefully improving the accuracy of recognition. For learning efficiency, you would likely want to combine this initially with a network that had already been trained to recognise actions on a supervised data set. You would need to experiment with how much the recognition part was trained compared to the RL part (as it will start to collect data outside of your normal datasets). And of course setup and training of the combined system could be a major project. You may be able to simulate it in a game engine perhaps in the early stages.
H: R: for each row in DF1 subset/count corresponding rows in DF2 How can I subset/count rows in one data frame that correspond to rows in another data frame? I have a data frame DF1 with dates, categories and time instances for each of the date and category combinations. For example: DF1<-data.frame("DATE"=c(as.Date("2018-12-05"),as.Date("2018-12-06"),as.Date("2018-12-07")), "CATEGORY"=c("cat1","cat2","cat3"), "TIME"=c(as.POSIXct("2018-12-05 10:05"),as.POSIXct("2018-12-06 10:20"),as.POSIXct("2018-12-07 10:40"))) that is DATE CATEGORY TIME 1 2018-12-05 cat1 2018-12-05 10:05:00 2 2018-12-06 cat2 2018-12-06 10:20:00 3 2018-12-07 cat3 2018-12-07 10:40:00 Then I have another data frame DF2 with objects, categories and a time interval. For example: DF2<-data.frame("OBJECT_ID"=1:9, "CATEGORY"=c("cat1","cat2","cat3","cat1","cat3","cat2","cat1","cat2","cat3"), "START"=c(as.POSIXct("2018-12-05 09:00"),as.POSIXct("2018-12-06 10:00"),as.POSIXct("2018-12-07 10:00"), as.POSIXct("2018-12-05 09:30"),as.POSIXct("2018-12-06 08:30"),as.POSIXct("2018-12-07 10:30"), as.POSIXct("2018-12-05 08:30"),as.POSIXct("2018-12-06 08:30"),as.POSIXct("2018-12-07 08:30")), "END"=c(as.POSIXct("2018-12-05 10:00"),as.POSIXct("2018-12-06 11:00"),as.POSIXct("2018-12-07 10:00"), as.POSIXct("2018-12-05 11:30"),as.POSIXct("2018-12-06 10:30"),as.POSIXct("2018-12-07 10:30"), as.POSIXct("2018-12-05 11:30"),as.POSIXct("2018-12-06 12:30"),as.POSIXct("2018-12-07 13:30")) ) that is OBJECT_ID CATEGORY START END 1 1 cat1 2018-12-05 09:00:00 2018-12-05 10:00:00 2 2 cat2 2018-12-06 10:00:00 2018-12-06 11:00:00 3 3 cat3 2018-12-07 10:00:00 2018-12-07 10:00:00 4 4 cat1 2018-12-05 09:30:00 2018-12-05 11:30:00 5 5 cat3 2018-12-06 08:30:00 2018-12-06 10:30:00 6 6 cat2 2018-12-07 10:30:00 2018-12-07 10:30:00 7 7 cat1 2018-12-05 08:30:00 2018-12-05 11:30:00 8 8 cat2 2018-12-06 08:30:00 2018-12-06 12:30:00 9 9 cat3 2018-12-07 08:30:00 2018-12-07 13:30:00 I need to count the number of rows in DF2 that contain the time instance given in DF1 for each date and category. Meaning: DATE CATEGORY TIME NO_OF_OBJECTS 1 2018-12-05 cat1 2018-12-05 10:05:00 2 2 2018-12-06 cat2 2018-12-06 10:20:00 2 3 2018-12-07 cat3 2018-12-07 10:40:00 1 I have a feeling that the apply family should be able to do something here, but I cannot quite grasp how this could be achieved. AI: Here is a solution using data.table package. DF3 <- data.table(merge(DF1, DF2, by = "CATEGORY")) # merge tables by category DF3$TIME_OK <- (DF3$TIME >= DF3$START & DF3$TIME <= DF3$END) # create a column to check time DF3 <- DF3[DF3$TIME_OK == T,] # subset table DF3 <- DF3[,.(NO_OF_OBJECT = .N), by = c("CATEGORY", "DATE", "TIME")] # aggregate results Basically, it merges both data frames and subsets only relevant time period before aggregating results.
H: How to make a decision tree with both continuous and categorical variables in the dataset? Let's say I have 3 categorical and 2 continuous attributes in a dataset. How do I build a decision tree using these 5 variables? Edit: For categorical variables, it is easy to say that we will split them just by {yes/no} and calculate the total gini gain, but my doubt tends to be primarily with the continuous attributes. Let's say I have values for a continuous attribute like {1,2,3,4,5}. What will be my split point choices? Will they be checked at every data point like {<1,>=1......& so on till} or will the splitting point be something like the mean of the column? AI: Decision trees can handle both categorical and numerical variables at the same time as features, there is not any problem in doing that. Theory Every split in a decision tree is based on a feature. If the feature is categorical, the split is done with the elements belonging to a particular class. If the feature is continuous, the split is done with the elements higher than a threshold. At every split, the decision tree will take the best variable at that moment. This will be done according to an impurity measure with the splitted branches. And the fact that the variable used to do the split is categorical or continuous is irrelevant (in fact, decision trees categorize continuous variables by creating binary regions with the threshold). In practice the candidate thresholds are taken from the observed values of the feature (typically the midpoints between consecutive sorted values), and the candidate with the best impurity gain is chosen - not the mean of the column. Implementation Although, at a theoretical level, it is very natural for a decision tree to handle categorical variables, most of the implementations don't do it and only accept continuous variables: This answer reflects on decision trees in scikit-learn not handling categorical variables. However, one of the scikit-learn developers argues that At the moment it cannot. However RF tends to be very robust to categorical features abusively encoded as integer features in practice. This other post comments about xgboost not handling categorical variables. rpart in R can handle categories passed as factors, as explained in here Lightgbm and catboost can handle categories. Catboost does an "on the fly" target encoding, while lightgbm needs you to encode the categorical variable using ordinal encoding. Here's an example of how lightgbm handles categories: import pandas as pd from sklearn.datasets import load_iris from lightgbm import LGBMRegressor from category_encoders import OrdinalEncoder X = load_iris()['data'] y = load_iris()['target'] X = OrdinalEncoder(cols=[3]).fit_transform(X) dt = LGBMRegressor() dt.fit(X, y, categorical_feature=[3])
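To make the threshold-selection point concrete, here is a small sketch (not any library's actual implementation) that scores the candidate midpoints for the {1,2,3,4,5} example, with made-up class labels, by weighted Gini impurity.
# Sketch of how a split threshold can be chosen for one continuous feature:
# candidate thresholds are the midpoints between consecutive sorted values,
# and each candidate is scored by the weighted Gini impurity of the two branches.
# The feature values and labels below are made up for illustration.
import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([0, 0, 1, 1, 1])

def gini(labels):
    if len(labels) == 0:
        return 0.0
    p = np.bincount(labels) / len(labels)
    return 1.0 - np.sum(p ** 2)

values = np.unique(x)
candidates = (values[:-1] + values[1:]) / 2.0     # midpoints: 1.5, 2.5, 3.5, 4.5

scores = {}
for t in candidates:
    left, right = y[x <= t], y[x > t]
    scores[t] = (len(left) * gini(left) + len(right) * gini(right)) / len(y)

best = min(scores, key=scores.get)
print(scores)     # weighted impurity per candidate threshold
print(best)       # 2.5 separates the two classes perfectly here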
H: Keep track of trainings, datasets etc After searching for quite some time on Google I could not find sufficient software or a toolbox that can manage trainings of neural networks. I thought of a program that combines visualization techniques without the need to write code, as well as having the possibility to compare several trainings of neural networks and to store them easily. Does a program like this exist? Regards Lukas AI: It does, it's an API called Keras, written on top of TensorFlow. With that you can rapidly prototype any "standard" NN architecture, such as a feedforward NN, RNN, or CNN. With it you can export the trained model with one line of code and save it for later use. As for visualization, Keras offers integration with pydot (GraphViz) to plot the whole NN as a graph for you to inspect.
H: Python: Detect if data of a time series stays constant, increases or decreases I need to analyse, and later try to improve (integrate a filter for), measurement data that I compare to accurate reference data with Python. First I want to calculate the mean offset and the standard deviation of the measurement data overall, and during the halt, increasing and decreasing states. How can I automatically detect and mark sections in which the reference system data stays constant, increases or decreases, and use this information later for the statistical analysis? I already tried to write a simple algorithm myself, but the problem is that although the reference data is precise enough that you can calculate the difference of two neighbouring data points and it is zero if no change takes place, this case is also in the majority during sections of change (increase, decrease). So the algorithm can't really mark the state of the data points reliably. I already researched a little bit and here are some keywords that seemed interesting for my problem: time series analysis, anomaly detection, novelty detection, change point detection, structural changes, One-Class Support Vector Machines. So my question would be if somebody could give me a pointer in the right direction (correct method, python package, tutorial, example) so I can solve this problem myself. AI: First I want to calculate the mean offset and the standard deviation of the measurement data overall, and during the halt, increasing and decreasing states. I would recommend checking windows of your dataset (let's say 10 samples in a row). A very convenient method to do that is pandas.DataFrame.rolling(), please take a look at it :)
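As a minimal sketch of that idea: classify each point by the mean difference over a rolling window instead of by single-point differences, so isolated zero differences inside a ramp do not flip the label. The window size, the tolerance and the example series below are assumptions you would tune for your data.
# Minimal sketch: label each point as constant / increasing / decreasing by looking
# at the mean difference over a rolling window rather than single-point differences.
# The window size and the tolerance are assumptions you would tune for your data.
import numpy as np
import pandas as pd

ref = pd.Series([0, 0, 0, 1, 2, 3, 4, 4, 4, 3, 2, 1, 1, 1], dtype=float)

window = 3          # number of samples to smooth over
tol = 0.1           # mean slope below this magnitude counts as "constant"

slope = ref.diff().rolling(window, min_periods=1).mean()

state = pd.Series(np.select([slope > tol, slope < -tol],
                            ["increasing", "decreasing"],
                            default="constant"),
                  index=ref.index)

print(pd.concat([ref, slope, state], axis=1, keys=["ref", "slope", "state"]))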
H: Stata-style replace in Python In Stata, I can perform a conditional replace using the following code: replace target_var = new_value if condition_var1 == x & condition_var2 == y What's the most pythonic way to reproduce the above on a pandas dataframe? Bonus points if I can throw the new values and conditions into a dictionary to loop over. To add a bit more context, I'm trying to clean some geographic data, so I'll have a lot of lines like replace county_name = new_name_1 if district == X_1 and city == Y_1 .... replace county_name = new_name_N if district == X_N and city == Y_N What I've found so far: pd.replace, which lets me do stuff like the following, but doesn't seem to accept logical conditions: replacements = { 1: 'Male', 2: 'Female', 0: 'Not Recorded' } df['sex'].replace(replacements, inplace=True) AI: The most direct equivalent is boolean indexing with .loc: df.loc[condition, 'target_var'] = new_value, where condition is a boolean Series/Numpy array such as (df['condition_var1'] == x) & (df['condition_var2'] == y). You can also use df['target_var'] = df['target_var'].where(~condition, new_value), but note that where() keeps the values where its condition is True and replaces them where it is False, so the Stata-style condition has to be negated. Check out the where documentation - here is an example.
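For the bonus question, a dictionary-driven version might look like the sketch below; the district/city/county_name columns and values are the hypothetical ones from the question.
# Sketch of a dictionary-driven conditional replace, using the hypothetical
# district/city/county_name columns from the question.
import pandas as pd

df = pd.DataFrame({"district": ["X_1", "X_1", "X_2"],
                   "city":     ["Y_1", "Y_2", "Y_2"],
                   "county_name": ["old_a", "old_b", "old_c"]})

# (district, city) -> new county name
replacements = {("X_1", "Y_1"): "new_name_1",
                ("X_2", "Y_2"): "new_name_2"}

for (district, city), new_name in replacements.items():
    mask = (df["district"] == district) & (df["city"] == city)
    df.loc[mask, "county_name"] = new_name

print(df)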
H: What is the use of torch.no_grad in pytorch? I am new to pytorch and started with this github code. I do not understand the comment in lines 60-61 in the code "because weights have requires_grad=True, but we don't need to track this in autograd". I understood that we set requires_grad=True on the variables for which we need to calculate the gradients using autograd, but what does it mean to be "tracked by autograd"? AI: The wrapper with torch.no_grad() temporarily sets all of the requires_grad flags to false. An example is from the official PyTorch tutorial. x = torch.randn(3, requires_grad=True) print(x.requires_grad) print((x ** 2).requires_grad) with torch.no_grad(): print((x ** 2).requires_grad) Output: True True False I recommend you read all the tutorials from the link above. In your example: I guess the author does not want PyTorch to calculate the gradients of the newly defined variables w1 and w2 since he just wants to update their values.
H: GAN vs DCGAN difference I am trying to understand the key difference between GAN and DCGAN. I know that DCGAN uses a convolutional network. But: What data is better to push into a GAN and what data fits better to a DCGAN? Does DCGAN work better with small data dimensions? AI: A Generative Adversarial Network (GAN) takes the idea of using a generator model to generate fake examples and a discriminator model that tries to decide if the image it receives is a fake (i.e. from the generator) or a real sample. This was originally shown with relatively simple fully connected networks. A Deep Convolutional GAN (DCGAN) does something very similar, but specifically focusses on using Deep Convolutional networks in place of those fully-connected networks. Conv nets in general find areas of correlation within an image, that is, they look for spatial correlations. This means a DCGAN would likely be more fitting for image/video data, whereas the general idea of a GAN can be applied to wider domains, as the model specifics are left open to be addressed by individual model architectures. The linked paper that proposed DCGANs specifically raises the topic of unsupervised learning, and essentially wanted to marry the (at the time) recent success of conv nets with the new idea of GANs. I also couldn't find any direct comparisons of when to use them, but there are plenty of articles that explain both models. This is a good place to start - after reading that, you could probably decide for yourself. Regarding dimensions - I don't think the dimensions of your data would dictate which of the two variants to go for, other than of course influencing things that we always have to consider, such as training time, model complexity, capacity to learn and so on.
H: RandomForest - Reasons for memory usage / consumption? Which factors influence the memory consumption? Is it the number of trees (n_estimators) or rather the number of data records in the training data, or something else? AI: Both the amount of data and the number of trees in your forest will take up memory storage. For the amount of data, it is obvious why it takes up storage - it's data, so the more of it you have, the more space it takes up in memory. If your dataset is too large you may not even be able to read it all into memory - it may need to stay on the disk for training (I don't think scikit supports this). For the number of trees, each tree that is grown is trained on a subset of random features (so that the forest can avoid variance and avoid all the trees growing the same way). Once the forest is trained, the model object will have information on each individual tree - its decision path, split nodes, thresholds etc. This is information, and the more information you have, the more storage it will need. Therefore more trees = more information = more memory usage. There are other factors that determine how much memory the model object (the trained forest) will take up; for example, max_depth which sets the maximum number of layers / levels any tree can grow to. If this is large, trees can grow deeper and therefore the model object will be bigger and will need more storage. This shouldn't be too large though, to avoid overfitting.
H: Does hierarchical agglomerative clustering with centroid-linkage suffer from chain-effect? It is known that the results of hierarchical agglomerative clustering using the single-link method to determine the inter-cluster distance suffer from the chain-effect (natural clusters tend to extend through a line of few points, like in the image below). Does centroid-linkage have the same disadvantage? AI: The potential worst cases of centroid linkage are probably too crazy to explain as easily as the single-link effect... To see how it reacts to the classic single-link problem, why don't you just try it yourself? Roughly, because of the way centroid linkage works, it may end up using a virtual cluster center that is outside of the actual cluster. You may then see some very weird links happen. That is also why it can have non-monotone linkage levels (so later merges may be cheaper than earlier merges).
H: Automated Function (python) I am trying to create a function that automates the process of taking a CSV file, splits in the data in features and responses, apply different models (regression) to the data and score them according to some metric such as MAE, RSME, etc. Model parameter should be easily interchangeable. Here is a little of what I have so far. import numpy as np import pandas as pd def get_data(file_name): data = pd.read_csv(file_name) para = int(raw_input('Enter the no of parameters to be used ')) print(para) param= [] for k in range(0,para-1): param[k]= raw_input('Enter the parameter') rec = int(raw_input('Enter the no of records in the dataset ')) print(rec) x_parameter = [] y_parameter = [] x1= [] for i in range(0,para): for x1[i] in data[i]: x_parameter[i].append(x1[i]) for j in range(0,rec): print x_parameter[j] print y_parameter[j] get_data('C:\Users\Douglas\Desktop\trainingset.csv') from sklearn import model_selection from sklearn import linear_model from sklearn import SGDRegressor from sklearn.tree import DecisionTreeRegressor from sklearn.neighbors import KNeighborsRegressors from sklearn.ensemble import GradientBoostingRegressor from sklearn.svm import SVR dataframe = getdata(file_name) array = dataframe.values X = array[] #confirguartion for cross validation seed = 7 #prepare models models = [] models.append(('LR', LinearRegression())) models.append(('SGDR', SGDRegressor())) models.append(('KNR', KNeighborsRegressor())) models.append(('DTR', DecisionTreeRegressor())) models.append(('GBR', GradientBoostingRegressor())) models.append(('SVR', SVR())) #evaluate each model in turn results = [] names = [] scoring = 'neg_mean_absolute_error' for name, model in models: kfold = model_selection.KFold(n_splits=10, random_state=seed) cv_results = model_selection.cross_val_score(model, X, Y, cv=kfold, scoring=sc) results.append(cv_results) names.append(name) msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std()) print msg AI: Your code looks decent, although this is not really the place to ask these types of questions. You're looking to write a wrapper around SK-Learn, What I would try to do is to write a method to handle different types of datasets(json or csv) and then call it on a filepath and then have a function which reads in a set of models and applies each of them to the dataset in question, and finally uses metrics present in sklearn to get some results. There is auto-sklearn which does this for you and they have a pretty solid codebase that is well documented. Here is a link for your convenience (https://automl.github.io/auto-sklearn/stable/)
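As a rough sketch of what such a wrapper could look like (assuming the target is the last column of the CSV, that the remaining columns are numeric, and with 'data.csv' as a placeholder path):
# Compact sketch of the wrapper idea: read a CSV, assume the last column is the
# target and the rest are numeric features (both assumptions), then score several
# regressors with cross-validated MAE. 'data.csv' is a placeholder path.
import pandas as pd
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LinearRegression, SGDRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR

def evaluate_models(file_name, scoring="neg_mean_absolute_error"):
    data = pd.read_csv(file_name)
    X, y = data.iloc[:, :-1].values, data.iloc[:, -1].values

    models = [("LR", LinearRegression()),
              ("SGDR", SGDRegressor()),
              ("KNR", KNeighborsRegressor()),
              ("DTR", DecisionTreeRegressor()),
              ("GBR", GradientBoostingRegressor()),
              ("SVR", SVR())]

    kfold = KFold(n_splits=10, shuffle=True, random_state=7)
    for name, model in models:
        scores = cross_val_score(model, X, y, cv=kfold, scoring=scoring)
        print("%s: %.4f (%.4f)" % (name, scores.mean(), scores.std()))

evaluate_models("data.csv")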
H: Interactive dashboard for time series data I'm searching for some tools that allow me to build a dashboard in order to visualize information about time series. This dashboard needs to be interactive and allow integration into a web site (like a web application). I made some projects with D3.js, but for this one I would prefer something faster in terms of implementation (python+matplotlib like). What kind of options do I have? AI: D3 (Data Driven Documents) is a great and powerful tool, but does require a certain affinity for JavaScript. There are a few tools that piggy-back off it, though, and are friendlier to the aspiring data scientist; plotly is behind a few of the others. All are easier to use compared to D3 in my opinion. Check out a few of the following: Bokeh (python) Dash (python - based on plotly, looks very promising!) Plotly (many languages) Shiny (R) Here is a comparison of Bokeh and Dash. Here is a related question on StackOverflow.
H: How does Google get the 'use over time for' data from the pre-internet years? Recently I have been reading a book and found lots of words I have rarely seen. While I was searching for their meanings, I found that for a single word Google shows a "use over time for:" statistic -- from 1800 to 2010. I can imagine how Google could gather the frequency with which a word is used on the Internet. But what about before the Internet? How do they know the frequency with which people in 1800 used a word? Did they first convert the paper material into digital data and then compute the result, or something else? AI: I am not sure if you are familiar with this, but in linguistics there is a term "corpus". It is a large collection of texts used for statistical analysis and other linguistic research, both historical and contemporary. There are corpora of digitalized texts, both literature and other domains such as newspapers, bulletins, etc. I reckon Google uses data from such corpora. For example: https://www.sketchengine.eu for English. Here's a somewhat robust list of corpora: https://en.wikipedia.org/wiki/List_of_text_corpora
H: How can I apply PCA to KNN? I want to know the method itself; I do not want to know how to use a library. I will denote an $n\times p$ data matrix by $X$, where $n<p$. That is, each row of $X$ is one sample with $p$ feature variables. By using the singular value decomposition method, I can decompose $X$ into $A$, $B$, and $C$ such that $X=ABC$, where $A$ is an $n\times n$ matrix satisfying $A^TA=I_{[n\times n]}$, $B$ is an $n\times p$ matrix whose diagonal elements are all non-negative and real but whose non-diagonal elements are all zeros, and $C$ is a $p\times p$ matrix such that $C^TC = I_{[p\times p]}$. Without loss of generality, I assume that $B$ is obtained such that its diagonal elements are in descending order, i.e., $b_{i,i} > b_{j,j}$ for $i<j$. The score matrix, $S$, is obtained by $S=XC$, whose dimension is $n\times p$. In order to reduce the dimension of the variables, we do as follows: Denote the $i$-th column of $C$ by $c_i$, which is the loading vector with respect to the $i$-th principal component. Hence, deleting $c_{k+1}$ to $c_{p}$, I can obtain $C_k = [c_1~c_2~\cdots~c_k]$. The dimension-reduced score matrix, $S_k$, is obtained by $S_k=XC_k$. In this situation, how can I run KNN? Actually, I already made my own KNN code, whose input is trainX, trainY, and testX, and whose output is evaluatedY. I want to compare (i) basic KNN and (ii) dimension-reduction-applied KNN. I obtained the error rate of (i) basic KNN using the 5-fold cross validation method. yhat = myKNN(trainX, trainY, testX); errRate = myErr(y, yhat); To obtain the error rate of (ii) dimension-reduction-applied KNN, is the following correct? trainS= myDimensionReduction(trainX, k) testS= myDimensionReduction(testX, k) yhat = myKNN(trainS, trainY, testS); errRate = myErr(y, yhat); where myDimensionReduction is a function that converts a data matrix $(n\times p)$ into a reduced data matrix $(n\times k)$. AI: Everything looks good. However, what I doubt about is the myDimensionReduction function. When you apply the PCA on the test data, you have to multiply $X_{test}$ by the matrix $C$ that has been generated from the training data, not by a matrix generated from the test data. This is to stress that the test data should not be used in any step of the training process. If this is what the myDimensionReduction function does, that is alright. Otherwise, if this function creates the matrix $C$ from its own input, then it is not right. I know you didn't ask for the library, but I think it makes sense here: The whole philosophy of scikit-learn is to create objects that correspond to algorithms, use the training data with the fit method and apply the transform method to the test data.
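A small numpy sketch of that point: $C$ is computed from the SVD of the training data only, and the same $C_k$ is reused to project the test data. Since myKNN is your own function, a scikit-learn KNN and random data stand in for it here.
# Numpy sketch of the key point: C comes from the SVD of the *training* data only,
# and the same C_k is reused to project the test data. A scikit-learn KNN stands in
# for the asker's own myKNN here.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
trainX, testX = rng.normal(size=(50, 20)), rng.normal(size=(10, 20))
trainY = rng.integers(0, 2, size=50)
k = 5                                         # number of principal components kept

mean = trainX.mean(axis=0)                    # centring is also learned on the training set
_, _, Ct = np.linalg.svd(trainX - mean, full_matrices=False)
Ck = Ct[:k].T                                 # p x k loading matrix C_k

trainS = (trainX - mean) @ Ck                 # n_train x k scores
testS = (testX - mean) @ Ck                   # n_test x k scores, using the training C_k

knn = KNeighborsClassifier(n_neighbors=3).fit(trainS, trainY)
yhat = knn.predict(testS)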
H: removing special character from CSV file I read my csv file as a pandas dataframe. Originally it's a dict with multiple entries per key. It looks like this after reading it as a pandas dataframe: aad,"[1,4,77,4,0,0,0,0,3]" bchfg,"[4,1,7,8,0,0,0,1,0]" cad,"[1,2,7,6,0,0,0,0,3,]" mcfg,"[0,1,0,0,0,5,0,1,1]" So I want to first remove the double-quote " symbols from the file and then create a new csv file from the previous one with three consecutive entries in each row. aad,1,4,77 aad,4,0,0 aad,0,0,3 bchfg,4,1,7 bchfg,8,0,0 bchfg,0,1,1 cad,1,2,7 cad,6,0,0 cad,0,0,3 mcfg,0,1,0 mcfg,0,0,5 mcfg,0,1,1 AI: from pandas import read_csv, concat, Series from ast import literal_eval df = read_csv('file.csv', header=None, names=['name','value']) split = df.value.apply(literal_eval).apply(Series).set_index(df.name) part1 = split.iloc[:, 0:3] part2 = split.iloc[:, 3:6] part3 = split.iloc[:, 6:9] part1.columns = part2.columns = part3.columns = range(3) stacked = concat([part1, part2, part3]) Note that this yields a different order than what you requested: aad 1 4 77 bchfg 4 1 7 cad 1 2 7 mcfg 0 1 0 aad 4 0 0 bchfg 8 0 0 cad 6 0 0 mcfg 0 0 5 aad 0 0 3 bchfg 0 1 0 cad 0 0 3 mcfg 0 1 1
H: Using Policy Iteration on an automaton I've read many explanation on how do to policy iteration, but I can't find an example, so I'm stuck right now trying to figure out to Policy Iteration. The numbers next to each state show the reward received for arriving in that state. For example, if the agent started in $S_0$ and took the $Blue$ action, and ended up in state $S_1$, the immediate reward would be $-5$. The discount value is 0.1, and the initial policy is $\pi(S_0)=Blue$ and $\pi(S_1)=Red$ $S_2 $ state is the terminal state - game over. The two possible actions are Blue and Red as can be seen on the image. I just need something to help me start, because no explanation really made me understand how do I start the policy iteration till convergence. AI: Policy Iteration is essentially a two step process: Evaluate the current policy by calculating $v(s)$ for every non-terminal state. Internally this requires multiple loops over all the states until the value calculations are accurate enough Improve the policy by choosing the best action $\pi(s)$ for every non-terminal state The overall process terminates when it is not possible to improve the policy in the second part. When that happens, then - from the Bellman equations for the optimal value function - the value function and policy must be optimal. In detail for your example, it might work like this: Initialise our value function for $[S_0, S_1, S_2]$ as $[0,0,0]$ - note that by definition for the terminal state $v(S_2)=0$ so we never need to recalculate that (this is also consistent with the diagram, which shows the "absorbing" actions) so we could actually calculate it if we want, but it's a waste of time. Choose a value for accuracy limit $\theta$. To avoid going on forever in the example, let's pick a high value, say $0.01$. However, if this was computer code, I might set it lower e.g. $10^{-6}$ Set our discount factor, $\gamma = 0.1$. Note this is quite low, heavily emphasising immediate rewards over longer-term ones. Set our current policy $\pi = [B, R]$. We're ready to start. First we have to do policy evaluation, which is multiple passes through all states, updating $v(s) = \sum_{s',r} p(s',r|s,\pi(s))(r + \gamma v(s'))$. There are other ways of writing this formula, all basically equivalent, the one here is from Sutton & Barto 2nd edition. We also can choose between a "pure" iteration calculating $v_{k+1}(s)$ from $v_{k}(s)$, but I will do the update in place to the same $v$ array, because that is easier and usually faster too. Note each sum here is over the two possible $(s', r)$ results from each action. Pass 1 in detail, iterating though states $[S_0, S_1]$: Set max delta ($\Delta = 0$) to track our accuracy $v(S_0) = \sum_{s',r} p(s',r|S_0,B)(r + \gamma v(s')) = 0.5(-5 + 0.1 v(S_1)) + 0.5 (-2 + 0.1 v(S_0)) = -3.5$ This sets max delta to 3.5 $v(S_1) = \sum_{s',r} p(s',r|S_1,R)(r + \gamma v(s')) = 0.6(-5 + 0.1 v(S_1)) + 0.4 (0 + 0.1 v(S_2)) = -3$ Max delta is still 3.5 The $v$ table is now $[-3.5, -3, 0]$ Pass 2, because $\Delta \gt 0.01$: Reset $\Delta = 0$ $v(S_0) = 0.5(-5 + 0.1 v(S_1)) + 0.5 (-2 + 0.1 v(S_0)) = -3.825$ $\Delta = 0.325$ $v(S_1) = 0.6(-5 + 0.1 v(S_1)) + 0.4 (0 + 0.1 v(S_2)) = -3.180$ $\Delta = 0.325 \gt 0.01$ The $v$ table is now $[-3.825, -3.180, 0]$ Pass 3, $v = [-3.850, -3.191, 0]$ Pass 4, $v = [-3.852, -3.191, 0]$, this is small enough change that we'll say evaluation has converged. Now we need to check whether the policy should be changed. 
For each non-terminal state we need to work through all possible actions and find the action with the highest expected return, $\pi(s) = \text{argmax}_a \sum_{s',r} p(s',r|s,a)(r + \gamma v(s'))$ For $S_0$: Action B scores $0.5(-5 + 0.1 v(S_1)) + 0.5 (-2 + 0.1 v(S_0)) = -3.852$ (technically, we already know that because it is the current policy's action) Action R scores $0.9(-2 + 0.1 v(S_0)) + 0.1 (0 + 0.1 v(S_2)) = -2.147$ R is the best action here, so change $\pi(S_0) = R$ For $S_1$: Action B scores $0.8(-2 + 0.1 v(S_0)) + 0.2 (-5 + 0.1 v(S_1)) = -2.971$ Action R scores -3.191 B is the best action here, so change $\pi(S_1) = B$ Our policy has changed, so we go around both major stages again, starting with policy evaluation and the current $v = [-3.852, -3.191, 0]$, using the new policy $[R, B]$ I leave it up to you to do the second pass through evaluation and improvement. I suspect it will show that the new policy is optimal, and the policy will remain $[R, B]$ (although $[R,R]$ seems possible without doing the calculation). If the policy stays the same on a complete pass through evaluation and improvement, then you are done.
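If it helps to check the arithmetic, here is a short script that reproduces the calculation above; the (probability, next state, reward) tuples are read off the diagram and the worked numbers, so treat them as the assumed dynamics.
# Short script reproducing the hand calculation above. The transition tuples
# (probability, next state, reward) are read off the diagram / the worked numbers.
gamma, theta = 0.1, 0.01
P = {  # P[state][action] = list of (prob, next_state, reward)
    0: {"B": [(0.5, 1, -5), (0.5, 0, -2)], "R": [(0.9, 0, -2), (0.1, 2, 0)]},
    1: {"R": [(0.6, 1, -5), (0.4, 2, 0)],  "B": [(0.8, 0, -2), (0.2, 1, -5)]},
}
v = [0.0, 0.0, 0.0]                 # v(S2) stays 0 (terminal)
policy = {0: "B", 1: "R"}

def q(s, a):
    return sum(p * (r + gamma * v[s2]) for p, s2, r in P[s][a])

stable = False
while not stable:
    # policy evaluation (in-place updates, as in the worked example)
    while True:
        delta = 0.0
        for s in (0, 1):
            old = v[s]
            v[s] = q(s, policy[s])
            delta = max(delta, abs(old - v[s]))
        if delta < theta:
            break
    # policy improvement
    stable = True
    for s in (0, 1):
        best = max(P[s], key=lambda a: q(s, a))
        if best != policy[s]:
            policy[s], stable = best, False

print(policy, [round(x, 3) for x in v])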
H: How to make sense of confusion matrix Consider a binary classification problem with the 0 label denoting normal and 1 denoting abnormal or rare. The number of instances with class 0 is larger than with class 1. In general, 1) Does 0 always refer to a positive or a negative, or does it depend on what we define as positive and negative? What if the labels are reversed? 2) Is there a particular order in which the confusion matrix is displayed? If the confusion matrix is given as: 1 4 0 5 I got this confusion matrix in Matlab. How do I know whether the first row is for class 0 or for class 1? AI: Adding to the answer above: the labeling totally depends on how you define it. You can define 0 as negative or as positive. However, for the sake of understanding and ease of readability, keep it meaningful. In MATLAB's confusionmat the rows correspond to the actual classes and the columns to the predicted classes, in sorted class order, so here the first row is for class 0 and the second row for class 1. The instances that are correctly predicted are given by the diagonal. Here, '1' is the True Negative count (for the class labelled 0) and '5' is the True Positive count (for the class labelled 1). If you find it difficult to interpret from a simple confusion matrix, you could plot it. Check out plotconfusion by MathWorks.
H: Algorithm for multiple input single output ML As an ML newbie, I have a question. I have a set of data with 2 inputs and 1 output. I'm trying to predict the output. input1 is an integer number, input2 is like a category between 1-5. Output is also a number. input1=25 input2=2 output=25 input1=34 input2=2 output=35 input1=12 input2=5 output=29 input1=3 input2=4 output=48 input1=45 input2=1 output=36 With this data, I want to predict the output for input1=27 and input2=2 I have a small set of data (10-20 items). I wonder which ML algorithm should I learn for this kind of multiple inputs and single output small sets of data? Edit With a high probability, while calculating the output, there is a mathematical relation between input1 and input2 like: output = (input1)*x + (input2)*y (x and y is unknown of course and the equation can be linear or logarithmic or something else. No idea.) AI: Since you believe the output can be predicted by a linear combination of the inputs, a reasonable approach to try is Linear Regression, specifically Multiple Regression since you have more than one input variable. Linear regression will attempt to fit the best parameters $\beta_0$ and $\beta_1$ to model your output as a weighted sum of your inputs, ie $\beta_0*input_1 + \beta_1*input_2$. This is exactly the same as the expression you gave, but it's more standard to call the weights $\beta_i$s instead of $x$ and $y$. The most standard form of linear regression using Ordinary Least Squares will find $\beta_0$ and $\beta_1$ that minimize the sum of the squared errors over your dataset, which are the differences between the actual values of output and the predicted values generated by computing $\beta_0*input_1 + \beta_1*input_2$ for each row. EDIT: To answer your question in the comments: It is always reasonable to try linear model first since it is simple and efficient, and it will give you a good baseline. However, if you suspect there is a non-linear relationship between your inputs and outputs you can also try more flexible models such as gradient boosting regression trees or a neural network. You do not need to know what the exact relationship is to use these models - they will learn it for you. In theory a neural network can fit any function. As you use more complex models, however, you should be increasingly wary of overfitting.
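As a quick sketch, here is ordinary least squares fitted to the five rows from the question; with so few points the fitted weights will be very rough, and whether to include an intercept is a modelling choice.
# Quick sketch: ordinary least squares on the five rows from the question.
# With only five points the fitted weights are very rough; fit_intercept is a choice.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[25, 2], [34, 2], [12, 5], [3, 4], [45, 1]])   # input1, input2
y = np.array([25, 35, 29, 48, 36])

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)          # the learned beta_0, beta_1 and intercept
print(model.predict([[27, 2]]))               # prediction for input1=27, input2=2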
H: Train, test split of unbalanced dataset classification I have a model that does binary classification. My dataset is highly unbalanced, so I thought that I should balance it by undersampling before I train the model. So balance the dataset and then split it randomly. Is this the right way? Or should I also balance the test and train datasets? I tried balancing the whole dataset only and I get a train accuracy of 80%, but then on the test set I have 30% accuracy. This doesn't seem right? But also I don't think that I should balance the test set because it could be considered as bias. What is the right way to do this? Thanks UPDATE: I have 400 000 samples, 10% are 1s and 90% 0s. I cannot get more data. I tried to keep the whole dataset but I don't know how to split it into train and test sets. Do I need the same distribution in the train and test datasets? AI: The best way is to collect more data, if you can. Sampling should always be done on the train dataset only. If you are using Python, the imbalanced-learn package (built on top of scikit-learn) has some really cool tools to help you with this. Random sampling is a very bad option for splitting - try stratified sampling. This splits your classes proportionally between the training and test sets. Run oversampling, undersampling or hybrid techniques on the training set only. Again, if you are using scikit-learn and logistic regression, there's a parameter called class_weight. Set this to 'balanced'. Selection of the evaluation metric also plays a very important role in model selection. Accuracy never helps with an imbalanced dataset. Try area under the ROC curve, or precision and recall, depending on your need. Do you want to give more weight to the false positive rate or the false negative rate?
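A short sketch of the stratified split plus class_weight points, with synthetic data standing in for your 400k samples:
# Sketch: stratified split keeps the 90/10 class ratio in both train and test,
# and class_weight='balanced' reweights the minority class during training.
# X and y here are synthetic stand-ins for the 400k-sample dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 5))
y = (rng.random(4000) < 0.1).astype(int)          # roughly 10% positives

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

clf = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)
print(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))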
H: Training a regression algorithm with a variable number of features I need to train a regression algorithm with multiple features and a single label (predicted value). The problem is that this algorithm has to be able to do on-line learning and the number of features it will receive will vary. Let me give a clear example: The algorithm is trained on a dataset of shape: [--------Features-----------------] [Label] [-- context11 -- | -- context12 --] [label1] Then, for the next training example, one of the contexts might be missing, so the training example might either be: [--------Features-----------------] [Label] [-- context21 -- ] [label2] or [--------Features-----------------] [Label] [-- context22 --] [label2] How can I deal with this situation? So far, I thought about two possibilities: Replace the missing part of the features with zeros. I am not sure how this will affect the algorithm though. Use a decision tree or random forest? Do these have a more natural way of dealing with a variable number of features? Any other ideas? AI: I think that in the specific case you have given as an example, you can use the first approach you mentioned above, i.e. replace the missing features with zero values - or, alternatively, implement your algorithm in a sparse manner so that it doesn't need to always have all the features for an instance. Using a sparse format will not affect the runtime (or any other aspect) of your algorithm, whereas if you use the zeroed-features approach, your algorithm will have to do the unnecessary multiply-by-zero operations and this will increase your runtime (how much depends on how sparse your actual data is). Hope that helps.
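One way to get such a sparse representation is to encode each sample as a dict of the features it actually has, so a missing context simply contributes nothing (equivalent to zero). The sketch below uses scikit-learn's DictVectorizer with made-up context feature names; it illustrates the idea rather than being the only way to do it.
# One way to get the sparse representation: encode each sample as a dict of the
# features it actually has; missing features implicitly act as zeros.
# The context feature names and values below are made up.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import SGDRegressor

samples = [{"c1_a": 1.0, "c1_b": 0.3, "c2_a": 2.0},   # both contexts present
           {"c1_a": 0.7, "c1_b": 0.1},                # context 2 missing
           {"c2_a": 1.5}]                             # context 1 missing
labels = [1.0, 0.5, 0.9]

vec = DictVectorizer(sparse=True)
X = vec.fit_transform(samples)                        # sparse matrix, absent features = 0

model = SGDRegressor()
model.partial_fit(X, labels)                          # partial_fit supports on-line updates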
H: Error while using flow_from_generator Getting this error while using flow_from_generator in keras 63/3851 [..............................] - ETA: 6:41:59 - loss: 12.8586 - acc: 0.1930 64/3851 [..............................] - ETA: 6:41:40 - loss: 12.8544 - acc: 0.1934Traceback (most recent call last): File "/tools/anaconda3/envs/py35/lib/python3.5/site-packages/keras/utils/data_utils.py", line 551, in get inputs = self.queue.get(block=True).get() File "/tools/anaconda3/envs/py35/lib/python3.5/multiprocessing/pool.py", line 644, in get raise self._value File "/tools/anaconda3/envs/py35/lib/python3.5/multiprocessing/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/tools/anaconda3/envs/py35/lib/python3.5/site-packages/keras/utils/data_utils.py", line 391, in get_index return _SHARED_SEQUENCES[uid][i] File "/tools/anaconda3/envs/py35/lib/python3.5/site-packages/keras/preprocessing/image.py", line 761, in __getitem__ return self._get_batches_of_transformed_samples(index_array) File "/tools/anaconda3/envs/py35/lib/python3.5/site-packages/keras/preprocessing/image.py", line 1106, in _get_batches_of_transformed_samples interpolation=self.interpolation) File "/tools/anaconda3/envs/py35/lib/python3.5/site-packages/keras/preprocessing/image.py", line 364, in load_img img = img.resize(width_height_tuple, resample) File "/tools/anaconda3/envs/py35/lib/python3.5/site-packages/PIL/Image.py", line 1743, in resize self.load() File "/tools/anaconda3/envs/py35/lib/python3.5/site-packages/PIL/ImageFile.py", line 233, in load "(%d bytes not processed)" % len(b)) OSError: image file is truncated (42 bytes not processed) The above exception was the direct cause of the following exception: Traceback (most recent call last): File "convolutional.py", line 394, in <module> runCNNconfusion() File "convolutional.py", line 379, in runCNNconfusion epochs=epochs,verbose=1, callbacks = [MetricsCheckpoint('logs')]) File "/tools/anaconda3/envs/py35/lib/python3.5/site-packages/keras/legacy/interfaces.py", line 87, in wrapper return func(*args, **kwargs) File "/tools/anaconda3/envs/py35/lib/python3.5/site-packages/keras/models.py", line 1227, in fit_generator initial_epoch=initial_epoch) File "/tools/anaconda3/envs/py35/lib/python3.5/site-packages/keras/legacy/interfaces.py", line 87, in wrapper return func(*args, **kwargs) File "/tools/anaconda3/envs/py35/lib/python3.5/site-packages/keras/engine/training.py", line 2115, in fit_generator generator_output = next(output_generator) File "/tools/anaconda3/envs/py35/lib/python3.5/site-packages/keras/utils/data_utils.py", line 557, in get six.raise_from(StopIteration(e), e) File "<string>", line 3, in raise_from StopIteration: image file is truncated (42 bytes not processed) Can someone please help, totally blank on how to proceed AI: I guess based on the answer here and the point referred here your problem is not from Keras. Where you can read ... we see that PIL is reading in blocks of the file and that it expects that the blocks are going to be of a certain size. It turns out that you can ask PIL to be tolerant of files that are truncated (missing some file from the block) by changing a setting. Somewhere before your code block, simply add the following: from PIL import ImageFile ImageFile.LOAD_TRUNCATED_IMAGES = True ...and you should be good. EDIT: It looks like this helps for the version of PIL bundled with Pillow ("pip install pillow"), but may not work for default installations of PIL.
H: Why do we need the sigmoid function in logistic regression? What is the purpose of the logistic sigmoid function as it is used in logistic regression? Why does it need to be part of the hypothesis function h(x)? As I understand it, the logistic sigmoid function gives the probability that a certain input vector x is contained within a class C1 for a label y. In the binary-class case, it seems that if h(x) >= 0.5, we say that x belongs to one class, otherwise it belongs to the other. In the logistic regression model, our hypothesis function h(x) is of the form g(p^T * x), where p is the parameter vector (p^T is the transpose) and g is the sigmoid function. Since the y-intercept of the logistic sigmoid is 0.5, saying that h(x) >= 0.5 is the same as saying p^T * x >= 0. What I'm getting at is: why do we need the logistic sigmoid function at all to define some threshold for separating the classes? Why not just let the hypothesis function be of the form h(x) = p^T * x, and claim that y = 1 if p^T * x >= 0? Why complicate things unnecessarily with the logistic sigmoid? AI: If you define the hypothesis function as h(x) = p^T * x, and claim that y = 1 if p^T * x >= 0, you are complicating things even more. This is due to the fact that during training time you will have to calculate the derivative of the function h(x). Now your function can lead to exploding or vanishing gradients depending on your input and initialization. The logistic sigmoid solves this problem and it also has a nice gradient, g(x)(1 - g(x)). So rather than complicating things, the sigmoid actually simplifies the calculation.
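A tiny numeric illustration of that gradient point, with made-up values of p^T * x:
# Tiny numeric illustration: the sigmoid maps any p^T x into (0, 1), so it can be
# read as a probability, and its derivative has the simple form g(z) * (1 - g(z)).
import numpy as np

def g(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])     # possible values of p^T x
print(g(z))                                   # probabilities; g(0) = 0.5 is the decision boundary
print(g(z) * (1 - g(z)))                      # the gradient factor used during training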
H: can't get variables from TF collection Aurélien Géron's example shows how to store selected operations in a collection, so they can then be easily accessed later: Adding: for op in (X, y, accuracy, training_op): tf.add_to_collection("my_important_ops", op) Then retrieving: X, y, accuracy, training_op = tf.get_collection("my_important_ops") However when I try to do this I get this error: X, y, accuracy, training_op = tf.get_collection("my_important_ops") ValueError: too many values to unpack (expected 4) What am I doing wrong? AI: A simple solution is to inspect what the collection actually contains: temp = tf.get_collection("my_important_ops") print(len(temp)) print(temp) tf.get_collection returns a plain Python list, and the unpacking error means it holds more than four items. This typically happens when the add_to_collection loop has been run more than once on the same default graph (for example by re-running a notebook cell), so the ops get appended again each time. Reset the graph with tf.reset_default_graph() before rebuilding it, or make sure the ops are only added once, and the four-way unpacking will work.
H: How to set input for proper fit with lstm? My input training and test dataset is the following size: print(trainX.shape):(53394, 3) print(testX.shape):(17799, 3) print(trainY.shape):(53394,) print(testY.shape):(17799,) I reshaped it as follows: trainX.shape:(1, 53394, 3) testX.shape: (1, 17799, 3) trainY.shape: (1, 53394) testY.shape: (1, 17799) Now, I pass it as the input layer of a LSTM: model = Sequential() model.add(LSTM(66, input_shape=(len(trainX),3))) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') model.fit(trainX, trainY, epochs=100,batch_size=1, verbose=2) I am getting the error: Error Message: ValueError: Error when checking input: expected lstm_6_input to have shape (1, 3) but got array with shape (53394, 3) Please help me to fit my data properly into a LSTM. AI: You should be structuring your data as a tuple: Number of samples, timesteps, features. In your case number of samples is 53394, timesteps is 1 and number of features is 3. So the input shape will be (53394, 1, 3). You can use this snippet of code for the tranformation: trainX = trainX.reshape((trainX.shape[0], 1, 3)) And set the argument input_shape as below: input_shape = (trainX.shape[1], trainX.shape[2]) Hope this helps!
H: convert list of tuple of tuple to list of tuple in pySpark Code: def find_collinear(rdd): op = rdd.map( lambda x: (find_slope(x)[0][1],x) ) op = op.groupByKey().mapValues(lambda x:[a for a in x]) op = op.map(lambda x:x[1]) return op def find_slope(x): p1 = x[0] p2 = x[1] if p1[0] == p2[0] : slope = "inf" else: slope = (p2[1] - p1[1]) / (p2[0] - p1[0]) t1 = tuple([x[0], slope]) t2 = tuple([t1, x[1]]) return t2 test_rdd = sc.parallelize( [((4, 2), (2, 1)), ((4, 2), (-3, 4)), ((4, 2), (6, 3)), ((2, 1), (4, 2)), ((2, 1), (-3, 4)), ((2, 1), (6, 3)), ((-3, 4), (4, 2)), ((-3, 4), (2, 1)), ((-3, 4), (6, 3)), ((6, 3), (4, 2)), ((6, 3), (2, 1)), ((6, 3), (-3, 4))]) temp1 = find_collinear(test_rdd).collect() Output [[((4, 2), (2, 1)), ((4, 2), (6, 3)), ((2, 1), (4, 2)), ((2, 1), (6, 3)), ((6, 3), (4, 2)), ((6, 3), (2, 1))], [((4, 2), (-3, 4)), ((-3, 4), (4, 2))], [((2, 1), (-3, 4)), ((-3, 4), (2, 1))], [((-3, 4), (6, 3)), ((6, 3), (-3, 4))] ] Expect output: [((6, 3), (4, 2), (2, 1)), ((4, 2), (-3, 4)), ((-3, 4), (2, 1)), ((6, 3), (-3, 4))] How can I get the expected output from/instead of the actual. AI: To get the unique elements you can convert the tuples to a set with a couple of comprehensions like: Code: [tuple({t for y in x for t in y}) for x in data] How: Inside of a list comprehension, this code creates a set via a set comprehension {}. This will gather up the unique tuples. Two loops are needed inside of the set comprehension: for y in x for t in y because the tuples of interest are themselves inside of a tuple. Test Code: data = [ [ ((4, 2), (2, 1)), ((4, 2), (6, 3)), ((2, 1), (4, 2)), ((2, 1), (6, 3)), ((6, 3), (4, 2)), ((6, 3), (2, 1)) ], [ ((4, 2), (-3, 4)), ((-3, 4), (4, 2)) ], [ ((2, 1), (-3, 4)), ((-3, 4), (2, 1)) ], [ ((-3, 4), (6, 3)), ((6, 3), (-3, 4)) ] ] expected = [ ((6, 3), (4, 2), (2, 1)), ((4, 2), (-3, 4)), ((-3, 4), (2, 1)), ((6, 3), (-3, 4)) ] assert expected == [tuple({t for y in x for t in y}) for x in data]
H: Preprocessing and dropout in Autoencoders? I am working with autoencoders and have a few questions. I am trying different autoencoders like: fully_connected autoencoder convolutional autoencoder denoising autoencoder I have two datasets: one is a numerical dataset which has float and int values, the second is a text dataset which has text and date values. The numerical dataset looks like: date , id , check_in , check_out , coke_per , permanent_values , temp 13/9/2017 142453390001 134.2 43.1 13 87 21 14/9/2017 142453390005 132.2 46.1 19 32 41 15/9/2017 142453390002 120.2 42.1 33 99 54 16/9/2017 142453390004 100.2 41.1 17 39 89 And my text dataset looks like: data text 13/9/2017 i totally understand this conversation about farmer market and the organic products, a nice conversation ’cause prices are cheaper than traditional 14/9/2017 The conversation was really great. But I think I need much more practice. I need to improve my listening a lot. Now I’m very worried because I thought that I’d understand more. Although, I understood but I had to repeat and repeat. See you!!! So my questions are: Should I normalize my numerical data values before feeding them to any type of autoencoder? If they are int and float values, do I still have to normalize? Which activation function should I use in an autoencoder? Some articles and research papers say "sigmoid" and some say "relu". Should I use dropout in each layer? Like, if my architecture for the autoencoder looks like encoder (1000 --> 500 -- > 256 ----> 128 ) --> decoder (128 --> 256 --> 500--> 784), should it be something like this? encoder(dropout(1000,500) --> dropout( 500,256) --> dropout (256,128) )----> decoder(dropout(128,256),dropout(256,500),dropout(500,784)) For the text dataset, if I am using word2vec or any embedding to convert the text into vectors, then I will have float values for each word; should I normalize that data too? text ( Hello How are you ) -- > word2vec(text) ----> ([1854.92002 , 54112.89774 ,5432.9923 ,5323.98393]) Should I normalize these values or use them directly in the autoencoder? AI: You should always normalize your input data, because the NN can learn faster with normalized data. You cannot generalize this question, but in my experience relu is better. The use of dropout depends on your application for the model, e.g. image inpainting can be improved by dropout. What do you want to do with the model?
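As a sketch of the first point, here is one way to scale the numeric columns from your table to [0, 1]; in practice you would fit the scaler on the training split only and reuse it on the test split, and the column selection here is an assumption.
# Sketch: scale the numeric columns from the question's table to [0, 1] before
# feeding them to the autoencoder. Fit the scaler on training data only and reuse
# scaler.transform() on test data; the id and date columns are left out here.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({"check_in": [134.2, 132.2, 120.2, 100.2],
                   "check_out": [43.1, 46.1, 42.1, 41.1],
                   "coke_per": [13, 19, 33, 17],
                   "permanent_values": [87, 32, 99, 39],
                   "temp": [21, 41, 54, 89]})

scaler = MinMaxScaler()
scaled = scaler.fit_transform(df)
print(pd.DataFrame(scaled, columns=df.columns))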
H: Is it valid to include your validation data in your vocabulary for NLP? At the moment, I am following best practices and creating a "bag of words" vector with a vocabulary from the training data. My cross validation (and test) datasets are transformed using this model, using the same vocabulary created by the training set. They don't contribute any vocabulary, or affect the document frequency (for "term-frequency inverse document frequency" calculation). However, this is restrictive in a few ways. Firstly, calculating the bag of words model is expensive, and so this prohibits me carrying out k-folds cross validation (since it would require constant re-calculation of the bag of words). My dataset is around 10 million words, and I'm calculating bag of words and bag of bi-grams, which takes around 5 minutes each time. This also means I currently have holdout data for both my cross validation and test sets, which is data I can't use for training. Would I be biasing my results significantly if I fit the bag of words on both the training set, and the cross validation set? In other words, if I use the vocabulary in the validation set to calculate the vocabulary for the bag of words? The way I figure, even though they might contribute to the vocabulary, there's no risk of overfitting since the frequency for those specific samples won't be seen at training. This allows me to slice the validation set later however I like, and I still have a "test" set for an accurate predictor of generalisation error (the test set won't be seen at all until test time). I wonder if there's any precedent for something like this, and what your experiences are doing anything similar. AI: Yes. That should be fine. What you suggest makes sense and should not bias the results. The reasons you give are good ones. For any reasonable classifier, if the value of an attribute is always zero in the training set, that should cause the attribute to be essentially ignored. There is a simple test to let you confirm this. You can try, for each document in the validation set, zeroing out the entries in the feature vector that correspond to words that were not present in any document in the training set, and see if that changes the classification. If it doesn't, then you know that your method has had no effect and hasn't introduced any bias. Here is another alternative, if you prefer: Conceptually, in an ideal world the vocabulary for each fold would only include words in the training set. As a matter of implementation, it's certainly possible to implement that in a more efficient way than re-generating the vocabulary and re-generating the feature vectors for each fold. As long as your implementation generates the same vectors, it doesn't matter how it obtains them. Thus, you could have an implementation that works like the following: Generate a "superset vocabulary" that includes all of the words found anywhere in the dataset (not just the training set). Generate a sparse feature vector for each document, based on the "superset vocabulary". I suggest that you represent these in an efficient way, e.g., as a Python dictionary that maps from word to count. For each fold: Split into training and validation sets. Generate a subset vocabulary, containing only the words in the training set. Probably there is a small set $S$ of words that are in the superset vocabulary but not the subset vocabulary, so I suggest storing that set $S$. 
For each document in the training set and validation set, generate a derived feature vector for the document, using only the subset vocabulary. I suggest generating this by starting from the feature vector for the superset vocabulary, then removing the words in $S$. This should be more efficient than regenerating the feature vector from scratch. Now train and validate these derived feature vectors. This is equivalent to the "conceptually ideal" approach, but will run faster. That said, I think your proposal is the pragmatic one, and in practice should yield equivalent results, and be even faster still.
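If it helps, here is a rough sketch of the pragmatic proposal in scikit-learn terms: fit the vocabulary (and document frequencies) on the training plus validation texts, but fit the classifier only on the training rows. The variable names are placeholders:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# texts_train / texts_val are lists of documents, y_train / y_val the labels
vectorizer = TfidfVectorizer(ngram_range=(1, 2))   # words and bi-grams
vectorizer.fit(texts_train + texts_val)            # vocabulary from train + validation

X_train = vectorizer.transform(texts_train)
X_val = vectorizer.transform(texts_val)

clf = LogisticRegression().fit(X_train, y_train)   # only training rows are fitted
print(clf.score(X_val, y_val))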
H: Using Machine Learning to Predict Temperature I am a beginner in ML and I want to create a smart thermostat that, after collecting enough data from the interaction with the user, will start to set the home temperature by itself. What I have so far is the hardware prototype that lets the user set the temperature and, at the same time, posts the Environment and the UserSetTemperature to ThingSpeak (to easily store the data for later access). The other part is a Python script that gets the data from ThingSpeak and converts it into a Pandas DataFrame. The data frame looks like below: timeStamp environment_temp user_set_temp 
2018-05-27T00:12:43Z 20 21 
2018-05-27T00:17:27Z 20 22 
2018-05-27T00:17:59Z 20 24 
2018-05-27T00:20:01Z 20 21 
2018-05-27T00:23:14Z 20 24 
2018-05-28T09:39:07Z 20 22 
2018-05-28T10:40:17Z 20 23 
2018-05-28T20:12:47Z 20 25 
2018-05-28T20:14:16Z 23 25 
2018-05-30T20:29:30Z 18 24 
And here is where I got stuck. I don't know how to use this data with the ML libraries in order to make predictions on how the temperature should be set when the environment temperature is x. I tried to use sklearn's train_test_split() and LinearRegression(), but with no significant result. I really don't know how to use this data. Every suggestion will be highly appreciated!! AI: I would not recommend going ahead with data that you know might be wrong. Looking at your current data, the reason you got a bad result with linear regression is that the relation between the variables is not linear for the current data. For example, there is high variation in the response (user_set_temp) for the same value of your predictor (environment_temp). First, get hold of the correct temperatures for your recorded timestamps from local weather data and use them in place of environment_temp, until you have rectified the issue with your originally recorded environment_temp. Once the issue is rectified, I would recommend using both the weather and environment temperatures as predictors, since a person might set the temperature depending on a combination of both. After you have good, representative data, this should be a reasonable procedure to help you predict the temperature: Exploratory data analysis for the uni-variate time series: this will help you observe patterns in the data, which will help you decide which new features to engineer/create from the timestamps (step 3) of your recordings. Also, look at the ACF/PACF plots to help figure out the lags in the data that might help you predict better. Fit a basic uni-variate model like STL or ARIMA to get a good base model for prediction (this is the model you will compare new models against to see how they perform). Feature engineering: time of day, day of week, week of month etc. identified in step 1. Also, look for any other features that might help your algorithm find patterns. Regression models: test advanced algorithms like Random Forest, gradient boosting and RNN-LSTM to check how they perform against each other and against the base model.
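As a sketch of the feature-engineering and regression steps, assuming the DataFrame from the question (the file name, chosen features and model settings here are placeholders; lag features are omitted for brevity):
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv('thermostat.csv')               # assumed file name
df['timeStamp'] = pd.to_datetime(df['timeStamp'])
df['hour'] = df['timeStamp'].dt.hour             # time-of-day feature
df['dayofweek'] = df['timeStamp'].dt.dayofweek   # day-of-week feature

X = df[['environment_temp', 'hour', 'dayofweek']]
y = df['user_set_temp']
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(mean_absolute_error(y_test, model.predict(X_test)))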
H: Visualize the predicted and actual class after training and testing The data set X has 10 features with 50 instances labelled as 0 and 1. Considering only 6 instances as an example here, let the predicted labels be YPred = [1,0,1,0,1,1] and the actual ground-truth labels be YTest = [0,0,1,1,1,0]. I cannot draw a decision boundary after classification since the data set is multi-dimensional. In the following code, pred = predict(svmModel, X(testIdx,:)); runs k times. I cannot understand which predicted class labels should be taken after the cross-validation ends to say that this is the final "good" prediction. AI: The whole essence of cross-validation is to check the model with various sets of data and to know how well it predicts on 'unseen' data. So, with the 10 folds, you create 10 different training and test/validation sets. But the model remains the same. What you are really asking is which model to choose. There is only one model; we are only checking the model that's already been built, using cross-validation. After checking the model you have built, use the complete training dataset to train it and predict the labels on the test set using this final model. Usually, the dataset is split into training, validation and test sets. The test set is not touched until the final model has been built. The test set you create during cross-validation is actually the validation set: the ability of the model to predict well is validated on this set. Hope it helps! Update: fitcsvm() is what trains the data. This is the modelling bit, and what you are doing is right. What you need to understand is that cross-validation is used for checking the model. There would be cases where one model wouldn't predict as well as another one, and testing with only one test set would not give good insights into how well the model is working. This is why we use cross-validation - to test the model with different training and test sets. 10 folds for only 50 records might be overkill; 2-3 folds would suffice.
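The same workflow sketched in Python with scikit-learn, just to illustrate the idea (X and y stand in for your 50 instances and labels): cross-validation only checks the modelling choice, then one final model is refit on all training data and evaluated on the held-out test set.
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import confusion_matrix

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)

scores = cross_val_score(SVC(kernel='rbf'), X_train, y_train, cv=3)   # check the modelling choice
print(scores.mean())

final_model = SVC(kernel='rbf').fit(X_train, y_train)                 # one final model on all training data
print(confusion_matrix(y_test, final_model.predict(X_test)))          # compare predicted vs actual labels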
H: Making predictions from keras with SciKit I've been thinking about combining some processes between Keras and scikit-learn and am looking to this group to either validate my process or tell me I'm crazy. I'm creating a simple regression problem using 17 inputs like this: Creating test/train here: X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.25, random_state=7) Building the network here: model = Sequential() model.add(Dense(34, input_dim=17, kernel_initializer='normal', activation='relu')) model.add(Dense(17, kernel_initializer='normal', activation='relu')) model.add(Dense(8, kernel_initializer='normal')) model.add(Dense(1, kernel_initializer='normal')) Compiling the model with: model.compile(loss='mean_squared_error', optimizer='adam') Fitting the model here: model.fit(X_train,y_train,validation_data=(X_test,y_test), epochs=100, batch_size=10) Now that I've fit the model, is there any reason I can't use some of the scikit-learn functions and do the following? Make predictions y_pred = model.predict(X_test) Assess the model results: mse = mean_squared_error(y_test, y_pred) rmse = sqrt(mse) r2score = r2_score(y_test,y_pred) Am I way off-base here? AI: This seems about right. You can use scikit-learn quite easily, as the predictions and test results you have should all be in NumPy arrays anyway. Take a look at the regression metrics. The metrics you have named are shown in the documentation to take lists, but here is an example showing they work just as well with NumPy arrays: In [1]: from sklearn.metrics import r2_score In [2]: y_true = [3, -0.5, 2, 7] In [3]: y_pred = [2.5, 0.0, 2, 8] In [4]: r2_score(y_true, y_pred) Out[4]: 0.94860813704496794 In [6]: import numpy as np In [13]: y_true = np.array([3, -0.5, 2, 7]) In [14]: y_pred = np.array([2.5, 0.0, 2, 8]) In [15]: r2_score(y_true, y_pred) Out[15]: 0.94860813704496794 # identical result One last comment: check how much data you have relative to the model size. Look at model.summary() after compilation to see how many parameters your model has (spoiler - it's 1,360). If your dataset is small, I would expect that your model (with your number of epochs and batch size etc.) will overfit, just memorising the dataset, and probably score near-perfectly on the training data. While this is a good sanity check to make sure your model can indeed learn, it might be a good idea to split a larger dataset (if available) into train, validation and test datasets. Then simply use train and val as you have above, but in the prediction line, use the test set, which the model has never seen before. Unless your data is extremely homogenous, I wouldn't expect an accuracy metric near 100% there.
H: LOF gives same number of outliers I am running the LOF algorithm on around 100k 2-D points. Each time I run it with a different n_neighbours parameter, I get the same number of points flagged as outliers: always 10% of the points. Is this how the algorithm is supposed to work? Why does it occur this way? AI: As per the sklearn documentation, the contamination parameter defaults to 0.1. contamination : float in (0., 0.5), optional (default=0.1) The amount of contamination of the data set, i.e. the proportion of outliers in the data set. When fitting this is used to define the threshold on the decision function. No matter how many times you run the algorithm with different n_neighbours values, 10% of your dataset will be flagged as outliers. Keep in mind that, when running LOF, you first execute a k-nearest-neighbour search, which depends on the n_neighbours parameter. You then evaluate the local density of your points (relative to their nearest neighbours), which returns a numerical value. However, you have yet to decide above which threshold a point should be considered an outlier, and this is precisely where the contamination parameter intervenes: it allows you to calculate the threshold with respect to the contaminated proportion you have specified. I suggest you visualize the results of the LOF to determine whether your contamination proportion is relevant or ought to be adjusted.
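A small sketch of setting the proportion explicitly (the contamination value and neighbour count below are arbitrary placeholders):
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

X = np.random.randn(100_000, 2)   # stand-in for your 2-D points

lof = LocalOutlierFactor(n_neighbors=35, contamination=0.02)  # choose your own proportion
labels = lof.fit_predict(X)       # -1 = outlier, 1 = inlier
print((labels == -1).mean())      # ~0.02 instead of the default 0.1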
H: Transfer learning by concatenating the last classification layer Before going into an obvious XY problem, I will explain to you what I'm trying to do. I'm training a simple MobileNet, pre-trained on ImageNet, for multiclass classification. What I do is freeze all the convolutional part and then create a new prediction layer (conv2D 1x1xN where N is the number of classes). Let's say I trained with 5 classes, and then I got a new set of images of 5 new classes. I want to keep the knowledge from the last training and be able to train only with the new classes. What I do is train a new model with the same frozen pre-trained weights, but only with the new 5 classes, then concatenate the weights from the old and the new model into a last layer that outputs 10 classes. The concatenation works perfectly, but when I run the evaluation I get a horrible accuracy (as if it were predicting randomly). For example, for 30 classes trained in "sets" of 5, I got an accuracy of 0.03. Is what I'm doing viable? A problem that I think may be causing this is that the "labels" for every prediction are not kept in the same order or something, so that even if I copy the weights, the predictions may be unordered. AI: It turns out that it was an issue related to the order in which classes are loaded into the model. Let's assume I have the following structure:
root
 -train1
   +a
   +r
 -train2
   +b
   +y
 -all
   +a
   +r
   +b
   +y
Where a, r, b and y are classes (folders with images). I first train with the ones in train1, so the network assigns each output to a class. Then I train a second network (everything but the last prediction layer frozen) with the second training folder train2. The order of the outputs of the prediction layer will change depending on the algorithm, but assuming it loads the paths in sorted order: In the first network, the first output will be for a and the second for r. In the second network, the first output will be for b and the second for y, and when concatenated to the first network, the output order will be a, r, b, y. When I load the whole (merged) model and point it at a folder with all classes inside (all), the network assumes the outputs are for a, b, r, y in this order, whilst the concatenated network had a, r, b, y as the output order. In this example, the outputs for a and y will be in the same position, while r and b will be swapped, which leads to a bad accuracy. TL;DR: The order in which classes are loaded while you do the "incremental learning" always needs to be the same, so be careful in which order your classes are being loaded.
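One way to pin the ordering down in Keras is to pass an explicit classes list to flow_from_directory instead of relying on alphabetical folder discovery. A sketch using the a/r/b/y example from above (the image size and batch size are assumptions):
from keras.preprocessing.image import ImageDataGenerator

class_order = ['a', 'r', 'b', 'y']   # must match the order of the merged output layer

gen = ImageDataGenerator(rescale=1./255)
eval_flow = gen.flow_from_directory(
    'root/all',
    classes=class_order,             # fixes the label indices instead of sorted folder names
    target_size=(224, 224),
    batch_size=32)
print(eval_flow.class_indices)       # {'a': 0, 'r': 1, 'b': 2, 'y': 3}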
H: Backpropagation - softmax derivative I have a question on the backpropagation in a simple neural network (I am trying to derive the derivative for the backpropagation). Suppose that the network is simple like so (forward pass): $$\begin{aligned}z_1 &= xW_1 + b_1 & (1) \\ a_1 &= \tanh{(z_1)} & (2) \\ z_2 & = a_1W_2 + b_2 & (3)\\ a_2 & = \text{softmax}(z_2)=\hat{y}_n & (4) \\ l_n & = \log{(\hat{y}_n)} & (5) \\ PL_n & = l_n\dot{y}_n^T & (6) \\ L & = -\frac{1}{N} \sum_{n \in N} PL_n & (7) \end{aligned} $$ So only one hidden layer. Let the dimensions be the following: $x$ is $1 \times 2$, $W_1$ is $2 \times 500$, $b_1$ is $1 \times 500$, $a_1$ is $1 \times 500$, $W_2$ is $500 \times 2$, $b_2$ is $1 \times 2$, then $\log{(\hat{y_n})}$ is $1 \times 2$, then $\dot{y}_n^T$ is $2 \times 1$. Then the backpropagation algorithm (for a single sample), for this staged computation, would proceed as follows (equipped with the knowledge that the derivative of $X$, whose dimensions are $N \times M$, will preserve those dimensions): $$\begin{aligned} dPL_n & = -1/N & (7) \\ d\dot{y}^T_n & = l_n^T dPL_n & (6) \\ dl_n & = \dot{y}^T_n dPL_n & (6) \\ d\hat{y}_n & = 1/\hat{y}_n & (5) \end{aligned} $$ Backpropagation equation $(5)$ above is a bit of an abuse of notation, but what I am trying to say is that it is a vector whose values are $[1/\hat{y}_n^{(1)}, 1/\hat{y}_n^{(2)}]$. But I am stuck at this step, because I am not sure about the softmax. I have found this, which tells me that if $a_2$ is the resulting softmax vector, then I can take the following four derivatives: $\frac{\partial a_2^{(1)}}{\partial a_2^{(1)}}$,$\frac{\partial a_2^{(1)}}{\partial a_2^{(2)}}$,$\frac{\partial a_2^{(2)}}{\partial a_2^{(1)}}$,$\frac{\partial a_2^{(2)}}{\partial a_2^{(2)}}$. However, the dimensions of $z_2$ are $1 \times 2$, thus there should only be two partial derivatives. It makes conceptual sense to me that $dz_2$ should equal $\left[\frac{\partial a_2^{(1)}}{\partial a_2^{(2)}}, \frac{\partial a_2^{(2)}}{\partial a_2^{(1)}} \right]$, but I cannot motivate my intuition. Also, if I generalize this: suppose $z_2$ is $1 \times 10$, what symbolic expression for the derivative $dz_2$ would we have then? (I was motivated to implement my neural network from scratch after reading this post by Stanford; hence this question). Another resource I am using. I am inclined to believe that $dz_2 = \hat{y}_n - \dot{y}_n$, but why, I do not know ;( I am also missing $dl_n$ in backprop in $(5)$; the only way to end up at $1 \times 2$ is if we do: $d\hat{y}_n = 1/\hat{y}_n \circ dl_n$. I have just looked at the problem in the following way, which did not help me much. We need to find $dz_2 = \frac{\partial L}{ \partial z_2}$; we can rewrite this using the backpropagation chain rule: $$\begin{align} \frac{\partial L}{ \partial z_2} &= \frac{\partial l_n}{\partial z_2} \frac{\partial L}{\partial l_n} \\ &= \frac{\partial l_n}{\partial z_2} dl_n \\ &= \frac{\partial}{\partial z_2} \log{(\text{softmax}(z_2))} dl_n \\ &=\frac{\partial}{\partial u} \log{u} \frac{\partial u}{\partial z_2} dl_n \\ &= \frac{1}{\text{softmax}(z_2)} \frac{\partial}{\partial z_2} \text{softmax}(z_2) dl_n \end{align}$$ Also, same question on Maths.SE. AI: The partial derivative of the cost function with respect to the neuron $i$, in the layer $Z_2$ defined in your question above, is $\frac{\partial L}{\partial z_i}$, where $L$ is the cost function.
Please note that the derivation will be at the neuron level, not at the layer level, where $z_i$ is neuron number $i$ in the output layer/softmax layer/$Z_2$ layer. In other words, the $Z_2$ layer consists of 2 elements, $z_0$ and $z_1$, and the derivation will be w.r.t. the neurons/elements. I re-arrange your equations to keep things as simple as possible: combining equations 5, 6 and 7 yields
$$L = -\sum_{n} \dot{y}_n \log(y_n)$$
Here is a brief derivation
$$\frac{\partial L}{\partial z_i}=-\sum_n\dot{y}_n\frac{\partial \log y_n}{\partial z_i}=-\sum_n\dot{y}_n\frac{1}{y_n}\frac{\partial y_n}{\partial z_i}$$
The softmax function $y_n$ is defined as
$$y_n = \frac{e^{z_n}}{\sum_ke^{z_k}}$$
When $n = i$:
$$\frac{\partial y_n}{\partial z_i} = \color{blue}{y_i(1-y_i)}$$
When $n \neq i$:
$$ \frac{\partial y_n}{\partial z_i} = \color{red}{-y_ny_i}$$
Now let's substitute $\frac{\partial y_n}{\partial z_i}$:
$$\frac{\partial L}{\partial z_i}=-\dot{y}_i\frac{1}{y_i}\color{blue}{y_i(1-y_i)}-\sum_{n\neq i}\dot{y}_n\frac{1}{y_n}\color{red}{({{-y_ny_i}})}$$
$$=-\dot{y}_i(1-y_i)+\sum_{n\neq i}\dot{y}_n({{y_i}})\\=-\dot{y}_i+{\dot{y}_iy_i+\sum_{n\neq i}\dot{y}_n({y_i})} \\={y_i\left(\sum_n\dot{y}_n\right)}-\dot{y}_i=y_i-\dot{y}_i$$
where $\sum_n\dot{y}_n = 1$. For a detailed derivation and further explanation please see https://deepnotes.io/softmax-crossentropy
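A quick numerical sanity check of the result $\frac{\partial L}{\partial z} = y - \dot{y}$ (the logits and one-hot target below are arbitrary):
import numpy as np

z = np.array([1.0, -2.0, 0.5])            # arbitrary logits z_2
y_true = np.array([0.0, 1.0, 0.0])        # arbitrary one-hot target

def loss(z):
    y = np.exp(z) / np.exp(z).sum()       # softmax
    return -np.sum(y_true * np.log(y))    # cross-entropy

analytic = np.exp(z) / np.exp(z).sum() - y_true   # softmax(z) - y_true

eps = 1e-6
numeric = np.array([(loss(z + eps * e) - loss(z - eps * e)) / (2 * eps)
                    for e in np.eye(3)])          # central-difference gradient

print(np.allclose(analytic, numeric))     # True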
H: Is it better to optimize hyperparameters or run multiple epochs? Whenever I train a neural network I only have it go through a few epochs (1 to 3). This is because I am training on a bad CPU and it would take some time to have the neural network go through many epochs. However, whenever my neural network performs poorly, rather than have it go through more epochs, I try to optimize the hyperparameters. This approach has generally been successful as my neural networks are pretty simple. But is training a neural network in this manner a bad practice? Are there disadvantages to immediately going to optimize the hyperparameters rather than running the neural network for more epochs? AI: The count of epochs is also a hyper-parameter. However, if you meant to ask what to work on first, whether increasing epochs or other methods like feature engineering, then below is my answer. Increasing the number of epochs is often treated as a hammer stroke for training the model, where you don't have to think much about the data and the deep net generally does the work for you. However, it comes at the cost of computational complexity, a risk of overfitting, etc. Instead, it is usually advisable to spend effort on tasks like feature engineering and tuning the other hyper-parameters while training. In fact, a data scientist should preferably work on these things rather than blindly relying on the deep net as a black box. However, even if you choose to increase the count of epochs, you can use the technique called early stopping. It stops training at a certain epoch if the validation loss does not improve, so you can raise the epoch count without thinking much about its impact: if the validation loss stops improving, training will stop there.
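A minimal Keras sketch of early stopping (model, X_train and the other variables are placeholders for your own network and data):
from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=3)   # stop after 3 epochs without improvement

model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=100,               # generous ceiling; training stops earlier if val_loss stalls
          callbacks=[early_stop])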
H: Transforming words in sentences into vector form to prepare a model I want to build a simple classifier that classifies whether a text is a question or just a simple message. I understand logistic regression and can work to create a simple neural network. I have labeled input data in English, Japanese, Korean and Thai. How should I transform this data before I feed it into the classifier? AI: An approach would be to sort all the words in your data according to how often they appear, i.e. their "frequency". After that, pick the X most frequent words in your dataset to use for classification. Assuming that you are working with Python and Keras, you should use the Embedding layer. For more details about how to use that layer, check this. In short, what this layer does is map each input word to a high-dimensional vector space: a word is converted to a real-valued vector, and word similarity is evaluated by the "closeness" of two word vectors in that high-dimensional space. Also make sure that your dataset consists of texts of fixed length, by truncating long sequences or zero-padding short ones. After all this is done, you can train a recurrent neural network with LSTM neurons as a text classifier. LSTMs have proven very successful in text processing because of their inherent memory. A hands-on Python/Keras tutorial that demonstrates all of the above can be found here; I am sure it will be of great help :)
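A rough sketch of that pipeline (the texts, labels, vocabulary size and sequence length are placeholders; note that Japanese and Thai in particular are not whitespace-delimited, so those texts need a language-specific segmenter before this tokenization step):
import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

texts = ["where is the station", "thanks , see you tomorrow"]   # pre-segmented sentences
labels = np.array([1, 0])                                       # 1 = question, 0 = message

tok = Tokenizer(num_words=10000)           # keep the 10k most frequent tokens
tok.fit_on_texts(texts)
X = pad_sequences(tok.texts_to_sequences(texts), maxlen=30)     # fixed length via padding/truncation

model = Sequential([
    Embedding(input_dim=10000, output_dim=64, input_length=30),
    LSTM(32),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
model.fit(X, labels, epochs=5)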
H: Making sense of how R works internally As what type of "thing" are the various objects and functions one deals with frequently in R stored internally? To clarify: for languages like Python it is very easy to understand conceptually how your data is stored internally: everything is stored as an object, and this uniform way of handling data makes it very easy to do advanced stuff like assigning new meaning to methods or writing functions that take other functions as input. What I am looking for is a similar conceptual understanding for R (e.g. I presume not everything in R is an object as well). (One concrete point that I would hope to understand as a consequence of the answer to this question would be: why setting, e.g., the second column of a dataframe to NULL, like so: df[2] <- NULL, deletes it. This means: you don't have to answer this question in particular, but please answer in such a way that I understand how the various recipes spread around the internet for dealing with dataframes actually make sense.) Also, please don't just point me to the documentation. I'm sure somewhere deep inside it is the answer, but reading the documentation is just too time consuming. AI: I am not sure I can go as deep as you'd like, but I can give the basics. The base types in R are C structs. Taken from Hadley Wickham's Advanced R: Base types Underlying every R object is a C structure (or struct) that describes how that object is stored in memory. The struct includes the contents of the object, the information needed for memory management, and, most importantly for this section, a type. This is the base type of an R object. Base types are not really an object system because only the R core team can create new types. As a result, new base types are added very rarely: the most recent change, in 2011, added two exotic types that you never see in R, but are useful for diagnosing memory problems (NEWSXP and FREESXP). Prior to that, the last type added was a special base type for S4 objects (S4SXP) in 2005. One layer higher (or at least parallel to the C base types), R itself has a few object-oriented systems at work. Major data containers that you likely use, such as vectors and data frames, and their methods, are going to be part of the S3 system. The newer major object system defines S4 objects. There is a great overview on Hadley's webpage. Things such as removing a column when it is set to NULL (as opposed to Pandas' approach, where the column takes that value) are, I would guess, design choices. It is likely more common to drop a column than it is to fill it with null values - in the context of 99% of the algorithms in R packages, the null values would simply be ignored anyway. So it is for convenience; not that DF.drop() in Pandas is a major inconvenience. This behaviour is called an idiom in this blog. They also show that assigning NULL to a vector element deletes that element. Additionally, NULL and NA behave differently: NA is a missing value, NULL is not! IMO you should take the time to at least skim some documentation. With the help of search engines and Ctrl-F, you rarely need to trudge through pages of docs to find what you are looking for.
H: Classification/Prediction based on Multivariate Time Series So, I have a time series with many independent variables (X's) and an outcome variable Y that I want to predict (think a 2-class logistic regression where the output would be either 1 or 0). Kindly see a sample below: Timestamp X1 X2 X3 X4 Y 
1:00 1 0.5 23.5 0 0 
1:01 1 0.8 18.7 0 0 
1:02 0 0.9 4.5 1 0 
…. 
1:30 1 1.9 5.5 1 1 
1:31 0 1.7 4.3 0 1 
… … 
Now I want to predict, or rather classify, Y as 0 (stable) or 1 (unstable). (Note that when Y becomes 1 it remains 1 for a certain interval of time, and the same when it is 0.) So Y will depend on a sequence of variables. (Please note that it is a time series, and not a standard regression problem where every row can be fed to an algorithm for classification; the output here depends on a sequence of inputs/rows.) For instance, Y may become 1 when X2 starts increasing and X3 starts decreasing, and so on (there are many independent variables X1…XN). The way I was thinking of solving this problem was to extract, say, m hours of data before Y becomes 1 and do some descriptive statistics on X in order to derive new features (like the mean of X1, the std of X2, the last change point of X4 and so on for this extracted set of data), converting the X's into a single-row feature vector. The outcome 'Y' of this single-row feature vector is 1, as we have just extracted the data before Y became 1. This way I am able to convert a time series into a standard classification/prediction problem. Similarly I can take the other class, i.e. Y=0, and follow the same process. The other approach I thought about was to use a sequence model, something like a Hidden Markov Model where the hidden states might be stable (say for Y=0) and unstable (for Y=1), and then work out emission and transition probabilities. But this HMM will be multivariate, considering there are many X's on which Y depends, which seems a bit complex. Any ideas on modeling the above problem will be appreciated. AI: Train an LSTM-RNN to perform direct sequence classification. This essentially means that it will have multiple inputs and one output, i.e. the label (0 or 1). In Keras/Python this is very easy to implement; just make sure that you have a Dense layer at the end with sigmoid activation so that the output is between 0 and 1. You train the network on your labeled data and then it outputs the label by itself. A useful tutorial on how to do this can be found here. The most important thing is that it inherently deals with linear/nonlinear cross-correlation between inputs, so you don't have to explore that yourself. It is also capable of learning the dynamics of the input signals, because of its inherent memory. Keep in mind that overall it is a very convenient solution, because it works like a black box that accepts time series and "spits out" their labels. This approach has worked successfully for me for time series classification :)
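A rough sketch of that direct sequence classification setup, assuming you have already cut the multivariate series into fixed-length windows (the window length, layer sizes and random placeholder data are arbitrary):
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# X: (n_windows, timesteps, n_features), y: (n_windows,) with 0 = stable, 1 = unstable
timesteps, n_features = 30, 4
X = np.random.rand(1000, timesteps, n_features)   # placeholder windows of X1..XN
y = np.random.randint(0, 2, 1000)                 # placeholder labels

model = Sequential([
    LSTM(32, input_shape=(timesteps, n_features)),
    Dense(1, activation='sigmoid'),               # outputs a probability of Y = 1
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
model.fit(X, y, epochs=5, validation_split=0.2)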
H: Reshaping big dataset with MinMaxScaler giving error My data set is of shape (1249, 228). Most of the entries are zero and the others are integers like 1, 2, 5, 10, 20, etc. I want to transform this set for input into an LSTM, but when I apply MinMaxScaler it gives the following error: load the dataset: dataset1 = pd.read_csv('g:/hello.csv', engine='python') dataset1= dataset1.drop('packages', axis=1) dataset1 = dataset1.astype('float32') normalizing the set scaler = MinMaxScaler(feature_range=(0, 1)) dataset1 = scaler.fit_transform(dataset1) ValueError: Input contains NaN, infinity or a value too large for dtype('float64'). How can I transform this data set for input to an LSTM? AI: I assume you checked for NaN and Inf values manually. According to the solutions posted here: https://stackoverflow.com/a/44869902/6204860 and https://datascience.stackexchange.com/a/11933/52089 if dataset1 is a Pandas DataFrame, try converting it to a matrix by running this: dataset1 = dataset1.as_matrix().astype(np.float) dataset1 = np.nan_to_num(dataset1) # nan_to_num returns a new array, so assign the result back and then run dataset1 = scaler.fit_transform(dataset1). Let me know if this works :)
H: Error while using the pandas_datareader package I am trying to do a basic project where I grab some data from Morningstar or Google Finance, but when I import the package according to the usage instructions on GitHub and run Python in Pycharm, it returns the error: ImportError: cannot import name 'is_list_like' What should I do? Obviously, I am doing something wrong. AI: BEFORE you import pandas_datareader, run this: pd.core.common.is_list_like = pd.api.types.is_list_like where pd stands for your pandas import. Let me know if this works :)
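In full, the ordering matters - the patch has to come before the pandas_datareader import (the data request at the end is just an illustrative example):
import pandas as pd
pd.core.common.is_list_like = pd.api.types.is_list_like   # patch first

import pandas_datareader.data as web
df = web.DataReader('AAPL', 'morningstar')                 # example request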
H: how can autoencoder reduce dimensionality? I can't understand how dimensionality reduction is achieved in an autoencoder, since it learns to compress data from the input layer into a short code and then uncompress that code back into the original data. I can't see where the reduction is: the input and the output data have the same dimensionality? AI: Autoencoders are trained using both the encoder and decoder sections, but after training only the encoder is kept and the decoder is discarded. So, if you want to obtain a dimensionality reduction, you have to make the layer between the encoder and decoder of a dimension lower than the input's. Then discard the decoder and use that middle (bottleneck) layer as the output layer.
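A minimal Keras sketch of the idea (the 784 and 32 dimensions are illustrative, and X is a placeholder for your data):
from keras.models import Model
from keras.layers import Input, Dense

inp = Input(shape=(784,))
encoded = Dense(32, activation='relu')(inp)          # bottleneck: 784 -> 32
decoded = Dense(784, activation='sigmoid')(encoded)  # reconstruction: 32 -> 784

autoencoder = Model(inp, decoded)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(X, X, epochs=10)                     # X is both input and target

encoder = Model(inp, encoded)        # keep only the encoder after training
X_reduced = encoder.predict(X)       # shape (n_samples, 32): the reduced representation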
H: Similarity between two scatter plots I would like to know if there is a metric used to compute the similarity between two scatter plots? AI: The simplest method is to calculate the Euclidean distance between the barycenters of the two distributions; this does not take the variance of the distributions into account, however. If you want something more accurate, you can add to that distance the spread of the distributions; among other options, you can calculate the spread as the standard deviation of the distances between the barycenter (already calculated) and each of the points. You can also think of using a joint metric of the first two (barycenter distance and spread distance). An example in Python: take three dummy distributions, where distributions A and B are centered around the same point but have different spreads, while distributions A and C have different centers but more similar spreads. Here is the code to calculate the distances I described; the smaller a distance, the more similar the two distributions.
import numpy as np
import matplotlib.pyplot as plt

#create three dummy distributions
dist_a=[]
dist_b=[]
dist_c=[]
for i in range (100):
    dist_a.append(np.random.randn(2)+10)
    dist_c.append(np.random.randn(2)+25)
    dist_b.append(np.random.randn(2)*5.5+10)
plt.scatter([a for a, _ in dist_a], [b for _, b in dist_a], label='distribution a')
plt.scatter([a for a, _ in dist_b], [b for _, b in dist_b], label='distribution b')
plt.scatter([a for a, _ in dist_c], [b for _, b in dist_c], label='distribution c')
plt.legend()
#calculate baricenters
bc_a=np.mean(dist_a, axis=0)
bc_b=np.mean(dist_b, axis=0)
bc_c=np.mean(dist_c, axis=0)
#calculate the distance between baricenters
dist_a_b=np.linalg.norm(bc_a-bc_b)
dist_a_c=np.linalg.norm(bc_a-bc_c)
dist_b_c=np.linalg.norm(bc_b-bc_c)
print("baricenter distante between distribution A and distribution B=", dist_a_b)
print("baricenter distante between distribution A and distribution C=", dist_a_c )
print("baricenter distante between distribution B and distribution C=", dist_b_c )
print ("\n")
#calculate the spread of the distributions, e.g. their standard deviation
spread_a=np.std(dist_a)
spread_b=np.std(dist_b)
spread_c=np.std(dist_c)
dist_spread_a_b=np.abs(spread_a-spread_b)
dist_spread_a_c=np.abs(spread_a-spread_c)
dist_spread_b_c=np.abs(spread_b-spread_c)
print("spread distance between distribution A and distribution B=", dist_spread_a_b)
print("spread distance between distribution A and distribution C=", dist_spread_a_c)
print("spread distance between of distribution B and distribution C=", dist_spread_b_c)
print ("\n")
#put everything in a single metric.
# NB: the parameter of this joint metric is subjective and depends on the use case
#alpha=0 : don't care about the euclidean distance between the baricenters
#alpha=1 : don't care about the spread distance between the baricenters
alpha=0.3
joint_metric_a_b=alpha*dist_a_b + (1-alpha)*dist_spread_a_b
joint_metric_a_c=alpha*dist_a_c + (1-alpha)*dist_spread_a_c
joint_metric_b_c=alpha*dist_b_c + (1-alpha)*dist_spread_b_c
print("joined metric distance between distribution A and distribution B=", joint_metric_a_b)
print("joined metric distance between distribution A and distribution C=", joint_metric_a_c)
print("joined metric distance between distribution B and distribution C=", joint_metric_b_c)
Outputs:
baricenter distante between distribution A and distribution B= 0.22454217332627005
baricenter distante between distribution A and distribution C= 21.028862497007008
baricenter distante between distribution B and distribution C= 20.98580645790957
spread distance between distribution A and distribution B= 4.153324630270008
spread distance between distribution A and distribution C= 0.004700454831506384
spread distance between of distribution B and distribution C= 4.158025085101515
joined metric distance between distribution A and distribution B= 2.974689893186887
joined metric distance between distribution A and distribution C= 6.311949067484157
joined metric distance between distribution B and distribution C= 9.206359496943932
Following the first metric (the Euclidean distance between barycenters), distributions A and B are the most similar. Following the second metric (the spread distance), distributions A and C are the most similar. The third metric is tunable via the parameter alpha, which accepts values between 0 and 1. Depending on your use case, you may care more that the distributions lie around the same point (so you weight the distance between their barycenters more heavily), or that the distributions have the same spread, even if their barycenters are slightly displaced. So you have to adapt the parameter alpha to your case.
H: How is the property in eq 15 obtained for Xavier initialization I am new in this field, so please be gentle with terminology. In the original paper, "Understanding the difficulty of training deep feedforward neural networks", I don't understand how equation 15 is obtained. It states that, given eq 1: $$ W_{ij} \sim U\left[-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\right] $$ it gives rise to variance with the following property: $$ n*Var[W]=1/3 $$ where $n$ is the size of the layer. How is this last equation (15) obtained? Thanks!! AI: The fixed answer of $1/3$ is a result of their decision to use the uniform distribution along with the parameterised bounds, namely $\pm\frac{1}{\sqrt{n}}$. For a uniform distribution, denoted with lower and upper bounds $a$ and $b$: $$ U(a, b) $$ the variance is defined as: $$ \frac{1}{12} (b - a)^2 $$ In the case of the authors, Glorot and Bengio, the bounds are set using the number of neurons in the layer of interest (generally the preceding layer, as they put it). Calling this size $n$, they set the bounds on the uniform distribution as: $$ a = - \frac{1}{\sqrt{n}} $$ $$ b = \frac{1}{\sqrt{n}} $$ So if we plug these values into the variance formula, we get: $$ Var = \frac{1}{12}\left(\frac{1}{\sqrt{n}} - \left(-\frac{1}{\sqrt{n}}\right)\right)^2 $$ $$ Var = \frac{1}{12}\left(\frac{2}{\sqrt{n}}\right)^2 $$ $$ Var = \frac{1}{12} * \frac{4}{n} $$ And so finally: $$ n * Var = \frac{1}{3} $$
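A quick numerical check of this (the layer size below is arbitrary):
import numpy as np

n = 500                                    # arbitrary layer size
w = np.random.uniform(-1/np.sqrt(n), 1/np.sqrt(n), size=1_000_000)
print(n * w.var())                         # ~0.3333, i.e. 1/3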
H: Why do we double the number in a quadratic cost function or MSE? $$ C(w,b) = \frac{1}{2n}\sum_{x}||y(x)-a||^2 $$ Where y is a 10-dimensional vector, a is the output, w is the weight, b is the bias and n is the number of inputs. If this is the MSE, shouldn't it be $\frac{1}{n}$ instead? Link AI: This is really just a convention that appears in some places, because we normally want to take the derivative of the cost function (i.e. compute gradients), which brings the power of 2 to the front. If we put the $\frac{1}{2}$ at the front to begin with, it just looks nicer once we have finished. I have seen this written in a paper somewhere before, but can't find a reference right now. Because the nominal values of the cost themselves (the scale of the values) are not of importance, we can scale it as we like. Multiplying by a constant of $0.5$ does not change the algebraic behaviour.
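Concretely, for a single input the factor of 2 coming down from the power cancels the $\frac{1}{2}$ when differentiating:
$$ \frac{\partial}{\partial a}\, \frac{1}{2}\left\|y - a\right\|^2 = -(y - a), $$
whereas without the $\frac{1}{2}$ the gradient would carry an extra factor of 2, which the learning rate would simply absorb anyway.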
H: Why TF-IDF is working with Sentiment Analysis? Word2vec looks excellent to me as a representation of a corpus for sentiment analysis. It captures relations between words, etc. TF-IDF only has a weight per word saying how important it is. Results for sentiment analysis using both of these representations are quite similar (~90%). Why does TF-IDF give such good results? AI: One important factor to take into account is how you use the numerical representation of words / embeddings from either TF-IDF or Word2Vec to then compute sentiments. Without knowing how you do this, it is difficult to give a concrete answer. Also, which task are you working on, and what does a result of 90% mean? Regardless of how you compute TF-IDF (there are several definitions; the final TF-IDF is simply the multiplication of a term-frequency term and an inverse-document-frequency term), it is essentially assigning a numerical value to a word, thus creating a mapping of sorts. Word2Vec technically creates an embedding, as it maps individual words into a vector space. I won't go into details as to how the vectors in Word2Vec are computed, but they also define a way to assign a numerical vector (an embedding) to a single word. In essence, both of these say how important a word is in the context of your documents (your corpus), with Word2Vec additionally offering the interpretability of comparing word vectors. For example, doing this with the associated vectors of the words actually works really well: King - Man + Woman = Queen Perhaps the sentiments you are computing, based on either of these methods, do something similar to taking an average over the many words appearing in a sentence, and you end up with normalised, similar results. TF-IDF takes a more intuitive approach, looking at how many times a word appears in general, in how many of the documents it appears and how often. Word2Vec instead looks at which words often appear together (there is a related quote that normally gets brought up here: "You can judge a person by the company they keep"). So the intuition behind each one is slightly different, but you end up with numerical values for each word. Perhaps a closer comparison for TF-IDF would be to look at Doc2Vec. There is also the GloVe embedding model, which scores well in many NLP tasks - on the same level as Word2Vec embeddings.
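A minimal sketch of the kind of TF-IDF sentiment pipeline being discussed (texts and labels are placeholders; the split and model choice are arbitrary):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# texts: list of strings, labels: 0/1 sentiments (placeholders)
X_train, X_test, y_train, y_test = train_test_split(texts, labels, test_size=0.2)

vec = TfidfVectorizer()
clf = LogisticRegression()
clf.fit(vec.fit_transform(X_train), y_train)
print(accuracy_score(y_test, clf.predict(vec.transform(X_test))))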
H: Is this a good practice of feature engineering? I have a practical question about feature engineering... say I want to predict house prices by using logistic regression and use a bunch of features including zip code. Then, by checking the feature importance, I realize zip is a pretty good feature, so I decide to add some more features based on zip - for example, I go to the census bureau and get the average income, population, number of schools, and number of hospitals of each zip. With these four new features, I find the model performance is better now. So I add even more zip-related features... And this cycle goes on and on. Eventually the model will be dominated by these zip-related features, right? My questions: Does it make sense to do this in the first place? If yes, how do I know when is a good time to stop this cycle? If not, why not? AI: If you can keep adding new data (based on a main concept such as area, i.e. the ZIP code) and the performance of your model improves, then it is of course allowed... assuming you only care about the final result. There are metrics that will try to guide you here, such as the Akaike Information Criterion (AIC) or the comparable Bayesian Information Criterion (BIC). These essentially help to pick a model based on its performance, with a penalty for every additional parameter that is introduced and must be estimated. The AIC looks like this: $${\displaystyle \mathrm {AIC} =2k-2\ln({\hat {L}})}$$ where $k$ is the number of parameters to be estimated, i.e. the number of features you apply, because each one will have one coefficient in your logistic regression. $\hat{L}$ is the maximised value of the likelihood function (equivalent to the optimal score). BIC simply uses $k$ slightly differently to punish models. These criteria can help tell you when to stop, as you can try models with more and more parameters and simply take the model with the best AIC or BIC value. If you still have other features in the model which are not related to the ZIP, they could potentially become overwhelmed - that depends on the model you use. However, they may also explain things about the dataset which simply cannot be contained in the ZIP information, such as a house's floor area (assuming this is relatively independent from ZIP code). In this case you might compare the situation to something like Principal Component Analysis, where one collection of features explains one dimension of the variance in the data set, while other features explain another dimension. So no matter how many ZIP-related features you have, you may never explain the importance of floor area.
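A small sketch of comparing two feature sets by AIC with statsmodels (the DataFrame, its column names and the binary target y are invented placeholders; the AIC formula matches the definition above):
import statsmodels.api as sm

def aic(y, X):
    """Fit a logistic regression and return its AIC."""
    res = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    k = X.shape[1] + 1                    # features + intercept
    return 2 * k - 2 * res.llf            # same value as res.aic

print(aic(y, df[['floor_area', 'zip_income']]))
print(aic(y, df[['floor_area', 'zip_income', 'zip_population', 'zip_schools']]))
# keep adding ZIP-related features only while the AIC keeps going down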
H: What does depth mean in the SqueezeNet architectural dimensions table? First time reading the SqueezeNet paper. Based on my understanding, a fire module contains a squeeze layer of 1x1 filters and a expand layer of 1x1 and 3x3 filters. If we take fire2 for instance, the input dimension is 55x55x96 and we take 16 1x1 filters to convolve over it. This returns a 55 x55x16 output. We then take the output and apply two convolutions, one with 64 1x1 filters and the other with 64 3x3 filters. We then concatenate the two results to create a final output of 55x55x128. In this case, what does the depth of 2 mean? Also, how do I calculate the # of parameters for each layer? AI: In this paper the depth is defined as the number of layers. The table shows a depth=2 for the fire layers because each is comprised of 2 layers. First is the squeeze layer and then followed by the expand layer. To calculate the number of parameters of a CNN we can do as follows for a single layer. Assume an input that is 28*28*64. This is the size of the MNIST dataset with 64 channels. Now assuming we want to convolve this input with 32 filters of a 3*3 kernel. Let us give the following variable names, $l=64$ is the number of channels in the input, $m=n=3$ is the size of the kernel and $k = 32$ is the number of filters. The number of parameters is then calculated as $((n*m*l)+1)*k = ((3*3*64)+1)*32 = 18,464$. The $+1$ is used to add the biases. To calculate the number of parameters in fire2 we first note that this module is comprised of 2 distinct layers. Layer 1 - The squeeze layer - $s_{1x1}$ $l = 96$, $m = n = 1$ and $k = 16$. Thus giving a total of 1,552. Layer 2 - The expansion layer Kernel size 1 - $e_{1x1}$ $l = 16$, $m = n = 1$ and $k = 64$. Thus giving a total of 1,088. Kernel size 3 - $e_{3x3}$ $l = 16$, $m = n = 3$ and $k = 64$. Thus giving a total of 9,280. Total Thus, a total of $1,552 + 1,088 + 9,280 = 11,920$.
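A tiny helper to reproduce these numbers with the formula $((n*m*l)+1)*k$ from above:
def conv_params(kernel_h, kernel_w, in_channels, n_filters):
    """Parameters of a conv layer: ((kernel_h * kernel_w * in_channels) + 1) * n_filters."""
    return (kernel_h * kernel_w * in_channels + 1) * n_filters

squeeze = conv_params(1, 1, 96, 16)       # 1,552
expand_1x1 = conv_params(1, 1, 16, 64)    # 1,088
expand_3x3 = conv_params(3, 3, 16, 64)    # 9,280
print(squeeze + expand_1x1 + expand_3x3)  # 11,920 parameters in fire2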
H: Keras exception: ValueError: Error when checking input: expected conv2d_1_input to have shape (150, 150, 3) but got array with shape (256, 256, 3) I am working on multiclass classification of images. For this I created a CNN model in keras. I already pre-processed all images to size (150,150,3). Here is model summary- Layer (type) Output Shape Param # ================================================================= conv2d_1 (Conv2D) (None, 146, 146, 32) 2432 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 73, 73, 32) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 71, 71, 64) 18496 _________________________________________________________________ max_pooling2d_2 (MaxPooling2 (None, 35, 35, 64) 0 _________________________________________________________________ conv2d_3 (Conv2D) (None, 33, 33, 64) 36928 _________________________________________________________________ max_pooling2d_3 (MaxPooling2 (None, 16, 16, 64) 0 _________________________________________________________________ conv2d_4 (Conv2D) (None, 14, 14, 128) 73856 _________________________________________________________________ max_pooling2d_4 (MaxPooling2 (None, 7, 7, 128) 0 _________________________________________________________________ flatten_1 (Flatten) (None, 6272) 0 _________________________________________________________________ dense_1 (Dense) (None, 300) 1881900 _________________________________________________________________ dense_2 (Dense) (None, 10) 3010 ================================================================= Total params: 2,016,622 Trainable params: 2,016,622 Non-trainable params: 0 I am also using data augmentation and flow_from_directory method- train_datagen = image.ImageDataGenerator( rescale = 1./255, rotation_range=40, width_shift_range=0.2, height_shift_range=0.2, zoom_range=0.2, horizontal_flip=True, fill_mode='nearest') test_datagen = image.ImageDataGenerator(rescale=1./255) train_generator = train_datagen.flow_from_directory( new_train, batch_size=20) validation_generator = test_datagen.flow_from_directory( new_valid, batch_size=20) Then I compile the model and run fit_generator- model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics = ['acc',metrics.categorical_accuracy]) history = model.fit_generator( train_generator, steps_per_epoch=100, epochs=10, validation_data=validation_generator, validation_steps=50) At this part I get error- ValueError: Error when checking input: expected conv2d_1_input to have shape (150, 150, 3) but got array with shape (256, 256, 3) I don't understand when all input images have size (150, 150, 3), how can it get (256, 256, 3)? Please tell me where I am going wrong. 
EDIT The code with which I created the model is-
model = models.Sequential()
model.add(layers.Conv2D(32, (5, 5), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(300, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
For image preprocessing, I used the following-
for image_name in os.listdir(train_dir):
    im = cv2.resize(cv2.imread(os.path.join(train_dir,image_name)), (150, 150)).astype(np.float32)
    if image_name in validation_img:
        cv2.imwrite(os.path.join(new_valid,image_name), im)
    else:
        cv2.imwrite(os.path.join(new_train,image_name), im)
AI: [EDIT:] Your problem is definitely in the generators, in that you do not set the target size, and its default is (256, 256) - as seen in the documentation for flow_from_directory:
flow_from_directory(directory, target_size=(256, 256), color_mode='rgb', ...)
target_size: Tuple of integers (height, width), default: (256, 256). The dimensions to which all images found will be resized.
Try setting the target_size parameter to (150, 150) and I think it will work. That default seems to be overriding your preprocessing.
It must be in your generators - I ran the following code and a model trained as expected:
from keras import models, layers, metrics
import numpy as np
model = models.Sequential()
    ...: model.add(layers.Conv2D(32, (5, 5), activation='relu', input_shape=(150, 150,
    ...: 3)))
    ...: model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
    ...: model.add(layers.Conv2D(64, (3, 3), activation='relu'))
    ...: model.add(layers.MaxPooling2D((2, 2)))
    ...: model.add(layers.Conv2D(64, (3, 3), activation='relu'))
    ...: model.add(layers.MaxPooling2D((2, 2)))
    ...: model.add(layers.Conv2D(128, (3, 3), activation='relu'))
    ...: model.add(layers.MaxPooling2D((2, 2)))
    ...: model.add(layers.Flatten())
    ...: model.add(layers.Dense(300, activation='relu'))
    ...: model.add(layers.Dense(10, activation='softmax'))
In [10]: model.summary()
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 146, 146, 32) 2432
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 73, 73, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 71, 71, 64) 18496
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 35, 35, 64) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 33, 33, 64) 36928
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 16, 16, 64) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 14, 14, 128) 73856
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 7, 7, 128) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 6272) 0
_________________________________________________________________
dense_1 (Dense) (None, 300) 1881900
_________________________________________________________________ dense_2 (Dense) (None, 10) 3010 ================================================================= Total params: 2,016,622 Trainable params: 2,016,622 Non-trainable params: 0 _________________________________________________________________ In [17]: model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics = [' ...: acc',metrics.categorical_accuracy]) # Create some fake data to match your inputs. Each label seems to be 10 points: (1, 10) In [11]: fakes = np.random.randint(0, 255, (100, 150, 150, 3)) In [24]: labels = np.random.randint(0, 2, (100, 10)) In [25]: model.fit(fakes, labels, validation_split=0.2) Epoch 1/10 80/80 [==============================] - 2s - loss: 62.8913 - acc: 0.1125 - categorical_accuracy: 0.1125 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00 Epoch 2/10 80/80 [==============================] - 0s - loss: 67.0916 - acc: 0.0000e+00 - categorical_accuracy: 0.0000e+00 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00 Epoch 3/10 80/80 [==============================] - 0s - loss: 67.0916 - acc: 0.0000e+00 - categorical_accuracy: 0.0000e+00 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00 Epoch 4/10 80/80 [==============================] - 0s - loss: 67.0916 - acc: 0.0000e+00 - categorical_accuracy: 0.0000e+00 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00 Epoch 5/10 80/80 [==============================] - 0s - loss: 67.0916 - acc: 0.0000e+00 - categorical_accuracy: 0.0000e+00 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00 Epoch 6/10 80/80 [==============================] - 0s - loss: 67.0916 - acc: 0.0000e+00 - categorical_accuracy: 0.0000e+00 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00 Epoch 7/10 80/80 [==============================] - 0s - loss: 67.0916 - acc: 0.0000e+00 - categorical_accuracy: 0.0000e+00 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00 Epoch 8/10 80/80 [==============================] - 0s - loss: 67.0916 - acc: 0.0000e+00 - categorical_accuracy: 0.0000e+00 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00 Epoch 9/10 80/80 [==============================] - 0s - loss: 67.0916 - acc: 0.0000e+00 - categorical_accuracy: 0.0000e+00 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00 Epoch 10/10 80/80 [==============================] - 0s - loss: 67.0916 - acc: 0.0000e+00 - categorical_accuracy: 0.0000e+00 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00
H: Stratify on regression I have worked on classification problems, and stratified cross-validation is one of the most useful and simple techniques I've found. In that case, what it means is to build training and validation sets that have the same proportions of classes of the target variable. I am wondering if such a strategy exists in regression. A simple approach would be to split the data into quartiles or deciles and make sure that the proportions of training and validation instances in the respective quartiles and deciles are the same. The question is: is there a standard way to do this? If so, is there an implementation in sklearn? AI: I have done this before and didn't find a default implementation - the StratifiedKFold and RepeatedStratifiedKFold are only documented to work with classes. The way I ended up doing it was not quite as you are thinking with quartiles/deciles, but rather using a histogram (it matched my needs). NumPy has a nice method for doing this, with many different formulas for computing the bin sizes that you can play around with to best match your data, i.e. whether it is normally distributed or not. I can't post the entire method code, but here is the gist: import numpy as np

samples_per_bin, bins = np.histogram(data, bins='doane')  # Doane's method worked best for me
min_bin_size = samples_per_bin.min()   # population of the least-populated bin
n_bins = len(samples_per_bin)
max_batch = min_bin_size * n_bins      # maximum stratified batch size, using all samples from that bin
I then put the data into a Pandas DataFrame and added a column indicating which bin each data point was in - finally doing something like this to perform the sampling from each of the bins: df.groupby('bin_name', group_keys=False).apply( lambda x: x.sample(n_per_group, replace=True)) Obviously you can allow duplicates in a batch or not by changing the replace argument. It might be necessary if you want a larger batch size while forcing stratification. It is definitely a limitation of my approach, which you might be able to overcome by using quartiles etc. as you suggest.
H: Estimate battery voltage based on scheduled events and previous behaviour My goal is to estimate whether a battery will have enough charge for certain other systems to be powered. The power state of the other systems is recorded (i.e. whether they are turned on or not), as well as the times when the battery is being charged by a solar panel during sunlight. These records go several months into the past. My question is: what topics/algorithms should I research if I want to use the previous data and events to estimate/extrapolate the voltage into the future (to make scheduling of powering systems easier, such as cancelling them if the battery would be drained too much)? The upcoming charging times can be calculated, and it will also be known when a system will be powered. All the datasets (i.e. voltage, charging times, events) are available as time series (but with different timestamps/sample rates). AI: One of the simplest ways to get to some predictions would be to use a model like ARIMA, which looks at recent previous observations to predict a number of steps ahead. ARIMA stands for Autoregressive Integrated Moving Average: Autoregressive means that something looks at its own (auto) historical values. Integrated refers to the differencing, a step to help remove non-stationarity. Moving Average means the model also takes a moving average of recent errors into account (this helps keep predictions in a reasonable ball park). Here is a good little explanation of the main points in more detail. In your case, given there are factors such as weather and time of day to be used, you would probably benefit from using seasonality. For this there is an extended ARIMA, called SARIMA - where S stands for seasonal. There is an implementation in Python's statsmodels package - or if you want to use R, then maybe take a look at the forecast package by the great Rob Hyndman, or the sarima package. If you want to use something a little more modern and cutting edge, then you are really talking about Recurrent Neural Networks (RNNs). I am not sure how familiar you are with these. One key term you should understand: LSTM - a type of unit that considers past data and current data, and maintains a state of your model at a specific point in time. Have a look at a walkthrough (example) to see if it makes sense to you. ARIMA-type models are easy to get running and the results are easy to interpret, so you really know what is going on. The performance of things like RNNs should be better, but it is rarely a short walk to success... of course it will depend on your data.
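A rough SARIMAX sketch with statsmodels, assuming the voltage series has been resampled to an hourly index and the scheduled loads/charging are supplied as exogenous columns (the column names, model orders and the seasonal period of 24 are placeholders to adapt to your data):
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

df = df.resample('1H').mean().interpolate()        # align the differently-sampled series

exog_cols = ['system_on', 'charging']              # known/scheduled inputs
model = SARIMAX(df['voltage'], exog=df[exog_cols],
                order=(1, 1, 1), seasonal_order=(1, 0, 1, 24))
res = model.fit(disp=False)

forecast = res.forecast(steps=24, exog=future_schedule[exog_cols])  # next 24 hours
print(forecast)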
H: How to customize word division in CountVectorizer? >>> from sklearn.feature_extraction.text import CountVectorizer >>> import numpy >>> import pandas >>> vectorizer = CountVectorizer() >>> corpus1 = ['abc-@@-123','cde-@@-true','jhg-@@-hud'] >>> xtrain = vectorizer.fit_transform(corpus1) >>> xtrain <3x6 sparse matrix of type '<class 'numpy.int64'>' with 6 stored elements in Compressed Sparse Row format> >>> xtraindf = pd.DataFrame(xtrain.toarray()) >>> xtraindf.columns = vectorizer.get_feature_names() >>> xtraindf.columns Index(['123', 'abc', 'cde', 'hud', 'jhg', 'true'], dtype='object') I see that the special characters(-@@-) are omitted and "abc" and "123" are considered seperately. But, I want "abc-@@-123" to be treated as a single word. Is it possible to achieve? If yes, how? Any help would be much appreciated. AI: It's possible if you define CountVectorizer's token_pattern argument. If you're new to regular expressions, Python's documentation goes over how it deals with regular expressions using the re module (and scikit-learn uses this under the hood) and I recommend using an online regex tester like this one, which gives you immediate feedback on whether your pattern captures precisely what you want. token_pattern expects a regular expression to define what you want the vectorizer to consider a word. An example for the string you're attempting to match would be this pattern, modified from the default regular expression that token_pattern uses: (?u)\b\w\w+\-\@\@\-\w+\b Applied to your example, you would do this vectorizer = CountVectorizer(token_pattern=r'(?u)\b\w\w+\-\@\@\-\w+\b') corpus1 = ['abc-@@-123','cde-@@-true','jhg-@@-hud'] xtrain = vectorizer.fit_transform(corpus1) xtraindf = pd.DataFrame(xtrain.toarray()) xtraindf.columns = vectorizer.get_feature_names() Which returns Index(['abc-@@-123', 'cde-@@-true', 'jhg-@@-hud'], dtype='object') An important note here is that it will always expect your words to have -@@- nested in your tokens. For instance: corpus2 = ['abc-@@-123','cde-@@-true','jhg-@@-hud', 'unexpected'] xtrain = vectorizer.fit_transform(corpus2) xtraindf = pd.DataFrame(xtrain.toarray()) xtraindf.columns = vectorizer.get_feature_names() print(xtraindf.columns) Would give you Index(['abc-@@-123', 'cde-@@-true', 'jhg-@@-hud'], dtype='object') If you need to match words that don't have that exact special character structure, you can wrap the string of special characters in a group and use the non-matching group modifier ?: more_robust_vec = CountVectorizer(token_pattern=r'(?u)\b\w\w+(?:\-\@\@\-)?\w+\b') xtrain = more_robust_vec.fit_transform(corpus2) xtraindf = pd.DataFrame(xtrain.toarray()) xtraindf.columns = more_robust_vec.get_feature_names() print(xtraindf.columns) Which prints Index(['abc-@@-123', 'cde-@@-true', 'jhg-@@-hud', 'unexpected'], dtype='object') I hope this helps!
H: Correlate an array of categorical features to binary outcome I have a data set that looks like this: target,items 1,[i1,i3] 1,[i4,i5,i9] 0,[i1] ... The variable target is 0-1 outcome. The feature "items" is a set of items (variable length). Each item is a categorical variable (one of: i1, i2, .., i_N). There's no order/relationship between the items. A business example would be "set of products in a cart, outcome whether the customer abandons cart". The size of data is approx. 1,000,000 by 5,000 (I have ~1 million examples, and N is approximately 5,000) I want to do the following analysis. I want to find the items that influence (or lead to) target = 1. I don't have extra features to add. What is the type of statistical analysis or machine learning modelling technique that I should use? AI: How large is N? Can you reshape your data into something like: target i1 i2 i3 i4 i5 ... i9 ... iN 1 1 0 1 0 0 ... 0 ... 0 1 0 0 0 1 1 ... 1 ... 0 0 1 0 0 0 0 ... 0 ... 0 Once you have fit everything into a data frame, you can use any two-class supervised classification algorithm to build your model. There is no "best" model in general, but try a few to see which one works the best with your data. Sorry for posting as an answer; comment doesn't allow preformatted text.
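For what it's worth, here is a rough sketch of that reshaping with scikit-learn's MultiLabelBinarizer; the tiny cart data and the logistic regression at the end are just made-up placeholders to show the shape of the approach:

import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.linear_model import LogisticRegression

# Toy version of the data: each row is a set of items plus a binary target
carts = [['i1', 'i3'], ['i4', 'i5', 'i9'], ['i1']]
target = [1, 1, 0]

mlb = MultiLabelBinarizer()
X = mlb.fit_transform(carts)                      # one 0/1 column per item
X = pd.DataFrame(X, columns=mlb.classes_)
print(X)

# Any two-class classifier works from here; the coefficients of a simple linear
# model give a first indication of which items push towards target = 1
clf = LogisticRegression().fit(X, target)
print(dict(zip(mlb.classes_, clf.coef_[0])))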
H: XGboost - Choice made by model
I am using XGBoost to predict a two-class target variable on insurance claims. I have a model (trained with cross validation, hyper-parameter tuning etc.) that I run on another dataset. My question is: is there a way to know why a given claim has been assigned to one class, i.e. the features that explain the choice made by the model? The purpose is to be able to justify the choice made by the machine to a third-party human. Thanks for your answer.
AI: I suggest you go for SHAP. It uses Shapley values (a concept borrowed from game theory) to describe the model behavior, and with that it can explain a single prediction. Its graphical interface uses Force Plots, like the one that you see below.
The red bar is built by the features that push the prediction towards positive values, and the blue bar by the other ones. In your case (a classifier) the number in bold will be the one just before the sigmoid function that limits the output value between zero and one (one class or the other). So don't be scared if in some cases it is greater than one, or negative.
The size of the segments represents how much each feature contributes to the prediction, and under the segments you see the name of the feature (e.g. LSTAT) and its actual value (e.g. 4.98). So, in this case, LSTAT is the main feature that leads the prediction for that element of the dataset to the value of 24.41 (the number in bold). Enjoy!
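As a rough sketch of the workflow (the trained model and the scored claims, here called model and X_test, are assumed to exist already and are not from the original question):

import shap

# `model` is your trained XGBoost classifier, `X_test` the claims you want to explain
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Force plot for a single claim: shows which features pushed it towards each class
shap.initjs()  # needed for the interactive plots in a notebook
shap.force_plot(explainer.expected_value, shap_values[0, :], X_test.iloc[0, :])

# Global view over all claims
shap.summary_plot(shap_values, X_test)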
H: Correcting ALL CAPS for human and algorithmic consumption United States federal tax returns tend to be written in ALL CAPS to facilitate OCR. This practice has persisted even when returns are filed electronically. Thus, much of the text in the IRS 990 dataset is in all caps. This makes it hard to read, and limits the ability of algorithms such as Treebank to accurately tag part of speech. I understand that the approach of the Stanford POS tagger may be more amenable to correction of capitalization, but in practice, I have not had much luck in using it to correct the text in the IRS 990 corpus, in which nearly every sentence contains one or more proper nouns. Are there any "tricks of the trade" for improving the performance of an off-the-shelf POS tagger when using ALL CAPS text, and/or an algorithm that may do better at identifying the proper nouns therein? AI: If sensitivity to case is breaking your models you have two options: Train or find a new model that's case-insensitive. This is probably the easiest thing to do. The Stanford parser has one. Train a model to correct the case of your input, this is sometimes called truecasing. The Stanford Parser has this functionality too.
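As a crude baseline (not a substitute for a caseless model or a proper truecaser), you can sometimes get more sensible tags by simply lowercasing before tagging. The sentence below is made up, and note this throws away the case cues that normally help identify proper nouns - which is exactly the trade-off the caseless and truecasing models try to handle better:

import nltk
# may require: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

sentence = "THE ORGANIZATION PROVIDED GRANTS TO SCHOOLS IN CHICAGO"
tokens = nltk.word_tokenize(sentence.lower())
print(nltk.pos_tag(tokens))   # compare against tagging the ALL CAPS original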
H: Using Mean Squared Error in Gradient Descent I've recently been writing linear regression algorithms from scratch to gain an understanding of how the maths behind it works (something that was a bit of a black box beforehand), and so I got around to differentiating the cost function. Without realising it I used the Squared Error for the cost function - the MSE but without dividing by the dataset length. Is there any benefit (faster approach of the minimum or other) to using the Mean Squared Error over just summing the squares of the error? AI: No, it is exactly the same. Optimizing a function and the same function divided by a constant is equivalent, both in the analytical and the numerical sense. You will get exactly the same optimal parameters.
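A tiny numerical check (toy numbers) shows the two gradients only differ by the constant factor 1/n, so they point in the same direction and lead to the same minimiser - in practice the only difference is an effectively rescaled learning rate:

import numpy as np

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])   # column of ones + one feature
y = np.array([2.0, 2.5, 3.5])
theta = np.zeros(2)
n = len(y)

residual = X @ theta - y
grad_sse = 2 * X.T @ residual     # gradient of the sum of squared errors
grad_mse = grad_sse / n           # gradient of the mean squared error

print(grad_sse, grad_mse)         # identical up to the factor 1/n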
H: How is the equation for the relation between prediction error, bias, and variance defined? I'm reading this article Understanding the BiasVariance Tradeoff. It mentioned: If we denote the variable we are trying to predict as $Y$ and our covariates as $X$, we may assume that there is a relationship relating one to the other such as $Y=f(X)+\epsilon$ where the error term $\epsilon$ is normally distributed with a mean of zero like so $\epsilon\sim\mathcal{N}(0,\,\sigma_\epsilon)$. We may estimate a model $\hat{f}(X)$ of $f(X)$. The expected squared prediction error at a point $x$ is: $$Err(x)=E[(Y-\hat{f}(x))^2]$$ This error may then be decomposed into bias and variance components: $$Err(x)=(E[\hat{f}(x)]-f(x))^2+E\big[(\hat{f}(x)-E[\hat{f}(x)])^2\big]+\sigma^2_e$$ $$Err(x)=Bias^2+Variance+Irreducible\ Error$$ I'm wondering how do the last two equations deduct from the first equation? AI: If: $$Err(x)=E[(Y-\hat{f}(x))^2]$$ Then, by adding and substracting $f(x)$, $$Err(x)=E[(Y-f(x)+f(x)-\hat{f}(x))^2] $$ $$= E[(Y-f(x))^2] + E[(\hat{f}(x)-f(x))^2] + 2E[(Y-f(x))(\hat{f}(x)-f(x))]$$ The first term is the irreducible error, by definition. The second term can be expanded like this: $$E[(\hat{f}(x)-f(x))^2] = E[\hat{f}(x)^2]+E[f(x)^2] -2E[f(x)\hat{f}(x)] $$ $$=E[\hat{f}(x)^2]+f(x)^2-2f(x)E[\hat{f}(x)] $$ $$= E[\hat{f}(x)^2]-E[\hat{f}(x)]^2+E[\hat{f}(x)]^2+f(x)^2-2f(x)E[\hat{f}(x)] $$ $$= E\big[(\hat{f}(x)-E[\hat{f}(x)])^2\big] + (E[\hat{f}(x)]-f(x))^2 $$ $$= Bias^2+Variance$$ Then the only thing that is left is to prove that the third term is 0. This is seen using $E[Y] = f(x)$. Edit I am not that sure on how to prove $$E[(Y-f(x))(\hat{f}(x)-f(x))] = 0$$ If we assume independence between $\epsilon = Y - f(x)$ and $\hat{f}(x)-f(x)$, then the proof is trivial, as we can split the expected value in two products, the first of them being $0$. However, I am not so sure about the fact that we can assume this independence.
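For completeness, under the usual assumption that the noise $\epsilon = Y - f(x)$ has zero mean and is independent of the training sample used to build $\hat{f}$, the cross term factorises and vanishes:
$$E[(Y-f(x))(\hat{f}(x)-f(x))] = E[\epsilon]\, E[\hat{f}(x)-f(x)] = 0 \cdot E[\hat{f}(x)-f(x)] = 0$$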
H: Subtracting grand mean from train and test images
I am building an image classifier based off the VGG_face keras implementation. It is easiest for me to extract a csv file full of the representations and then try classifiers on those representations. When I got the representations, I first subtracted the mean of the entire dataset from each image. Then I realized... am I cheating, so to speak? In other words, since I included the test images when calculating the grand mean to be subtracted, does this then overestimate my accuracy measurements?
AI: There is a kind of bias that you are introducing, yes. You are basically extracting some statistics (i.e. the mean) from your hold-out set and using that to train, which makes your final claims of accuracy a little weaker (some people might say they are useless).
The general approach is to compute the mean of your training data, then you may subtract that from all of the data, including hold-out data. You can do the mean subtraction, in general, using something like the ImageDataGenerator. The mean to be subtracted can be computed using all or some of the training data. That class also offers other augmentation functionalities, such as normalising the dataset too, adding rotations etc. You mentioned you read features from a CSV file, so if you are not talking about images, as long as you can use e.g. NumPy, you can perform it manually on all data at the beginning.
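For the CSV / NumPy case, a minimal sketch of the correct order of operations (the random arrays are placeholders for your representations):

import numpy as np

rng = np.random.RandomState(0)
X_train = rng.rand(100, 512)   # stand-in for your training representations
X_test = rng.rand(20, 512)     # stand-in for the held-out representations

train_mean = X_train.mean(axis=0)       # statistics come from the training set only
X_train_centred = X_train - train_mean
X_test_centred = X_test - train_mean    # reuse the same mean on the hold-out data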
H: What approach other than Tf-Idf could I use for text-clustering using K-Means? I am working on a text-clustering problem. My goal is to create clusters with similar context, similar talk. I have around 40 million posts from social media. To start with I have written clustering using K-Means and Tf-Idf. The following code suggests what I am doing. Here are main steps: Do some pre-processing Create tfidf_matrix while using tokenization and stemming Run K-Means on the tf-idf matrix Have the result csvRows = [] nltk.download('stopwords') title = [] synopses = [] filename = "cc.csv" num_clusters = 20 pkl_file = "doc_cluster.pkl" generate_pkl = False if len(sys.argv) == 1: print("Will use "+pkl_file + " to cluster") elif sys.argv[1] == '--generate-pkl': print("Will generate a new pkl file") generate_pkl = True # pre-process data with open(filename, 'r') as csvfile: # creating a csv reader object csvreader = csv.reader(csvfile) # extracting field names through first row fields = csvreader.next() # extracting each data row one by one duplicates = 0 for row in csvreader: # removes the characters specified if line not in synopses: synopses.append(line) title.append(row[0]) else: duplicates += 1 stopwords = nltk.corpus.stopwords.words('english') stemmer = SnowballStemmer("english") def tokenize_and_stem(text): # first tokenize by sentence, then by word to ensure that punctuation is caught as it's own token tokens = [word for sent in nltk.sent_tokenize( text) for word in nltk.word_tokenize(sent)] filtered_tokens = [] # filter out any tokens not containing letters (e.g., numeric tokens, raw punctuation) for token in tokens: if re.search('[a-zA-Z]', token): filtered_tokens.append(token) stems = [stemmer.stem(t) for t in filtered_tokens] return stems def tokenize_only(text): # first tokenize by sentence, then by word to ensure that punctuation is caught as it's own token tokens = [word.lower() for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)] filtered_tokens = [] # filter out any tokens not containing letters (e.g., numeric tokens, raw punctuation) for token in tokens: if re.search('[a-zA-Z]', token): filtered_tokens.append(token) return filtered_tokens totalvocab_stemmed = [] totalvocab_tokenized = [] for i in synopses: # for each item in 'synopses', tokenize/stem allwords_stemmed = tokenize_and_stem(i) # extend the 'totalvocab_stemmed' list totalvocab_stemmed.extend(allwords_stemmed) allwords_tokenized = tokenize_only(i) totalvocab_tokenized.extend(allwords_tokenized) vocab_frame = pd.DataFrame( {'words': totalvocab_tokenized}, index=totalvocab_stemmed) print 'there are ' + str(vocab_frame.shape[0]) + ' items in vocab_frame' # define vectorizer parameters tfidf_vectorizer = TfidfVectorizer(max_df=0.8, max_features=200000, min_df=0.0, stop_words='english', use_idf=True, tokenizer=tokenize_and_stem, ngram_range=(1, 3)) tfidf_matrix = tfidf_vectorizer.fit_transform(synopses) terms = tfidf_vectorizer.get_feature_names() # dist = 1 - cosine_similarity(tfidf_matrix) km = KMeans(n_clusters=10, max_iter=1000, verbose=1).fit(tfidf_matrix) clusters = km.labels_.tolist() # uncomment the below to save your model # since I've already run my model I am loading from the pickle if(generate_pkl): joblib.dump(km, pkl_file) print("Generated pkl file " + pkl_file) km = joblib.load(pkl_file) clusters = km.labels_.tolist() films = {'title': title, 'synopsis': synopses, 'cluster': clusters, } total_count = len(films['synopsis']) csvRows = [] for idx in range(total_count): csvRows.append({ 'title': 
films['title'][idx], 'cluster': films['cluster'][idx] }) print('Creating cluster.csv') with open('cluster.csv', 'w') as output: writer = csv.DictWriter(output, csvRows[0].keys()) writer.writeheader() writer.writerows(csvRows) print("\ncreated cluster.csv") The results are not very satisfactory. They are very average. What could be done to improve my clustering algorithm? I would still want to use K-Means but what another approach could be used in place of Tf-Idf? Also, if you guys think that there is a better alternative to K-Means, please suggest and it even more helpful, if you could point me to sources/examples, where people have already done similar stuff. I will always run the clustering on the volume close to 40 Million. AI: You will likely see an improvement by using an algorithm like GloVe in place of Tf-Idf. Like Tf-Idf, GloVe represents a group of words as a vector. Unlike Tf-Idf, which is a Bag-of-Words approach, GloVe and similar techniques preserve the order of words in a tweet. Knowing what word comes before or after a word of interest is valuable information for assigning meaning. This Article runs through different techniques and gives a good description of each one. Also, This Script on Kaggle shows how to use pretrained word vectors to represent tweets. For your clustering, I recommend checking out Density-Based clustering. K-means is a decent all-purpose algorithm, but it's a partitional method and depends on assumptions that might not be true, such as clusters being roughly equal in size. This is almost certainly not the case. This Blog has a great discussion on clustering for text. If you go with Density-Based and you use Python, I highly recommend HDBSCAN by Leland McInnes. Good luck!
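And if you want to try the density-based route, a minimal HDBSCAN sketch over whatever document vectors you end up with (averaged GloVe vectors, a reduced tf-idf matrix, etc.) could look like this - the random matrix is only a stand-in for your real vectors:

import numpy as np
import hdbscan

# Placeholder: replace with your real document vectors (n_docs x n_dims)
doc_vectors = np.random.rand(1000, 100)

clusterer = hdbscan.HDBSCAN(min_cluster_size=50, metric='euclidean')
labels = clusterer.fit_predict(doc_vectors)

# -1 means "noise", i.e. posts that don't belong to any dense cluster
print(np.unique(labels, return_counts=True))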
H: Toolbox for handling NaNs in Python 2.7
Is there a good toolbox for handling and analyzing missing values in Python 2.7? There is a good toolbox for doing this in Python 3.6 here (missingno): https://github.com/ResidentMario/missingno I need to work in Python 2.7, so this is why I ask.
AI: First of all, visualize the missing values (I am using Python 2.7):
import pandas
import numpy
from pandas import DataFrame
import seaborn as sns

df = DataFrame({'A' : [0,1,numpy.nan, 5 ,6],'B':[30,numpy.nan,numpy.nan,8,10]})
df
     A     B
0  0.0  30.0
1  1.0   NaN
2  NaN   NaN
3  5.0   8.0
4  6.0  10.0
sns.heatmap(df.isnull(),yticklabels=False,cbar=False,cmap='BuPu')
If you want to drop rows containing missing values:
df.dropna(axis=0)
     A     B
0  0.0  30.0
3  5.0   8.0
4  6.0  10.0
If you want to drop columns containing missing values:
df.dropna(axis=1)
If you want to fill NaN with a value:
df.fillna(0)
     A     B
0  0.0  30.0
1  1.0   0.0
2  0.0   0.0
3  5.0   8.0
4  6.0  10.0
You can also do a ffill/pad or bfill/backfill. Hit Shift+Tab and expand the documentation if you are using Jupyter. For more info refer to: https://pandas.pydata.org/pandas-docs/stable/missing_data.html
I hope this answers your question.
H: How much neural network theory required to design one?
So I have looked at some of the literature on neural networks and read some chapters, but the learning curve is so steep that I have had trouble even getting started on designing the neural network to solve my problem. From what I understand, the architecture (or arrangement of neurons and their connections) should be designed according to the nature of the problem to be solved. Other parameters to be set according to the nature of the problem include the loss index (how the error will be calculated and if there should be a regularization term), whether or not there should be any scaling/unscaling, bounding, or conditions, and the training algorithm (such as the quasi-Newton method).
The particular type of problem I am interested in is using neural networks to figure out unknown functions (with unknown complexity) that input and output integers (as opposed to continuous values), given a large collection of inputs and outputs. An example function takes 4-byte inputs and returns 2-byte outputs. This is done by first taking the first 2 bytes of input and XORing them with the last 2 bytes, to produce an intermediate result. This two-byte value is then XORed with a copy of itself that is shifted left by 5 bits. This result is then XORed with a copy of itself shifted right by 7 bits. This result is then XORed with a copy of itself shifted left by 2 bits, and this value is outputted by the function, giving the final 2-byte result. Note: More than one unique input can produce the same output.
So given a large set of inputs and outputs of an unknown function, the neural network should then optimize itself to reproduce this function given new inputs. I am not sure how to get started designing this neural network, and I am not sure fully reading through neural networks textbooks is the optimal way to get started. I am using a software library meant for designing neural networks, and I can simply set the network architecture and the parameters described above. How much theory do I need to know in order to get started solving my problem? Where should I start with learning how to design this neural network?
EDIT: The main goal of all this is to use an existing tool to simplify producing an output (a 2-byte code, in the example above) given a new input, in a situation where the function is unknown. Neural networks seem to match the function-finding-through-trial-and-error characteristics that I need. This tool should be able to try all sorts of possible functions of increasing complexities in order to mimic the working of the actual unknown function.
AI: Here is a good way to get going (in C++ or C#):
Spend a week learning vectors and matrices. Be able to code a "class vector" and "class matrix".
Get intuition on how matrices represent weights between 2 layers.
Learn the chain rule and the "tree diagram" when computing composite (nested) functions.
Spend a week learning backprop. Be able to compute derivatives for all layers, using pen and paper, on a toy example with 4 layers, 3 neurons each.
Be able to write code that reads a .txt file character by character.
Learn about one-hot encoding, to describe your character to the network (it only understands vectors) - or see how your byte-problem might be used instead of characters.
Learn about hard-max, softmax and their derivatives, tanh and sigmoid.
Code forward prop.
Code backprop.
Test if it can predict the next char.
Debug, debug, DEBUG.
Gradient checking.
H: Does central limit theorem work well for Pareto distribution? I am new to Data Science. Recently I was studying a course about statistics. One of the tasks there was to check the central limit theorem in practice. The idea was quite simple: take a continuous random variable; generate, say, 1000 samples from it, each of size n; draw a histogram of the samples. Then find the parameters of the normal distribution using the central limit theorem and draw the PDF of the distribution. As a result, the histogram and the PDF should be, roughly speaking, "similar" (and become more "similar" as n grows). I chose Pareto distribution and, with this Python code, import numpy as np import matplotlib.pyplot as plt import scipy.stats as sts from math import sqrt n = 10 b = 3.0 random_value = sts.pareto(b) mean = random_value.mean() variance = random_value.var() plt.hist(samples(random_value, n, 1000), bins=20, normed=True) x = np.linspace(0, 6, 100) pdf = sts.norm(mean, sqrt(variance / n)).pdf(x) plt.plot(x, pdf, color='r', label='theoretical PDF') where the samples function was like this def samples(random_value, sample_size, number_of_samples): result = np.asarray([]) for i in range(number_of_samples): result = np.append(result, np.mean(random_value.rvs(sample_size))) return result I got the next histogram and PDF as a result The result looked suspicious to me: there was no expected similarity. I checked the same code with another continuous distribution (with a uniform distribution), and the graphs looked much more similar. Is it a feature of the Pareto distribution which makes it a bad choice to demonstrate how the central limit theorem works? Or something is wrong with the code itself (e.g., the parameters of the normal distribution are calculated in a wrong way)? Thank you in advance. AI: One key assumption of the central limit theorem is that the variance $\sigma^2$ of the underlying distribution is finite. I would guess that you are using a parameterization of the Pareto for which $\sigma^2 = \infty$, in which case the CLT will not hold. If you tweak your scale parameter so that $\sigma^2 < \infty$ you should see the result you want.
H: Should I eliminate all ID columns and similar columns from training data? This is a basic question so bear my ignorance. I feel like they contribute collectively in no way to the target. This is for performance and accuracy. The target is polar (0,1). AI: It depends. If your data samples are IID (independent and identically distributed) then you can remove the sample ID, given that all samples come from or refer to the same source/object and they don't somehow identify the sample class. But, if your data is sequential, such as time series, it would be a big mistake to neglect the ID, as it identifies the time order between the samples. The time dependencies are a great source of information for problems such as time series forecasting, regression and classification. If you treat sequential data (time series) as IID (by removing the ID), you will have inferior performance, because you neglect one source of information, i.e. temporal dependencies. You should not include the ID as input to your model, but you should respect the order of the samples, which is shown by the ID.
H: How to print a Confusion matrix from Random Forests in Python I applied this random forest algorithm to predict a specific crime type. The example I took from this article here. import pandas as pd import numpy as np from sklearn.preprocessing import LabelEncoder import random from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import GradientBoostingClassifier import matplotlib import matplotlib.pyplot as plt import sklearn from scipy import stats from sklearn.cluster import KMeans import seaborn as sns # Using Skicit-learn to split data into training and testing sets from sklearn.model_selection import train_test_split # Import the model we are using from sklearn.ensemble import RandomForestRegressor import os os.environ["PATH"] += os.pathsep + 'C:/Program Files (x86)/Graphviz2.38/bin/' features = pd.read_csv('prueba2.csv',sep=';') print (features.head(5)) # Labels are the values we want to predict labels = np.array(features['target']) # Remove the labels from the features # axis 1 refers to the columns features= features.drop('target', axis = 1) # Saving feature names for later use feature_list = list(features.columns) # Convert to numpy array features = np.array(features) # Split the data into training and testing sets train_features, test_features, train_labels, test_labels = train_test_split(features, labels, test_size = 0.25, random_state = 42) baseline_preds = test_features[:, feature_list.index('Violent crime')] # Baseline errors, and display average baseline error baseline_errors = abs(baseline_preds - test_labels) print('Error: ', round(np.mean(baseline_errors), 2)) # Instantiate model with 1000 decision trees rf = RandomForestRegressor(n_estimators = 1000, random_state = 42) # Train the model on training data rf.fit(train_features, train_labels); # Use the forest's predict method on the test data predictions = rf.predict(test_features) # Calculate the absolute errors errors = abs(predictions - test_labels) # Print out the mean absolute error (mae) print('Promedio del error absoluto:', round(np.mean(errors), 2), ' Porcentaje.') # Calculate mean absolute percentage error (MAPE) mape = 100 * (errors / test_labels) # Calculate and display accuracy accuracy = 100 - np.mean(mape) print('Precision:', round(accuracy, 2), '%.') # Get numerical feature importances importances = list(rf.feature_importances_) # List of tuples with variable and importance feature_importances = [(feature, round(importance, 2)) for feature, importance in zip(feature_list, importances)] # Sort the feature importances by most important first feature_importances = sorted(feature_importances, key = lambda x: x[1], reverse = True) # Print out the feature and importances [print('Variable: {:20} Importance: {}'.format(*pair)) for pair in feature_importances]; # Import tools needed for visualization from sklearn.tree import export_graphviz import pydot # Pull out one tree from the forest tree = rf.estimators_[5] # Import tools needed for visualization from sklearn.tree import export_graphviz import pydot # Pull out one tree from the forest tree = rf.estimators_[5] # Export the image to a dot file export_graphviz(tree, out_file = 'tree.dot', feature_names = feature_list, rounded = True, precision = 1) # Use dot file to create a graph (graph, ) = pydot.graph_from_dot_file('tree.dot') # Write graph to a png file graph.write_png('tree.png') So my question is: how can I add a Confusion matrix to measure accuracy? I tried this example from this here, but it doesn't work. 
The following error appears: Any advice?
AI: From the code and task as you present it, a confusion matrix wouldn't make sense. This is because it shows how well a model is classifying samples, i.e. saying which category they belong to. Your problem (as the author in your link states) is a regression problem, because you are predicting a continuous variable (temperature). Have a look here for more information.
In general, if you do have a classification task, printing the confusion matrix is as simple as using the sklearn.metrics.confusion_matrix function. As input it takes your predictions and the correct values:
from sklearn.metrics import confusion_matrix

conf_mat = confusion_matrix(labels, predictions)
print(conf_mat)
You could consider altering your task to make it a classification problem, for example by grouping the temperatures into classes of a given range. You could say transform the target temperature to be a new_target_class, then change your code to use the RandomForestClassifier. I have done a quick and dirty conversion on the same data linked in that article, check it out here. I basically use the minimum and maximum values of the target variable to set a range, then aim for 10 different classes of temperature and create a new column in the table which assigns that class to each row. The top of it looks like this:
If you can get those predictions going using the RandomForestClassifier, you can then run the confusion matrix code above on the results.
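A minimal sketch of that binning idea (the column names and synthetic data here are placeholders, not the ones from the article):

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Stand-in for your features and continuous target
rng = np.random.RandomState(42)
X = pd.DataFrame(rng.rand(500, 4), columns=['f1', 'f2', 'f3', 'f4'])
y_continuous = X['f1'] * 50 + rng.normal(0, 3, 500)

# Group the continuous target into 10 equal-width classes
y_class = pd.cut(y_continuous, bins=10, labels=False)

X_tr, X_te, y_tr, y_te = train_test_split(X, y_class, test_size=0.25, random_state=42)
clf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_tr, y_tr)
print(confusion_matrix(y_te, clf.predict(X_te)))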
H: Can pinball loss be used to construct a prediction interval? I'm modeling some time series data ($\{y_t\}_t$) and would like to construct a model that is able to return not just a single-value prediction $\hat{y_t}$, but an interval $C_t=(\hat{y}_{t, lower}, \hat{y}_{t, upper})$ such that $y_t \in C_t$ with some probability. Now, I've learned about the pinball loss: $$L(q, z)=\cases{qz, & z >= 0 \\ (q-1)z,& z<0}$$ If I understand it correctly, $q \in (0,1)$ is the quantile that I want to predict, and $z$ is the difference between the actual value $y$ and what my model predicted $\hat{y}$. If the model is trained to optimize this loss for some $q$, it will return me the estimate $\hat{y}^{(q)}_t$ such that $P(y_t < \hat{y}^{(q)}_t)=q$. Is this interpretation correct? Then, can I simply train two models, say one for $q=0.05$ and the other for $q=0.95$ in order to get the estimates of the intervals that contain the actual values with the probability $0.95-0.05=0.9$?
AI: Yes, your interpretation regarding the pinball loss function seems right. For a given quantile level $q$ between 0 and 1, minimizing it gives you the corresponding threshold (quantile) value.
"Then, can I simply train two models, say one for q=0.05 and the other for q=0.95 in order to get the estimates of the intervals that contain the actual values with the probability 0.95−0.05=0.9?"
And of course, if you have trained two models to give threshold values for two quantiles $q_1$ and $q_2$, you can say that $p(\hat{y}_{t, lower} \le y_t < \hat{y}_{t, upper}) = |q_2-q_1|$
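As a concrete sketch, scikit-learn's gradient boosting supports this loss directly (loss='quantile'), so the two models could be trained like this; the toy features standing in for your lagged series values are an assumption:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Toy data standing in for lagged features of your series
rng = np.random.RandomState(0)
X = rng.rand(500, 3)
y = X[:, 0] * 10 + rng.normal(0, 1, 500)

lower = GradientBoostingRegressor(loss='quantile', alpha=0.05).fit(X, y)
upper = GradientBoostingRegressor(loss='quantile', alpha=0.95).fit(X, y)

X_new = rng.rand(5, 3)
print(np.c_[lower.predict(X_new), upper.predict(X_new)])   # ~90% prediction interval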
H: How do i convert a Named num [1:4] to a data frame in R? newbie to r, taking The R Programming Environment from coursera. one of the assignments is to select some columns from a data frame and find the means. the code below seems to get the correct answer, but the answer should be a data frame. wc_2 <- worldcup %>% select(Time, Passes, Tackles, Saves) %>% colMeans() How do i convert this to a data frame? i tried: wc_2<-as.data.frame(wc_2) but that gets it column wise. i do not see any way to pass the rows and columns or do a transpose in what i assume is a data frame constructor. perhaps this can be done as part of the pipe? thanks edit: i got it to work: means <- worldcup %>% select(Time, Passes, Tackles, Saves) %>% colMeans() wc_2<-data.frame(t(matrix(means))) colnames(wc_2)<-names(means) AI: My preferred way to transpose a data.frame (or data.table) is to use the transpose function found in the data.table package. It means you might have to install it: install.packages("data.table"). This give you a function that will do what you want. Here is a demo how to use it: library(data.table) # makes the transpose function available col_names <- colnames(worldcup) # keep track of original column names wc_2 <- colMeans(worldcup) # compute the means wc_2 <- transpose(as.data.frame(wc_2)) # this gives you generic column names colnames(wc_2) <- col_names # reapply the column names Or combining it with your example (not fully tested): library(magrittr) # to import the pipe operator: %>% wc_2 <- worldcup %>% select(Time, Passes, Tackles, Saves) %>% colMeans() %>% as.data.frame() %>% transpose() # you might need to put a dot (`.`) in the empty brackets to pass the argument before the pipe operator then maybe add the names you would like: colnames(wc_2) <- col_names If you like the sound of the package, I recommend going through the mini intro, built in to the package: install.packages("data.table") # install it library(data.table) # load it example(data.table) # run the examples section of ?
H: Generating ordinal data I would like to generate synthetic data which are ordinal, i.e. ordered, in Python. But how would I do this? What are the differences in generating ordinal data vs categorical data? I'm reading the paper "Automatic Discovery of the Statistical Types of Variables in a Dataset," by Valera and Ghahramani. In it, they write: "We account for categorical data by sampling a multinomial variable with $R$ categories, where the probability of the categories is sampled from a Dirichlet distribution....To account for ordinal observations, we first sample the first variable in our dataset from a uniform distribution in the interval $(0,R)$, which we randomly divide into $R$ categories that correspond to the ordinal variable in our dataset." Can someone help me understand the latter part about generating ordinal data? Thank you! AI: Ordinal data deals with categories, which themselves are ordered by meaning somehow, but we cannot strictly talk about the distance between the categories. In the example below, you can see that the ordinal data runs from awesome to terrible, with 5 categories explaining an entire ordinal range: Having a look at the paper, it seems the define a uniform distribution $U(a, b)$ (probably continuous), which they then divide into some random categories, let's say 5, to follow the table above. So if we set the bound of the distribution, $a = 0$ and $b = 1$, we might randomly create five categories, such as: Category : a b ---------------------- Awesome : 0.00 - 0.09 Great : 0.10 - 0.44 OK : 0.45 - 0.46 Bad : 0.47 - 0.59 Terrible : 0.60 - 1.00 We can see that they are ordinal, because they numerical values do signify some kind of meaningful order, but the size of the categories is not necessarily uniform. The authors generated them randomly.
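Here is one way to read that procedure in NumPy - a sketch of my interpretation of the paper's description, with R = 5 categories:

import numpy as np

rng = np.random.RandomState(0)
R = 5        # number of ordinal categories
n = 1000     # number of observations

# Sample the latent variable uniformly on (0, R)
latent = rng.uniform(0, R, size=n)

# Randomly split (0, R) into R ordered intervals and assign each value to one
cut_points = np.sort(rng.uniform(0, R, size=R - 1))
ordinal = np.digitize(latent, cut_points)    # values in {0, 1, ..., R-1}, order preserved

print(np.bincount(ordinal))   # note the categories need not be equally frequent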
H: On minimizing matrix norm (AB-C)
Given A, B and C are matrices with dim(A) = m x n, dim(B) = n x p and dim(C) = m x p, the problem is: I need to learn $$\tilde{A}$$ such that $$\min_{\tilde{A}}||\tilde{A}^TB-C||$$ and $$\min_{\tilde{A}}||\tilde{A}-A||$$
AI: In general, you can't. You can find a matrix that will minimize $||\tilde{A}^TB-C||$, and another that minimizes $||\tilde{A}-A||$, but, in general, the two matrices you find won't be the same. In fact, the matrix that solves the second problem is always $\tilde{A} = A$, and, if $||{A}^TB-C||$ is not the minimum of the first function, then your problem doesn't have a solution.
However, what would a machine learning practitioner do? Instead of finding a matrix that solves both of the problems (which might not exist), you can aim to solve a combination of the two problems. You can aim to find the matrix that minimizes $$||\tilde{A}^TB-C||^2 + ||\tilde{A}-A||^2 $$ or, in general, the matrix that minimizes $$||\tilde{A}^TB-C||^2 + \lambda ||\tilde{A}-A||^2 $$ for some $\lambda >0$. If $\lambda \approx 0$, then you are just solving the first problem, and if $\lambda \gg 0$, then you are just solving the second one. And this is just a quadratic optimization problem, which can be solved using gradient descent and will provide a unique global minimum.
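A small gradient-descent sketch of that combined objective follows. Note it assumes shapes for which the product $\tilde{A}^T B$ is actually defined with $B \in \mathbb{R}^{n \times p}$ and $C \in \mathbb{R}^{m \times p}$ (so here $A$ and $\tilde{A}$ are taken as $n \times m$); the data, $\lambda$ and the learning rate are arbitrary placeholders:

import numpy as np

rng = np.random.RandomState(0)
n, m, p = 4, 3, 5
A = rng.rand(n, m)          # shapes chosen so that A.T @ B has the shape of C
B = rng.rand(n, p)
C = rng.rand(m, p)
lam, lr = 1.0, 0.01

A_tilde = A.copy()
for _ in range(2000):
    R = A_tilde.T @ B - C                           # residual of the first term
    grad = 2 * B @ R.T + 2 * lam * (A_tilde - A)    # gradient of the combined objective
    A_tilde -= lr * grad

print(np.linalg.norm(A_tilde.T @ B - C), np.linalg.norm(A_tilde - A))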
H: When is feature transformation required?
I was fitting machine learning models to clean data (imputed missing values, removed unnecessary features etc). I didn't transform the features that are skewed. Before moving forward, I want to understand how important feature transformation is to fit data into a model. Any opinions? (I know what happens in Random Forest, but am unable to comprehend it for other ML models)
AI: Even if there are models that are robust w.r.t. feature transformations (like Random Forest), in general it's a good practice to transform the features in order to have better performance in a ML model. For three reasons:
1) Numerical stability: computers cannot represent every number, because the electronics they are built on work in binary (zeros and ones). So they use a representation based on floating point arithmetic. In practice, this means that the numerical behavior in the range [0.0, 1.0] is not the same as in the range [1'000'000.0, 1'000'001.0]. So having two features with very different scales can lead to numerical instability, and finally to a model unable to learn anything.
2) Control of the gradient: imagine that you have a feature that spans the range [-1, 1], and another one that spans the range [-1'000'000, 1'000'000]: the loss is much more sensitive to small variations in the weights associated with the second feature, so the gradient will be much more variable in the direction described by that feature. This can lead to other instabilities: some values of the learning rate (LR) can be too small for one feature (and so convergence will be slow) but too big for the other (and so you jump over the optimal values). And so, at the end of the training process you will have a sub-optimal model.
3) Control of the variance of the data: if you have skewed features, and you don't transform them, you risk that the model will simply ignore the elements in the tail of the distributions. And in some cases, the tails are much more informative than the bulk of the distributions.
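In practice the transformation is often just standardisation fitted on the training data; a small sketch with made-up numbers:

import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0, 1000000.0], [2.0, 1000050.0], [3.0, 999900.0]])
X_test = np.array([[1.5, 1000020.0]])

scaler = StandardScaler().fit(X_train)   # learn mean/std on the training data only
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)      # apply the same transformation to new data
print(X_train_s)

# For strongly skewed, non-negative features a log transform such as np.log1p(x)
# before scaling is a common choice as well.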
H: Customer-Product Analytics
I am new to Data Science and I want to do Customer-Product Analytics for my company (a bank). I have data on customers: their income, daily transactions, average balance etc., and which products (saving certificates etc.) they have taken according to their account balance. Can I make a prediction for new or existing customers as to which product will be suitable for them according to their average balance, income etc.? Can a machine learning algorithm match each product to a particular customer? I got to know that cluster analysis and predictive analysis can be useful for such a task, but I want to recommend a particular product to a particular customer. Which algorithm can be useful? And where do I have to begin?
AI: If you have historic data of earlier purchases by customers, try building a classification algorithm (Decision Tree / Random Forest) to learn rules associating customers with products. Then you can suggest a product to a new customer by matching his/her properties to the rules that were already learned. Just google classification algorithms in R/Python.
Clustering (K-means / Hierarchical etc.) would be good for customer segmentation, but it is not directly useful for prediction. https://machinelearningstories.blogspot.com/2017/09/hierarchical-clustering-bottom-up.html
You can also use Market Basket analysis for recommending similar products to existing customers. https://machinelearningstories.blogspot.com/2016/11/recommendation-engine-market-basket.html http://courseprojects.souravsengupta.com/cds2016/product-recommendations-a-multi-classification-problem/
H: ValueError: not enough values to unpack (expected 4, got 2)
I have written this code
fig, (axis1, axis2,axis3, axis4)=plt.subplots(2,2,figsize=(10,4))
and I am getting this error
ValueError: not enough values to unpack (expected 4, got 2)
I tried many ways to remove this error but all was in vain. Can you explain to me why I am getting this error?
AI: It's because you have not looked at how the values are returned by the plt.subplots function:
>>> plt.subplots(2,2,figsize=(10,4))
(<matplotlib.figure.Figure at 0xa3918d0>,
 array([[<matplotlib.axes._subplots.AxesSubplot object at 0x000000000A389470>,
         <matplotlib.axes._subplots.AxesSubplot object at 0x000000000A41AD30>],
        [<matplotlib.axes._subplots.AxesSubplot object at 0x000000000A6F7EB8>,
         <matplotlib.axes._subplots.AxesSubplot object at 0x000000000BC232E8>]], dtype=object))
It returns two values: the figure and a 2x2 array of axes. Instead of unpacking all values at once, unpack in steps; you will get a better idea then. For your case, unpack like this:
>>> fig, [[axis1, axis2],[axis3, axis4]] = plt.subplots(2,2,figsize=(10,4))
H: Predict ratings for Item Based Collaborative Filtering
Given the (cosine) similarity scores of the top 100 neighbors of every item, how do I predict ratings for unrated items? Please explain in simple terms.
Item  Neighbor  Similarity
1     260       0.577305
1     780       0.5655413
1     1210      0.5529503
1     3114      0.5425038
1     1270      .....
2     367       0.5202925
2     364       0.5093084
2     500       0.5082204
2     586       0.4978301
2     480       ......
...
AI: You should first calculate the similarity between co-rated items (items which both the active user and another user have rated). Then you can predict the rating of the active user for the unrated item. One of the methods for this is Slope One.
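In simple terms, once you have the top-$k$ neighbors $N(i)$ of an item $i$ and their similarities, the usual item-based prediction for user $u$ is just a similarity-weighted average of the ratings $r_{u,j}$ that $u$ already gave to those neighbors (this is the standard formulation, independent of Slope One):
$$\hat{r}_{u,i} = \frac{\sum_{j \in N(i)} \text{sim}(i,j)\, r_{u,j}}{\sum_{j \in N(i)} |\text{sim}(i,j)|}$$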
H: Shallow Neural Net for predicting numbers other then 1 or 0? I'm not gonna lie, I'm very new to neural networks but am also so interested in them and learning the way they work and what can be made from them. So in my endeavors for learning, I stumbled upon Siraj Ravel's youtube channel and furthermore, his Github where he posted this 1 neuron neural network that predicts the output given inputs.* I looked at it and his accompanying video and got the basic gist of how it works. After running it a few times making minor adjustments to say the inputs, I decided why not try to add numbers other than 1 and 0. I made a pattern with the output being the inputs added, however, I ran into an error where the output of its test was '1.'. I assumed this was because of the sigmoid function forcing it to be in between 1 and 0. Summary: How can I change that program so I can give it an input of patterns with different numbers than 1 and 0 and it will output correctly (examples below) *In his program inputs were like [1, 0, 1] = 1, [0, 0, 1] = 0 and [1, 0, 0] = 1 (pattern is first column of matrix is answer) and he asked the program to give output for [1, 1, 0] and it correctly outputted 1. *My goal for it was so given "[2, 2, 2] = 6, [1, 2, 0] = 3, and [1, 1, 2] = 4 (more training included if necassary)" as input could it get 4 as output for "[2, 2, 0]? Code for program AI: Well, the main problem is that the existing code and the desired codes solve different problems. The existing code solves a classification problem, when you predict one of certain classes. And you want to solve a regression problem, so that neural net will predict an arbitrary number. Sigmoid function is necessary when you predict classes, you need to remove it for regression. Or in other words you need not Logistic Regression, but Linear Regression. Here is a great article with explanations on this: http://peterroelants.github.io/posts/neural_network_implementation_part01/ I have written an example of implementation: https://gist.github.com/Erlemar/6a5cfcca423ef3b5f6e890c6bef6d5ed
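A minimal sketch of that change - a single linear neuron with no sigmoid, trained with plain gradient descent on a made-up "sum of inputs" pattern like yours:

import numpy as np

X = np.array([[2, 2, 2], [1, 2, 0], [1, 1, 2], [3, 0, 1]], dtype=float)
y = X.sum(axis=1)            # target = sum of the inputs, as in your example

w = np.zeros(3)
b = 0.0
lr = 0.05
for _ in range(10000):
    pred = X @ w + b         # linear output, no squashing function
    err = pred - y
    w -= lr * X.T @ err / len(y)   # gradient step for the squared error
    b -= lr * err.mean()

print((np.array([2, 2, 0], dtype=float) @ w + b).round(2))   # should be close to 4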
H: Pandas index error I am trying to use train_test_split to split my data. However, I am getting an index error. I pasted part of the error message below. I am using Python 3.5 version and sklearn 0.18.1. The code worked with my previous dataset that was different. Features here are in Pandas DataFrame and labels are in Pandas Series. from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.4, random_state=1) KeyError Traceback (most recent call last) /apps/anaconda/anaconda-3.5/lib/python3.5/site- packages/pandas/indexes/base.py in get_loc(self, key, method, tolerance) 2133 try: -> 2134 return self._engine.get_loc(key) 2135 except KeyError:<br><br> pandas/index.pyx in pandas.index.IndexEngine.get_loc (pandas/index.c:4443)()<br><br> pandas/index.pyx in pandas.index.IndexEngine.get_loc (pandas/index.c:4289)()<br><br> pandas/src/hashtable_class_helper.pxi in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:13733)()<br><br> pandas/src/hashtable_class_helper.pxi in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:13687)()<br><br> KeyError: 0 AI: Pandas indexes differently: X[some_slice] # in Numpy is NOT equal to df[some_slice] # in Pandas, but is instead equal to df.iloc[some_slice] You can cast your features dataframe into numpy array by calling .values on them right before splitting: X_train, X_test, y_train, y_test = \ train_test_split(features.values, labels.values, test_size=0.4, random_state=1)
H: Can the learning rate be considered both a parameter AND a hyper-parameter? Here is my understanding of those 2 terms: Hyper-parameter: A variable that is set by a human before the training process starts. Examples are the number of hidden-layers in a Neural Network, the number of neurons in each layer, etc. Some models don't have any hyper-parameters, like the linear model. Parameter: A variable in which the training process will update. For instance, the weights of a Neural Network are parameters as they are updated as we train the network and there is no human intervention on the process. Another example would be the slope and the y-intercept in a simple linear model. Having said that, what would the learning rate parameter ($\eta$) be? $$ \Theta_{i+1} = \Theta_{i} + \eta \nabla J(\Theta_{i} ) $$ My understanding is that $\eta$ is set before the training starts to a large value but then, as the training progresses and the function gets closer and closer to a local minimum, the learning rate is decreased. In that case, doesn't the learning parameter satisfy both the definitions of a parameter and of a hyper-parameter? AI: My understanding is that $\eta$ is set before the training starts to a large value but then, as the training progresses and the function gets closer and closer to a local minimum, the learning rate is decreased. In that case, doesn't the learning parameter satisfy both the definitions of a parameter and of a hyper-parameter? No it does not, because you are at all times controlling the learning rate without reference to the data. Typically both the learning rate and the learning rate schedule are considered hyper-parameters, so by adding a learning rate schedule you have not transformed learning rate into a parameter, but instead added a new hyper-parameter that may need tuning! The learning rate is also is not part of the model as it would be used in production, so it is a training hyper-parameter. Although that is not a strict separation, this also applies to things that are neither hyper-parameters nor parameters, such as the current momentum values, which are used to improve training rate, and are usually stored in a some data structures alongside the model being trained. Likewise most adaptive learning rate structures (used in e.g. RMSProp, Adagrad etc) are in this category - neither model parameter nor hyper-parameter. It might be possible to argue that for a continuous online model, the learning rate could be a model parameter, as it is part of the production model being used to make predictions - provided it was still being used, but adaptive, depending on data seen to date, and/or current learning progress. Then if you had to feed in an initial learning rate you could maybe say that the learning rate was both a parameter and hyper-parameter of the ANN model. Although you would likely have to explain the scenario in full to be understood, it would be more usual to just say you were using an adaptive learning rate.
H: What can be done to increase the accuracy of a biological dataset? I have a biological unbalanced dataset on which I have applied deep learning, Support Vector Machine (all the kernel functions) and Artificial Neural network for multiclass classification (size: 139 samples , 5 attributes) in python. Unfortunately the accuracy is not exceeding 55%. What can be done to increase the accuracy? If a dataset cannot ever go beyond such an average accuracy, what is the solution? AI: A shallow neural network is the wrong approach for a problem with a small training set. Deep learning is even worse for small training sets. 139 samples is severely insufficient to train any deep learning model, or even a shallow neural network. As a very general rule of thumb I use 100 examples for each feature in my dataset for deep learning. This then increases exponentially with every single different class you expect. I suggest you use a machine learning technique such as SVM. This will likely result in better results given the size of your dataset. Try these techniques instead and see what results you get: k-NN, kernel SVM, k-means clustering. If you have an unbalanced dataset then you would want to use anomaly detection algorithms which can be trained on a single distribution. You can learn the distribution of each output class you want. From there, novel examples can be classified based on the likelihood they fit within a given distribution.
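A small sketch of the SVM direction with scikit-learn - cross-validation matters a lot with only 139 samples, and class_weight='balanced' helps with the imbalance. The random data below is only a stand-in for your 139 x 5 dataset:

import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for your 139 x 5 dataset with 5 classes
rng = np.random.RandomState(0)
X = rng.rand(139, 5)
y = rng.randint(0, 5, 139)

model = make_pipeline(StandardScaler(),
                      SVC(kernel='rbf', C=1.0, class_weight='balanced'))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring='balanced_accuracy')
print(scores.mean(), scores.std())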
H: what does eta constant mean in numpy
I am using numpy to implement some neural network tutorials. There is a constant ("eta") used in the code. What does it mean and what does it stand for?
AI: Usually "eta" means the learning rate, but it would be better if you could show an example.
H: How to use the output of GridSearch? I'm currently working with Python and Scikit learn for classification purposes, and doing some reading around GridSearch I thought this was a great way for optimising my estimator parameters to get the best results. My methodology is this: Split my data into training/test. Use GridSearch with 5Fold Cross validation to train and test my estimators(Random Forest, Gradient Boost, SVC amongst others) to get the best estimators with the optimal combination of hyper parameters. I then calculate metrics on each of my estimators such as Precision, Recall, FMeasure and Matthews Correlation Coefficient, using my test set to predict the classifications and compare them to actual class labels. It is at this stage that I see strange behaviour and I'm unsure how to proceed. Do I take the .best_estimator_ from the GridSearch and use this as the 'optimal' output from the grid search, and perform prediction using this estimator? If I do this I find that the stage 3 metrics are usually much lower than if I simply train on all training data and test on the test set. Or, do I simply take the output GridSearchCV object as the new estimator? If I do this I get better scores for my stage 3 metrics, but it seems odd using a GridSearchCV object instead of the intended classifier (E.g. a random Forest) ... EDIT: So my question is what is the difference between the returned GridSearchCV object and the .best_estimator_ attribute? Which one of these should I use for calculating further metrics? Can I use this output like a regular classifier (e.g. using predict), or else how should I use it? AI: Decided to go away and find the answers that would satisfy my question, and write them up here for anyone else wondering. The .best_estimator_ attribute is an instance of the specified model type, which has the 'best' combination of given parameters from the param_grid. Whether or not this instance is useful depends on whether the refit parameter is set to True (it is by default). For example: clf = GridSearchCV(estimator=RandomForestClassifier(), param_grid=parameter_candidates, cv=5, refit=True, error_score=0, n_jobs=-1) clf.fit(training_set, training_classifications) optimised_random_forest = clf.best_estimator_ return optimised_random_forest Will return a RandomForestClassifier. This is all pretty clear from the [documentation][1]. What isn't clear from the documentation is why most examples don't specifically use the .best_estimator_ and instead do this: clf = GridSearchCV(estimator=RandomForestClassifier(), param_grid=parameter_candidates, cv=5, refit=True, error_score=0, n_jobs=-1) clf.fit(training_set, training_classifications) return clf This second approach returns a GridSearchCV instance, with all the bells and whistles of the GridSearchCV such as .best_estimator_, .best_params, etc, which itself can be used like a trained classifier because: Optimised Random Forest Accuracy: 0.916970802919708 [[139 47] [ 44 866]] GridSearchCV Accuracy: 0.916970802919708 [[139 47] [ 44 866]] It just uses the same best estimator instance when making predictions. So in practise there's no difference between these two unless you specifically only want the estimator instance itself. As a side note, my differences in metrics were unrelated and down to a buggy class weighting function. [1]: http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV.fit
H: Keras input dimension bug? Keras has a problem with the input dimension. My first layer looks like this: model.add(Dense(128, batch_size=1, input_shape=(150,), kernel_initializer="he_uniform", kernel_regularizer=regularizers.l2(0.01), activation="elu")) As you can see the input dimension should be (150,) and with the fixed batch_size it is (1, 150) My data has dimension (150,) and could be for example a numpy array with 150 zeros. old_qval = model.predict(old_state_m) Here I call the model to make a prediction. Normally Keras should automatically add the batch size as an extra dimension so I should end up with (1, 150) which would work. But Keras adds the dimension for the batch size at the wrong place and I end up with (150, 1). I tried tensorflow and theano backend. Do I have a bug in my code or is it a problem with Keras? How can I fix the problem? I could reshape my input data but it already has the needed shape of (150,) and should be fine. What else could I do? If I should provide more data or code feel free to ask. AI: When you do model.predict(X) the first axis in X is always an index in a batch. So if you want to predict on one sample, do something like X = np.expand_dims(X, axis=0) In your case, this should work: old_qval = model.predict( np.expand_dims(old_state_m, axis=0) )
H: How much data warrants building a pipeline? If I'm doing simple aggregation dashboards, what's the minimum amount of data which justifies building a full blown data processing pipeline? Is it reasonable to build a complicated pipeline with Mysql -> Hadoop -> Redshift and then analytics and visualizations on top of redshift if my database only has about 10million transactions per month. Would it be better to just run something like ReDash visualizations on top of my unaggregated data in MySql? AI: You should only complicate it as much as necessary to meet your requirements. What kind of aggregations are you doing? Most DBs can handle basic counts pretty easily. 10 million in a month sounds like something a large MySQL instance could handle. How quickly does everything have to update? If the users only expect the summarized data to be updated monthly, you have a lot of time. If you have to update it more often, it might make sense to have MySQL collecting it but Redshift presenting it. Redshift's analytical focus and columnar storage means you may not need pre-aggregated data until your volume gets much higher than what would require aggregation in MySQL. I was using a 4 node Redshift cluster with the smallest instance types and it could do a count across >300 million rows in 17 seconds. Can you not build aggregate tables in MySQL? What is making you want to put Hadoop in between MySQL and Redshift?
H: What's a good Python HMM library? I've looked at hmmlearn but I'm not sure if it's the best one.
AI: SKLearn has an amazing array of HMM implementations, and because the library is very heavily used, odds are you can find tutorials and other StackOverflow comments about it, so definitely a good start. http://scikit-learn.sourceforge.net/stable/modules/hmm.html
PS: As of 2019 this is outdated - the HMM module has been removed from scikit-learn - and you can use hmmlearn instead (pip3 install hmmlearn).
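A minimal hmmlearn sketch (the toy one-dimensional observations are made up):

import numpy as np
from hmmlearn import hmm

# Toy 1-D observations: two regimes with different means
rng = np.random.RandomState(42)
X = np.concatenate([rng.normal(0, 1, (100, 1)), rng.normal(5, 1, (100, 1))])

model = hmm.GaussianHMM(n_components=2, covariance_type='diag', n_iter=100)
model.fit(X)
hidden_states = model.predict(X)
print(hidden_states[:10], model.means_.ravel())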
H: Perceptron weight vector update
I read about the Rosenblatt Perceptron Learning Algorithm. Often there is an explicit note: "It is important to note that all weights in the weight vector are being updated simultaneously." But why are all weights updated simultaneously? I tried another approach where I iterated over all weights and updated them in different iterations. It also worked on some simple test cases. Could someone explain to me why they are updated simultaneously and why this approach should be better?
AI: The algorithm works by adding or subtracting the feature vector to/from the weight vector. If you only add/subtract parts of the feature vector you are not guaranteed to always nudge the weights in the right direction, which could mess with the convergence of the procedure.
The idea is that in the weight space every input vector is a hyperplane. You need to find a weight vector that is on the correct side of all the hyperplanes of your data inputs. The correct weight therefore lies in a convex cone. If you observe a misclassification, that means your weight vector is on the wrong side of the hyperplane and therefore outside the convex cone of possible solutions. Now, by adding/subtracting the input vector to/from the weight vector you push the weight vector towards classifying this data input correctly. You also make sure that you move your weight vector by a meaningful amount (the length of the input vector) towards the cone of all possible solutions. If you only add parts of the vector (i.e. do not update all weights) at each iteration, you cannot be sure that you make sufficient progress or even move in the right direction of the solution space.
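For reference, the full simultaneous update is just one vector operation; here is a tiny sketch with made-up, linearly separable data and labels in {-1, +1}:

import numpy as np

X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = np.zeros(2)
b = 0.0

for _ in range(10):                       # a few passes over the data
    for x_i, y_i in zip(X, y):
        if y_i * (w @ x_i + b) <= 0:      # misclassified (or on the boundary)
            w += y_i * x_i                # add/subtract the whole input vector at once
            b += y_i

print(w, b)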
H: Linear Discriminant Analysis, which parameters can be tunned in cross validation set up? I am implementing Linear Discriminant Analysis in R, which parameters can be tunned in cross validation set up? In regularized mode called penalizedLDA there are parameters which are optimised but I want to know which parameters are turned in case of simple LDA method? AI: LDA has a closed-form solution and therefore has no hyperparameters. The solution can be obtained using the empirical sample class covariance matrix. Shrinkage is used when there are not enough samples. In that case the empirical covariance matrix is often not a very good estimator.
H: Tensorflow regression model giving same prediction every time

import tensorflow as tf

x = tf.placeholder(tf.float32, [None,4])   # input vector
w1 = tf.Variable(tf.random_normal([4,2]))  # weights between first and second layers
b1 = tf.Variable(tf.zeros([2]))            # biases added to hidden layer
w2 = tf.Variable(tf.random_normal([2,1]))  # weights between second and third layer
b2 = tf.Variable(tf.zeros([1]))            # biases added to third (output) layer

def feedForward(x,w,b):                    # function for forward propagation
    Input = tf.add(tf.matmul(x,w), b)
    Output = tf.sigmoid(Input)
    return Output

Out1 = feedForward(x,w1,b1)                # output of first layer
Out2 = feedForward(Out1,w2,b2)             # output of second layer
MHat = 50*Out2                             # final prediction is in the range (0,50)

M = tf.placeholder(tf.float32, [None,1])   # placeholder for actual (target value of marks)
J = tf.reduce_mean(tf.square(MHat - M))    # cost function -- mean square errors

train_step = tf.train.GradientDescentOptimizer(0.05).minimize(J)  # minimize J using Gradient Descent

sess = tf.InteractiveSession()             # create interactive session
tf.global_variables_initializer().run()    # initialize all weight and bias variables with specified values

xs = [[1,3,9,7],
      [7,9,8,2],                           # x training data
      [2,4,6,5]]

Ms = [[47],
      [43],                                # M training data
      [39]]

for _ in range(1000):                      # performing learning process on training data 1000 times
    sess.run(train_step, feed_dict = {x:xs, M:Ms})

>>> print(sess.run(MHat, feed_dict = {x:[[1,15,9,7]]}))
[[50.]]
>>> print(sess.run(MHat, feed_dict = {x:[[3,8,1,2]]}))
[[50.]]
>>> print(sess.run(MHat, feed_dict = {x:[[6,7,10,9]]}))
[[50.]]

In this code, I am trying to predict the marks M obtained by a student in a test out of 50, given how many hours he/she slept, studied, used electronics and played the day before the test. These 4 features make up the input feature vector x. To solve this regression problem, I am using a deep neural network with an input layer of 4 perceptrons (the input features), a hidden layer with two perceptrons and an output layer with one perceptron. I have used sigmoid as the activation function. But I am getting the exact same prediction ([[50.]]) for M for every input vector I feed in. Can someone please tell me what is wrong with the code above, and why I get the same result each time?

AI: Your network design/logic is basically correct, but you are seeing some very common problems with neural network numerical stability. This results in your weights diverging and not training accurately. Here are the fixes; any one of them might help a little, but the first two should be used for nearly all neural network projects.

1. Inputs need to be scaled to work with neural networks. This is called input normalisation. Usually you would do this in data preprocessing, but for your simple network we can include the scaling at the input:

x_normalised = x * 0.2 - 0.5               # Arbitrary scaling I just made up
Out1 = feedForward(x_normalised,w1,b1)     # output of first layer

The most common scaling operation is to take all the training data and convert it so that it has mean $0$ and standard deviation $1$ (typically per feature, as opposed to the global scaling for all data I have done here) - store the values used to achieve that and apply them to all following data for training, tests etc.

2. Adjust the learning rate until you get a working value.

train_step = tf.train.GradientDescentOptimizer(0.001).minimize(J)

A value that is too high will cause your training to fail. A value that is too low will take ages to learn anything.

3. For small training sets, use more iterations than you might think. This is not a general rule, but specific to demos with tiny amounts of training data like your example, or the commonly-used "learning the XOR function".

for _ in range(10000):                     # performing learning process
    sess.run(train_step, feed_dict = {x:xs, M:Ms})

With your very simple network this may actually cause over-fitting to the training data, so you will have to play with a value that gives you "sensible" results. However, in general, how to spot and minimise over-fitting is a whole broad subject in itself, based on how you test and measure generalisation. That will need other questions if you are not sure about it when you get there. It should be high on your list of things to learn though . . . it is a critical skill in producing useful neural networks that solve real problems.
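A minimal sketch of the per-feature standardisation described in point 1, using the question's training data; store the mean and standard deviation and reuse them on any data you later predict on:

import numpy as np

xs = np.array([[1, 3, 9, 7],
               [7, 9, 8, 2],
               [2, 4, 6, 5]], dtype=np.float32)

mean = xs.mean(axis=0)
std = xs.std(axis=0) + 1e-8               # avoid division by zero
xs_normalised = (xs - mean) / std         # feed this to the network instead of xs

x_new = np.array([[1, 15, 9, 7]], dtype=np.float32)
x_new_normalised = (x_new - mean) / std   # apply the same transform at prediction time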
H: Which is better: Out of Bag (OOB) or Cross-Validation (CV) error estimates? I have seen other posts in this forum but didn't find any convincing answer. Random Forest has another, built-in way of tuning hyperparameters via OOB. OOB and CV are not the same, since the OOB error for each sample is computed using only the trees that did not see that sample, rather than the full forest. So what are the advantages and disadvantages of using OOB instead of CV? Is it correct to say that OOB lets you train on more data? AI: OOB samples are a very efficient way to obtain error estimates for random forests. From a computational perspective, OOB estimates are definitely preferred over CV. Also, if the number of bootstrap samples (trees) is large enough, CV and OOB will produce the same (or very similar) error estimates. Thus, if you perform many bootstrap samples, I would recommend validating along the way with the OOB samples until the OOB error converges, rather than running a separate cross-validation.
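A minimal scikit-learn sketch comparing the two estimates on made-up data; the OOB estimate comes essentially for free from the bootstrap, while CV needs k extra fits:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
print("OOB accuracy:", rf.oob_score_)   # each sample scored only by trees that did not see it

cv_scores = cross_val_score(
    RandomForestClassifier(n_estimators=500, random_state=0), X, y, cv=5)
print("5-fold CV accuracy:", cv_scores.mean())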
H: Evaluating loss for a non-classifying convolutional neural network Sorry if my question is kind of dumb, I am very new to this field. I am trying to create a CNN that plays a variant of chess (for the examples, we'll use chess as it is close enough). My network, which is a policy network, outputs a vector of planes of probabilities (e.g. 1st pawn layer, 2nd rook layer, etc.); each layer contains scalars expressing how "good" a move of that piece to that square would be, according to the network. My question is: given the input channels I and the expected outputs (0s everywhere and 1 for the move that was played), how do I calculate the loss for gradient descent? (E.g. how do I evaluate the "closeness" of one move to another?) My wild guess is to give each layer a "wrongness" factor (e.g. if instead of moving the first pawn it tries to move the queen, it is labelled as really "wrong") and then apply some kind of spatial locality (e.g. if it doesn't move the first pawn but the second one, to the square on the very right, it is not very wrong). But is that correct? And in general, how do you compute the loss of a non-classifying convolutional neural network? AI: My wild guess is to give each layer a "wrongness" factor (e.g. if instead of moving the first pawn it tries to move the queen, it is labelled as really "wrong") and then apply some kind of spatial locality (e.g. if it doesn't move the first pawn but the second one, to the square on the very right, it is not very wrong). But is that correct? You can relatively simply teach a policy network to predict human-like moves in your scheme using a database of moves. Actually your "wrongness" is probably well enough represented by classification (your positive class might be "this is a good move") and the usual log loss that goes with it. After you have trained a policy network, you will want to look in depth at the literature on game-playing bots. Your policy network might work quite well alongside Monte Carlo Tree Search, provided you have some kind of evaluation heuristic for the resulting position. A reinforcement learning approach that learns from self-play would take you further, enabling the bot to teach itself about good and bad moves/positions, but it is too complex to explain in an answer here. I suggest looking into the subject after training your network and seeing how good a player you can create using just the policy network and a move search algorithm. And in general, how do you compute the loss of a non-classifying convolutional neural network? There are a few common options available for regression, such as mean square error (MSE), i.e. $\frac{1}{2N}\sum_{i=1}^{N}(\hat{y}_i - y_i)^2$, where $\hat{y}_i$ is your prediction and $y_i$ is the ground truth for each example. If you use this loss function and want to predict values outside of the range 0-1, remember to use a linear output layer (i.e. no activation function after the last layer), so that the network can actually output values close to those you need - that's about the only difference in network architecture you need to care about. In the more general case of game-playing bots, it is usual (but not required) to predict a "return" or "utility", which is the sum of all rewards that will be gained by continuing to act in a certain way. MSE loss is a good choice for that. Although for zero-sum two-player games where the reward is simply win/lose, you can use a sigmoid output layer (predicting the chance of a win) and cross-entropy loss, much like a classifier.
For your specific case, you can treat your initial policy network as a classifier. This immediately gives you probability weightings for the predicted move, which you can use to pick the predicted best play, or maybe to guide Monte Carlo Tree Search. The kind of network that predicts utility or return from a position (or a combination of position and action) is called a value network (or an action-value network) in reinforcement learning. If you have both a policy network and a value network, then you are on the way to creating an actor-critic algorithm.
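If you do treat the policy output as a classifier, the loss could look like the sketch below (TensorFlow 1.x style, purely as an illustration - the plane/board sizes and the flattened layout are assumptions, not something from the question):

import tensorflow as tf

N_PIECE_PLANES, BOARD = 6, 8       # e.g. 6 piece types on an 8x8 board (made-up sizes)
N_MOVES = N_PIECE_PLANES * BOARD * BOARD

logits = tf.placeholder(tf.float32, [None, N_MOVES])       # raw network outputs, flattened (a placeholder here just for brevity)
played_move = tf.placeholder(tf.float32, [None, N_MOVES])  # one-hot: 1 for the move actually played

# softmax over all (piece-plane, square) cells turns the output into a probability
# distribution over moves; cross-entropy against the played move is the loss
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=played_move, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(loss)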
H: Breaking down a column in Pandas into a separate CSV for display in Tableau

My data is coming from a CSV, which should be visualized in Tableau. However, the data contains the column category_list, which consists of values separated by a vertical bar (|). Since Tableau can't handle arrays inside of attributes, I used Python (Pandas) to load the CSV and manipulate the data:

import pandas as pd
companies = pd.read_csv("companies.csv")

I assume that the category_list column needs to be broken down and stored in another CSV (containing permalink (unique ID) and category pairs). Something like this:

permalink,category
/organization/-qounter,Application Platforms
/organization/-qounter,Real Time
/organization/-qounter,Social Network Media
/organization/-the-one-of-them-inc-,Apps
/organization/-the-one-of-them-inc-,Games
/organization/-the-one-of-them-inc-,Mobile
/organization/1-4-all,Entertainment
/organization/1-4-all,Games
/organization/1-4-all,Software
/organization/1-800-publicrelations-inc-,Internet
/organization/1-800-publicrelations-inc-,Marketing
/organization/1-800-publicrelations-inc-,Media
/organization/1-800-publicrelations-inc-,Public Relations
/organization/1-mainstream,Apps
/organization/1-mainstream,Cable
/organization/1-mainstream,Distribution
/organization/1-mainstream,Software
...

How to achieve it? Excerpt of the original CSV:

permalink,category_list,...
/organization/-qounter,Application Platforms|Real Time|Social Network Media,...
/organization/-the-one-of-them-inc-,Apps|Games|Mobile,...
/organization/1-4-all,Entertainment|Games|Software,...
/organization/1-800-publicrelations-inc-,Internet|Marketing|Media|Public Relations,...
/organization/1-mainstream,Apps|Cable|Distribution|Software,...
...

AI: If you don't need the other columns, here is a solution. It splits the column, stacks it vertically and combines the result with the "permalink" column:

df.set_index('permalink').category_list.str.split('|', expand=True).stack().reset_index('permalink').rename(columns={0:'category'})

   permalink                                  category
0  /organization/-qounter                    Application Platforms
1  /organization/-qounter                    Real Time
2  /organization/-qounter                    Social Network Media
0  /organization/-the-one-of-them-inc-       Apps
1  /organization/-the-one-of-them-inc-       Games
2  /organization/-the-one-of-them-inc-       Mobile
0  /organization/1-4-all                     Entertainment
1  /organization/1-4-all                     Games
2  /organization/1-4-all                     Software
0  /organization/1-800-publicrelations-inc-  Internet
1  /organization/1-800-publicrelations-inc-  Marketing
2  /organization/1-800-publicrelations-inc-  Media
3  /organization/1-800-publicrelations-inc-  Public Relations
0  /organization/1-mainstream                Apps
1  /organization/1-mainstream                Cable
2  /organization/1-mainstream                Distribution
3  /organization/1-mainstream                Software

Then you can save it to csv.
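Putting it together as a small script that also writes the result out for Tableau (the output file name is made up):

import pandas as pd

companies = pd.read_csv("companies.csv")

categories = (companies.set_index('permalink')['category_list']
              .str.split('|', expand=True)
              .stack()
              .reset_index('permalink')
              .rename(columns={0: 'category'}))

categories.to_csv("categories.csv", index=False)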
H: COUNT on External Table in HIVE

I have been experimenting with the EXTERNAL table concept in Hive:

CREATE EXTERNAL TABLE IF NOT EXISTS MovieData (
  id INT, title STRING, releasedate date, videodate date, URL STRING,
  unknown TINYINT, Action TINYINT, Adventure TINYINT, Animation TINYINT,
  Children TINYINT, Comedy TINYINT, Crime TINYINT, Documentary TINYINT,
  Drama TINYINT, Fantasy TINYINT, Film-Noir TINYINT, Horror TINYINT,
  Musical TINYINT, Mystery TINYINT, Romance TINYINT, Sci-Fi TINYINT,
  Thriller TINYINT, War TINYINT, Western TINYINT)
COMMENT 'This is a list of movies and its genre'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

I created the table using the above statement and then used the LOAD statement to populate the data:

LOAD DATA LOCAL INPATH '/home/ubuntu/MovieLens.txt' INTO TABLE MovieData;

Later I DROP the table in Hive, recreate it and LOAD the data again... But when I then run a COUNT on the table, I get double the number of rows present in the file that I loaded. I read through a few articles saying that dropping an EXTERNAL table deletes only the schema from the Hive metastore, not the data (External Table). Can you please advise why Hive behaves this way?

AI: In the Hadoop framework there are multiple ways to analyze the data; which one you use depends on your use case, expertise and preference. Hive EXTERNAL tables are designed so that other programmers can share the same data location from other data processing models such as Pig, MapReduce programs, Spark and others, without affecting each other's work. With an external table you will not lose the data if you accidentally drop the table: as you already know, the drop removes only the metadata (the schema), and the underlying data remains untouched. This is useful if there is already legacy data in HDFS on which you want to put some metadata so that the data can be queried and manipulated using Hive, or if you are loading data onto HDFS from other ETL tools. Because dropping the EXTERNAL table does not delete the data, and you then LOAD the same file again into the recreated table, a second copy of the data ends up in the table's location - that is why your COUNT returns double the expected value. If you are on your own for all operations (load, analysis, drop, etc.), Hive also supports INTERNAL (managed) tables. If you want the data to be deleted when you drop the table, use an internal table: just remove the EXTERNAL keyword from your query, and dropping the table will then delete the data as well.
H: Name of this algorithm for supervised cluster assignment Is there a name for an algorithm of cluster assignment that is based uniquely on the distance between the data point to classify and the center of the cluster? Let me be more clear: let's say that I have two clusters $A, B$, each made up of $N$ points $x(i)_A$ and $x(j)_B$. I have to assign a new point $x_{new}$ to its cluster. The point is assigned to the cluster that minimizes the distance between the point and the center of the cluster: $$ \min_{A,B} \left( d_A\left(x_{new}, \frac{1}{N}\sum_{i \in A}x_i\right), d_B\left(x_{new}, \frac{1}{N}\sum_{j \in B}x_j\right) \right) $$ Does this procedure have a name? Is there an algorithm similar to this? The closest algorithm that I can think of is k-Nearest Neighbours, but it's easy to think of cases where kNN performs properly and this algorithm performs very poorly. For instance, this algorithm as such will perform badly on data that is non-uniform / non-Gaussian in shape. This question arises from a paper that offers a procedure for doing supervised quantum machine learning, and claims that: Consider the task of assigning a post-processed vector $u \in R^n$ to one of two sets $V, W$, given $M$ representative samples $v_j ∈ V$ and $M$ samples $w_k ∈ W$. A common method for such an assignment is to evaluate the distance $|u−\frac{1}{M}\sum_{j} v_j |$ between $u$ and the mean of the vectors in $V$, and to assign $u$ to $V$ if this distance is smaller than the distance between $u$ and the mean of $W$. AI: The assignment step of K-means clustering works like this. K-means uses Lloyd's algorithm, which basically repeats two steps. For each new sample: 1) choose the best cluster by finding the minimum distance to the available cluster means; 2) recalculate the chosen cluster's mean.
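For reference, scikit-learn ships exactly this supervised assignment rule as NearestCentroid (often described as a nearest-centroid or minimum-distance classifier); a minimal sketch on made-up data:

import numpy as np
from sklearn.neighbors import NearestCentroid

X_A = np.random.randn(50, 2)             # cluster A around (0, 0)
X_B = np.random.randn(50, 2) + 5         # cluster B around (5, 5)
X = np.vstack([X_A, X_B])
y = np.array([0] * 50 + [1] * 50)

clf = NearestCentroid()                  # Euclidean distance to each class mean
clf.fit(X, y)
print(clf.predict([[4.5, 4.0]]))         # assigned to the nearer centroid (class 1)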
H: Binary prediction from binary variables Logistic Regression generates a binary outcome for a non-binary variable. I need a binary outcome from binary variables; this is the requirement. How do I predict binary A using previous values of A? Or, how do I predict binary A, B, C, D values from previous A, B, C, D values? I know I can gather data and calculate basic probabilities. Is that the most suitable approach? Thank you in advance. AI: This is really a job for Logistic Regression. Input variables can be categorical/boolean and the prediction can be categorical/boolean as well. However, if your target variable contains multiple categories, you should use multinomial logistic regression, or ordinal logistic regression if there is an order in your categories; this also changes the way you interpret the output. Another approach would be to take any nonparametric method, a Decision Tree for example, where both input and target can be boolean/categorical/continuous. Note that in the case of a categorical variable, it is often recommended to transform it into dummy variables.
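A minimal scikit-learn sketch with 0/1 inputs and a 0/1 target (the data is made up):

import numpy as np
from sklearn.linear_model import LogisticRegression

# previous values of A, B, C, D as 0/1 features
X = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 0],
              [0, 1, 1, 1]])
y = np.array([1, 0, 1, 0, 1, 0])             # next value of A

clf = LogisticRegression()
clf.fit(X, y)
print(clf.predict([[1, 0, 1, 1]]))           # hard 0/1 prediction
print(clf.predict_proba([[1, 0, 1, 1]]))     # probability of each class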
H: What is the meaning of spherical dataset? In the following article, one of the statements is as follows: The K-means algorithm is effective only for spherical datasets What does spherical dataset mean? AI: In this case, a picture is worth a thousand words. It literally means data whose clusters, plotted on X and Y, are roughly spherical - compact, round blobs. Different clustering algorithms work better on different distributions. For example, K-means does poorly on the arrangements in the first two rows of the image but OK on the last row.
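A minimal sketch of what "spherical" looks like in practice - isotropic Gaussian blobs, which is exactly what scikit-learn's make_blobs generates and the kind of data K-means handles well:

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

X, y = make_blobs(n_samples=300, centers=3, cluster_std=1.0, random_state=0)
labels = KMeans(n_clusters=3, random_state=0).fit_predict(X)
print(adjusted_rand_score(y, labels))   # close to 1 on spherical clusters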
H: How does LightGBM deal with value scale? I understand that the loss metric can be linear, log, or other things. This is documented at http://lightgbm.readthedocs.io/en/latest/Parameters.html?highlight=logloss#metric-parameters I would like to understand how LightGBM works on variables with different scales. In other words, is it necessary for me to harmonize scale when running LightGBM? (I am used to linear regression, where you need to get things onto a linear scale.) If I had inputs x1, x2, x3, output y and some noise N, then here are a few examples of different scales:

$y = x1 + x2 + x3 + N$
$y = exp(x1 + x2 + x3 + N)$
$y = log(x1 + x2 + x3 + N)$
$y = sqrt(x1 + x2 + x3 + N)$
$y = log(x1 * x2 * x3 * N)$

AI: Generally, in tree-based models the scale of the features does not matter. This is because at each tree level, the score of a possible split will be the same whether the respective feature has been scaled or not. Think of it this way: we're dealing with a binary classification problem and the feature we're splitting on takes values from 0 to 1000. If you split at 300, the samples <300 belong 90% to one category while those >300 belong 30% to one category. Now imagine this feature is scaled to between 0 and 1. Again, if you split at 0.3, the samples <0.3 belong 90% to one category while those >0.3 belong 30% to one category. So you've changed the splitting point, but the actual distribution of the samples with respect to the target variable remains the same.
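A rough check of that claim, assuming the lightgbm Python package: fit the same model on raw and monotonically transformed copies of a feature and compare the predictions. Expect a very small difference rather than exactly zero, since LightGBM's histogram binning can shift bin edges slightly:

import numpy as np
import lightgbm as lgb

rng = np.random.RandomState(0)
X = rng.uniform(1, 1000, size=(500, 3))
y = X[:, 0] + X[:, 1] + X[:, 2] + rng.normal(0, 1, 500)

X_scaled = X.copy()
X_scaled[:, 0] = np.log(X_scaled[:, 0])      # monotonic transform of one feature

m1 = lgb.LGBMRegressor(random_state=0).fit(X, y)
m2 = lgb.LGBMRegressor(random_state=0).fit(X_scaled, y)

X_test = rng.uniform(1, 1000, size=(10, 3))
X_test_scaled = X_test.copy()
X_test_scaled[:, 0] = np.log(X_test_scaled[:, 0])
print(np.abs(m1.predict(X_test) - m2.predict(X_test_scaled)).max())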
H: What does a predicted probability really mean, without considering the accuracy of the underlying model? Say I've built a (completely unrealistic) classification model in Keras that gives me 1.00 accuracy. Next, I would like to use my model on some new, unseen data, and use model.predict_proba to get the probability that an observation belongs to class "A". Say this returns 0.75. Am I interpreting this correctly in English: "100 percent of the time, the model is confident that this new observation is 75 percent likely to be class A"? If this is correct, then let's consider if my model was not totally perfect, like in real life, and instead gave me 0.40 accuracy. Say my predict_proba is still 0.75. Then, is this correct: "40 percent of the time, the model is confident that this new observation is 75 percent likely to be class A"? If so... this makes it seem like predict_proba() is not telling the complete story. I could mislead someone (say a journalist... or a judge, whoever) by saying, "There's a 75 percent chance this unseen observation belongs to class A"... and that might sound great, if I fail to reveal that this statement was based on a model that had a low accuracy like 0.40. Am I stating this correctly, and does my apprehension have validity? AI: Accuracy is measured in a classification model by comparing the predicted labels to the actual, known labels. The predicted labels are a function of both the predicted probabilities for each class and a predefined threshold (for binary classification, usually 0.5). So if sample A got a predict_proba of {0: 0.2, 1: 0.8}, it will be labeled as 1 (since 0.8 > 0.5). Accuracy is a measure of classification correctness, while predict_proba is a direct output of the model's underlying function.
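A minimal sketch of how the two quantities relate: probabilities plus a threshold give labels, and accuracy is computed from those labels alone (the numbers are made up):

import numpy as np
from sklearn.metrics import accuracy_score

proba = np.array([0.75, 0.40, 0.90, 0.55, 0.20])   # predicted P(class A) per sample
y_true = np.array([1, 0, 1, 0, 0])                 # actual classes

y_pred = (proba >= 0.5).astype(int)                # default 0.5 threshold
print(accuracy_score(y_true, y_pred))              # 0.8 here: one sample is mislabelled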
H: Training the Discriminative Model in a Generative Adversarial Network What I know so far in DCGAN is that the discriminator is trained using labeled data (so maybe that occurs before training the generative model). Also, I know that there is a race between the generator and the discriminator, so maybe training occurs online. So I have some concerns here: How many outputs should the discriminator have (is it one output describing a probability, e.g. P(x))? How do we choose its target output when feeding fake data vs. real data? Is the discriminator trained before using it in the DCGAN, or is the training done online? (It is mentioned in the original paper, Generative Adversarial Nets https://arxiv.org/pdf/1406.2661.pdf, that the whole network is trained using back propagation, hence I think it is online.) Any help is much appreciated!! AI: In normal GANs, there are no labels; the training is completely unsupervised. The role of the discriminator is to tell apart samples generated by the generator from those taken from the training dataset. The training dataset is just a bunch of images. The discriminator is trained to output 0 for data generated by the generator (i.e. fake data) and 1 for real data (so the discriminator has a single output). This should answer points 1 and 2. The training of the discriminator and the generator takes place alternately in a loop: first we train the discriminator, then the generator, then the discriminator again, etc. It is possible (and common) to train the discriminator a few times for each time we train the generator. This should answer point 3. It is also possible to use labels, but not in the way you were suggesting. When labels are used, we have Conditional GANs (https://arxiv.org/abs/1411.1784). In this case, the label is supplied as input to both the generator and the discriminator. The generator has to generate data that is associated with the supplied label. The discriminator has to tell apart fake data from real data, given the label.
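A schematic of that alternating loop as runnable pseudo-code - the sampling functions, the generator and the two train_* steps are empty stand-ins for your own networks and optimiser updates:

import numpy as np

rng = np.random.RandomState(0)

def sample_real_batch(n=32):                 # real images, labelled 1 for the discriminator
    return rng.normal(4.0, 1.0, size=(n, 1))

def sample_noise_batch(n=32):
    return rng.uniform(-1.0, 1.0, size=(n, 1))

def generator(z):                            # stand-in generator; its output is labelled 0
    return 2.0 * z

def train_discriminator(x_real, x_fake):     # one step maximising log D(x) + log(1 - D(G(z)))
    pass

def train_generator(z):                      # one step pushing D(G(z)) towards 1
    pass

k = 2                                        # discriminator updates per generator update
for step in range(1000):
    for _ in range(k):
        train_discriminator(sample_real_batch(), generator(sample_noise_batch()))
    train_generator(sample_noise_batch())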
H: What feature engineering is necessary with tree based algorithms? I understand data hygiene, which is probably the most basic feature engineering. That is making sure all your data is properly loaded, making sure N/As are treated as a special value rather than a number between -1 and 1, and tagging your categorical values properly. In the past I've done plenty of linear regression analysis. So feature engineering mainly concerned with: Getting features into the correct scale using log, exponent, power transformations Multiplying features: if you have height and width, multiply to make area Selecting features: remove features based on P value But, for LightGBM (and Random Forest) it seems like the scale of the features doesn't matter because orderable items are ordered and then randomly bisected. Interactions of features don't matter because one of the weak classifiers should find it if it is important. And feature selection isn't important because if the effect is weak then those classifiers will be attenuated. So, assuming you can't find more data to bring in, what feature engineering should be done with decision tree models? AI: Feature engineering that I would consider essential for even tree based algorithms are: Modular arithmetic calculations: e.g. converting a timestamp into day of the week, or time of day. If your model needs to know that something happens on the third Monday of every month, it will be nearly impossible to determine this from timestamps. On a similar vein, creating new features from the data you have available can drastically improve your predictive power. This is where domain knowledge is extremely important - if you know of, or think you know of a relationship then you can include variables that describe that relationship. This is because tree based methods can only create splits that are horizontal or vertical (i.e. orthogonal to your data). Dimension reduction is typically performed by either feature selection or feature transformation. Reducing the dimension through feature selection will likely not help much with the models you mention, but an algorithm may or may not benefit from feature transformation (for example principal component analysis) depending on how much information is lost in the process. The only way to know for sure is to explore whether feature transformation provides better performance.
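A minimal pandas sketch of the timestamp point (the column name 'ts' and the derived features are purely illustrative):

import pandas as pd

df = pd.DataFrame({'ts': pd.to_datetime([
    '2018-01-01 08:30', '2018-01-15 22:10', '2018-02-05 13:45'])})

df['day_of_week'] = df['ts'].dt.dayofweek               # 0 = Monday
df['hour_of_day'] = df['ts'].dt.hour
df['week_of_month'] = (df['ts'].dt.day - 1) // 7 + 1    # helps capture "third Monday" patterns
print(df)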
H: Grid Search and High Variance I am currently trying to optimise some parameters of my model (15000 samples). What I am finding is a relatively large variance in the loss function, 2%-10%, which makes it hard to identify which parameter setting is best. This appears to happen based on how the random number generator splits the data into train/test sets. I have tried: 5-fold CV and a 75% train/test split. Fixing the random seed does help (or using the same test set), but it concerns me that I get such variation based on which samples end up in the test set. It seems alarming that the 'best parameter' is so dependent on a particular shuffle of the data, and I worry how it translates to real-world use. What is people's approach to situations like this? I was thinking I could just repeat each test multiple times and take the average, but that has very large computational costs and seems very inefficient. AI: As you mention, it can be a good idea to repeat CV a few times and average the results to obtain a more reliable estimate. If you find many parameter constellations that are within one standard deviation (or in that neighborhood) of the best performing model, it can make sense to choose a model with a slightly worse CV performance but a simpler decision boundary (e.g. shallower decision trees, smaller gammas in an RBF-kernel SVM, stronger regularization parameters, etc.) - something that was suggested, for example, here.
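A minimal sketch of repeated cross-validation with scikit-learn (RepeatedKFold was added in version 0.19), which averages out much of the split-to-split variance; the data and model here are made up:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

cv = RepeatedKFold(n_splits=5, n_repeats=3, random_state=0)
scores = cross_val_score(RandomForestClassifier(n_estimators=100), X, y, cv=cv)
print(scores.mean(), scores.std())   # report the spread as well as the mean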
H: Loss function in GAN Since the aim of the discriminator is to output 1 for real data and 0 for fake data, the goal is to increase the likelihood of real data vs. fake data. In addition, since maximizing the likelihood is equivalent to minimizing the negative log-likelihood, why are we updating the discriminator by ascending its stochastic gradient, as mentioned in Algorithm 1 of https://arxiv.org/pdf/1406.2661.pdf? Shouldn't we update the discriminator by descending its stochastic gradient? Any help is much appreciated! AI: In Algorithm 1 of the original GAN article (https://arxiv.org/pdf/1406.2661.pdf), the discriminator is said to be updated by "ascending its stochastic gradient". This is referring to equation 1: $$ \min_G \max_D V(D, G)= \mathbb{E}_{x\sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z\sim p_z(z)}[\log(1 - D(G(z)))] $$ When we want to minimize something, we do gradient descent. When we want to maximize something, we do gradient ascent. In this context, we want to maximize $V(D, G)$ with respect to the discriminator $D$, that is, the $\max_D V(D, G)$ part of equation 1. I recommend you have a look at the NIPS 2016 GAN tutorial video and text. They are very enlightening.
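A tiny numeric illustration that "ascending the gradient of V" and "descending the gradient of -V" are the same update; f below is just a stand-in for V as a function of a single discriminator parameter:

def f(w):                     # stand-in objective to be maximised
    return -(w - 3.0) ** 2

def grad_f(w):
    return -2.0 * (w - 3.0)

w_ascent, w_descent, lr = 0.0, 0.0, 0.1
for _ in range(100):
    w_ascent += lr * grad_f(w_ascent)        # gradient ascent on f
    w_descent -= lr * (-grad_f(w_descent))   # gradient descent on -f: identical update
print(w_ascent, w_descent)                   # both converge to 3.0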
H: Why should the initialization of weights and biases be chosen around 0? I read this: "To train our neural network, we will initialize each parameter $W^{(l)}_{ij}$ and each $b^{(l)}_i$ to a small random value near zero (say according to a $\text{Normal}(0, \epsilon^2)$ distribution for some small $\epsilon$, say 0.01)", from the Stanford Deep Learning tutorials, in the 7th paragraph of the Backpropagation Algorithm section. What I don't understand is why the initialization of the weights or biases should be around 0. AI: Assuming fairly reasonable data normalization, the expectation of the weights should be zero or close to it. It might be reasonable, then, to set all of the initial weights to zero, because a positive initial weight will have further to go if it should actually be a negative weight, and vice versa. This, however, does not work. If all of the weights are the same, they will all have the same error and the model will not learn anything - there is no source of asymmetry between the neurons. What we could do, instead, is to keep the weights very close to zero but make them different by initializing them to small, non-zero numbers. This is what is suggested in the tutorial that you linked. It has the same advantage as all-zero initialization in that it is close to the 'best guess' expectation value, but the symmetry has also been broken enough for the algorithm to work. This approach has additional problems. It is not necessarily true that smaller numbers will work better, especially if the neural network is deep. The gradients calculated in backpropagation are proportional to the weights; very small weights lead to very small gradients and can lead to the network taking much, much longer to train, or never finishing. Another potential issue is that the distribution of the outputs of each neuron, when using random initialization values, has a variance that gets larger with more inputs. A common additional step is to normalize the neuron's output variance to 1 by dividing its weights by $\sqrt{d}$, where $d$ is the number of inputs to the neuron. The resulting weights can, for example, be drawn uniformly from $\left[\frac{-1}{\sqrt{d}}, \frac{1}{\sqrt{d}}\right]$, or from a normal distribution with standard deviation $\frac{1}{\sqrt{d}}$.
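A minimal numpy sketch of the "small random weights scaled by 1/sqrt(d)" idea for a single layer with d inputs and h outputs:

import numpy as np

d, h = 256, 128
W = np.random.randn(d, h) / np.sqrt(d)   # mean 0, standard deviation 1/sqrt(d)
b = np.zeros(h)                          # biases can safely start at exactly zero
print(W.mean(), W.std())                 # roughly 0 and 1/16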
H: How should values that "don't exist" sometimes be handled as input data? I'm currently training an agent to learn how to fight in a shooting game. I'm using the bullet positions of the agent's opponent as one of the features. The features "don't exist" when the opponent isn't firing a bullet. What should I substitute the feature with when the opponent of the agent doesn't fire a bullet? Right now, I'm considering using "0", but are there better alternatives? AI: Having to input a non-existing feature is a common problem in machine learning models. Entering 0 and 0 could mean the position { x: 0, y: 0 }. But if you'd input "nothing", that still would be 0 (because nothing * weight = 0). The best you can do is figure out what works best through trial and error. I could think of a few options: If your network supports negative activation values (range: -1, 1), you should try to input it as -1. I don't think the learning model will have a hard time if you'd just input 0 as input, like you proposed. Add an extra input feature, 'bullet exists'. 0 if there is no bullet, 1 if there is a bullet. Then just input 0 for the x and y coordinates.
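A minimal sketch of option 3 with an explicit "exists" flag (the feature layout is made up for illustration):

def bullet_features(bullet):
    # returns [exists, x, y]; x and y are assumed to be already normalised
    if bullet is None:
        return [0.0, 0.0, 0.0]
    return [1.0, bullet['x'], bullet['y']]

print(bullet_features(None))                     # [0.0, 0.0, 0.0]
print(bullet_features({'x': 0.3, 'y': -0.7}))    # [1.0, 0.3, -0.7]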
H: How to use isolation forest from sklearn to return the positions of anomalies? Assuming there is an $n \times m$ matrix with $n$ features and $m$ samples (each row is a feature and each column is a sample), I would like to use isolation forest in sklearn to return the positions of the anomalous samples; for example, it should return an array [3,9,28,66] containing the column positions of the anomalies. Could anyone help me do this by giving a small code example? Thank you very much. AI: You only have to call the predict method on your matrix, which returns an array of +1/-1 for inliers/outliers, assuming your IsolationForest is already fitted. Note that the matrix should be of the form (rows = samples x columns = features). Something like this should do it:

mdl = IsolationForest(...)
mdl.fit(...)

# your matrix "X" should be transposed to the form (m samples x n features)
pred = mdl.predict(X.T)
anomaly_columns = np.where(pred == -1)[0]
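A fuller, runnable version of the same idea on made-up data laid out as in the question (rows = features, columns = samples); depending on the contamination setting you may get a few extra columns flagged besides the planted ones:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
n_features, m_samples = 5, 100
data = rng.normal(0, 1, size=(n_features, m_samples))   # rows = features, columns = samples
data[:, [3, 9, 28, 66]] += 6                            # plant a few anomalous columns

mdl = IsolationForest(random_state=0)
mdl.fit(data.T)                           # sklearn expects samples as rows
pred = mdl.predict(data.T)                # +1 inlier, -1 outlier
anomaly_columns = np.where(pred == -1)[0]
print(anomaly_columns)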