H: is it possible to do feature selection for unsupervised machine learning problems? I started looking for ways to do feature selection in machine learning. By having a quick look at this post, I made the assumption that feature selection is only manageable for supervised learning problems: Still, I have to ask: are there methods to do feature selection without having a known variable that will be used for a classification/regression problem? AI: Feature selection is a technique that is used when you know the target variable (supervised learning). For unsupervised learning there is no exact equivalent, but there is something that can help along those lines: dimensionality reduction. This technique reduces the number of features and gives us the features that explain the most about the dataset. These features are derived from the existing features and may or may not coincide with them. Several techniques are available for doing so: PCA, Linear Discriminant Analysis, Non-negative Matrix Factorization, Generalized Discriminant Analysis, and many more. The outcome of feature selection is a subset of the original features that explain the most with respect to the target variable, whereas the outcome of dimensionality reduction may or may not be the original features, since the new features are derived from the given input.
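As a concrete illustration of the dimensionality-reduction route, here is a minimal scikit-learn sketch with PCA, the first technique listed above (the data X is a made-up unlabeled matrix, not from the question):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Made-up unlabeled data: 500 samples, 20 features
X = np.random.rand(500, 20)

# Standardise first so no single feature dominates the variance
X_scaled = StandardScaler().fit_transform(X)

# Keep as many components as needed to explain 95% of the variance
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_scaled)

print(X_reduced.shape)                      # (500, k) with k <= 20
print(pca.explained_variance_ratio_.sum())  # at least 0.95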
H: Python - Todoist API - Store data into Dictionary I have this Python code that allows me to collect data from the Todoist API. Now I want to store this data into a dictionary in order to do some data processing and some advanced analytics. Using this code, how can I store these values into that structure? from pytodoist import todoist user = todoist.login('####','""""') projects = user.get_projects() for project in projects: print(project.name) tasks = user.get_tasks() for task in tasks: print("Project: ", task.project.name, ", Task: ", task.content, ", Date: ", task.date_string, ", Priority: ", task.priority, ", Added in: ", task.date_added) My key_pairs: the strings that I have in the for loop, "Project", "Task", etc. Values: the values from the tasks. Many thanks! AI: First, remember that a dictionary is just a collection of key-value pairs. So, if the key is $a$ and the value is $10$, you can access the value with d[a] (where d is the dictionary). Be aware that a plain dictionary does not guarantee any order when storing values; you can use OrderedDict for that. Also note that you cannot keep two values under the same key in a simple dictionary; if you need that, store a list of values under the key. Here's the complete answer
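A minimal sketch of collecting the tasks into a list of dictionaries, assuming the pytodoist attributes are exactly as shown in the question:

from pytodoist import todoist

user = todoist.login('####', '""""')

# One dictionary per task, keyed by the same labels used in the print statement
records = []
for task in user.get_tasks():
    records.append({
        'Project': task.project.name,
        'Task': task.content,
        'Date': task.date_string,
        'Priority': task.priority,
        'Added in': task.date_added,
    })

# records is now a list of dictionaries ready for further processing,
# e.g. pandas.DataFrame(records) for analytics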
H: Does classification of a balanced data-set lead to any problem? So I came across a bioinformatics paper, where I found a line which says: One potential problem with using a training set with equal numbers of positive and negative examples in cross-validation is that it can artificially inflate performance estimates because the number of false-positive classifications is proportional to the number of examples classified. So applying these methods to all proteins in an organism may result in a large number of false-positive identifications. I am unable to understand how classification of a balanced dataset is a problem. Can someone please explain this to me? AI: Actually, I guess it highly depends on the real data-set and its distribution. What the paper is referring to is that when the class distribution at deployment time differs from the balanced distribution used for training and cross-validation, the performance estimates no longer hold. In cases like disease prediction, where the class proportions vary between places, a model trained on U.S. data may not work in African countries at all, because the distribution of classes has changed. So in settings where the negative and positive classes are not balanced in practice, artificially balancing them for training and evaluation creates exactly this distribution mismatch and can inflate performance estimates. On these occasions, people usually evaluate on the real, imbalanced data-set and use the F1 score.
H: What is Big Data? I want to know what is Big Data? Can I have practical example. How big data can be? I need numbers where big data term is applicable. If you can provide link for case study with actual numbers, with reference to V's of Big Data. AI: If I want to quote from Wikipedia, Big data is data sets that are so voluminous and complex that traditional data-processing application software are inadequate to deal with them. Big data challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating, information privacy and data source. There are five concepts associated with big data: volume, variety, velocity and, the recently added, veracity and value. Big data can be described by the following characteristics: Volume The quantity of generated and stored data. The size of the data determines the value and potential insight, and whether it can be considered big data or not. Variety The type and nature of the data. This helps people who analyze it to effectively use the resulting insight. Big data draws from text, images, audio, video; plus it completes missing pieces through data fusion. Velocity In this context, the speed at which the data is generated and processed to meet the demands and challenges that lie in the path of growth and development. Big data is often available in real-time. Variability Inconsistency of the data set can hamper processes to handle and manage it. Veracity The data quality of captured data can vary greatly, affecting the accurate analysis. To me, big data is highly connected to the deep-learning era. The reason is that during past decades, people could make good descriptions and models of data using machine-learning and data-mining but because everyday new data is coming out, social networks increase rapidly and digital gadgets' popularity is increasing among different nations, the demand for processing data and converting them to information and knowledge is increasing. If we want to use previous techniques to gather information from raw data, it will take too much time, if possible, to reach to appropriate results. In big data and deep-learning era, we need more complicated algorithms and more powerful hardware to deal with difficulties. You can also take a look at here and here which have relatively different perspective. Big data is a term that describes the large volume of data – both structured and unstructured – that inundates a business on a day-to-day basis. But it’s not the amount of data that’s important. It’s what organizations do with the data that matters. Big data can be analyzed for insights that lead to better decisions and strategic business moves.
H: Python clustering and labels i'm currently experimenting with scikit and the DBSCAN algorithm. And i'm wondering how to combine the data with the labels to write them into a new file. I'd also like to understand how the labels array is used to filter the examples. Please correct me anytime i say something wrong because i'd like to understand the whole process better. For example my data looks like this: city x y A 1 1 B 1 1 C 5 5 D 8 8 So if i understand it correctly i first need to split the data i'd like to cluster. If i use the data like above it will consider the city column aswell (or even fail). So my next array would look like this: x y 1 1 1 1 5 5 8 8 Now i'll use the DBSCAN on my Array and it will create a cluster model. The clusterlabels are now stored in the array foo.labels_. As far as i know i can filter the data with those labels to get the items within the clusters. Let's assume my data is in the dataframe cities: cluster0 = cities[foo.labels_ == 0] What i don't understand is how this works. I somehow don't get what exactly happens here. I know that the labels are in an array where the column is a number and the value of the column is the cluster. So how do i get the correct index of my cities? So after the clustering i'd like to export my data back into a CSV file with the following format: city x y cluster A 1 1 0 B 1 1 0 C 5 5 1 D 8 8 2 My guess is to use to original dataframe and add another column like this: cities = cities.assign(cluster=p.Series(labels_)) But i'm absolutely unsure if that's the correct way to achieve what i want to do. I'd really appreciate some opinions and explanations. AI: As the algorithm should not change the order of the lists you could just add the clusters list cities["cluster"] = cluster If you are really paranoid you can add your input parameters a second time to the dataframe in the same way and compare the diff in values (should be 0).
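For completeness, a minimal end-to-end sketch with the toy data from the question, showing that labels_ lines up row for row with the input, so assigning it as a column and exporting to CSV is all that is needed (the eps and min_samples values are arbitrary choices for this toy data):

import pandas as pd
from sklearn.cluster import DBSCAN

cities = pd.DataFrame({'city': ['A', 'B', 'C', 'D'],
                       'x': [1, 1, 5, 8],
                       'y': [1, 1, 5, 8]})

# Cluster on the numeric columns only, not on the city names
foo = DBSCAN(eps=1.0, min_samples=1).fit(cities[['x', 'y']])

# labels_ is aligned with the row order of the input,
# so it can be attached directly as a new column
cities['cluster'] = foo.labels_

cities.to_csv('cities_clustered.csv', index=False)
print(cities)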
H: When Should I Use ggplot2 Instead of Tableau? I am a student getting started with Tableau for the first time. My proficiency with ggplot2 is intermediate. I can create custom versions of the most popular kinds of charts in ggplot2 but nothing too fancy (and not very time-efficiently). I am interested in creating more polished visuals of my data in Tableau. My ggplot2 ones aren't bad, but they are not what I would call aesthetically pleasing either. What I would like to know is under what situations I should stick with ggplot2. The only advantages that come to mind with my current (albeit limited) understanding of Tableau is to keep my work reproducible and open source. ggplot2 is also probably more customizable, but I am continually impressed by Tableau's offerings as I learn to use it. This question is not intended to be a flippant jab at ggplot2. As I did not hear any talk about Tableau in the academic circles I was in, I am wondering what obvious drawbacks to Tableau may exist that would explain its unpopularity there. AI: There is one big economic difference between the two: ggplot2 is an open source package for an open source programming language. On the contrary Tableau is a proprietary software. That might be a dependency that you might not want to risk, e.g. you do everything in Tableau but then the license gets more expensive and your organization does not want to spend money for it anymore. Academics would often stress this reason, another reason of low adoption there is that Tableau has always been the BI tool and R the "research" tool. Apart from this, they are simply different tools (in the same domain of data viz) and as such will be suited for different problems. This can be subject to personal preference, but I would say that there is a few basic considerations: Speed: how fast can you get what you need? Use the tool that gets you results. Tableau might often be faster. Reproducibility & automation: If you know you might want to plot this chart many times in future in a consistent manner or embed your plots in some sort of notebook like R Markdown or dashboard, then R is most likely the better choice. You write the code once and produce thousands of plots within seconds if you need to. Type of project: If you are starting with messy data, or you are doing some modeling work, it seems as a good strategy for me to just stick to R, I see it as more efficient to have everything in one place. If you need to get awesome visualizations from clean data, Tableau will excel Based on the type of visualizations and output you want, one or the other might be better, ggplot2 is extremely flexible, Tableau has a lot of amazing functionality which is ready to go.
H: How do I convert strings in CSV into integer in Pandas? for my supervised classification problem, I have a train dataset which contains past purchase data of customers and 5 new products are purchased by these customers. I have a test dataset which contains past purchase data of customers. They never bought from these 5 products. I want to train a predictive model, I need to convert integer purchased data and new products. There are so many different products in my purchased data. I saw that the mapping was made for true-false to convert 1-0, but in this case what can I do that? AI: You cannot convert your string product names to integers and expect it to work, if you convert it to integers the algorithm you use will expect a linear relationship between these integers, a relationship that in fact does not exist. It is categorical data, and you should treat it like that, you have to one-hot encode it. If you have too many products to one-hot encode all of them try grouping them in categories and then one-hot encoding the categories.
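A minimal sketch of one-hot encoding with pandas (the column and product names are made up):

import pandas as pd

df = pd.DataFrame({'customer_id': [1, 2, 3],
                   'product': ['shampoo', 'soap', 'shampoo']})

# One new 0/1 column per product category, so no artificial ordering is implied
encoded = pd.get_dummies(df, columns=['product'], prefix='product')
print(encoded)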
H: How do machine learning models (e.g. neural networks) get better over time from new data? I'm a complete newbie to the world of machine learning, and I'm currently working on implementing a model that will need to incorporate feedback, as well as changes to the data set (both feature & label changes over time). The frequency of change isn't yet entirely known, but for simplicity could probably be rolled into a batch every day or so. I'm aware of how I can build a training & test set, and get a classifier up and running. My primary issue is that it's probably not going to be ideal to run a completely fresh training every time there's a change. Users will be interacting with the system via "this was helpful / not helpful" type feedback, which I want to use to strengthen / weaken its association model. I'm absolutely in the dark as to how once you have the model from the initial data, you can then get it to refine over time from this sort of feedback, and how to update (i.e. add/remove features & labels) without starting from scratch. tl;dr: What sort of classifier is best suited to this sort of refinement-over-time problem? I'll also add that the model needs to support multi-label classification, so any caveats / gotchas / information on how to do this in the broader context of my question would be helpful too. AI: If you only want to add more examples, you can continue training the model you had the day before. But if you want to add new features, training from scratch is needed; you could train a new ML model using only the new features and mix the outputs, but that is not a good solution, so retrain the whole model. I would use a neural network, which is very intuitive for your case: you compute a set of weights and save them. When you get new data, you load your old network with its saved weights and fine-tune it using the new examples. Neural networks natively support multi-label classification, and if one day you decide that you want to add a new label you don't need to retrain the whole network: you can remove the last layer, add a new one, and train only that last layer's weights.
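A minimal Keras sketch of that workflow (Keras itself, the layer sizes, and the data shapes are assumptions made for illustration; the answer only says "neural network"):

import numpy as np
from keras.models import Sequential, load_model
from keras.layers import Dense

# Placeholder data: 100 samples, 20 features, 5 independent labels (multi-label)
X_old = np.random.rand(100, 20)
y_old = np.random.randint(0, 2, (100, 5))

# Sigmoid outputs + binary cross-entropy handle multi-label targets
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(20,)))
model.add(Dense(5, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(X_old, y_old, epochs=5, verbose=0)
model.save('model.h5')

# Later: new feedback arrives -> reload and continue training (warm start)
model = load_model('model.h5')
X_new = np.random.rand(10, 20)
y_new = np.random.randint(0, 2, (10, 5))
model.fit(X_new, y_new, epochs=1, verbose=0)

# If the label set changes (say 6 labels now), replace only the output layer
model.pop()                                # drop the old output layer
model.add(Dense(6, activation='sigmoid'))  # new output layer with fresh weights
model.compile(optimizer='adam', loss='binary_crossentropy')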
H: Markov Chains for sequential data I am new to Markov chains and HMM and I am looking for help in developing a program (in python) that predicts the next state based on 20 previous states (let's say 20 states in the last 20 months). I have a sequential dataset with 50 customers, i.e. the rows contain a sequence of 20 states for each of the 50 customers (the dataset has 50 rows and 20 columns excluding the headers). I am trying to determine the next state using markov chains, and all the literature on the web is focused around examples of text strings. I am looking for something specific to the kind of example I have. Can somebody please help me come up with the initial probability matrix and then consider the 20 states to predict the next state? AI: If you know what the state history is, you don't need a 'hidden' Markov model, you just need a Markov model (or some other mechanism). The 'hidden' part implies a distinction between some sequence of unobservable states, and some observations that are related to them. In your case, you say you have observed the past states for each customer, so you don't necessarily need to infer anything 'hidden'. The simplest way to proceed in your case would be to calculate a transition matrix, i.e. the probability of a state given the previous state. That's a very simple model but it might do what you want. To do this, just look at all consecutive state pairs, and count to get p(s2 | s1) = p(s1 & s2)/p(s1). This is a first-order Markov model, analogous to the bigram models you've probably read about: each state is akin to a word. You could also make a more complex model that conditions on two or more previous states, or even an RNN. Honestly, since you have a fixed amount of history, you can just throw your data into a scikit-learn model or xgboost or something, where each customer's history is the vector of predictors and the next state is the outcome. It won't know the sequential dependencies, but you are essentially indexing the past states by time, so it may work pretty well. If you need more clarification about part of this, just ask.
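A minimal sketch of estimating and using such a transition matrix (the history here is random placeholder data, and the states are assumed to be integer-encoded 0..4; map your real state labels to integers first):

import numpy as np
import pandas as pd

# Placeholder history: 50 customers x 20 monthly states, states coded 0..4
history = np.random.randint(0, 5, size=(50, 20))
n_states = 5

# Count transitions s1 -> s2 over all customers and all consecutive months
counts = np.zeros((n_states, n_states))
for row in history:
    for s1, s2 in zip(row[:-1], row[1:]):
        counts[s1, s2] += 1

# Normalise each row to get p(next state | current state)
transition = counts / counts.sum(axis=1, keepdims=True)
print(pd.DataFrame(transition))

# Predict the next state for a customer whose last observed state is `last`
last = history[0, -1]
predicted_next = transition[last].argmax()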
H: Image Matching to solve captcha I am building a bot with python and I need some system to solve captchas like these: I think I need a deep learning algorithm, but coding one is a pain in the ass. Is there any easy solution to this? I can code the part that screenshots the images and clicks on the correct answer, image 4 or 5 in the example. The images get rotated and change every time. Thank you! AI: Here's one simple yet effective solution without using the deep learning algorithm. Divide the problem into 2 parts: Segmentation: You'll need to do some image processing like edge detection and segment all the objects in the given captcha and save the region of interest (OpenCV can help). Assign labels to the objects. Similarity Measurement: Use some scale and rotational invariant feature descriptor algorithm to extract features from the objects, and a matcher to compare the objects for a similarity measure (check SIFT/SURF features). Based on the similarity scores, make a decision.
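A rough sketch of the similarity-measurement step with OpenCV's SIFT features (this assumes a recent opencv-python build where cv2.SIFT_create is available; the file names are placeholders for the cropped tiles produced by the segmentation step):

import cv2

def similarity(path_a, path_b):
    """Crude similarity score between two image crops based on the
    number of 'good' SIFT matches (scale and rotation invariant)."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0

    # Brute-force matcher plus Lowe's ratio test to keep only distinctive matches
    matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    return len(good)

# Score the reference object against each segmented tile and pick the best one
tiles = ['tile1.png', 'tile2.png', 'tile3.png', 'tile4.png', 'tile5.png']
scores = [similarity('reference.png', t) for t in tiles]
best_tile_index = scores.index(max(scores))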
H: How to count occurrences of values within specific range by row I have a data frame of 3000 rows x 101 columns like as follow: Time id0 id1 id2 ………… id99 1 1.71 6.99 4.01 ………… 4.98 2 1.72 6.78 3.15 ………… 4.97 . . 3000 0.36 0.23 0.14 ………… 0.28 Using Python, how could we add a column that counts for each row the number of values (in column id0, to id99) that are within a specific range? AI: You can apply a function to each row of the DataFrame with apply method. In the applied function, you can first transform the row into a boolean array using between method or with standard relational operators, and then count the True values of the boolean array with sum method. import pandas as pd df = pd.DataFrame({ 'id0': [1.71, 1.72, 1.72, 1.23, 1.71], 'id1': [6.99, 6.78, 6.01, 8.78, 6.43], 'id2': [3.11, 3.11, 4.99, 0.11, 2.88]}) def count_values_in_range(series, range_min, range_max): # "between" returns a boolean Series equivalent to left <= series <= right. # NA values will be treated as False. return series.between(left=range_min, right=range_max).sum() # Alternative approach: # return ((range_min <= series) & (series <= range_max)).sum() range_min, range_max = 1.72, 6.43 df["n_values_in_range"] = df.apply( func=lambda row: count_values_in_range(row, range_min, range_max), axis=1) print(df) Resulting DataFrame: id0 id1 id2 n_values_in_range 0 1.71 6.99 3.11 1 1 1.72 6.78 3.11 2 2 1.72 6.01 4.99 3 3 1.23 8.78 0.11 0 4 1.71 6.43 2.88 2
H: sagemath: compared to r.quantile, what is a faster way to find boundaries for a boxplot? I was using the r.quantile method in sagemath to find boundaries for a box plot. The plot was taking a long time using r.quantile. r.quantile took more than 20 seconds to find the quartiles for a data set that could be sorted and plotted point by point in less than half a second on the same machine. What is a faster alternative? AI: The following (crude) code is at least 50 times faster than r.quantile: def findFences (orderedList, outlierC = 1.5, farOutlierC = 3.0): """findFences: an ordered list of ints or floats -> tuple of 9 floats (aMin, outerLoFence, innerLoFence, Q1, Q2, Q3, innerHiFence, outerHiFence, aMax) keys: float for outlier constant [outlierC] and far outlier constant [farOutlierC]""" lenMod4, half, quarter = len(orderedList) % 4, int(len(orderedList)/2), int(len(orderedList)/4) aMin, aMax = orderedList[0], orderedList[-1] # find quartiles if not lenMod4: Q1, Q2, Q3 = (orderedList[half-quarter] + orderedList[half-quarter-1])/2.0, (orderedList[half] + orderedList[half-1])/2.0, (orderedList[half+quarter] + orderedList[half+quarter-1])/2.0 elif lenMod4 == 1: Q1, Q2, Q3 = float(orderedList[half-quarter]), float(orderedList[half]), float(orderedList[half+quarter]) elif lenMod4 == 2: Q1, Q2, Q3 = float(orderedList[half-quarter-1]), (orderedList[half] + orderedList[half-1])/2.0, float(orderedList[half+quarter]) else: Q1, Q2, Q3 = (orderedList[half-quarter] + orderedList[half-quarter-1])/2.0, float(orderedList[half]), (orderedList[half+quarter] + orderedList[half+quarter+1])/2.0 IQR = Q3 - Q1 outDist = IQR * outlierC farOutDist = IQR * farOutlierC innerLoFence, innerHiFence = Q1 - outDist, Q3 + outDist outerLoFence, outerHiFence = Q1 - farOutDist, Q3 + farOutDist return aMin, outerLoFence, innerLoFence, Q1, Q2, Q3, innerHiFence, outerHiFence, aMax
H: How to set batch_size, steps_per epoch, and validation steps? I am starting to learn CNNs using Keras. I am using the theano backend. I don't understand how to set values to: batch_size steps_per_epoch validation_steps What should be the value set to batch_size, steps_per_epoch, and validation_steps, if I have 240,000 samples in the training set and 80,000 in the test set? AI: batch_size determines the number of samples in each mini batch. Its maximum is the number of all samples, which makes gradient descent accurate, the loss will decrease towards the minimum if the learning rate is small enough, but iterations are slower. Its minimum is 1, resulting in stochastic gradient descent: Fast but the direction of the gradient step is based only on one example, the loss may jump around. batch_size allows to adjust between the two extremes: accurate gradient direction and fast iteration. Also, the maximum value for batch_size may be limited if your model + data set does not fit into the available (GPU) memory. steps_per_epoch the number of batch iterations before a training epoch is considered finished. If you have a training set of fixed size you can ignore it but it may be useful if you have a huge data set or if you are generating random data augmentations on the fly, i.e. if your training set has a (generated) infinite size. If you have the time to go through your whole training data set I recommend to skip this parameter. validation_steps similar to steps_per_epoch but on the validation data set instead on the training data. If you have the time to go through your whole validation data set I recommend to skip this parameter.
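For the numbers in the question, a hedged sketch of how these values are typically derived (the batch size of 128 is an arbitrary choice):

import math

n_train, n_val = 240000, 80000
batch_size = 128  # arbitrary; limited mainly by (GPU) memory

# One epoch = one full pass over the data
steps_per_epoch = math.ceil(n_train / batch_size)   # 1875
validation_steps = math.ceil(n_val / batch_size)    # 625

# model.fit_generator(train_generator, steps_per_epoch=steps_per_epoch, epochs=10,
#                     validation_data=val_generator, validation_steps=validation_steps)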
H: Supervised learning for variable length feature-less data I have data in following form: a: 1,2,3,2,3 a: 2,4,5,6,7,8,0,9,7,6,5,6,2 a: 7,8,9,3,4 b: 4,5,3,5,6,3,5,1,2 b: 1,6,3,2,4,5 b: 2,4,5,6,7,8,0,9,7,6,5,6,2 c: 7,8,9,3,4 c: 4,5,3,5,6,3,5,1,2 ... (in reality, each case has about 100-200 numbers, though the length is variable) Here, a, b and c are groups (their number is fixed - taken as 3 here) and the numbers indicate a vector associated with each case. How can I apply supervised machine learning with such a data so that if I get a new series of numbers, e.g. following: 3,2,3,4,1,5,6 I should be able to determine which group (a, b or c) does this case belongs to. Following features of each list of numbers may be important: length of series mean value of series variance of series maximum of series minimum of series type of distribution of series (normal or non-normal) How can I apply machine learning methods to such data. Thanks for your insight. AI: This is a typical scenario in language processing, e.g. if you want to read a product review representing each word with a number and output a 1-5 star rating. You can use a recurrent many-to-one RNN/GRU/LSTM network: After feeding each word it generates a feature vector which can be fed into a classifier. https://en.wikipedia.org/wiki/Recurrent_neural_network
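A hedged Keras sketch of such a many-to-one recurrent classifier (the sequences below are the toy ones from the question; the padding value of -1 assumes it never occurs in the real data, and maxlen and layer sizes are arbitrary choices):

import numpy as np
from keras.models import Sequential
from keras.layers import Masking, LSTM, Dense
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical

# Toy data from the question: variable-length series, groups a/b/c encoded as 0/1/2
series = [[1, 2, 3, 2, 3], [2, 4, 5, 6, 7, 8, 0, 9, 7, 6, 5, 6, 2], [7, 8, 9, 3, 4]]
labels = [0, 1, 2]

# Pad to a common length; the Masking layer tells the LSTM to ignore the padding
X = pad_sequences(series, maxlen=200, padding='post', value=-1.0, dtype='float32')
X = X[..., np.newaxis]                  # shape: (n_samples, 200, 1)
y = to_categorical(labels, num_classes=3)

model = Sequential()
model.add(Masking(mask_value=-1.0, input_shape=(200, 1)))
model.add(LSTM(32))                     # many-to-one: one feature vector per series
model.add(Dense(3, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=10, verbose=0)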
H: Data Matching Using Machine Learning I have around 4000 customer records and 6000 user records and about 3000 customer records match leaving 1000 unmatched customers. I have created a fuzzy matching algorithm using Levenshtein and Hamming and added weights to certain properties, but I want to be able to match the remaining records without manually doing this. Ideally I want to implement an algorithm to take a customer and user and output match/no match. However, wouldn't I need to train with true negatives? Is there an algorithm that can train with just 1 label? Thanks AI: You can obtain one negative example by taking one of the 3000 customer records and pairing it with any user record that is known not to match. In this way, you can obtain $3000$ positives and $3000 \times 5999$ negatives. You could then train a boolean classifier on this entire training set. This might work better than using one-class classification on just the positives. Even better might be to use techniques for learning to rank. If $c$ is a customer record that is known to match a user record $u$, and $u'$ is any other user record (which $c$ doesn't match), then you want your classifier to rank the pair $(c,u)$ higher than $(c,u')$. In this way you can obtain $3000 \times 5999$ such ranking-pairs, and try to train a classifier to learn to rank, then use that to find the best match for each of the 1000 customer records.
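A minimal sketch of the first idea (toy records only; the 'name' field, the python-Levenshtein library, and the two similarity features are assumptions to keep the example small, and in practice you would pair each known match with all 5,999 non-matching users as described above):

import Levenshtein  # pip install python-Levenshtein
from sklearn.ensemble import RandomForestClassifier

# Toy stand-ins; in practice these come from the 4,000 customer / 6,000 user lists
customers = [{'name': 'Jon Smith'}, {'name': 'Ann Lee'}]
users = [{'name': 'Jonathan Smith'}, {'name': 'Anne Lee'}, {'name': 'Bob Roe'}]
known_matches = [(customers[0], users[0]), (customers[1], users[1])]

def make_features(c, u):
    # The classifier learns the weights that were previously hand-tuned
    return [Levenshtein.distance(c['name'], u['name']),
            Levenshtein.ratio(c['name'], u['name'])]

X, y = [], []
for c, u in known_matches:
    X.append(make_features(c, u)); y.append(1)          # known positive pair
    for u_neg in [x for x in users if x is not u]:      # every other user = negative
        X.append(make_features(c, u_neg)); y.append(0)

clf = RandomForestClassifier(n_estimators=100).fit(X, y)

# For an unmatched customer, score every user and keep the most likely candidate
def best_match(c):
    probs = [clf.predict_proba([make_features(c, u)])[0][1] for u in users]
    i = probs.index(max(probs))
    return users[i], probs[i]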
H: Should I fit my parameters with brute force I am running analysis on data for this type of sensor my company makes. I want to quantify the health of the sensor based on three features using the following formula: sensor health index = feature1 * A + feature2 * B + feature3 *C We also need to pick a threshold so that if this index exceeds the threshold, the sensor is considered as bad sensor. We only have a legacy list which shows about 100 sensors are bad. But now we have data for more than 10,000 sensors. Anything not in that 100 sensor list is NOT necessarily bad. So I guess the linear regression methods don't work in this scenario. The only way I can think of is the brute force fitting. Pseudo code is as follows: # class definition for params(coefficients) class params{ a b c th } # dictionary of parameter and accuracy rate map = {} for thold in range (1..20): for a in range (1..10): for b in range (1..10): for b in range (1..10): # bad sensor list bad_list = [] params = new params[a, b, c, thold] for each sensor: health_index = sensor.feature1*a+sensor.feature2*b+sensor.feature3*c if health_index > thold: bad_list.append(sensor.id) accuracy = percentage of common sensors between bad_list and known_bad_sensors map[params] = accuracy # rank params based on accuracy rank(map) # the params with most accuracy is the best model print map.index(0) I really don't like this method since it is using 5 for loops which is very efficient. I wonder if there is a better way to do it. Using existing library such as sk-learn perhaps? AI: If you don't know which of the 10,000 sensors are good and which are bad, the data from those 10,000 sensors is useless for training a regression line / classifier. You need labelled data, where you know both the value of the features and the health of the sensor. Moreover, to be effective, you probably need your training set to contain both healthy sensors and bad sensors (where you know which ones are healthy and which ones are bad); it's not enough to just have data from bad sensors, because that doesn't tell you what healthy sensors look like. In a pinch you could use one-class classification, but I don't recommend it -- your results will probably be poor, and you'll probably be better off obtaining labelled data from both healthy sensors and bad sensors.
H: benchmark Result for MovieLens dataset? I am looking for a benchmark result or any kaggle competition held using MovieLens(20M or latest) dataset. Similar question has been asked here but, provided links are dead so re-raising the question. AI: One result for MovieLens 20M using Factorization Machine can be found here. They got MAE: 0.60 and RMSE: 0.80. Another result for MovieLens 20M using Autoencoders can be found here. They got RMSE: 0.81.
H: How to apply the gradient of softmax in backprop I recently did a homework where I had to learn a model for the MNIST 10-digit classification. The HW had some scaffolding code and I was supposed to work in the context of this code. My homework works / passes tests but now I'm trying to do it all from scratch (my own nn framework, no hw scaffolding code) and I'm stuck applying the grandient of softmax in the backprop step, and even think what the hw scaffolding code does might not be correct. The hw has me use what they call 'a softmax loss' as the last node in the nn. Which means, for some reason they decided to join a softmax activation with the cross entropy loss all in one, instead of treating softmax as an activation function and cross entropy as a separate loss function. The hw loss func then looks like this (minimally edited by me): class SoftmaxLoss: """ A batched softmax loss, used for classification problems. input[0] (the prediction) = np.array of dims batch_size x 10 input[1] (the truth) = np.array of dims batch_size x 10 """ @staticmethod def softmax(input): exp = np.exp(input - np.max(input, axis=1, keepdims=True)) return exp / np.sum(exp, axis=1, keepdims=True) @staticmethod def forward(inputs): softmax = SoftmaxLoss.softmax(inputs[0]) labels = inputs[1] return np.mean(-np.sum(labels * np.log(softmax), axis=1)) @staticmethod def backward(inputs, gradient): softmax = SoftmaxLoss.softmax(inputs[0]) return [ gradient * (softmax - inputs[1]) / inputs[0].shape[0], gradient * (-np.log(softmax)) / inputs[0].shape[0] ] As you can see, on forward it does softmax(x) and then cross entropy loss. But on backprop, it seems to only do the derivative of cross entropy and not of softmax. Softmax is left as such. Shouldn't it also take the derivative of softmax with respect to the input to softmax? Assuming that it should take the derivative of softmax, I'm not sure how this hw actually passes the tests... Now, in my own implementation from scratch, I made softmax and cross entropy separate nodes, like so (p and t stand for predicted and truth): class SoftMax(NetNode): def __init__(self, x): ex = np.exp(x.data - np.max(x.data, axis=1, keepdims=True)) super().__init__(ex / np.sum(ex, axis=1, keepdims=True), x) def _back(self, x): g = self.data * (np.eye(self.data.shape[0]) - self.data) x.g += self.g * g super()._back() class LCE(NetNode): def __init__(self, p, t): super().__init__( np.mean(-np.sum(t.data * np.log(p.data), axis=1)), p, t ) def _back(self, p, t): p.g += self.g * (p.data - t.data) / t.data.shape[0] t.g += self.g * -np.log(p.data) / t.data.shape[0] super()._back() As you can see, my cross entropy loss (LCE) has the same derivative as the one in the hw, because that is the derivative for the loss itself, without getting into the softmax yet. But then, I would still have to do the derivative of softmax to chain it with the derivative of loss. This is where I get stuck. For softmax defined as: The derivative is usually defined as: But I need a derivative that results in a tensor of the same size as the input to softmax, in this case, batch_size x 10. So I'm not sure how the above should be applied to only 10 components, since it implies that I would diferentiate for all inputs with respect to all outputs (all combinations) or in matrix form. AI: After further working on this, I figured out that: The homework implementation combines softmax with cross entropy loss as a matter of choice, while my choice of keeping softmax separate as an activation function is also valid. 
The homework implementation is indeed missing the derivative of softmax for the backprop pass. The gradient of softmax with respect to its inputs is really the partial of each output with respect to each input: $\frac{\partial \sigma_i}{\partial x_j} = \sigma_i(\delta_{ij} - \sigma_j)$. So for the vector (gradient) form, keeping only the diagonal terms ($i = j$): $\frac{\partial \sigma_i}{\partial x_i} = \sigma_i(1 - \sigma_i)$. Which in my vectorized numpy code is simply: self.data * (1. - self.data) Where self.data is the softmax of the input, previously computed from the forward pass.
H: Visualizing software metrics I have the below sets of data per application, you can call them as software metrics. These metrics vary depending on the size of an application. Bugs CodeSmells Vulnerability The size of the application is determined by LOC (Lines of code), how can i showcase the complexity of each app relative to the lines of code if i visualize each of these parameters. Example Bugs LOC SweetApp 10 10000 SourApp 120 5660000 SaltyApp 55 1500 How do i visualize Bugs per app in relation to the LOC, because LOC determines their complexity a higher number in bugs doesn't necessarily be bad considering a higher number in LOC. AI: You could use the ratios of bugs (or any other variable) divided by lines of code (LOC). Since ratios may vary a lot for instance bugs vs. vulnerabilities the resulting plot oftentimes doesn't look as good so you could use a normalization procedure, in fact, is the recommended procedure. In your case Bugs and a few vulnerabilities divided by LOC doesn't look too bad so I included the plot of raw ratios too in this example. I'm using a count overlapping points plot which is a variant of a barplot that doesn't fill bars and only cares about the value as a point. You could use any normalization that you feel gives you the best result. Here I'm only centering the data. In both plots the highest value means higher complexity according to the ratio metric. Raw Ratios of variables divided by LOC Normalized Ratios of variables divided by LOC Code in R needed to replicate these plots require("reshape2") require("ggplot2") df1 <- data.frame(App = c("SweetApp", "SourApp", "SaltyApp"), LOC = c(10000, 5660000, 1500), Bugs = c(10, 120, 55), CodeSmells = c(50, 30, 20), Vulnerabilities = c(2, 3, 10)) #Define ratios df1$RatiosBugs <- df1$Bugs / df1$LOC df1$RatiosCodeSmells <- df1$CodeSmells / df1$LOC df1$RatiosVulnerabilities <- df1$Vulnerabilities / df1$LOC #Normalize Ratios df1$NormRatiosBugs <- scale(df1$RatiosBugs, scale = FALSE) df1$NormRatiosCodeSmells <- scale(df1$RatiosCodeSmells, scale = FALSE) df1$NormRatiosVulnerabilities <- scale(df1$RatiosVulnerabilities, scale = FALSE) dfRaw <- df1[, c("App", "RatiosBugs", "RatiosCodeSmells", "RatiosVulnerabilities")] dfNorm <- df1[, c("App", "NormRatiosBugs", "RatiosCodeSmells", "RatiosVulnerabilities")] dfRaw.m <- melt(dfRaw, id.vars = c("App")) dfNorm.m <- melt(dfNorm, id.vars = c("App")) #Plot Raw Ratios ggplot(dfRaw.m, aes(App, value)) + geom_count(aes(color = variable), position = position_dodge(width = 0.4), stat = "identity") #Plot Normalized Ratios ggplot(dfNorm.m, aes(App, value)) + geom_count(aes(color = variable), position = position_dodge(width = 0.4), stat = "identity")
H: How to compare two regression models? Which measurement(s) should one choose to compare two regression models? After modifying a learning algorithm(specifically, a regression algorithm, let's call it M1) to generate another learning algorithm M2, how to validate if the above modification is efficient? here is what I did(with 10-fold cross-validation) I choose MSE as the only measurement, at each run, for M1 and M2, calculate the MSE of both the training and testing set. And the result shows that: average MSE of the training set of 10 runs: M2 < M1 average MSE of the testing set of 10 runs: M2 < M1 Question: according to the above list, can we draw a conclusion that M2 is better than M1? thus, the modification of algorithm M1 is efficient(at least on this dataset)? Or: Did I miss some other important measurements? Is there a rule of thumb of comparing two regression models? AI: There are two things to consider: Sampling bias Metric The sampling bias problem is that your test set is likely not the complete set of things you're interested in. So, no, you can't simply check MSE_1 < MSE_2 and conclude it is always the case when it's "just" for your dataset the case. This is what significance tests are for. (Although this kind of reasoning is super common in machine learning and I did it myself already ) Then the question if the metric is the correct one for your application. Typical choices are: MSE, mean absolute error, custom cost functions
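One common (if imperfect, since CV folds overlap) way to run such a significance test on the fold-wise MSEs is a paired test; the numbers below are made up:

import numpy as np
from scipy import stats

# Per-fold test MSEs from the same 10 CV splits for both models (made-up numbers)
mse_m1 = np.array([0.82, 0.79, 0.85, 0.80, 0.83, 0.81, 0.84, 0.78, 0.82, 0.80])
mse_m2 = np.array([0.78, 0.77, 0.84, 0.78, 0.80, 0.80, 0.82, 0.77, 0.81, 0.79])

# Paired tests: each fold is evaluated by both models on the same data split
t_stat, p_ttest = stats.ttest_rel(mse_m1, mse_m2)
w_stat, p_wilcoxon = stats.wilcoxon(mse_m1, mse_m2)   # non-parametric alternative

print("paired t-test p = %.3f, Wilcoxon p = %.3f" % (p_ttest, p_wilcoxon))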
H: Cannot get the prediction right using Stochastic Gradient Descent: Always predicts 1 I have a CSV file with 20 columns and 785 rows. The 785th row for each column is a label describing the encoded image. The encoded image is either 3 or 5. So 1-784 row is the encoded image and 785th row is the label that names the image. I loaded the CSV file 3_5_small.csv and segregated the labels from the encoded data. which as you see is the image of number 3. Now, I decided to use logistic regression to predict the images from the encoded data. I used Stochastic Gradient Descent as explained in the machine learning course by Andrew NG. But I do not think, I got it right. Before following the code, here are the steps I did: Transformed train_labels_3_5 which contained only 3 and 5 to 1 and 0 respectively. So I want to predict the image of 3. If the output probability is < 0.5, it will be 5 and > 0.5 will be 3. Randomly shuffled the train_data_3_5 and train_labels_3_5 to the same degree. Randomly generated the theta vector Passed the theta vector and X vector into the hypothesis function Updated the theta vector. This is all I did. Here is the code to what I have done. train <- function(data, labels, alpha = 0.001) { #browser() #Initialize the theta vector theta <- seq(from = 0, to = 1, length.out = nrow(data)) number_of_iterations = 10 for(noi in 1:number_of_iterations) { for(i in seq(1:ncol(data))) { x = as.vector(data[,i]) #Create a x vector h = hypothesis(x, theta) #Call the hypothesis function to get the probability y = labels[1,i] theta <- theta - (alpha * ((h - y) * x)) } } return(theta) } But on the test data and even on the training data, this does not predict correct at all. I do not know where have I gone wrong. I have revisited the algorithm, the lecture but cannot figure out, what am I doing incorrectly. It always predicts 1 no matter I pass the vector for 3 or 5! AI: My network does always predict the same class. What is the problem? I had this a couple of times. Although I'm currently too lazy to go through your code, I think I can give some general hints which might also help others who have the same symptom but probably different underlying problems. Debugging Neural Networks Fitting one item datasets For every class i the network should be able to predict, try the following: Create a dataset of only one data point of class i. Fit the network to this dataset. Does the network learn to predict "class i"? If this doesn't work, there are four possible error sources: Buggy training algorithm: Try a smaller model, print a lot of values which are calculated in between and see if those match your expectation. Dividing by 0: Add a small number to the denominator Logarithm of 0 / negativ number: Like dividing by 0 Data: It is possible that your data has the wrong type. For example, it might be necessary that your data is of type float32 but actually is an integer. Model: It is also possible that you just created a model which cannot possibly predict what you want. This should be revealed when you try simpler models. Initialization / Optimization: Depending on the model, your initialization and your optimization algorithm might play a crucial role. For beginners who use standard stochastic gradient descent, I would say it is mainly important to initialize the weights randomly (each weight a different value). - see also: this question / answer Learning Curve See sklearn for details. The idea is to start with a tiny training dataset (probably only one item). 
Then the model should be able to fit the data perfectly. If this works, you make a slightly larger dataset. Your training error should slightly go up at some point. This reveals your model's capacity to model the data. Data analysis Check how often the other class(es) appear. If one class dominates the others (e.g. one class is 99.9% of the data), this is a problem. Look for "outlier detection" techniques. More Learning rate: If your network doesn't improve and gets only slightly better than random chance, try reducing the learning rate. For computer vision, a learning rate of 0.001 often works. This is also relevant if you use Adam as an optimizer. Preprocessing: Make sure you use the same preprocessing for training and testing. You might see differences in the confusion matrix (see this question) Common Mistakes This is inspired by reddit: You forgot to apply preprocessing Dying ReLU Too small / too big a learning rate Wrong activation function in the final layer: Your targets do not sum to one? -> Don't use softmax Single elements of your targets are negative -> Don't use Softmax, ReLU, Sigmoid. tanh might be an option Too deep a network: training fails. Try a simpler neural network first.
H: What are the benefits of having ML in js? What are the benefits of having ML in JavaScript I.e. the deeplearn.js (now tensorflow) stuff, as opposed to implementing the ML steps in a python backend? AI: There are a lot of services that offer free or very cheap hosting of static websites. If you are able to implement your ML model in JS this allows you to deploy your product/app/whatever easily and with low cost. In comparison, requiring a backend server running your model is harder to setup and maintain, in addition to costing more.
H: Create new data frames from existing data frame based on unique column values I have a large data set (4.5 million rows, 35 columns). The columns of interest are company_id (string) and company_score (float). There are approximately 10,000 unique company_id's. company_id company_score date_submitted company_region AA .07 1/1/2017 NW AB .08 1/2/2017 NE CD .0003 1/18/2017 NW My goal is to create approximately 10,000 new dataframes, by unique company_id, with only the relevant rows in that data frame. The first idea I had was to create the collection of data frames shown below, then loop through the original data set and append in new values based on criteria. company_dictionary = {} for company in df['company_id']: company_dictionary[company_id] = pd.DataFrame([]) Is there a better way to do this by leveraging pandas? i.e., is there a way I can use a built-in pandas function to create new filtered dataframes with only the relevant rows? Edit: I tried a new approach, but I'm now encountering an error message that I don't understanding. [In] unique_company_id = np.unique(df[['ID_BB_GLOBAL']].values) [In] unique_company_id [Out] array(['BBG000B9WMF7', 'BBG000B9XBP9', 'BBG000B9ZG58', ..., 'BBG00FWZQ3R9', 'BBG00G4XRQN5', 'BBG00H2MZS56'], dtype=object) [In] for id in unique_company_id: [In] new_df = df[df['id'] == id] --------------------------------------------------------------------------- KeyError Traceback (most recent call last) C:\get_loc(self, key, method, tolerance) 2133 try: -> 2134 return self._engine.get_loc(key) 2135 except KeyError: pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:4433)() pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:4279)() pandas\src\hashtable_class_helper.pxi in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:13742)() pandas\src\hashtable_class_helper.pxi in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:13696)() KeyError: 'id' During handling of the above exception, another exception occurred: KeyError Traceback (most recent call last) <ipython-input-50-dce34398f1e1> in <module>() 1 for id in unique_bank_id: ----> 2 new_df = df[df['id'] == id] C:\ in __getitem__(self, key) 2057 return self._getitem_multilevel(key) 2058 else: -> 2059 return self._getitem_column(key) 2060 2061 def _getitem_column(self, key): C:\ in _getitem_column(self, key) 2064 # get column 2065 if self.columns.is_unique: -> 2066 return self._get_item_cache(key) 2067 2068 # duplicate columns & possible reduce dimensionality C:\ in _get_item_cache(self, item) 1384 res = cache.get(item) 1385 if res is None: -> 1386 values = self._data.get(item) 1387 res = self._box_item_values(item, values) 1388 cache[item] = res C:\ in get(self, item, fastpath) 3541 3542 if not isnull(item): -> 3543 loc = self.items.get_loc(item) 3544 else: 3545 indexer = np.arange(len(self.items))[isnull(self.items)] C:\ in get_loc(self, key, method, tolerance) 2134 return self._engine.get_loc(key) 2135 except KeyError: -> 2136 return self._engine.get_loc(self._maybe_cast_indexer(key)) 2137 2138 indexer = self.get_indexer([key], method=method, tolerance=tolerance) pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:4433)() pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:4279)() pandas\src\hashtable_class_helper.pxi in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:13742)() pandas\src\hashtable_class_helper.pxi in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:13696)() KeyError: 
'id' AI: You can groupby company_id column and convert its result into a dictionary of DataFrames: import pandas as pd df = pd.DataFrame({ "company_id": ["AA", "AB", "AA", "CD", "AB"], "company_score": [.07, .08, .06, .0003, .09], "company_region": ["NW", "NE", "NW", "NW", "NE"]}) # Approach 1 dict_of_companies = {k: v for k, v in df.groupby('company_id')} # Approach 2 dict_of_companies = dict(tuple(df.groupby("company_id"))) import pprint pprint.pprint(dict_of_companies) Output: {'AA': company_id company_region company_score 0 AA NW 0.07 2 AA NW 0.06, 'AB': company_id company_region company_score 1 AB NE 0.08 4 AB NE 0.09, 'CD': company_id company_region company_score 3 CD NW 0.0003}
H: Difference between Time series clustering and Time series Segmentation In the context of time series data mining, I have read about time series segmentation and time series clustering, but I couldn't differentiate between both. In case they are different, how these methods are correlated with each other? Well from my understanding (please correct me if I am wrong), the segmentation is considered as a preprocessing step for the clustering phase. I mean that the segmentation step is used mainly to partition your time series data into segments, let's say into states. After that, a conventional clustering algorithm can be applied to group these segments into clusters (similar segments belong to the same cluster). As an example, let's say that the segmentation process represents a given time series into the following segments: (S1, S2, S3, S4, S5, S6). Then after applying the segmentation process, a conventional clustering method is applied to cluster the extracted segments. So we might end up with something like this: If k = 3: then K1 {S1, S5}, K2 {S3, S6}, K3 {S2, S4} Please correct me if I am mistaken, and provide links for more clarification if you have any. AI: Actually there is no fixed terminology and these two terms sometimes used in the same meaning and sometimes different. I would suggest following the terminology bellow for yourself, then you can differentiate methods according to this: Time-Series Segmentation means partitioning an individual time series to similar segments i.e. clustering within an individual time-series (e.g. i have a video in which someone is reading a book for a while, then starts walking and then starts cycling. now I want to segment these three actions). Suggestion: State-Space reconstruction, moving Autocorrelation, moving DTW, Fourier Analysis, Visibility Graphs or any other method which can measure the similarity of a time-series with itself. Time-Series Clustering (or this) means finding similar time-series within a dataset of time-series (e.g. i have 10 brain signals, 5 from healthy subjects 5 from patients without knowing who is patient and who is healthy. Now I want to cluster this dataset into two clusters) Suggestion: Build a similarity matrix between time-series using e.g. DTW and then apply Spectral Clustering (just improvised. If you search literature there should be more mature solutions) Hope it helped :)
H: How can I draw bar graph in python on aggregated data? Normally when I draw a bar plot it's as simple as import matplotlib.pyplot as plt from pylab import rcParams import seaborn as sb %matplotlib inline rcParams['figure.figsize'] = 5, 4 sb.set_style('whitegrid') x = range(1, 10) y = [1,2,3,4,0.5,4,3,2,1] plt.bar(x, y) When I aggregate data on the basis of the age feature with the following command data_ag = data.groupby('age')['age'].count() It returns the number of people belonging to a particular age, e.g. 11 people with age 22 years. age 22 11 23 8 27 28 28 1 29 70 30 13 31 45 Name: age, dtype: int64 How can I treat those as x and y points to draw a bar plot? x = # what should I write here for age data y = # what for count plt.bar(x, y) AI: I solved my issue using the size() and reset_index() functions, with age on the x-axis and the count as the bar height. g1 = data.groupby(["age"]).size().reset_index(name='count') x = g1['age'] y = g1['count'] plt.bar(x, y)
H: Are there any python libraries for sequences clustering? I have a problem which I explained in another question. I've understood that my dataset is a sequence of states or something like that. Are there libraries to analyze sequences with Python? And is it the right way to use Hidden Markov Models to cluster sequences? AI: Are there libraries to analyze sequences with Python? You can take a look at here. You can also use TensorFlow if your task is sequence classification, but based on the comments your task is unsupervised. Actually, LSTMs can be used for unsupervised tasks too, depending on what you want. Take a look at here. And is it the right way to use Hidden Markov Models to cluster sequences? Hidden Markov models are models in which the current state depends only on the previous state rather than on the whole history, and the states themselves are not directly observed. If your task has long-term dependencies, you can use LSTM networks. If your data does not have long-term dependencies you can use simple RNNs.
H: One Hot Encoding vs Word Embedding - When to choose one or another? A colleague of mine is having an interesting situation, he has quite a large set of possibilities for a defined categorical feature (+/- 300 different values) The usual data science approach would be to perform a One-Hot Encoding. However, wouldn't it be a bit extreme to perform some One-Hot Encoding with a dictionary quite large (+/- 300 values)? Is there any best practice on when to choose Embedding vectors or One-Hot Encoding? Additional, information: how would you handle the previous case if new values can be added to the dictionary. Re-training seems the only solution, however with One-Hot Encoding, the data dimension will simultaniously increase which may lead to additional troubles, embedding vectors, on the opposite side, can keep the same dimension even if new values appears. How would you handle such a case ? Embedding vectors clearly seem more appropriate to me, however I would like to validate my opinion and check if there is another solution that could be more apporiate. AI: One-Hot Encoding is a general method that can vectorize any categorical features. It is simple and fast to create and update the vectorization, just add a new entry in the vector with a one for each new category. However, that speed and simplicity also leads to the "curse of dimensionality" by creating a new dimension for each category. Embedding is a method that requires large amounts, both in the total amount of data and repeated occurrences of individual exemplars, and long training time. The result is a dense vector with a fixed, arbitrary number of dimensions. They also differ at the prediction stage: a One-Hot Encoding tells you nothing of the semantics of the items; each vectorization is an orthogonal representation in another dimension. Embeddings will group commonly co-occurring items together in the representation space. If you have enough training data, enough training time, and the ability to apply the more complex training algorithm (e.g., word2vec or GloVe), go with Embeddings. Otherwise, fall back to One-Hot Encoding.
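If you go the embedding route, one possible sketch is to learn the vectors jointly with the model via a Keras Embedding layer (this is the "entity embedding" flavour rather than pre-trained word2vec/GloVe, and the sizes are arbitrary); reserving a few spare indices also gives new category values somewhere to go without changing the input dimension:

from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense

n_categories = 300   # the ~300 observed values, integer-encoded as 0..299
embedding_dim = 16   # arbitrary; much smaller than the number of categories

model = Sequential()
# A few spare indices so unseen future values can be mapped to a reserved slot
model.add(Embedding(input_dim=n_categories + 10, output_dim=embedding_dim, input_length=1))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()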
H: Visualizing Decision Tree of K-Nearest-Neighbours classifier I'm using Sklearn's KNN to build a classifier and was wondering if there is any way to visualize the decision tree that the algorithm builds. Maybe something of this fashion AI: $k$-NN does not build a decision tree to classify a new instance, it looks at the class of the most similar examples (the nearest neighbours) in the training set. So, in short, no you cannot get a decision tree from $k$-NN. You can build a decision tree for your dataset directly by using scikit-learn's DecisionTreeClassifier instead if you need a decision tree.
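If you do want a picture like the one referenced, a minimal sketch with scikit-learn's DecisionTreeClassifier and plot_tree (plot_tree requires scikit-learn >= 0.21; older versions can use export_graphviz instead; the iris data stands in for your own dataset):

import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3).fit(iris.data, iris.target)

# Produces a tree diagram similar to the figure referenced in the question
plt.figure(figsize=(10, 6))
plot_tree(clf, filled=True, feature_names=iris.feature_names,
          class_names=list(iris.target_names))
plt.show()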
H: Instead of one-hot encoding a categorical variable, could I profile the data and use the percentile value from it's cumulative density distribution? I have a categorical variable which has thousands of values, for a dataset which has millions of records. The data is being used to create a binary classification model. I am in the early steps of feature selection, but I am trying out Random Forest, Boosted Trees, and Logistic Regression to see what works. If I find the frequency of each category and sort that, I see that about 50 values make up the top 80%. Is it valid to condense this feature as a binary on whether the value is in that set of values or not. By 'valid', I mean is it likely that this sort of transformation retain any useful information for a model? I have a concern that sorting these categorical values which do not have any order to them creates some incorrect assumptions. The frequecy distribution looks a little like this: A;10% D;5% E;1.2% B;1.1% ... Z;0.004% W;0.0037% ... Going one step further, is it valid to profile each class in my dataset and do the same? Say Categories A-F comprise the top 80% of class 0 and Categories D-H are the top 80% of class 1. I would convert: data_id;cat_var 1;B 2;F 3;H 4;Z to data_id;cat_var_top80class0;cat_var_top80class1 1;1;0 2;1;1 3;0;1 4;0;0 Adding picture to hopefully clear up this idea. In yellow are the pre-calculated distributions of cat_var (***_id in the picture) for classes 0 and 1 based on the training set. On the right shows how the transformation would be applied: AI: (Edited after @D.W. suggestion). To the best of my knowledge, there is nothing wrong with what you have in mind; thus it is certainly valid.. As you said, you have to try out all possible ways you can think and see which one works better. The most important point adopting a particular encoding is to be aware of you have certain amount of information getting lost and it varies one case to another depending on the problem at hand. For example in the case of binary coding based on high/low frequency sublevels, there will large of information (details) lost which could help the algorithm to do classification. I liked your idea of percentile coding based on cumulative density distribution. Maybe you want to look at Quantile-based discretization which is available in pandas.qcut. The rest is my earlier answer as it was (below). I intended to suggest trying other techniques as well on top of what you have had in mind; but apparently the message was not clear. Please note that I do not seek to get my answer marked as final answer, as I aware that still it does not fully answer your question; it is simply to brainstorm and exchange ideas in length. ;-) Perhaps you have already digged out enough about ways to convert categorical variables into continuous data. In case you did not and missed checking this answer, check it out. As discussed, they are many ways to convert cat-to-num, and your problem is one of the hardest yet the most common across many domains. You have a high cardinality in your categorical variable, and as far as I understood imbalance distributions of those sublevels and you are not sure whether messing up the order of those sublevels matters or not. You may need to try Ordinal Encoding (if order really matters) or weight of evidence (WoE) transformation (see this blog post for instance) that I have heard but not tried or even going beyond mixing them in a meaningful way to represent your categorical data properly. 
What I have learned is that, despite all efforts in the field, this problem is still an open challenge in data science and machine learning. Thus there is no single best solution or well-established method, as far as I have checked. Please do let me know if you come across one.
H: XGBoost equations (for dummies) I am having a hard time trying to understand the MSE loss function given in the Introduction to Boosted Trees (beware! My maths skills are the equivalent of a very sparse matrix): $ \begin{split}\text{obj}^{(t)} & = \sum_{i=1}^n (y_i - (\hat{y}_i^{(t-1)} + f_t(x_i)))^2 + \sum_{i=1}^t\Omega(f_i) \\ & = \sum_{i=1}^n [2(\hat{y}_i^{(t-1)} - y_i)f_t(x_i) + f_t(x_i)^2] + \Omega(f_t) + constant \end{split} $ The second equality sign implies that one could easily derive the second equation from the first one, but I cannot see how. My first naïve attempt was to: express $y_i$ as $a$ express $(\hat{y}_i^{(t-1)} + f_t(x_i))$ as $b$ and then expand $(a-b)^2$ But I wasn't successful. Any help is really appreciated. AI: I recall I was struggling for some time deriving the second equation. That constant keeps many of your missing elements. Let's break it down using your $(a-b)^{2}$ notation: since $(a-b)^2 = a^2 - 2ab + b^2$, we will have $a^{2}$, $b^{2}$, and $-2ab$: $a^{2}$: $y_{i}$ is constant since it is your true label/value, thus $a^{2}$, i.e. $y_{i}^{2}$, goes to the constant. $b^{2}$: $(\hat{y}_{i}^{(t-1)})^2 + f_{t}(x_i)^{2} + 2\hat{y}_{i}^{(t-1)}f_{t}(x_i)$. Here $(\hat{y}_{i}^{(t-1)})^2$ is constant, as it is the prediction from the previous step $(t-1)$ that we already know, so it goes to the constant term. The other two terms remain as they are. $-2ab$: $-2y_{i}(\hat{y}_{i}^{(t-1)}+f_{t}(x_i)) = -2y_{i}\hat{y}_{i}^{(t-1)}-2y_{i}f_{t}(x_i)$. Here also the first term is constant. Only the second term remains. The rest should be pretty straightforward: add the terms that are left, $2\hat{y}_{i}^{(t-1)}f_t(x_i) - 2y_i f_t(x_i) + f_t(x_i)^2 = 2(\hat{y}_i^{(t-1)} - y_i)f_t(x_i) + f_t(x_i)^2$, clean it up, and you see the second equation comes out beautifully.
H: High model accuracy vs very low validation accuarcy I'm building a sentiment analysis program in python using Keras Sequential model for deep learning my data is 20,000 tweets: positive tweets: 9152 tweets negative tweets: 10849 tweets I wrote a sequential model script to make the binary classification as follows: model=Sequential() model.add(Embedding(vocab_size, 100, input_length=max_words)) model.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='relu')) model.add(MaxPooling1D(pool_size=2)) model.add(Flatten()) model.add(Dense(250, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) # Fit the model print(model.summary()) history=model.fit(X_train[train], y1[train], validation_split=0.30,epochs=2, batch_size=128,verbose=2) however I get very strange results! The model accuracy is almost perfect (>90) whereas the validation accuracy is very low (<1) (shown bellow) Train on 9417 samples, validate on 4036 samples Epoch 1/2 - 13s - loss: 0.5478 - acc: 0.7133 - val_loss: 3.6157 - val_acc: 0.0243 Epoch 2/2 - 11s - loss: 0.2287 - acc: 0.8995 - val_loss: 5.4746 - val_acc: 0.0339 I tried to increase the number of epoch, and it only increases the model accuracy and lowers the validation accuracy Any advice on how to overcome this issue? Update: this is how I handle my data #read training data pos_file=open('pos2.txt', 'r', encoding="Latin-1") neg_file=open('neg3.txt', 'r', encoding="Latin-1") # Load data from files pos = list(pos_file.readlines()) neg = list(neg_file.readlines()) x = pos + neg docs = numpy.array(x) #read Testing Data pos_test=open('posTest2.txt', 'r',encoding="Latin-1") posT = list(pos_test.readlines()) neg_test=open('negTest2.txt', 'r',encoding="Latin-1") negT = list(neg_test.readlines()) xTest = posT + negT total2 = numpy.array(xTest) CombinedDocs=numpy.append(total2,docs) # Generate labels positive_labels = [1 for _ in pos] negative_labels = [0 for _ in neg] labels = numpy.concatenate([positive_labels, negative_labels], 0) # prepare tokenizer t = Tokenizer() t.fit_on_texts(CombinedDocs) vocab_size = len(t.word_index) + 1 # integer encode the documents encoded_docs = t.texts_to_sequences(docs) #print(encoded_docs) # pad documents to a max length of 140 words max_length = 140 padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post') Here I used Google public word2vec # load the whole embedding into memory embeddings_index = dict() f = open('Google28.bin',encoding="latin-1") for line in f: values = line.split() word = values[0] coefs = asarray(values[1:], dtype='str') embeddings_index[word] = coefs f.close() # create a weight matrix for words in training docs embedding_matrix = zeros((vocab_size, 100)) for word, i in t.word_index.items(): embedding_vector = embeddings_index.get(word) if embedding_vector is not None: embedding_matrix[i] = embedding_vector #Convert to numpy NewTraining=numpy.array(padded_docs) NewLabels=numpy.array(labels) encoded_docs2 = t.texts_to_sequences(total2) # pad documents to a max length of 140 words padded_docs2 = pad_sequences(encoded_docs2, maxlen=max_length, padding='post') # Generate labels positive_labels2 = [1 for _ in posT] negative_labels2 = [0 for _ in negT] yTest = numpy.concatenate([positive_labels2, negative_labels2], 0) NewTesting=numpy.array(padded_docs2) NewLabelsTsting=numpy.array(yTest) AI: You should try to shuffle all of your data and split them to the train and test and valid set then train again.
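One detail worth knowing: Keras's validation_split takes the last fraction of the arrays you pass to fit, without shuffling, so if the data is ordered positives-then-negatives the validation set ends up almost entirely negative, which would explain a near-zero validation accuracy alongside a decent training accuracy. A minimal sketch of a shuffled, stratified split (assuming the NewTraining and NewLabels arrays built in the question; train_test_split shuffles by default):

from sklearn.model_selection import train_test_split

# Shuffled, stratified split so both classes appear in train and validation data
X_tr, X_val, y_tr, y_val = train_test_split(NewTraining, NewLabels, test_size=0.3,
                                            stratify=NewLabels, random_state=42)

model.fit(X_tr, y_tr, validation_data=(X_val, y_val), epochs=2, batch_size=128, verbose=2)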
H: Censored output data, which activation function for the output layer and which loss function to use? I am building a neural network to solve a regression problem. The output is a single numerical value. Unfortunately, the output is censored: the values below 0 were recorded as 0, and positive values remained unchanged. What activation function should I use for the output layer (maybe ReLU)? How should I define the loss function? Should I just use RMSE? (Because the output is censored, we want the neural network to be able to generate 0 output, and positive values.) Edit 1: The problem is to predict the electricity demand time series based on multiple input variables. Only the values above a certain threshold are being recorded, hence the output is censored. We have a lot of numerical/categorical input variables: time of the day, air temperature, days in a week, holiday/workday, etc. We want to build a neural network model to predict electricity demand (0 or positive) based on the input variables. AI: We can't tell you what loss function to use. That is based on business needs. In particular, what is the cost of being wrong? Is the cost of being wrong proportional to relative error? Absolute error? Something else? That will drive the choice of loss function. You should try to choose a loss function where the value of the loss function is proportional to the cost of the error (e.g., the monetary cost to your company). Using ReLU as the activation function in the final layer would be sensible given that your data is censored to replace all negative outputs with 0.
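A minimal Keras sketch of what that could look like (the hidden-layer size is arbitrary, n_features is a placeholder for your input dimension, and MSE is only a stand-in until you settle on a loss that reflects your cost of error):

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=n_features))
model.add(Dense(1, activation='relu'))   # ReLU output: predictions are >= 0, matching the censoring
model.compile(optimizer='adam', loss='mse')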
H: Generalization of Correlation Coefficient The correlation coefficient tells me how two variables (sequences of numbers) are correlated with each other. Does it generalize to non-linear scenarios? How could one more generally measure the general predictive power of x over y when the relationship between x and y is not linear? AI: I assume that when you speak of correlation coeficient, you have the Pearson linear correlation in mind. Indeed, there are other options. Two very popular ones are the rank correlations respectively called Spearman's $\rho$ and Kendall's $\tau$. To give you an idea of what they are, consider $n$ observations from a $d$-dimensional random vector $X = (X_1,\dots,X_d)$. Also let $X_{ij}$ be the $i$th observation for variable $j$. These measures are called rank correlations because they can be computed using the ranks only. What I mean is that if you sort all $X_{ij}$, $i=1,\dots,n$, and replace the biggest observation by $n$, the second biggest by $n-1$, and so on (do that for all columns $j$) and call you new observations $R_{ij}$, then the empirical Spearman's $\rho$ (matrix) is simply the Pearson linear correlation (matrix) of $(R_1,\dots,R_d)$; and the empirical (pairwise) Kendall's $\tau$ between $X_{i_1}$ and $X_{i_2}$ is the probability of concordance minus the probability of discordance between two iid observations, say $(X_{1 i_1},X_{1 i_2})$ and $(X_{2 i_1},X_{2 i_2})$, which can equivalently be computed from the ranks $(R_{1 i_1},R_{1 i_2})$ and $(R_{2 i_1},R_{2 i_2})$ instead. A rank correlation between $X_{i_1}$ and $X_{i_2}$ of one indeed means perfect concordance (i.e. $X_{i_1}$ always increases with $X_{i_2}$), but that does not necessarily means they are linearly related. The ranks are linearly related. Just to make the concept of concordance clearer, here the (bivariate) observations are all concordant and here they are all discordant So that when you consider a cloud of points you have some pairs that are concordant and others that are discordant. Note that this answer provides examples, but there are many other ways to approach the question. As commented by Emre, information-theoretic measures are also an option.
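If you want to compute these in practice, scipy exposes both rank correlations directly; a small sketch with made-up, monotone but non-linear data:

import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

x = np.linspace(1, 10, 100)
y = np.exp(x)                 # monotone but strongly non-linear in x

print(pearsonr(x, y)[0])      # noticeably below 1: the relationship is not linear
print(spearmanr(x, y)[0])     # 1.0: the ranks are perfectly concordant
print(kendalltau(x, y)[0])    # 1.0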
H: Pandas grouped data to Bokeh graph I'm having trouble graphing Pandas grouped data in Bokeh. company_id company_score date_submitted company_region AA .07 1/1/2017 NW AB .08 1/2/2017 NE CD .0003 1/18/2017 NW I've successfully grouped the data by company_id, and calculated the simple moving average of company_score using 10 days / inputs. score_by_company = df['company_score'].groupby(df['company_id']).rolling(10).mean() company_id index SMA BBG000B9WMF7 7613 NaN 1911663 NaN 1911664 NaN 1911665 NaN 1911666 NaN 1911667 NaN 1911668 NaN 1911669 NaN 1911670 NaN 1911671 0.000002 1911672 0.000002 etc. etc. How can I translate this data into a time series graph, with each company_id being its own line? I feel like I need to create a data dictionary off of the grouped data, but I'm not sure if that's the right approach. AI: You can make the plots by looping over the groups from groupby (see the Bokeh sketch below). Alternatively, this should also work in plain pandas/matplotlib: import matplotlib.pyplot as plt df['company_score'].groupby(df['company_id']).rolling(10).mean().unstack(level=0).plot() (unstack(level=0) moves company_id into the columns, so each company becomes its own line). See this for more information on plotting with pandas dataframes and this for looping over a groupby-object. Finally this link is super helpful.
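Since the question asks for Bokeh specifically, here is a sketch of the groupby-loop approach; the column names follow the example above, and details such as legend_label may need adjusting for your Bokeh version:

import pandas as pd
from bokeh.plotting import figure, show

df['date_submitted'] = pd.to_datetime(df['date_submitted'])
df = df.sort_values('date_submitted')

p = figure(x_axis_type='datetime', title='10-day SMA of company_score')
for company, grp in df.groupby('company_id'):
    sma = grp['company_score'].rolling(10).mean()
    p.line(grp['date_submitted'], sma, legend_label=str(company))
show(p)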
H: How does a Q algorithm consider future rewards? I am trying to understand the underlying logic of Q learning (deep Q learning to be precise). At the moment I am stuck at the notion of future rewards. To understand the logic, I am reviewing some of the present code samples. This one seemed quite interesting, so I went through it: https://github.com/keon/deep-q-learning/blob/master/dqn.py Here is the gist of the code that does the actual training of the underlying deep neural network: def replay(self, batch_size): minibatch = random.sample(self.memory, batch_size) for state, action, reward, next_state, done in minibatch: target = reward if not done: target = (reward + self.gamma * np.amax(self.model.predict(next_state)[0])) target_f = self.model.predict(state) target_f[0][action] = target self.model.fit(state, target_f, epochs=1, verbose=0) if self.epsilon > self.epsilon_min: self.epsilon *= self.epsilon_decay In the 5th line of the code, (after the if not done line) we are adding the discounted reward of the next step, to the present step, and setting it as the target reward of the executed action to be trained. So, the way I see it, we have the reward of the executed action, and discounted possible reward of the following action, combined. As far as I understand, in each iteration, Q-learning algorithm predicts the future reward of next step (and next step only) using the machine learning technique in use (be it the CNN, DNN etc.). And we are multiplying the reward of next step (and that specific next step only) with discount rate, to make it less important than the immediate reward (with the ratio we specified). So, my question is, how does the algorithm takes even further steps (say, 5 steps) ahead into account? AI: As far as I understand, in each iteration, Q-learning algorithm predicts the future reward of next step (and next step only) using the machine learning technique in use (be it the CNN, DNN etc.). The Q values should eventually converge to the expected sum, future, discounted reward when taking action A in state S and following the optimal policy. Breaking it down: Expected sum is not exactly the same as "predicted", but close enough for our purposes. And it really does mean sum of the rewards, not a single reward. To differentiate, this is often called the "return" or "utility" Future -> from the step being evaluated onwards until end of episode, or the limit as time goes to infinity for continuous tasks with discounting. Discounted -> a discount factor is only necessary for continuous tasks. And we are multiplying the reward of next step (and that specific next step only) with discount rate, to make it less important than the immediate reward (with the ratio we specified). No, there is no multiplication of the reward. Let's take a look at the line: target = (reward + self.gamma * np.amax(self.model.predict(next_state)[0])) The reward is not being multiplied by anything. What is being multiplied by $\gamma$ is the Q value of the next state. That value represents the total sum of all rewards following on from that point - not a single reward value at all. So, my question is, how does the algorithm takes even further steps (say, 5 steps) ahead into account? It is in the Q values. 
The pseudocode for the code you are looking at is not: target_for_Q(s,a) = next_step_reward * gamma It is: target_for_Q(s,a) = next_reward + gamma * current_value_of_Q(s',a') Or: target_for_Q(s,a) = next_reward + gamma * estimate_all_future_return This is closely related to the Bellman equation used for policy evaluation. Intuitively, what is happening is that you start with (really poor) estimates for expected return (not expected reward), and update them by inserting observed values of next_reward, s' and a' into the update rule above. The values always represent a learned estimate of total expected return.
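A toy tabular version of the same update may make it explicit that only the next state's Q value, which already summarizes all later rewards, enters the target; this is a generic sketch, not the code from the linked repository:

import numpy as np

n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))      # learned estimates of expected return
gamma, alpha = 0.95, 0.1

def q_update(state, action, reward, next_state, done):
    # target = immediate reward + discounted estimate of everything that follows
    target = reward if done else reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (target - Q[state, action])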
H: How do I determine if variables are correlated? Is it simply a mathematical calculation? I'm self-learning data science, so bear with me as I try to make my question as clear as possible. Let's assume I have a dataset with a dependent variable y and some independent variables x1, x2, x3, etc. Is the correlation between these variables simply a mathematical calculation? I mean, if I ran a correlation test in R, for example, and the test came back with no correlation between the variables, is that necessarily true? What if I knew or suspected that one of the variables is correlated with another: how would I define that in my model, or would relying on a correlation test be sufficient? AI: Yes, correlation is a mathematical concept, and the one usually computed is known as Pearson correlation. This is probably the one you are obtaining in your analysis. However, there are other correlation analyses you can try in order to be more confident about your result. The most famous ones besides Pearson are Kendall correlation and Spearman correlation. In addition, sometimes the relationship between variables is less evident than what a plain correlation analysis reveals. For example, if the variables are time-related, one may have a lag over the other: in finance it is known that GDP is correlated with the Employment Rate with a lag of some months. It is part of good data science practice to have business knowledge, an intuition of why the variables may be related in some way, and to try to find out how to prove that this non-evident dependence exists. Edit: I noticed you put R as a tag. You can also study the libraries polycor and ggm to learn about polychoric correlations and partial correlations, although they may not apply in your case.
H: Detecting over fitting of SVM/SVC I am using 3-fold cross validation and a grid search of the C and gamma parameters for a SVC using the RBF kernel I have achieved a classification score of 84%. When testing against live data the accuracy rate is 70% (1500 samples used). However, when testing against an un-seen hold out set the accuracy is 86% (8800 samples, 20% of the original dataset). The training and holdout data set have even distribution of the 3 classes. What could be the cause of this large discrepancy? It does not seem to be over fitting judging by the performance of the model with the hold out set? EDIT: How did you split the data set? The data was originally in sequential order. I wrote a script to randomly split each sample between the train and hold out set, making use of a CSPRNG. Then at the end a report was automatically generated to display the distribution of each class in each set. The distribution very near equal. How did you do the grid search? Through the SKlearn SVC grid search method (GridSearchCV). Is there any overlap between the data points used during grid search and the un-seen hold out set? No overlap, they are all from unique time stamps in the initial set. Does the live data come from the same distribution as the other? Yes the live data comes from the same source and the distribution is roughly the same. How do you know? I have a script to count up the occurrences of each class in the data set. AI: It seems likely that the live data is different somehow from your other data. Cross-validation shows a 84% accuracy, and accuracy on the held-out set is 86%, which is pretty consistent and does not indicate overfitting. Accuracy on the live data is 70%, which is significantly different. That suggests that live data is somehow different in ways that are important to the classifier. Perhaps concept drift has occurred.
H: When inputting image rgb values to MLP, should I divide by 255? I have an MLP with 3072 input nodes which are for 1024 rgb pixels. My dataset is in an array with each row representing one image and looking like this: [red_pix1, red_pix2, ..., red_pix1024, green_pix1, green_pix2, ..., green_pix1024, blue_pix1, blue_pix2, ..., blue_pix1024] Each array value is an integer between 0 and 255. My question is, before training the network, should I "normalize" my dataset by dividing each element by 255? That way, each input element would have values between 0 and 1. Is this better than having values between 0 and 255? AI: The component values are often stored as integers in the range 0 to 255, the range that a single 8-bit byte can offer. Yes, if you divide by 255 the range can be described as 0.0-1.0, where 0.0 means 0 (0x00) and 1.0 means 255 (0xFF), which is generally a better-behaved input scale for a neural network. Normalization will also help you to remove distortions caused by lights and shadows in an image. Refer to this: Normalize RGB
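In code this is a one-liner; just make sure to cast to float first so you do not run into integer division or dtype surprises:

import numpy as np

# X: your (n_samples, 3072) array of integer pixel values in [0, 255]
X = X.astype(np.float32) / 255.0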
H: Hyperparameter Tuning in Machine Learning What is the difference between Hyper-parameter Tuning and the k-NN algorithm? Is k-NN also a type of Hyper-parameter tuning? AI: k-NN is not a type of hyperparameter tuning; it is a model, and the number of neighbours k is a hyperparameter of that model which you can tune. In the kNN algorithm you only try to find a suitable value of the parameter k, while other models may have many parameters that can be modified. Normal parameters are optimized through the loss function during training, whereas hyperparameters are settings you fix before training, and hyperparameter tuning is how you choose them to get the best model. Two methods, grid search and random sampling, tend to work well. Grid Method: impose a grid on the space of possible hyperparameter values, then go over each cell of the grid one by one and evaluate your model against the values from that cell. The grid method tends to waste resources trying out parameter values which do not make sense at all. Random Sampling Method: with random sampling, we have a high probability of finding a good set of parameters quickly. After doing random sampling for a while, we can zoom into the area indicative of a good set of parameters. Random sampling allows efficient search in hyperparameter space, but sampling at random does not guarantee uniformity over the range of valid values, therefore it is important to pick an appropriate scale. You can read this: Hyperparameter Tuning of Deep Learning Algorithm
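In scikit-learn both strategies are available out of the box; a small sketch for tuning k of a kNN classifier (the parameter range and cv value are arbitrary, and X_train/y_train are assumed to be your training data):

from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

params = {'n_neighbors': list(range(1, 31))}

grid = GridSearchCV(KNeighborsClassifier(), params, cv=5)                     # exhaustive grid
rand = RandomizedSearchCV(KNeighborsClassifier(), params, n_iter=10, cv=5)    # random sampling

grid.fit(X_train, y_train)
print(grid.best_params_, grid.best_score_)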
H: Significant overfitting with CV I working on a binary classification task. The dataset is quite small ~1800 rows and ~60 columns. There are no duplicates in the rows. I am comparing different classifiers amongst the canonical ones: random forest, logistic regression, boosted tree and SVC. I am training the hyperparameters by a CV on 90% (train) with 10% held out to measure the generalization error (test). The dataset is slightly unbalances (1 to 3 ratio of classes) hence I used a stratified fold for all splits. I also use roc-auc as a metric for my CV. I get the following results for roc-auc score and accuracy: DummyClassifier Train ROC-AUC score: 0.50000 Accuracy: 0.69705 Test ROC-AUC score: 0.50000 Accuracy: 0.69545 LogisticRegression Train ROC-AUC score: 0.88459 Accuracy: 0.78666 Test ROC-AUC score: 0.72559 Accuracy: 0.69545 RandomForestClassifier Train ROC-AUC score: 1.00000 Accuracy: 0.99695 Test ROC-AUC score: 0.81748 Accuracy: 0.80455 XGBClassifier Train ROC-AUC score: 1.00000 Accuracy: 0.99949 Test ROC-AUC score: 0.80617 Accuracy: 0.79545 SVC Train ROC-AUC score: 0.89900 Accuracy: 0.83248 Test ROC-AUC score: 0.73515 Accuracy: 0.73182 There is always a significant gap between train and test scores. I am clearly overfitting. I guess it is a consequence of the low number of rows but I am not sure about what to do about that? Force the CV grid search for hyperparameters to a range with strong regularization? AI: For the problem of overfitting, you could look train models that employ regularization. For instance this examples shows how to regularize an SVM. Another thing I noted is that you have used the tag "unbalanced-classes". If that is the case, accuracy isn't a very good metric. While AUC is good at this, I've personally had trouble with this metric in the past. My suggestion would be to include a metirc like F1-score and most importantly in each case calculate the confusion matrix. This will show you if you are missing one class more than the other. If that is the case you might want to incorporate an oversampling method (e.g. SMOTE) into your pipeline.
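As a concrete starting point for the imbalance part of the answer, a sketch with scikit-learn (the model and C value are arbitrary; X_train, y_train, X_test, y_test are assumed to be your splits):

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

clf = LogisticRegression(class_weight='balanced', C=0.1)   # smaller C = stronger regularization
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print(confusion_matrix(y_test, pred))                      # shows which class is being missed
print('F1:', f1_score(y_test, pred))
print('ROC-AUC:', roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))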
H: Help Interpreting a Active Learning Learning Curve I am applying a active learning with a SGDClassifier (log loss function) as the base learner on some data and I have the following graphs representing the learning curve of queries vs error rate. The green is the validation error and blue is the training error. Is my model overfitting or has high variance in both graphs? AI: Overfitting looks more likely because: After some queries your validation errors are systematically higher than training errors, which is probably not what you want. After some queries, your training errors slowly fall while validation errors remain constant. It's like your classifier is memorising your data set at a slow pace. When the number of queries is small, it looks like your classifier did better simply because it was a simpler data set. Again, overfitting can also be an issue.
H: Weights in neural network So I am a newbie in deep learning. I came across activation functions which give an output that is compared to the label; if it's wrong, the network adjusts its weights until it gives the same output as the labelled data for that particular input in the training data set. x1 x2 x3 y 10 15 20 0 20 7 10 1 5 10 4 0 So imagine this is an example training set. We send these inputs to the activation function, and for the first input it returns the correct output (0). But for the second input it again returns 0, so the weights are adjusted until the activation function returns 1. So now my doubt is: if the new updated weights return the wrong output for the third input, the weights get changed again, but will there be a situation where these weights no longer work for the previously tested inputs, like for example the first input in this case? Is there a chance that the new weights will return 1 for the first input, which is wrong? AI: In parametric models such as linear regression, logistic regression and multi-layer perceptrons, weights are updated with regard to the "difference" between the output of your model and the real label. More precisely, weights are updated using the gradient descent / backpropagation procedure. It is composed of two parts: the forward pass and the backward pass. For a given observation (or a set of observations), the forward pass is about feeding the model with observations and outputting a result a. This output "a" is then compared with the real value, the label y. Using some cost function (such as Absolute Error or Square Error for regression purposes, cross-entropy for classification purposes...), we can then compute j(y,a), which is the error between the output and the real value. We can now run the backward pass, which is about computing the derivative of the cost function with respect to any weight / bias coefficient in the logistic regression / neural network. We can then update the coefficients as $w := w - \alpha \, \partial j / \partial w$, where alpha is the learning rate. So to answer your question: weights are not adjusted until each individual output matches its expected value. We are just trying to reach the minimum of the cost function surface by running the gradient descent procedure, so a single update can indeed slightly worsen the output for an earlier example, but on average over the whole training set the error goes down.
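A bare-bones numpy sketch of that loop for a single sigmoid unit, using the toy table from the question (this is a generic illustration of batch gradient descent, not any particular library's implementation):

import numpy as np

X = np.array([[10, 15, 20], [20, 7, 10], [5, 10, 4]], dtype=float)
y = np.array([0, 1, 0], dtype=float)

w, b, alpha = np.zeros(3), 0.0, 0.01

for epoch in range(1000):
    a = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # forward pass on all examples at once
    dz = a - y                               # gradient of cross-entropy w.r.t. the pre-activation
    w -= alpha * (X.T @ dz) / len(y)         # backward pass + update
    b -= alpha * dz.mean()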
H: german gunning fog index function I would like to analyse some text, and most of my reviews are German. Does anyone know if Python has a good Gunning fog index function for the German language? I couldn't find anything. Best regards AI: This might be rather in the field of linguistics than related to data science, but nevertheless: to my knowledge, the "Gunning fog index" as a measure of understandability is, by definition, intended for the English language only. For some tools intended for German, see https://klartext.uni-hohenheim.de/hix (page in German), which refers to a German SMOG index. There are implementations of SMOG listed on PyPI, but you may need to find the appropriate values to configure them for the German language.
H: Convert Atypical Date Format for Time Series in Python I have an atypical time format that I need to convert into a datetime index for time series analysis. I'm working in Python / Pandas. The column is 'BC_DT', and the format is "27-MAR-18". Example is below. BC_DT 27-MAR-18 28-MAR-18 29-MAR-18 I tried this method, but I'm getting an error: ValueError: time data '27-MAR-18' does not match format '%d-%b-%Y' df['Converted_Date'] = df['BC_DT'].apply(lambda x: dt.datetime.strptime(x, '%d-%b-%Y')) AI: Let pandas determine what datetime format you are using automatically. import pandas as pd raw_data = pd.DataFrame(data={'BC_DT':['27-MAR-18','28-MAR-18','29-MAR-18']}) raw_data['BC_DT'] = pd.to_datetime(raw_data['BC_DT']) print(raw_data) BC_DT 0: 2018-03-27 1: 2018-03-28 2: 2018-03-29
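Alternatively, your original strptime approach works once the format string matches the data: '%Y' expects a four-digit year, while '18' is two digits, so '%d-%b-%y' (lowercase y) is what you want; '%b' matches abbreviated month names case-insensitively:

import pandas as pd

df['Converted_Date'] = pd.to_datetime(df['BC_DT'], format='%d-%b-%y')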
H: How do I find the minimum value of $x^2+y^2$ with a genetic algorithm? I want to find $(x,y)$ which minimizes $x^2+y^2$ with GA to apply it for another function. Does anyone know any example of GA with deap (Python) like that? AI: Genetic algorithm This consists in 4 crucial steps: initialization, evaluation, selection and combination. Initialization Each individual in the population is encoded by some genes. In our case the genes represent our $[x, y]$ values. We will then set our search range to [0, 1000] for this specific problem. Usually you will know what is naturally possible based on your problem. For example, you should know the range of possible soil densities in nature. We will create 100 individuals in our population. Evaluation of the fitness This step simply asks you to put the $[x,y]$ values into your function and get its result. Pretty standard stuff. Selection There are many ways with which you can select parents. I will always keep the alpha male. The best individual in the population, he will be cloned to the next. Then I will use tournament selection. We will repeat the following until the next generation population is full. Pick four parents at random, take the best individual from the first two and the best from the last two. These will be the two parents which will gives us our next offspring. Combination From the two parents we will build the new genome for the child using the binary values of the $[x,y]$ values of the parents. The resulting binary value for each codon in the genome of the child is selected from the two parent genes by uniform random. The code class Genetic(object): def __init__(self, f, pop_size = 100, n_variables = 2): self.f = f self.minim = -100 self.maxim = 100 self.pop_size = pop_size self.n_variables = n_variables self.population = self.initializePopulation() self.evaluatePopulation() def initializePopulation(self): return [np.random.randint(self.minim, self.maxim, size=(self.n_variables)) for i in range(self.pop_size)] def evaluatePopulation(self): return [self.f(i[0], i[1]) for i in self.population] #return [(i[0]-4)**2 + i[1]**2 for i in self.population] def nextGen(self): results = self.evaluatePopulation() children = [self.population[np.argmin(results)]] while len(children) < self.pop_size: # Tournament selection randA, randB = np.random.randint(0, self.pop_size), \ np.random.randint(0, self.pop_size) if results[randA] < results[randB]: p1 = self.population[randA] else: p1 = self.population[randB] randA, randB = np.random.randint(0, self.pop_size), \ np.random.randint(0, self.pop_size) if results[randA] < results[randB]: p2 = self.population[randA] else: p2 = self.population[randB] signs = [] for i in zip(p1, p2): if i[0] < 0 and i[1] < 0: signs.append(-1) elif i[0] >= 0 and i[1] >= 0: signs.append(1) else: signs.append(np.random.choice([-1,1])) # Convert values to binary p1 = [format(abs(i), '010b') for i in p1] p2 = [format(abs(i), '010b') for i in p2] # Recombination child = [] for i, j in zip(p1, p2): for k, l in zip(i, j): if k == l: child.append(k) else: child.append(str(np.random.randint(min(k, l), max(k,l)))) child = ''.join(child) g1 = child[0:len(child)//2] g2 = child[len(child)//2:len(child)] children.append(np.asarray([signs[0]*int(g1, 2), signs[1]*int(g2, 2)])) self.population = children def run(self): ix = 0 while ix < 1000: ix += 1 self.nextGen() return self.population[0] Then you can use the code by f = lambda x, y: (x)**2 + y**2 gen = Genetic(f) minim = gen.run() print('Minimum found X =', minim[0], ', Y =', minim[1]) 
Minimum found X = 0 , Y = 0 f = lambda x, y: (x-6)**2 + y**2 gen = Genetic(f) minim = gen.run() print('Minimum found X =', minim[0], ', Y =', minim[1]) Minimum found X = 6 , Y = 0
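Since the question mentions deap specifically, here is a minimal sketch of the same minimization with it; treat the operator choices and parameters as illustrative rather than tuned, and note that the fitness function must return a tuple:

import random
from deap import base, creator, tools, algorithms

creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", list, fitness=creator.FitnessMin)

toolbox = base.Toolbox()
toolbox.register("attr_float", random.uniform, -100, 100)
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attr_float, n=2)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)

def evaluate(ind):
    x, y = ind
    return (x ** 2 + y ** 2,)        # trailing comma: DEAP fitnesses are tuples

toolbox.register("evaluate", evaluate)
toolbox.register("mate", tools.cxBlend, alpha=0.5)
toolbox.register("mutate", tools.mutGaussian, mu=0, sigma=1, indpb=0.2)
toolbox.register("select", tools.selTournament, tournsize=3)

pop = toolbox.population(n=100)
algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=100, verbose=False)
best = tools.selBest(pop, 1)[0]
print("Minimum found X =", best[0], ", Y =", best[1])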
H: Is data partitioning necessary for an explanatory model and why? I've come accross the following paragraph in the To Explain or To Predict? paper by Galit Shmueli. In explanatory modeling, data partitioning is less common [than in predictive modeling] because of the reduction in statistical power. When used, it is usually done for the retrospective purpose of assessing the robustness of ˆf. A rarer yet important use of data partitioning in explanatory modeling is for strengthening model validity, by demonstrating some predictive power. Although one would not expect an explanatory model to be optimal in terms of predictive power, it should show some degree of accuracy. I understand why data partitioning is useful in the case of a predictive model, which is to assess the generalization capacity of a model. However, in the case of an explanatory model, I don't understand why it should show some degree of accuracy in terms of predictive power since it's not the objective of the model. Here comes my question: is data partitioning necessary for an explanatory model and why? AI: An explanatory model is used to identify and explain what causes some particular outcome (i.e. identify/quantify the drivers of effect). Even though the model will not be used for prediction, it needs to be accurate enough to adequately predict the response so that you can be sure the conclusions you draw from the model are valid. As an extreme example, if your explanatory model does not predict the outcome at all, then all the inferences you make from this model are useless. So then, how does one validate model accuracy? In most cases, data partitioning and validation on the hold-out sample is used; in your case the validation is more of a sense-check for accuracy.
H: what actually word embedding dimensions values represent? I am learning word2vec and word embedding , I have downloaded GloVe pre-trained word embedding (shape 40,000 x 50) and using this function to extract information from that: import numpy as np def loadGloveModel(gloveFile): print ("Loading Glove Model") f = open(gloveFile,'r') model = {} for line in f: splitLine = line.split() word = splitLine[0] embedding = np.array([float(val) for val in splitLine[1:]]) model[word] = embedding print ("Done.",len(model)," words loaded!") return model Now if I call this function for word 'hello' something like : print(loadGloveModel('glove.6B.100d.txt')['hello']) it gives me 1x50 shape vector like this: [ 0.26688 0.39632 0.6169 -0.77451 -0.1039 0.26697 0.2788 0.30992 0.0054685 -0.085256 0.73602 -0.098432 0.5479 -0.030305 0.33479 0.14094 -0.0070003 0.32569 0.22902 0.46557 -0.19531 0.37491 -0.7139 -0.51775 0.77039 1.0881 -0.66011 -0.16234 0.9119 0.21046 0.047494 1.0019 1.1133 0.70094 -0.08696 0.47571 0.1636 -0.44469 0.4469 -0.93817 0.013101 0.085964 -0.67456 0.49662 -0.037827 -0.11038 -0.28612 0.074606 -0.31527 -0.093774 -0.57069 0.66865 0.45307 -0.34154 -0.7166 -0.75273 0.075212 0.57903 -0.1191 -0.11379 -0.10026 0.71341 -1.1574 -0.74026 0.40452 0.18023 0.21449 0.37638 0.11239 -0.53639 -0.025092 0.31886 -0.25013 -0.63283 -0.011843 1.377 0.86013 0.20476 -0.36815 -0.68874 0.53512 -0.46556 0.27389 0.4118 -0.854 -0.046288 0.11304 -0.27326 0.15636 -0.20334 0.53586 0.59784 0.60469 0.13735 0.42232 -0.61279 -0.38486 0.35842 -0.48464 0.30728 ] Now I am not getting what actually these values represent , ( I know its result of hidden layer of single layer neural network ) but my confusion is what actually these weights represent and how it is useful for me? Because what I was getting suppose if I have : Here I understand because each word is mapping to each column category label, But in the GloVe there are no columns labels for 50 columns, it just returns 50 values vector, so what actually these vectors represent and what i can do with it? I am trying to find this since 4-5 hours but everyone/every tutorial on the internet explaining what are word embedding and how they looks like but no one explaining what actually these weights represent? AI: These columns are actually arbitrary, they do not represent anything for humans. However, it does not mean they are useless, quite the opposite - computers can extract features from this highly dimensional space more easily, such as in neural networks. To extract information which is useful for humans, I recommend to have a look at examples in original paper by Tomas Mikolov: We can have a look at common capital city. Athens is to Greece (in sense that they have similar relationship) as is Oslo to Norway. We can use afterwards vectors obtained using GloVe and construct $$v(Athens) - v(Greece) + v(Norway)$$ and this vector in 'globally' (meaning that trained on universal dataset, such as wikipedia) trained model would be closest to $v(Oslo)$.
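To see this with the vectors you already loaded, a small sketch using cosine similarity (it assumes the model dictionary returned by your loadGloveModel function; note that the GloVe 6B vocabulary is lowercased):

import numpy as np

model = loadGloveModel('glove.6B.100d.txt')

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

query = model['athens'] - model['greece'] + model['norway']

# Rank the vocabulary by similarity to the query vector (slow, but fine for a demo)
best = sorted(model.keys(), key=lambda w: cosine(model[w], query), reverse=True)[:5]
print(best)   # 'oslo' should show up near the top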
H: Examples on kohonen self organizing maps Is there a simple example to start with for using kohonen 1.1.2, or is the test file the only reference? AI: There are alternative solutions for self organizing maps. The best of them that I found is pymvpa, where the example is easy to read and understand. It is also maintained quite actively, as you can see from their GitHub. I tried to run the kohonen 1.1.2 test file, but it did not run after two days of trying. So, let's try the other solution instead. To run the pyMVPA example som.py, you have to do (at least) the following: 1 install some prerequisites: at least numpy, scipy, nibabel and swig, to be able to run setup.py 2 run the commands: python setup.py build python setup.py install 3 modify the example under doc/examples (som.py) to have import matplotlib.pyplot as pl 4 run the command: python som.py Note that this works only with Python 2; python3 cannot be used as the command.
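If you mainly want a quick, self-contained SOM example in Python, another option (not covered above) is the minisom package (pip install minisom); a sketch on the Iris data, with arbitrary grid size and training length:

import numpy as np
from minisom import MiniSom
from sklearn import datasets

data = datasets.load_iris()['data']
data = (data - data.mean(axis=0)) / data.std(axis=0)   # simple standardization

som = MiniSom(6, 6, data.shape[1], sigma=1.0, learning_rate=0.5)
som.random_weights_init(data)
som.train_random(data, 1000)

print(som.winner(data[0]))   # grid coordinates of the best matching unit for the first sample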
H: What is plt.subplots() doing here? Below is the code I am trying to execute. x = np.random.normal(size=1000) fig, ax = plt.subplots() H = ax.hist(x, bins=50, alpha=0.5, histtype='stepfilled') Can anybody elaborate on what fig, ax = plt.subplots() is doing here? Thank you. AI: As you can read from here, plt.subplots() is a function that returns a tuple containing a figure and axes object(s). Thus when using fig, ax = plt.subplots() you unpack this tuple into the variables fig and ax. Having fig is useful if you want to change figure-level attributes or save the figure as an image file later (e.g. with fig.savefig('yourfilename.png')). You certainly don't have to use the returned figure object, but many people do use it later, so it's common to see. Also, all axes objects (the objects that have plotting methods) have a parent figure object. As for the rest of the line, the bins parameter of ax.hist simply sets the number of ranges (bins) into which the data are accumulated.
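The same unpacking generalizes to figures with several axes, which is where keeping the fig handle really pays off; a small sketch with random data and an arbitrary layout:

import numpy as np
import matplotlib.pyplot as plt

x = np.random.normal(size=1000)

fig, axes = plt.subplots(1, 2, figsize=(8, 3))   # one figure, two axes side by side
axes[0].hist(x, bins=50)
axes[1].plot(np.sort(x))
fig.savefig('example.png')                        # a figure-level operation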
H: twitter data analysis? I am involved in twitter analysis data. I want to find trending topics in tweets with some hashtags, like #finance or #technology. I have a hugh data set of tweets and now I need to analyze them. I need to recognize topics, if there are. They way I'm approaching this is, first, performing a vector representation of each tweet, with a tfidf technique, and then, build groups of them based on their cosine similarity. Are there common techniques in tweets analysis? AI: I believe that the algorithm that you want to use is something called a latent dirichlet allocation (LDA) model. This model is designed to uncover the topics in a corpus of documents. Scikit learn has an implementation. They even have a tutorial which teaches you how to extract topics. The tutorial also describes Non-negative Matrix Factorization (NNMF) as a method to extract the topics. I can't vouch for this algorithm, because I haven't used it personally (as opposed to LDA which I have used before), but from their tutorial NNMF does seem to give reasonable results. Using cosine similarity will help you to group tweets that are most similar, but it wouldn't give you their topics. Which may be what you want? It really is hard to say, because only you would know how you should have the system behave. Unfortunately, that doesn't help you figure out what is trending, and you will need to do some heavy post-processing to make whatever algorithm you use spit out something that is useful to you. Good luck!
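A minimal scikit-learn sketch of the LDA route (tweets is assumed to be a list of raw tweet strings; the vocabulary size and number of topics are arbitrary, and on older scikit-learn versions get_feature_names_out() is get_feature_names()):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

vec = CountVectorizer(stop_words='english', max_features=5000)
X = vec.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=10, random_state=0)
lda.fit(X)

words = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [words[i] for i in topic.argsort()[-10:]]
    print('Topic', k, ':', top)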
H: Which is the best Machine learning technique for this Load forecasting problem? I am trying to use Machine Learning to predict the load of a residence at any point in time for a whole year. I have past data pertaining to that house. So I have the training data, and I need the algorithm to predict future loads of the house. Based on my knowledge, I have found the "supervised" machine learning technique to be the one I must adopt. I figured this out since I have labelled test data, I have a prediction requirement and I can get feedback for my prediction (cross-checking with the actual value). Am I correct here? Also, I read online that "Unsupervised" learning is to be used at places where we need to find "hidden data structure". I assume it means patterns. If so, what is the difference between unsupervised and supervised learning in my case? Both of them will give me a prediction about the future load pertaining to that house at any point in time. My background: I am doing my Masters in EE (Power systems). I am new to Machine Learning as well. AI: I think this is a time series forecasting problem, because you want to predict the future load of the residence from the past load data of that residence over time. In my experience, I recommend using an LSTM RNN for the solution. An LSTM is well-suited to classify, process and predict time series given time lags of unknown size and duration between important events. Of course, this is supervised learning. As for the difference between supervised and unsupervised learning in your case: you don't need unsupervised learning for this problem, because you already have a dataset with inputs and the correct outputs. If you train your machine learning task with a corresponding target for every input, it is called supervised learning, and after sufficient training it will be able to provide a target for any new input. On the contrary, if you train your machine learning task only with a set of inputs, it is called unsupervised learning, which will be able to find structure or relationships between the different inputs. That says you should use supervised learning - an RNN or a CNN. PS: as for frameworks, I recommend using TensorFlow or Keras.
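A minimal Keras sketch of such an LSTM; the sizes are placeholders, and you would still need to window your series into an array of shape (samples, timesteps, n_features) first:

from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(32, input_shape=(timesteps, n_features)))   # e.g. 24 hourly steps, k input variables
model.add(Dense(1))                                        # predicted load
model.compile(optimizer='adam', loss='mse')
model.fit(X_train, y_train, epochs=20, batch_size=32, validation_split=0.1)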
H: Do DBMS decrease Memory requirements? I finished my Economics thesis using RStudio, but my script was very slow due to massive RAM consumption during the process. My Case I had a massive dataset (stock prices in daily frequency for 10 years, ~700 stocks i.e. $3500\times700$) and I was picking each stock as a vector to decompose it into wavelets and CF filter (2 datasets $28000\times700$) and apply benford's law (two datasets $9\times700$). The Problem RStudio was storing my datasets in memory and they were consuming a significant proportion just by touching them. Question I started learning basic SQL commands and I found out that I can call specific columns from a certain table. Would my script be more efficient if I was calling my stocks one by one as vectors from there instead of picking them directly from RStudio? In other words, do queries call the whole dataset and then retrieve the requested values or do they follow a kind of shortcut to be memory efficient? If not, what's the purpose of using databases for domestic use? AI: Welcome to the site! Would my script be more efficient if I was calling my stocks one by one as vectors from there instead of picking them directly from RStudio? Yes as you told, calling the specific columns from Database is better than extracting everything. I generally use Dataframes rather than vectors. Dataframes are very efficient and the transformation using Dataframes is much easier and better. You can go through this link, for better understanding on Vector Vs Dataframe. Currently, I use 1 year data and consists of 100,000 records. Query takes like 8 minutes to extract the data in Rstudio using SQL DB and store it in an R-dataframe. After the extraction is over I don't hit the database meaning no read and write on the database by which the DB Server is not effected. After doing all the modeling and finally I commit the data into SQL DB to store the committed data(results). In other words, do queries call the whole dataset and then retrieve the requested values or do they follow a kind of shortcut to be memory efficient? It depends on how you write the query, in my experience when I tried using temp-tables and is much efficient but when tried running that query in R but it wasn't supporting so had to stick to nested queries. If you don't write the nested queries in an optimized manner then it takes more time to extract the data(memory lekage).
H: Euclidean vs. cosine similarity I have a text dataset which I vectorize using a tfidf technique and now in order to make a cluster analysis I am measuring distances between these vector representations. I have found that a common technique is to measure distance using cosine similarity, and when I ask why euclidean distance is not used, the common answer is that cosine similarity works better when vectors have different magnitude. Since my text vectorized representation is normalized I wonder which is the advantage of using cosine similarity over euclidean distance in order to cluster my data? AI: On L2 normalized data it is an easy and good exercise to prove that they are equivalent. So you should try to solve the math yourself. Hint: use squared Euclidean. Note that it is common with tfidf to not have normalized data because of various technical reasons, e.g., when using inverted indexes in text search. Furthermore, cosine is faster on very sparse data.
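Working out the hint: for L2-normalized vectors $\|x\| = \|y\| = 1$, so $\|x - y\|^2 = \|x\|^2 + \|y\|^2 - 2\,x^\top y = 2 - 2\cos(x, y)$. In other words, on normalized data squared Euclidean distance is a monotone (decreasing) function of cosine similarity, so both give the same nearest-neighbour ordering and essentially the same clustering behaviour.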
H: Understanding Contrastive Divergence I’m trying to understand, and eventually build a Restricted Boltzmann Machine. I understand that the update rule - that is the algorithm used to change the weights - is something called “contrastive divergence”. I looked this up on Wikipedia and found these steps: Take a training sample v, compute the probabilities of the hidden units and sample a hidden activation vector h from this probability distribution. Compute the outer product of v and h and call this the positive gradient. From h, sample a reconstruction v' of the visible units, then resample the hidden activations h' from this. (Gibbs sampling step) Compute the outer product of v' and h' and call this the negative gradient. ... I don’t understand step 3 and I’m struggling to grasp the concept of Gibbs sampling. Would someone explain this simply to me? I have covered neural networks if that helps you. AI: Gibbs sampling is an example for the more general Markov chain Monte Carlo methods to sample from distribution in a high-dimensional space. To explain this, I will first have to introduce the term state space. Recall that a Boltzmann machine is built out of binary units, i.e. every unit can be in one of two states - say 0 and 1. The overall state of the network is then specified by the state for every unit, i.e. the states of the network can be described as points in the space $\{0,1\}^N$, where N is the number of units in the network. This point is called the state space. Now, on that state space, we can define a probability distribution. The details are not so important, but what you essentially do is that you define energy for every state and turn that into a probability distribution using a Boltzmann distribution. Thus there will be states that are likely and other states that are less likely. A Gibbs sampler is now a procedure to produce a sample, i.e. a sequence $X_n$ of states such that, roughly speaking, the distribution of these states across the state space reflects the probability distribution. Thus you want most of the $X_n$ to be in regions of the state space with high probability (and low energy), and few of them to be in regions with low probability (and high energy). To do this, a naive Gibbs sampling approach would proceed as follows. You start with some state $X_0$. To find the state $X_1$, you would pick some unit and calculate the conditional probability for that unit to be in state 1 ("on") conditional on the current value of all other units. Call this number p. You would then set the unit to 1 with probability p and pick the next unit to repeat this to get from $X_1$ to $X_2$ and so forth. In the special case of a restricted Boltzmann machine, this can be greatly simplified. Instead of going through, say, first all hidden units and then all visible units and update them like this one by one, you can, in fact, update all hidden units in one step and all visible units in one step, because any two hidden units and any two visible units are independent. 
Thus, for a full cycle through all units of the state space, you would: calculate the probability for all hidden units to be 1, given the value of the visible units set the hidden units to 1 with this probability calculate the probability for the visible units to be 1, again conditional on the value of the hidden units, and set the visible units to 1 with this probability This constitutes one full Gibbs sampling step and is your step 1 + the first part of 3 (the second part is then needed for the further calculation and not part of the sampling). The reason why we do this in the CD algorithm is that we actually want to approximate an expectation value and use a sampler for this. This is a complex topic and hard to summarize in a few sentences. If you want to learn more about the mathematics behind this (Markov chains) and on the application to RBMs (contrastive divergence and persistent contrastive divergence), you might find this and this document helpful - these are some notes that I put together while learning about this.
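To tie this back to the CD-1 procedure quoted in the question, here is a compact numpy sketch of one full step for a tiny RBM; the sizes, initialization and the use of probabilities vs. samples in the gradients are illustrative choices, not the only convention:

import numpy as np

rng = np.random.RandomState(0)
n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(0, 0.01, (n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

v0 = rng.binomial(1, 0.5, n_visible).astype(float)    # a training sample

# Step 1: hidden probabilities and a sampled hidden vector
h_prob = sigmoid(v0 @ W + b_h)
h0 = (rng.rand(n_hidden) < h_prob).astype(float)

# Step 2: positive gradient
pos_grad = np.outer(v0, h_prob)

# Step 3 (the Gibbs step): reconstruct the visibles, then recompute hidden probabilities
v_prob = sigmoid(h0 @ W.T + b_v)
v1 = (rng.rand(n_visible) < v_prob).astype(float)
h1_prob = sigmoid(v1 @ W + b_h)

# Step 4: negative gradient, then the CD-1 weight update
neg_grad = np.outer(v1, h1_prob)
W += lr * (pos_grad - neg_grad)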
H: How does the number of trees effect the prediction time in gradient boost classification trees? After tuning hyper-parameters for a gradient boosted model, I have found that the best tree count (iterations) is a few thousand. I'm worried that such a high count might impact prediction performance. Can someone explain the relation between trees count and prediction time? AI: The time it takes to get a prediction from a model of gradient boosted classification trees should be linear in the number of trees. So getting predictions from a model with 1000 trees should take about twice as long as 500 trees, and about half as long as 2000 trees. You'll need to test it yourself and check if it's fast enough for your use case. Modern libraries like xgboost can handle high numbers of trees with remarkable efficiency. One thing to keep in mind is that it's generally faster to get predictions for a whole block of test examples at once than it is to get the predictions one at a time.
H: What is one hot encoding in tensorflow? I am currently doing a course in tensorflow in which they used tf.one_hot(indices, depth). Now I don't understand how these indices change into that binary sequence. Can somebody please explain to me the exact process??? AI: Suppose you have a categorical feature in your dataset (e.g. color), and your samples can be either red, yellow or blue. In order to pass this feature to a ML algorithm, you first need to encode it so that instead of strings you have numbers. The easiest way to do such a thing is to create a mapping where: red --> 1 yellow --> 2 blue --> 3 and replace each string with its mapped value. However this might create unwanted side effects in our ML model, as when dealing with numbers it might think that blue > yellow (because 3 > 2) or that red + yellow = blue (because 1 + 2 = 3). The model has no way of knowing that these data were categorical and then were mapped as integers. The solution to this problem is one-hot encoding, where we create N new features, where N is the number of unique values in the original feature. In our example N would be equal to 3, because we have 3 unique colors (red, yellow and blue). Each of these features would be binary and would correspond to one of these unique values. In our example the first feature would be a binary feature telling us if that sample is red or not, the second would be the same thing for yellow and the third for blue. Such a transformation would map red to [1, 0, 0], yellow to [0, 1, 0] and blue to [0, 0, 1]. Note that because this approach increases the dimensionality of the dataset, if we have a feature that takes many unique values, we may want to use a more sparse encoding (like the one I presented above).
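Back to tf.one_hot itself: the indices argument is exactly those integer category codes, and depth is the number of categories; each index i becomes a vector of length depth with a 1 at position i and 0 elsewhere. A tiny sketch, assuming TensorFlow 2.x eager execution:

import tensorflow as tf

indices = [0, 2, 1, 0]          # e.g. red=0, yellow=1, blue=2
one_hot = tf.one_hot(indices, depth=3)
print(one_hot.numpy())
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]
#  [1. 0. 0.]]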
H: How to learn from multiple data sources with different input variables but the same underlying pattern? I will explain with an example: Let's say you have 2 factories that produce pulp paper. Each have similar processes where the laws of physics give the same outcome. Now let's say this 2 factories have equipment and sensors from different manufacturers, so the output of those sensors is not comparable in any way (different number of variables, different metric system etc.). Although for both factories I can caculate the output easily and determine the learning target in a comparable way (eg. metric tonnes of paper). Is there a way of using deep learning to learn from both datasets at the same time? I mean increase the predictive power upon a sample from factory 1 due to insights on factory 2? What about having 3 DNN, 2 for reducing feature representation and standardizing output representation and the third one for learning the general pattern common to both and predicting the final output? AI: What you are referring to is multi-view learning. Multi-view learning basically tells us how multiple data sources or multiple feature subsets can be combined to create a more robust learning curve for the algorithm. In recent years, starting from 2013 a lot of research has been carried in this rapidly growing field. A good introduction to the topic can be found in the link below. It contains a more theoretical and mathematical approach to understanding the method. http://research.ics.aalto.fi/airc/reports/R1011/msml.pdf
H: Question on bias-variance tradeoff and means of optimization So I was wondering how one can, for example, best optimize the model they are trying to build when confronted with issues presented by high bias or high variance. Now, of course, you can play with the regularization parameter to get to a satisfying end, but I was wondering whether it is possible to do this without relying on regularization. If b is the estimated bias of a model and v its variance, wouldn't it make sense to try to minimize b*v? AI: There are a lot of ways bias and variance can be minimized, and despite the popular saying it isn't always a tradeoff. The two main reasons for high bias are insufficient model capacity and underfitting because the training phase wasn't complete. For example, if you have a very complex problem to solve (e.g. image recognition) and you use a model of low capacity (e.g. linear regression), this model would have high bias as a result of not being able to grasp the complexity of the problem. The main reason for high variance is overfitting on the training set. That being said, there are ways of reducing both bias and variance in a ML model. For example, the easiest way of achieving this is getting more data (in some cases even synthetic data helps). What we tend to do in practice is: First, we increase the capacity of the model in order to reduce the error on the training set (the bias) as much as possible. In other words, we want to make the model overfit (even reach a loss of 0 on the training set). This is done because we want to make sure the model has the capacity to sufficiently understand the data. Then we try to reduce the variance, i.e. the overfitting. This is done through regularization (early stopping, norm penalties, dropout, etc.)
H: How to delete entire row if values in a column are NaN I'd like to drop all the rows containing a NaN values pertaining to a column. Lets assume I have a dataset like this: Age Height Weight Gender 12 5'7 NaN M NaN 5'8 160 M 32 5'5 165 NaN 21 NaN 155 F 55 5'10 170 NaN I want to remove all the rows where 'Gender' has NaN values. The output i'd like: Age Height Weight Gender 12 5'7 NaN M NaN 5'8 160 M 21 NaN 155 F Thanks in advance! AI: Well if the dataset is not too large I would suggest using pandas to clean the data. So you would need to first do Python2 python2 -m pip install pandas Python3 python3 -m pip install pandas If you already have anaconda installed you can skip the above step. Next you could go through an IDE (like jupyter) or through the shell type the following commands import pandas as pd df = pd.read_csv("filename", dtype=str) #or if excel file #df = pd.read_excel("filename", dtype=str) df = df[pd.notnull(df['Gender'])] Then you would want to save your result in a file with df.to_csv("newfile");
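An equivalent and slightly more idiomatic alternative to the pd.notnull filter is dropna with the subset argument:

df = df.dropna(subset=['Gender'])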
H: Is there a way to replace existing values with NaN I'm experimenting with the algorithms in iPython Notebooks and would like to know if I can replace the existing values in a dataset with Nan (about 50% or more) at random positions with each column having different proportions of Nan values. I'm using the Iris dataset for this experimentation to see how the algorithms work and which one works the best. The link for the dataset is here. Thanks in advance for the help. AI: Randomly replace values in a numpy array # The dataset data = pd.read_csv('iris.data') mat = data.iloc[:,:4].as_matrix() Set the number of values to replace. For example 20%: # Edit: changed len(mat) for mat.size prop = int(mat.size * 0.2) Randomly choose indices of the numpy array: i = [random.choice(range(mat.shape[0])) for _ in range(prop)] j = [random.choice(range(mat.shape[1])) for _ in range(prop)] Change values with NaN mat[i,j] = np.NaN Dropout for any array dimension Another way to do that with an array of more than 2 dimensions would be to use the numpy.put() function: import numpy as np import random from sklearn import datasets data = datasets.load_iris()['data'] def dropout(a, percent): # create a copy mat = a.copy() # number of values to replace prop = int(mat.size * percent) # indices to mask mask = random.sample(range(mat.size), prop) # replace with NaN np.put(mat, mask, [np.NaN]*len(mask)) return mat This function returns a modified array: modified = dropout(data, 0.2) We can verify that the correct number of values have been modified: np.sum(np.isnan(modified))/float(data.size) [out]: 0.2
H: How can I transpose a high dimensional dataset? I have a (.csv) file with more than 35,000 rows and 100 columns, where the rows represent the attributes and the columns represent the instances. In Excel the maximum number of columns is 16,384, therefore it is impossible to do this in one single Excel sheet. I need my dataset to be transposed in order to perform some machine learning algorithms in WEKA. Is there any tool that allows such a thing? Also, is there a way to transpose a (.arff) file in WEKA? AI: This is possible to do in WEKA, but it only works if: the first column of your data is of string or nominal type, and the rest of your columns are numeric. Open the dataset in the explorer, and first remove the class attribute by choosing 'No Class' from the drop-down menu on the right. Then, apply the filter from filters.unsupervised.attribute.Transpose. After this, you should re-apply the class attribute by re-selecting the class attribute in the drop-down menu. If your data doesn't meet the requirements listed above, I would recommend using this short python script to do it: import pandas as pd df = pd.read_csv('your_file_here.csv', header=None) df.transpose().to_csv('your_new_file_here.csv', index=False, header=False) (header=None is used so that the first line of the file is treated as data rather than as column names) and then loading the new file into WEKA.
H: Right ML mode and metric to minimize FN and FP on imbalanced dataset So I have a dataset in which I have to predict class binary label (1 or 0), the problem, out of 120k data points, only 200 have the label '1'. the aim is to minimize FN and FP. Which ML model should I use? Gradient boost or XGBoost or logistic regression. How do I calculate the class weights? Which accuracy metric captures the minimization of FP and FN? AI: If you use class weights, I don't think it would make much of a difference which model you use (regarding the imbalance). If you are familiar with python, I would suggest this sklearn function that could help you out in computing the class weights. However, I have found that oversampling (or undersampling) often works better than class weights. Consider using SMOTE for this. While there are metrics and loss functions better suited at handling class imbalance (GMS, quadratic kappa, etc.) I don't think you would have an issue with using any loss if you considered oversampling your data.
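To make the two suggestions concrete, a short sketch (scikit-learn for the class weights, imbalanced-learn for SMOTE; exact signatures can differ slightly between versions, and X_train/y_train are assumed to be your training split):

import numpy as np
from sklearn.utils.class_weight import compute_class_weight
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import SMOTE

weights = compute_class_weight(class_weight='balanced', classes=np.unique(y_train), y=y_train)
print(dict(zip(np.unique(y_train), weights)))     # roughly n_samples / (n_classes * class_count)

clf = LogisticRegression(class_weight='balanced').fit(X_train, y_train)   # weighted loss

X_res, y_res = SMOTE().fit_resample(X_train, y_train)                     # oversampled training set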
H: Why not always use the ADAM optimization technique? It seems the Adaptive Moment Estimation (Adam) optimizer nearly always works better (faster and more reliably reaching a global minimum) when minimising the cost function in training neural nets. Why not always use Adam? Why even bother using RMSProp or momentum optimizers? AI: Here’s a blog post reviewing an article claiming that SGD generalizes better than Adam. There is often value in using more than one method (an ensemble), because every method has a weakness.
H: How to load a csv file into [Pandas] dataframe if computer runs out of RAM? I have been trying to train a neural network, but my computer is always running out of RAM when I'm loading the dataframe with Pandas. It's a .csv file that is 7+ GB. I wanted to try some primitive batching, but in order to one-hot encode I need to find the number of all unique values, which I can't do without loading the data into a dataframe first. Are there any other tools that I can use to attempt loading the file into a dataframe? Does Pyspark also have a limit at which it starts crashing? I know that it's capable of breaking down operations into stages; does that help with RAM management or just execution? AI: As noted many times by the writers of pandas, the ideal amount of memory for analyzing with pandas is around 5-10 times the size of your data. That being said, if you can afford to load data in chunks for pre-processing and keep only the columns that are needed, I recommend the chunked-reading option of pandas.read_csv / pandas.read_table (see the chunksize parameter) to load data in pieces. pyspark, as you mentioned, might be good to go with. But there is dask, which is built around the Python stack of pandas and NumPy for distributed workloads and has support for pandas.DataFrame and numpy.array. But I never followed anyone who was successful in using it in production (or at least mentioned that they had used it). Maybe some of the people here can vouch for it. There is another library called SFrame for loading data that is way out of memory, as the author suggested in one of his presentations; it keeps serializing data onto disk. Hope I summarized it well.
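As a rough sketch of the chunked-reading idea (the file path and column name are made up for illustration), you can collect the unique values needed for one-hot encoding without ever holding the full file in memory:
import pandas as pd

unique_vals = set()
# read the 7 GB file in pieces of 100k rows, keeping only the column of interest
for chunk in pd.read_csv('big_file.csv', usecols=['category_col'], chunksize=100000):
    unique_vals.update(chunk['category_col'].unique())

print(len(unique_vals))   # vocabulary size needed for one-hot encoding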
H: Standardization and Normalization Which machine learning algorithms need the data to be standardised/normalised before being fed into the model? How do we determine whether a particular model/dataset needs to be standardised/normalised? Thank you. AI: Whenever your features are on different scales and the algorithm is sensitive to scale (for example distance-based or gradient-based methods), you should standardize your features. Take a look here.
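A minimal scikit-learn sketch of the usual pattern (fit the scaler on the training split only, then apply the same parameters to the test split; the variable names are placeholders):
from sklearn.preprocessing import StandardScaler, MinMaxScaler

scaler = StandardScaler()                         # or MinMaxScaler() for [0, 1] scaling
X_train_scaled = scaler.fit_transform(X_train)    # learn mean/std from training data only
X_test_scaled = scaler.transform(X_test)          # reuse the same parameters on test data
As a rule of thumb, distance- and gradient-based methods (k-NN, SVM, neural networks, k-means, PCA) usually need this, while tree-based models generally do not.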
H: Is there a quick way to check for multicollinearity between categorical variables in R? I have a large amount of categorical and dummy variables (36) and I would like to remove a number of them based on their multicollinearity (or just collinearity). Instead of using Chi Square tests over and over again, are there any functions that can check for (multi)collinearity in my variables and return variables with multicollinearity (or collinearity)? AI: In my work I usually use Normalized Mutual Information (NMI) to get an understanding of how "correlated" two categorical variables are. Normalized Mutual Information is an information-theoretic measure that tells you how much information is shared by two variables. If NMI is close to 1, the two variables are very "correlated", while if NMI is close to 0 the two variables are "uncorrelated". I wrote this function that computes the NMI between the first two variables in a data.table. It is quite fast thanks to data.table. Feel free to use it! compute_mutualinfo <- function(df){ require(data.table) require(entropy) var1 = names(df)[1] var2 = names(df)[2] tmp_tab <- df[,.N,by=c(var1,var2)] names(tmp_tab)[1:2] <- c('V1','V2') cross_tab <- tapply(tmp_tab$N,list(addNA(as.factor(tmp_tab$V2),ifany = T), addNA(as.factor(tmp_tab$V1),ifany = T)), sum) cross_tab[is.na(cross_tab)] <- 0 mi.plugin(cross_tab)/sqrt(entropy(df[,.N,by=var1,with=T][,N])*entropy(df[,.N,by=var2,with=T][,N])) }
H: How to predict customer's next purchase Suppose we want to predict what customer will buy during his next visit to the Electronic Shop based on his past purchase history. I know that it is a very broad question, but I am new to machine learning and don't have much idea about how to approach this problem. The simplest thing that comes to my mind is to find the most frequent items that customer has bought and suggest it. However, I don't think that this is a very robust approach as it doesn't consider this scenario: Computer (1st Purchase) -> Mouse (2nd Purchase) -> Mouse Pad (3rd Purchase) I am looking for a simple model to get started and scale in terms of features and training data. I would love to hear suggestions of experienced Data Scientist as it is a most common problem. Thank you. AI: Take a look at association rule learning (https://en.wikipedia.org/wiki/Association_rule_learning). A really common algorithm is the Apriori agorithm. You could use the package apyori, it works great: https://pypi.python.org/pypi/apyori/1.1.1
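A small sketch of how apyori is typically used (the transaction list below is invented for illustration, and the support/confidence thresholds must be tuned for your own data):
from apyori import apriori

# each inner list is one customer's purchase history
transactions = [
    ['computer', 'mouse'],
    ['computer', 'mouse', 'mouse pad'],
    ['mouse', 'mouse pad'],
]

rules = list(apriori(transactions, min_support=0.3, min_confidence=0.6))
for rule in rules:
    print(rule)   # each result lists the itemset, its support, and the derived rules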
H: Questions on ensemble technique in machine learning I am studying the ensemble machine learning and when I read some articles online, I encountered 2 questions. 1. In this article, it mentions Instead, model 2 may have a better overall performance on all the data points, but it has worse performance on the very set of points where model 1 is better. The idea is to combine these two models where they perform the best. This is why creating out-of-sample predictions have a higher chance of capturing distinct regions where each model performs the best. But I still cannot get the point, why not train all training data can avoid the problem? 2. From this article, in the prediction section, it mentions Simply, for a given input data point, all we need to do is to pass it through the M base-learners and get M number of predictions, and send those M predictions through the meta-learner as inputs But in the training process, we use k -fold train data to train M base-learner, so should I also train M base-learner based on all train data for the input to predict? AI: Instead, model 2 may have a better overall performance on all the data points, but it has worse performance on the very set of points where model 1 is better. The idea is to combine these two models where they perform the best. This is why creating out-of-sample predictions have a higher chance of capturing distinct regions where each model performs the best. It's not about training on all the data or not. Both models trained on all the data. But each of them is better than the other at different points. If I and my older brother are tying to guess the exact year of a song, I will do better in 90s songs and he in 80s songs - it's not a perfect analogy but you get the point - imagine my brain just can't process 80s songs, and his can't process 90s songs. The best is to deploy us both knowing we each have learnt different regions of the input space better. Simply, for a given input data point, all we need to do is to pass it through the M base-learners and get M number of predictions, and send those M predictions through the meta-learner as inputs k-fold is still just one learner. But you're training multiple times to chose parameters that minimize error in the left-out fold. This is like training only me on all the songs showing me k-1 folds of data, and I calibrate my internal model the best I can... but I'll still never be very good at those 80s songs. I'm just one base learner whose functional form (my brain) isn't fit for those songs. If we could bring the second learner along, that would improve things.
H: What is the difference between detrend and normalization? The matlab function detrend subtracts the mean from data. If data contains several data columns, detrend treats each data column separately. One of the normalization techniques is subtracting the mean and dividing by the standard deviation. Since normalization already subtracts the mean from the data, in such a case, is it essential to perform the detrend operation (before or after)? What is the significance of each operation? AI: You detrend data in order to get rid of the linear trend in your data, which might cause spurious regression, i.e. misleading evidence that there is some relation between variables. Normalization means adjusting values measured on different scales to a notionally common scale. The example you give, the standard score $\frac{X - \mu}{\sigma}$, is just one possible method of normalization. It allows you to see where a value lies in comparison to the mean. In my experience you would only detrend data in order to create a time series model. On the other hand, normalization is frequently used to compare previously not comparable statistics or to detect anomalies (in the case of the standard score). As a result, usage really depends on the use case and you rarely use both at the same time. Wikipedia is a good enough source to clarify this ambiguity.
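A short numerical illustration of the two operations (assuming NumPy and SciPy are available; the series is synthetic):
import numpy as np
from scipy.signal import detrend

t = np.arange(100)
x = 0.5 * t + np.random.randn(100)      # series with a linear trend plus noise

detrended = detrend(x)                   # removes the best-fit linear trend
zscored = (x - x.mean()) / x.std()       # normalization: zero mean, unit variance

# the z-scored series still contains the trend, only rescaled;
# the detrended series no longer trends upward but keeps its original units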
H: What if my validation set is worse than my training? I am running a CNN, on the 1st epoch my training set accuracy is 15% and validation set is 12%, by the 51st epoch my training accuracy is 87% and validation set is 13%. What is happening? What does it mean if my validation set is lower or greater than my training set ?? (my test set gives 12%) AI: This is a common high-variance problem due to overfitting. Simply put: Good training accuracy together with low dev set (or in your terminology the validation set) accuracy it means you are expressing your training data very well, whereas your model fails to perform well on unseen data (High-variance Problem). More general: there are few other combinations of this high-/low-bias together with high-/low-variance that are explained in length in the context of Bias-Variance Trade-off by Andrew Ng in his first Machine Learning course and even more relevant to Convolutional Neural Networks in his last DeepLearning course, see this video. You also easily find many blog posts discussing about these stuffs like this blog post, or this one. What you see here is quite alright. If fact it is highly recommended practice that you start by a model capable of describing your training set very well first (even overfit). Then if overfit meaning there is a large difference in accuracy between train/dev sets (similar to your scenario), you start penalizing the model (aka regularization). Once again you can find tons of materials on how to regularize CNN models. On the top of your search you will find Dropout, Max-Pooling. See here for example. I also suggest to pay attention to your data split distributions. It is a very fundamental concept that often paid less attention to. Violating this assumption (aka data split mismatch) could lead to problems that could appear in any forms like large accuracy difference of model on data splits. Maybe check Ng's short video.
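As a hedged Keras sketch of the regularization advice above (the layer sizes, dropout rates and input shape are arbitrary placeholders, not a recommendation for your specific network):
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    MaxPooling2D((2, 2)),
    Dropout(0.25),                  # drop 25% of activations during training
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),                   # heavier dropout just before the classifier
    Dense(10, activation='softmax')
])
If heavy dropout still leaves a huge train/validation gap, it is worth double-checking that the validation split really comes from the same distribution as the training split, as mentioned at the end of the answer.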
H: How to pass common inputs to a merged model in Keras I'm attempting to merge a VGG-16 and ResNet-50 through concatenation. I was successful in training and saving the merged model. Here is the code snippet:
from keras import applications
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras.layers import Concatenate
from keras.models import load_model

# Loading the training data
img_rows=300
img_cols=300
channel = 3
num_classes = 3
batch_size = 10
nb_epoch = 10

#load the first model
#########################################################
base_model1 = applications.VGG16(weights='imagenet', include_top=False, input_shape=(img_rows,img_cols,3))
#get the model summary
base_model1.summary()
#adding the top layers
x1 = base_model1.output
x1 = GlobalAveragePooling2D()(x1) #analogous to flatten()
model1 = Model(inputs=base_model1.input, outputs=x1)
model1.summary()

#load the second model
########################################################
base_model2 = applications.ResNet50(weights='imagenet', include_top=False, input_shape=(img_rows,img_cols,3))
#get the model summary
base_model2.summary()
#adding the top layers
x2 = base_model2.output
x2 = GlobalAveragePooling2D()(x2) #analogous to flatten()
model2 = Model(inputs=base_model2.input, outputs=x2)
model2.summary()

'''merge the models'''
mergedOut = Concatenate()([model1.output,model2.output])
#add a new dense layer and softmax
out=Dense(2048, activation='relu')(mergedOut)
out = Dense(num_classes, activation='softmax', name='predictions')(out)
#create the new model with two branches and one dense layer
model = Model(inputs=[model1.input,model2.input], outputs=out)
model.summary()

###############################################################################
#training the model
hist=model.fit([X_train, X_train], Y_train, batch_size=batch_size, epochs=nb_epoch, shuffle=True, verbose=1, validation_data=([X_valid, X_valid], Y_valid))
However, the inputs to both the models are the same. I would like to use this as a common input to the merged model to avoid passing X_train and X_valid twice during model.fit. How can I do this? AI: I found the answer to my question. I need to pass the common input shape to the individual models and concatenate the outputs like this:
from keras.layers import Input   # Input is not imported in the snippet above

input_shape = (img_rows, img_cols, 3)
commonInput = Input(input_shape)
out1 = model1(commonInput)
out2 = model2(commonInput)
mergedOut = Concatenate()([out1, out2])
H: Linear Regression in Python Below is the dataset for which I am trying to implement Linear regression in python. age sex bmi children smoker region charges 0 19 female 27.900 0 yes southwest 16884.92400 1 18 male 33.770 1 no southeast 1725.55230 2 28 male 33.000 3 no southeast 4449.46200 3 33 male 22.705 0 no northwest 21984.47061 4 32 male 28.880 0 no northwest 3866.85520 I am confused what to do with the columns children, smoker, sex as they are of type "object". Data columns (total 7 columns): age 1338 non-null int64 sex 1338 non-null object bmi 1338 non-null float64 children 1338 non-null int64 smoker 1338 non-null object region 1338 non-null object charges 1338 non-null float64 Do I have to convert this to numeric before building my model ? Please provide your suggestions. Thank you. AI: Yes, you will have to convert everything to numeric. That requires thinking about what these attributes represent accordingly you can use either the below 3 options. There are three options: One-Hot encoding for categorical data Arbitrary numbers for ordinal data Use something like group means for categorical data (e. g. mean prices for city districts). You have to be careful to not infuse information you do not have in the application case. I'm expanding on option 1 and 3, if you want to know about option 2 you can go through the links attached at last. One hot encoding If you have categorical data, you can create dummy variables with 0/1 values for each possible value. Similarly you could implement for children, smoker. E. g. id Sex 0 Male 1 Feamle to id Male Female 0 1 0 1 0 1 This can easily be done with pandas: import pandas as pd data = pd.DataFrame({'Sex': ['Male', 'Female']}) print(pd.get_dummies(data)) will result in: Sex_Male Sex_Female 0 1 0 1 0 1 Using categorical data for groupby operations This is an additional usecase but in your case it is not necessary to use this but if you feel so, you can try implementing this as well You could use the mean for each category over past (known events). Say you have a DataFrame with the last known mean prices for cities: prices = pd.DataFrame({ 'city': ['A', 'A', 'A', 'B', 'B', 'C'], 'price': [1, 1, 1, 2, 2, 3], }) mean_price = prices.groupby('city').mean() data = pd.DataFrame({'city': ['A', 'B', 'C', 'A', 'B', 'A']}) print(data.merge(mean_price, on='city', how='left')) Result: city price 0 A 1 1 B 2 2 C 3 3 A 1 4 B 2 5 A 1 For better understanding you can go through this Link-1, Link-2
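Putting the one-hot advice together for the exact columns in the question, a minimal sketch (assuming the data is already in a DataFrame called df):
import pandas as pd
from sklearn.linear_model import LinearRegression

X = pd.get_dummies(df.drop(columns=['charges']),
                   columns=['sex', 'smoker', 'region'],
                   drop_first=True)        # drop_first avoids the dummy-variable trap
y = df['charges']

model = LinearRegression().fit(X, y)
The numeric columns (age, bmi, children) pass through unchanged; only the listed object columns are expanded into 0/1 indicator columns.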
H: Is there t-SNE in WEKA? I want to use t-SNE in WEKA just for visualization purposes. I tried to look at the package manager but could not find any implementation of it. Is there anything that I can do to achieve it? AI: Sadly no, there is not a T-SNE implementation for WEKA. If you can install python packages in your environment, then you can use the wekaPython package (in WEKA's package manager) to run scikit-learn's T-SNE implementation on data you have loaded into WEKA. Use this code in the 'CPython Scripting' panel (which appears after successfully installing wekaPython): X = py_data.iloc[:, :-1] y = py_data.iloc[:, -1] from sklearn.manifold import TSNE import matplotlib.pyplot as plt tX = TSNE().fit_transform(X) plt.scatter(tX[:, 0], tX[:, 1]) plt.show()
H: Formatting categories of data with pandas in Python I have around 9500 cities that I want to put in a pandas dataframe, then store in a file for later use. For example, below, these are 3 cities that I have data for. Some range in size based on year. Lovell_Wyoming only has 9 years of data points, corresponding to the years, as opposed to the 15 that Wheatland and Worland have. My original idea was putting the quantitative data (Arson..Year) in a map, then putting the city name, as a key, into a larger map, with that quantitative data. Building the larger map that way. Then, converting the map to a dataframe, then to a csv. I am a bit inexperienced with pandas so I am not sure how to do this correctly, if this is even the best way to do it. At the end of the day, I would like this data in a csv file that is easily accessible by loading it into a dataframe and calling on whatever value I need. City 'Lovell_Wyoming' Arson [0, 0, 0, 0, 0, 0, 0, 1, 0] Assaults [6, 6, 3, 4, 3, 28, 3, 2, 2] Auto_thefts [1, 1, 1, 0, 0, 1, 2, 0, 1] Burglaries [6, 11, 5, 2, 0, 15, 11, 7, 7] Murders [0, 0, 0, 0, 1, 0, 0, 0, 0] Rapes [0, 0, 3, 0, 0, 1, 1, 0, 1] Robberies [0, 0, 0, 0, 0, 0, 0, 1, 0] Thefts [23, 49, 35, 39, 28, 37, 54, 35, 10] Year [2002, 2003, 2005, 2006, 2007, 2008, 2009, 2010, 2014] City 'Wheatland_Wyoming' Arson [0, 0, 0, 0, 0, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0] Assaults [9, 2, 6, 5, 6, 6, 2, 4, 2, 4, 3, 11, 5, 4, 8] Auto_thefts [4, 8, 3, 3, 4, 4, 5, 3, 4, 6, 4, 8, 12, 7, 3] Burglaries [17, 17, 14, 9, 10, 17, 12, 26, 51, 12, 15, 21, 32, 31, 13] Murders [1, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0] Rapes [0, 0, 0, 4, 2, 1, 2, 0, 2, 1, 1, 0, 2, 0, 0] Robberies [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] Thefts [109, 95, 146, 81, 108, 100, 82, 85, 106, 128, 48, 85, 66, 56, 47] Year [2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016] City 'Worland_Wyoming' Arson [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] Assaults [12, 17, 19, 17, 11, 15, 16, 2, 9, 1, 4, 7, 2, 1, 3] Auto_thefts [2, 1, 2, 1, 1, 8, 1, 1, 1, 0, 1, 0, 0, 0, 1] Burglaries [6, 10, 10, 10, 9, 10, 10, 0, 6, 1, 0, 2, 0, 11, 18] Murders [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] Rapes [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 3] Robberies [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] Thefts [44, 41, 47, 29, 30, 25, 27, 27, 23, 30, 36, 45, 54, 46, 43] Year [2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016] I apologize in advance if the way I formatted this is a bit weird! Please let me know if you would like any more information. AI: This is how I would do it. However, a DataFrame can be structured in a number of way which best suits your needs. I believe that this method allows for the greatest flexibility because you can easily use grouping functions to restructure this format on the go. First you need to set up your data in a way that is compatible with Python. 
I use a dictionary of dictionaries cities = {'Lovell_Wyoming': {'Crimes': { 'Arson': [0, 0, 0, 0, 0, 0, 0, 1, 0], 'Assaults': [6, 6, 3, 4, 3, 28, 3, 2, 2] , 'Auto_thefts': [1, 1, 1, 0, 0, 1, 2, 0, 1] , 'Burglaries': [6, 11, 5, 2, 0, 15, 11, 7, 7] , 'Murders': [0, 0, 0, 0, 1, 0, 0, 0, 0] , 'Rapes': [0, 0, 3, 0, 0, 1, 1, 0, 1] , 'Robberies': [0, 0, 0, 0, 0, 0, 0, 1, 0] , 'Thefts': [23, 49, 35, 39, 28, 37, 54, 35, 10] }, 'Years': [2002, 2003, 2005, 2006, 2007, 2008, 2009, 2010, 2014] }, 'Wheatland_Wyoming': {'Crimes': { 'Arson': [0, 0, 0, 0, 0, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0] , 'Assaults': [9, 2, 6, 5, 6, 6, 2, 4, 2, 4, 3, 11, 5, 4, 8] , 'Auto_thefts': [4, 8, 3, 3, 4, 4, 5, 3, 4, 6, 4, 8, 12, 7, 3] , 'Burglaries': [17, 17, 14, 9, 10, 17, 12, 26, 51, 12, 15, 21, 32, 31, 13] , 'Murders': [1, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0] , 'Rapes': [0, 0, 0, 4, 2, 1, 2, 0, 2, 1, 1, 0, 2, 0, 0] , 'Robberies': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] , 'Thefts': [109, 95, 146, 81, 108, 100, 82, 85, 106, 128, 48, 85, 66, 56, 47] }, 'Years': [2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016] } } Then compile this data into rows of your DataFrame data = [] for city in cities: for ix, year in enumerate(cities[city]['Years']): for crime in cities[city]['Crimes']: temp = {'City':city, 'Crime':crime, 'Year':year, 'Count':cities[city]['Crimes'][crime][ix]} data.append(temp) Then into the DataFrame structure as. import pandas as pd df = pd.DataFrame(data=data) df Queries You can then query this DataFrame in a number of ways for example if you want to know arsons in 2002 you would do df[(df['Crime']=='Arson') & (df['Year']==2002)] Counting You can count the number of arsons throughout the years as df[(df['Crime']=='Arson')].groupby(['City'])['Count'].agg('sum') City Lovell_Wyoming 1 Wheatland_Wyoming 3 Name: Count, dtype: int64 Writing your DataFrame to a CSV file This can be done directly as df.to_csv('filename.csv')
H: Basic classification question I am wondering how I can manage a test data after using PCA or normalization and another thing like that in the classification because our model works on the representation given by its input vectors. For example, suppose you have used PCA in your training dataset to gain better accuracy or you have normalized (min-max) data. Now, you have developed a model and want to install it and label the new coming samples. You need to somehow apply PCA to each coming record and normalize that record. Applying PCA to one record will not yield the same effect of the PCA used in training phase and I think even it doesn't make sense to apply PCA to just one sample. So how can we manage these preprocessing techniques in the training phase in the test data, too? Thanks in advance. AI: PCA is a matrix transformation from your original dataset to a set of orthogonal features. The transformation matrix which is applied to the training set is maintained and used in the future with your testing data such that the original testing set features will be mapped to same space as the training set transformed by PCA. If the training set has $n$ instances and their are $m$ features, the training matrix is of size $n \times m$. The PCA transformation matrix is of dimensions $m \times k$, where $k$ is the number of retained PCA features, the top eigenvalues. Thus we can transform a single instance $1 \times m$, by the $m \times k$ transformation matrix. This results in a $1 \times k$ vector. I have some text files that I vectorized using bag-of-words. The training set is shown below on the left side and the testing set is on the right side. Each row is a text file and the columns is the word count. If we plot the first 2 features of this dataset we get Now we will fit our PCA transformation matrix and we will apply this transformation to both the training and testing set. from sklearn.decomposition import PCA pca = PCA(n_components=2, copy=True) pca.fit(X_train) train_PCA = pca.transform(X_train) test_PCA = pca.transform(X_test) This gives the following plot. The purple and yellow points are the 2 different classes from the training set. Then the light blue points are from the testing set. You can see that the points in the testing set after being transformed by PCA will line up alongside the training set.
H: Can you interpolate with QLearning or Reinforcement learning in general? I am currently researching the usages of machine learning paradigms for pathfinding problems. I am currently looking into the reinforcement learning paradigm and I used QLearning for pathfinding. When there are not many states QLearning seems to be working well, but as soon as the environment gets bigger and the amount of states gets bigger it is performing rather bad. Since the convergence of QLearning is so slow I am wondering if it is possible with QLearning to interpolate the QValue of unexplored states since QLearning does not use a model? Is it possible with reinforcement in general or does it require to learn all possible states? AI: Since the convergence of QLearning is so slow I am wondering if it is possible with QLearning to interpolate the QValue of unexplored states since QLearning does not use a model? When Q learning is described as "model free", it means that the agent does not need access to (or use) a predictive model of the environment. It cannot refer to state transitions and rewards in advance, but has to experience them in order to learn. This does not mean that you have to avoid using a learning data model (such as a neural network) in order to generalise to new unseen data. So, yes, Q learning can interpolate from unseen states and predict their Q value. To do this, you replace the state/action table with a supervised learning method based on descriptions of state $s$ and action $a$ as inputs, that you train as a regression model to predict $Q(s,a)$ (as a variant you can also have just state as input and predict $Q(s,a)$ for all possible actions as a vector in one go). However, Q learning with a neural network suffers from instability. See Deep Mind's DQN paper for example of a system that solves that instability. In short: Use experience replay - store S, A, R, S' data for each step and run the Q learning update on random mini-batches of the stored data, instead of online. Keep two copies of the Q estimator neural network. Train one continuously, and copy it to a "frozen" version every now and then (e.g. every 100 mini-batches). Use the "frozen" copy to calculate the new $Q(s,a)$ targets. This still might not match your learning scenario. If you want to solve mazes, think carefully about what data is truly available to the agent and how you might use it. For instance if you are using Q learning to solve a maze where you have a map, it is very inefficient approach. This is often shown as a toy problem, because it is possible to view the learning data very easily. But in that toy problem, the agent is not given the map, nor any knowledge of what a grid is. Here are a couple of suggestions that may still help, separate to using a neural network value estimator: If you do have a model of your environment (but not a map or other data that could be directly analysed for a solution), joining Q learning with a planning algorithm might work better for you than Q learning, as in Dyna-Q. This is relevant where you have an agent exploring in real time that would benefit from "looking" ahead before taking actions. If your problem is very sparse rewards (due to larger maze, and only getting different reward at the end), then a worthwhile improvement is to look into multi-step TD learning, where the rewards are propagated back to previous steps more efficiently. Maybe look into $Q(\lambda)$
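To make the "replace the state/action table with a function approximator" idea concrete, here is a toy sketch of semi-gradient Q-learning with a linear approximator (pure NumPy; the feature function, environment loop and hyperparameters are placeholders you would have to supply):
import numpy as np

n_features, n_actions = 16, 4
w = np.zeros((n_actions, n_features))    # one weight vector per action
alpha, gamma = 0.01, 0.99

def q_values(phi):                        # phi = feature vector describing a state
    return w @ phi                        # vector of Q(s, a) estimates for all actions

def update(phi, a, r, phi_next, done):
    target = r if done else r + gamma * np.max(q_values(phi_next))
    td_error = target - q_values(phi)[a]
    w[a] += alpha * td_error * phi        # semi-gradient Q-learning step
Because the weights generalize across feature vectors, states never visited during training still receive a Q-value estimate, which is exactly the interpolation asked about; for larger problems the linear model is typically replaced by a neural network with the stabilizing tricks from the DQN paper.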
H: How does the Multinomial Bayes alpha parameter affect the text classification task? I would like to know how the alpha parameter, in Multinomial Bayes, affects the text classification task. I know that this parameter is related to the algorithm's ability to handle words unseen during training. How does text classification change with low or high values of alpha? AI: Let's assume you are building a text classifier with a training set of 5 sentences. For this example, let's say you are trying to classify tweets (which are usually a sentence long) by whether they were made by Trump or not. You are given a tweet, "I have huge respect for women", and your goal is to classify it as 'Trump' or 'Not Trump'. Moreover, you are given 5 other random tweets and you also know whether those tweets were by Trump or not (basically, they are already classified); this is your training set. In other words, you are calculating:
$$P(I\,have\,huge\,respect\,for\,women) = P(I)\times P(have)\times P(huge)\times P(respect)\times P(for)\times P(women)$$
$$P(I\,have\,huge\,respect\,for\,women|Trump) = P(I|Trump)\times P(have|Trump)\times P(huge|Trump)\times P(respect|Trump)\times P(for|Trump)\times P(women|Trump)$$
$$P(I\,have\,huge\,respect\,for\,women|Not\,Trump) = P(I|Not\,Trump)\times P(have|Not\,Trump)\times P(huge|Not\,Trump)\times P(respect|Not\,Trump)\times P(for|Not\,Trump)\times P(women|Not\,Trump)$$
Depending on the probabilities from equations (2) and (3), the user would decide whether the statement was made by Trump or not. Now let's say that none of the 5 training set tweets contain the word huge, in which case
$$P(huge|Trump)=P(huge|Not\,Trump)=0$$
and hence equations (2) and (3) are now zero, which is bad. A solution to this is smoothing, or rather Laplace smoothing. The basic idea is to increase the count of every word by 1 (so the estimates are no longer pure maximum-likelihood estimates, since we are changing the counts from what actually occurred in the hope of making things better) to make everything non-zero. That increase by 1 is called the pseudocount, or as you know it, $\alpha$. Now, $\alpha=1$ may not always give the most accurate probabilities, and $\alpha$ can in fact take any non-negative value. The way to know which $\alpha$ gives the most accurate responses is, unfortunately, by trying a range of $\alpha$ values on the training set (e.g. via cross-validation). The way to do that is a whole different topic. PS: Trump did say he has huge respect for women. Such irony.
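A tiny numerical sketch of what alpha does to the word probabilities (the counts and vocabulary size are invented for illustration):
def smoothed_prob(word_count, total_count, vocab_size, alpha):
    # Laplace/Lidstone smoothing: estimate of P(word | class)
    return (word_count + alpha) / (total_count + alpha * vocab_size)

# the word 'huge' never appears in Trump tweets (0 of 50 word tokens, vocabulary of 200 words)
print(smoothed_prob(0, 50, 200, alpha=1))     # 0.004   -> no longer zero
print(smoothed_prob(0, 50, 200, alpha=0.1))   # ~0.0014 -> less probability mass moved to unseen words
In scikit-learn this is the alpha parameter of MultinomialNB, and it is usually tuned with cross-validation: large alpha flattens the word distributions (more smoothing), small alpha keeps them close to the raw counts.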
H: Stacked time series plot in python In pandas I can set the date as index, and then run df.plot() to see a line chart. How do I make that line chart stacked as in the picture below? AI: You can simply use df.plot.area() Found here after a quick google search.
H: Tips to improve Linear Regression model I have just run a Linear regression model on a dataset having 7 independent variables and 1 target variable. Below are the R squared and MSE values. Mean squared error for training set : 36530921.0123 $R^2$ value for training set : 0.7477 Can anybody please give me some tips to increase the efficiency of this model? Edit: I have just implemented the same problem using Linear regression with Normalization of the features. I got the below output: Mean squared error for training set : 5.468490570335696e-10 R2 value for training set : 0.9275088299658416 Mean squared error for training set : 4.111793316375822e-10 R2 value for training set : 0.9342888671422529 So can we consider normalizing the dataset to get better accuracy? AI: You can build more complex models to try to capture the remaining variance. Here are several options:
Add interaction terms to model how two or more independent variables together impact the target variable
Add polynomial terms to model the nonlinear relationship between an independent variable and the target variable
Add splines to approximate piecewise linear models
Fit isotonic regression to relax assumptions about the form of the target function (it only assumes monotonicity)
Fit non-parametric models, such as MARS
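A hedged scikit-learn sketch of the first two suggestions (interaction and polynomial terms); X and y stand for your 7 features and target:
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# degree=2 adds squared terms and all pairwise interaction terms
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      LinearRegression())
scores = cross_val_score(model, X, y, scoring='r2', cv=5)
print(scores.mean())
Note that cross-validated R², not training-set R², is the number to trust. Also, for ordinary least squares, simply rescaling the features should not change R² at all, so a jump like the one in your edit is worth double-checking for a bookkeeping error (e.g. the target being scaled too).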
H: What are the state-of-the-art models for identifying objects in photos? From my observations and little experience it appears that most ML projects are about classifying stuff. Are there cancer signs in the photo? Does the picture show a car, a whale or a banana? Etc. I need to implement a model for face identification. Not detection/recognition, but identification: having two different photos of the same person, my model should determine if the pictures depict the same person. I want to achieve that using Tensorflow with convolutional nets. I've read this paper: http://ydwen.github.io/papers/WenECCV16.pdf and center loss looks promising. What do you think about that? Are there any new ideas/papers/implementations regarding that problem that are worth attention? I asked this question also on MachineLearning reddit (https://www.reddit.com/r/MachineLearning/comments/8cysrx/d_what_are_the_stateoftheart_models_for/) and got a useful link with a FaceNet implementation, trying here also :) AI: EDIT: Deep Face Recognition: A Survey, new on arxiv 4/18/2018, looks like the best survey of methods over face-related tasks :). Beyond FaceNet, there are a few approaches; which one fits depends on how many faces you intend your system to know about, i.e. networks that directly output which face it is (~ <10K identities), vs a feature map for clustering (~ 10k to 100K), vs comparing any 2 faces (~ >100K). Below are examples of approaches to each... This paper just came out: Exploring Disentangled Feature Representation Beyond Face Identification - 2018 - reported accuracy 99.816. It uses an encoder-decoder-like scheme to compute features of a face. Then, given all the faces you computed this on, do clustering to find which ones are the same face (t-SNE / distance in feature space). This paper is cool since it also gives features of each face like 'smiling', and these are used to augment the search. Along similar lines is this. Before that (other than FaceNet) - DeepFace - accuracy 97.35. It's the Facebook lib. If it's state-of-the-art enough for them, it's state-of-the-art enough for me. The approach: given two images, put them into a Siamese network, first detect, then 3D-model the face, then project to a 2D feature map, which is then combined into a label saying whether they are the same person or not. Robust Face Recognition via Multimodal Deep Face Representation - 2016, reported accuracy 98.43. This is interesting because they trained on a relatively small dataset (CASIA WebFace). However, this has the last layer being the number of identities in the dataset, so this could be a limiting factor if you want to recognize millions of identities like Facebook does. Otherwise, this looks easiest to implement/mess with. Patch-based Face Recognition using a Hierarchical Multi-label Matcher - not sure what is going on here, but looks interesting. I would think another limiting factor for you is how many examples you have per face - e.g. do you have 4K identities and 4 million images like the Facebook dataset, or 10k identities and 0.5 million images (CASIA WebFace), or LFW with like ~5K identities and ~15k images.
H: Handling categorical variables in large df I have a df with nearly 40 million rows and ~20 columns (total size is 2.2+GB). 15 of my features are categorical. I figured that the most reasonable way to go about this problem without making the df any bigger would be fit/transform each with LabelEncoder and then convert each feature to category data type. The only thing is that none of the categorical features are ordinal. Should I fit/transform them with StandardScaler or is that unnecessary? AI: No. If, as you said, the variables are categorical, performing a scaling does not make any sense. Plain LabelEncoder already does what you want.
H: Choice of time series models I built a model for time series in order to forecast new values. What is the best way to choose the correct model? Are criteria like AIC and BIC better, or the comparison of prediction errors? In the latter case I have to split the dataset into train and test, while in the first case it is not necessary, reducing implementation time. Thanks! AI: First things first: whenever you use time series data, you call it forecasting, not prediction, as it is time dependent. To understand why, you can go through this link.
Metrics to compare models: When you are trying to compare between models you need to use AIC, BIC, AUC etc. values. You can go through this link to understand better.
Metrics to assess the model: When you are assessing the performance of the model, you need to check the error rates (RMSE, MAE, MAPE, MSE etc.). Yes, in this case you need to divide the data into train and test sets to assess the model. You can go through this link for better understanding.
Improve the forecast: To take it to the next level, you can use an ensemble to get a better result. This might or might not decrease the error rate, but in most cases it is helpful. You can combine the outcomes of 2-3 moderately performing models to get the best result, i.e., an ensemble model.
H: Use of Random Forest algorithm in PySpark for imputation I am wondering how to use Random Forest algorithm for imputing missing values in a dataset. It is supposed to work well with missing values but I am not sure how those missing values are dealt with and how RF imputation works in PySpark. AI: You can do the following: use all the other features as input and the missing data as the label. Train using all the rows that have the column filled with data and classify the others that don't. Use the values predicted by the Random Forest as the value of that field on the subsequent models and transformations.
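A minimal scikit-learn sketch of the idea described above (train on the rows where the column is present, predict where it is missing); the same pattern maps onto pyspark.ml, but the column names here are placeholders and the remaining feature columns are assumed to be complete:
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

target_col = 'col_with_missing'
known = df[df[target_col].notna()]      # rows where the value is observed
unknown = df[df[target_col].isna()]     # rows to impute

features = [c for c in df.columns if c != target_col]
rf = RandomForestRegressor(n_estimators=100)
rf.fit(known[features], known[target_col])

df.loc[df[target_col].isna(), target_col] = rf.predict(unknown[features])
For a categorical column the same sketch works with RandomForestClassifier instead of the regressor.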
H: General methods for outlier detection What are general methods for outlier detection that do not assume any underlying distribution in the data? I have a dataset with the prices of rents in London, as well as their location, number of bedrooms, living rooms and bathrooms. I want to identify outliers in this data, where some of the variables are discrete and some of them are continuous. Any ideas on how to do this? AI: DBSCAN seems a great choice for you; look at the scikit-learn implementation for further details. About being discrete or continuous, it actually doesn't matter; what you have to look at is whether the scale is suited to the algorithm at hand (and scikit-learn has preprocessing tools to handle that). Another tip is to check whether some attributes do fit a known distribution; if they do, parametric methods of detecting outliers are better suited for those.
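A short sketch of flagging outliers with DBSCAN in scikit-learn (eps and min_samples below are arbitrary and must be tuned; categorical columns should be encoded and everything scaled first):
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

X_scaled = StandardScaler().fit_transform(X)          # X: numeric feature matrix
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(X_scaled)

outliers = X[labels == -1]                            # DBSCAN marks noise points with the label -1
print(len(outliers), "potential outliers")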
H: How does backpropagation differ from reverse-mode autodiff Going through this book, I am familiar with the following: For each training instance the backpropagation algorithm first makes a prediction (forward pass), measures the error, then goes through each layer in reverse to measure the error contribution from each connection (reverse pass), and finally slightly tweaks the connection weights to reduce the error. However I am not sure how this differs from the reverse-mode autodiff implementation in TensorFlow. As far as I know, that algorithm first goes through the graph in the forward direction and then in the second pass computes all partial derivatives of the outputs with respect to the inputs. This is very similar to the backpropagation algorithm. How does backpropagation differ from reverse-mode autodiff? AI: Thanks to the above answer for the valid contribution; however, I have found the answer to this question by the author of the book himself: Backpropagation refers to the whole process of training an artificial neural network using multiple backpropagation steps, each of which computes gradients and uses them to perform a Gradient Descent step. In contrast, reverse-mode autodiff is simply a technique used to compute gradients efficiently, and it happens to be used by backpropagation.
H: Regularization - Combine drop out with early stopping I'm building a RNN (recurrent neural network) with LSTM cells. I'm using time series to perform anomaly detection. When training my RNN I'm using a dropout of 0.5 and I'm early stopping with a patience of 5 epochs when my validation loss is increasing. Does it make sense to use early stopping in combination with dropout? AI: It does make sense, they are just two different things. Dropout only makes your model learning harder, and by this it helps the parameters of the model act in different ways and detect different features, but even with dropout you can potentially overfit your traning set. On the other hand, early stopping prevents your model from overfitting by taking the best model on your validation data so far. However, for the sake of simplicity, I think it is easier to just use dropout (training a neural network is not easy and the training may not be successful due to many different reasons, it is a good practice to reduce the possible reasons why the training is failing as much as possible). Unless you have short time to train your network, with a sufficiently high amount of dropout you will ensure that your model is not overfitting. My final recommendation is: just use dropout. If using a 0.5 dropout rate still overfits, set a higher dropout rate.
H: RF and DT overfitting I am new with Machine Learning and I started with some lessons in Kaggle. There, I learnt how to use DecisionTreeRegressor() and RandomForestRegressor() from sklearn. However, I cannot really understand how I can verify that my explanatory variables do not overfit the model. For example, the lessons included evaluation with the use of Mean Absolute Error. MAE and MRSE can evaluate whether my Decision Tree depth is optimal or not, but not if my explanatory data are even relevant. I come from Economics, so I am used to deal with such problems using diagnostics or $R^2$. Is there any equivalent to $R^2$ benchmark to determine whether my explanatory variables are overfitting my model or not? AI: I think you can perform Predictor Importance test and see which are the variable explaining the most. There is this package named Boruta, you can go through the link for implementation in python. You can eliminate the variables which are highly correlated. For example if you have age as the target variable and you have DOB as a feature then it makes no sense to build a model. So, you need to make sure to eliminate the variable which are highly correlated to the target variable. In my Scenario I had this following visualization As you can see the 2 variables(underlined with red dash) are highly correlated with the target variable, before removing these variables the MAE was 0.9(approx) after removing those features(Backward Stepwise Elimination) and the MAE was 3.5(approx) but that is the actual error. Currently working on getting some external features to explain the data and to improve the accuracy. Every time it is not about accuracy/error rate of the model, it is also about how good our model could be generalized and robust our model should be. To check if the data is overfitting, then I tried testing it by taking those 2 variables and try modelling and the MAE was 1.6(approx) from this we can understand that these 2 variables explain the most. So, try applying and see how the features are correlated with the target variable. One of the methods used to address over-fitting in decision tree is called pruning which is done after the initial training is complete. In pruning, you trim off the branches of the tree, i.e., remove the decision nodes starting from the leaf node such that the overall accuracy is not disturbed. This is done by segregating the actual training set into two sets: training data set, D and validation data set, V. Prepare the decision tree using the segregated training data set, D. Then continue trimming the tree accordingly to optimize the accuracy of the validation data set, V. You can go thorough this link, about how we can avoid over fitting by tuning the parameters.
H: What can functional programming be used for in data science? In my next academic year at university, I have the option to take a course in Advanced Functional Programming. A basic description of the course is this: "You’ll focus on a number of more advanced functional programming topics such as: programming with effects; reasoning about programs; control flow; advanced libraries; improving efficiency; type systems; and functional pearls." Therefore I'd like to know if functional programming is useful in Data Science. If so, why it is useful, and by extension, whether this course will ultimately be useful in the pursuit of becoming a Data Scientist. AI: One reason why functional programming could be useful for data science is that it lends itself more easily to parallel and distributed programming, e.g. the popular frameworks Apache Spark for cluster computing and Apache Kafka for stream-processing are both written in Scala (and Java). Other than that "functional programming" as a skill is not directly related to data science. It's a tool that may facilitate some practicalities of data science and therefore more relevant for the "data engineering" aspect of data science. It's useful but probably not necessary. It depends on your interests.
H: Sparse Matrix - Effect and Solution Can anybody explain me what are the effects on the model if we have sparse data in our dataset. And also how to deal these sparse matrices ? Thank you. AI: The idea is really simple, just look at some online resources like https://en.m.wikipedia.org/wiki/Sparse_matrix The implementation is also really simple. For pandas this page might help you https://pandas.pydata.org/pandas-docs/stable/sparse.html
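A small illustration of the storage idea (SciPy's CSR format stores only the non-zero entries); the matrix dimensions are made up:
import numpy as np
from scipy import sparse

dense = np.zeros((10000, 1000))
dense[::100, ::50] = 1.0          # mostly zeros, only a few thousand non-zero entries

csr = sparse.csr_matrix(dense)
print(dense.nbytes)                                                # 80,000,000 bytes as a dense array
print(csr.data.nbytes + csr.indices.nbytes + csr.indptr.nbytes)    # a tiny fraction of that
Many scikit-learn estimators (linear models, naive Bayes, and others) accept such sparse matrices directly, which is why text vectorizers like CountVectorizer and TfidfVectorizer return them instead of dense arrays.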
H: Dropout in other machine learning models Dropout is a widely used technique in deep learning. Dropout was built for neural networks, but I wonder if other prediction models can use this idea as well as a regularizer. Do you know of any similar technique in linear regression, SVMs or tree-based methods? AI: Random forests could be thought of as using a kind of dropout-esque technique as each split node only considers a random subset of the features, effectively 'dropping out' the other ones. Also, sometimes in large tree ensembles, each tree is only given a random subset of features to begin with, akin to dropout on the input layer of a neural network.
H: Why does momentum need learning rate? If momentum optimizer independently keeps a custom "inertia" value for each weight, then why do we ever need to bother with learning rate? Surely, momentum would catch up its magnutude pretty quickly to any needed value anyway, why to bother scaling it with learning rate? $$v_{dw} = \beta v_{dw} +(1-\beta)dW$$ $$W = W-\alpha v_{dw}$$ Where $\alpha$ is the learning rate (0.01 etc) and $\beta$ is the momentum coefficient (0.9 etc) Edit Thanks for the answer! To put it more plain: momentum controls "how well we retain" the movement, and learning rate is "how fast do we reGain" the movement AI: To answer the first question about why we need the learning rate even if we have momentum, let's consider an example in which we are not using the momentum term. The weight update is therefore: $ \Delta w_{ij} = \frac{\partial E}{\partial w_{ij}} \cdot l $ where: $ \Delta w_{ij} $ is the weight update $ \frac{\partial E}{\partial w_{ij}} $ is the gradient of the error with respect to the weight $ l \space $ is the learning rate coefficient Our weight update is determined by the gradient of our current error with respect to the weight at node $ ij $. Therefore, our prior weight deltas are not factored into our weight update equation. If we were to eliminate the learning rate, our weights would not update. Now let's consider an example using the momentum term in its derived form: $ \Delta w_{ij} = (\frac{\partial E}{\partial w_{ij}} \cdot l) + (\mu \cdot \Delta w^{t-1}_{ij}) $ where: $ \mu $ is the momentum coefficient $ \Delta w^{t-1}_{ij} $ is the weight update of node $ ij $ from the previous epoch Now we are factoring the previous weight delta in our weight update equation. In this form, it is easier to see that the learning rate and momentum are effectively independent terms. However, without a learning rate, our weight delta would still be zero. Now you might ask: what if we remove the learning rate after getting an initial momentum value so that momentum is the sole influence of the weight delta? This destroys the backpropagation algorithm. The objective of backprop is to optimize the weights to minimize error. We achieve this minimization by adjusting the weights according to the error gradient. Momentum, on the other hand, aims to improve the rate of convergence and to avoid local minimas. The momentum term does not explicitly include the error gradient in its formula. Therefore, momentum by itself does not enable learning. If you were to only use momentum after establishing an initial weight delta, the weight update equation would look as such: $ \Delta w_{ij} = (\mu \cdot \Delta w^{t-1}_{ij}) $ and: $ \lim_{t \to \infty} \Delta w^t_{ij} = \begin{cases} 0 & | \space \mu < 1 \space \lor \space (\mu = 1 \space \land \space \Delta w^{t=0}_{ij} < 1) \\ 1 & | \space \mu = 1 \land \space \Delta w^{t=0}_{ij} = 1\\ \infty & | \space otherwise \end{cases} $ Although there exists a scenario where the weight delta approaches zero, this descent is not based on the error gradient and is in fact predetermined by the momentum coefficient and the initial weight delta: this weight delta does not achieve our objective to minimize the error and is therefore useless. TL;DR: The learning rate is critical for updating the weights to minimize error. Momentum is used to help the learning rate, but not replace it.
H: Dropout vs weight decay Dropout and weight decay are both regularization techniques. From my experience, dropout has been more widely used in the last few years. Are there scenarios where weight decay shines more than dropout? AI: These techniques are not mutually exclusive; combining dropout with weight decay has become pretty standard for deep learning. However, where weight decay applies a linear penalty, dropout can cause the penalty to grow exponentially. This property of dropout can lead to hypothetical failures as proposed and proven in section 4.2 of this paper. In general, research has consistently shown the benefits of dropout (with and without weight decay) for training deep networks. A practical scenario in which weight decay is exclusively preferred over dropout would be quite the anomaly.
H: Neural Network Hidden Layer Selection I am trying to build an MLP classifier model on a dataset containing 30000 samples and 23 features. What are the standards I need to consider while selecting the number of hidden layers and number of nodes in each hidden layer? AI: First try a simple model: The input layer and the output layers dimension are defined by your data / your problem definition. Then train a model without any hidden layer. See how good it performs. Is it good enough? If yes, you're done. If no, continue Add a hidden layer of reasonable size or adjust a hidden layers size. Go to step (2). The "reasonable" size part might be difficult. As a guidance: If you have only a single node, it is certainly too small for a 1000 class problem. It might be big enough for a 2 - 3 class problem. I would usually suggest to keep the size of the features per layer roughly constant or at most reduce it by 1/10 or triple it. But that is only gut feeling. The reason for my preference for simple models is Occam's razor, the fact that they are often faster, easier to analyze and to manually improve. For more information about topology learning and rules how to design neural networks, see: Thoma, Martin. "Analysis and Optimization of Convolutional Neural Network Architectures." arXiv preprint arXiv:1707.09725 (2017). Especially chapter 2.5 and chapter 3.
H: Python Sklearn TfidfVectorizer Feature not matching; delete? I trained a classifier using TfidfVectorizer in Sklearn. I then pickled the model for future use. The new x_test that I want to make predictions on, has more features than the x_train from the model. This is the resulting error: ValueError: X has 4877 features per sample; expecting 2799 Is there a way to delete any features in x_test that were not used in x_train? I know if I had used a countverctorizer, i could have bypassed the error by not using fit_transform on x_test. But since it is TfidfVectorizer, it won't let me bypass. I also tried imputation but couldn't get it to work. Thanks AI: You should not experience this problem if you use TfidfVectorizer properly. Demo: In [58]: from sklearn.feature_extraction.text import TfidfVectorizer source text In [59]: text = """I trained a classifier using TfidfVectorizer in Sklearn. I then pickled the model for future use. ...: ...: The new x_test that I want to make predictions on, has more features than the x_train from the model. This is the resulting error""" let's tokenize it to a list of sentenses: In [60]: from nltk import sent_tokenize In [61]: vect = TfidfVectorizer() In [62]: data = sent_tokenize(text) yields: In [63]: data Out[63]: ['I trained a classifier using TfidfVectorizer in Sklearn.', 'I then pickled the model for future use.', 'The new x_test that I want to make predictions on, has more features than the x_train from the model.', 'This is the resulting error'] now we can fit and transform our data set: In [64]: X = vect.fit_transform(data) result: In [65]: X Out[65]: <4x31 sparse matrix of type '<class 'numpy.float64'>' with 34 stored elements in Compressed Sparse Row format> In [66]: vect.get_feature_names() Out[66]: ['classifier', 'error', 'features', 'for', 'from', 'future', 'has', 'in', 'is', 'make', 'model', 'more', 'new', 'on', 'pickled', 'predictions', 'resulting', 'sklearn', 'tfidfvectorizer', 'than', 'that', 'the', 'then', 'this', 'to', 'trained', 'use', 'using', 'want', 'x_test', 'x_train'] now let's feed it a data set with unknown words (features): In [67]: new_dataset = ["let's see what happens to unknown words", "Yet another sentence."] In [68]: X2 = vect.transform(new_dataset) In [69]: X2 Out[69]: <2x31 sparse matrix of type '<class 'numpy.float64'>' with 1 stored elements in Compressed Sparse Row format> it worked properly - all unknown features (words) have been ignored: In [70]: pd.SparseDataFrame(X2, columns=vect.get_feature_names(), default_fill_value=0) Out[70]: classifier error features for from future has in is make ... the then this to trained use using want \ 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 x_test x_train 0 0.0 0.0 1 0.0 0.0 [2 rows x 31 columns]
H: What should be the ratio of True vs False cases in a binary classifier dataset? I am using a CNN for sentiment analysis of news articles. It is a binary classification with outputs: Interesting & Uninteresting. In my dataset, there are around 50,000 Uninteresting articles and only about 200 Interesting articles. I know the ratio is badly skewed. My question is what should be the ratio in such a scenario. One approach that I want to try is to cluster the Uninteresting news articles and take a sample from each cluster for training. Is there a better approach? AI: Ideal true vs false ratios don't exist and they should reflect the the reality the best they can, you can always remove negatives if the ratio is too skewed to improve training speed though. Let me explain it with an example. Ads CTR is as old as the internet and it's skewed to less than 1% positives vs. plus 99% negatives. Yet, data scientists prefer to train it on the entire dataset because many negatives will include information that models couldn't find otherwise. They might not provide a lot of information as a positive one but they are still somewhat important. There are approaches where CTR ratios get artificially rebalanced by sampling in case you want a swifter training and it will still work. In your case, positives are 0.4% which resemble CTR on ads so you can: gather more data to increase the number of positives in order to better understand what makes an article interesting. In case that is not possible trying ensembles which often improve prediction performance. Clustering is an unsupervised approach so you would be losing information by doing so (training labels) besides, sentence embeddings (representations) of one big cluster of negatives and a tiny cluster of positives do not convey information as well as word embeddings which have already been trained on billions of documents. In addition, running k-means on categorical variables will yield anomalous clusters because it's meant to be used with continuous variables. You can find more information about the topic on the following links: Kmeans: Whether to standardise? Can you use categorical variables? Is Cluster 3.0 suitable? My data set contains a number of numeric attributes and one categorical Kaggle Why does K means clustering perform poorly on categorical data The weakness of the K means method is that it is applicable only when the mean is defined one needs to specify K in advance and it is unable to handle noisy data and outliers Therefore, you should use high dimensional embeddings or representations to cluster meanings together, this has been explored in word meanings but for sentences or articles, a vector representation becomes more complicated to implement. One possible approach is the Word Movers’ Distance but there are many more possible approaches, you should google them. In addition a non-linear clustering algorithm such as t-sne will probably yield better results than k-means using the embeddings approach. A better approach is: to use multiple models and compare their performance on this dataset. I have the impression that there will be certain keywords that make articles interesting, so a bag of words will still be helpful, even as a starter model. Use feature engineering. Your model might be overloooking important features, such as article length, reading time, number of paragraphs, ratio of complex words (measured by length), etc. Feature engineering is always important in case you haven't used it yet. Use pretrained embeddings. 
CNN and RNN models can use pretrained embeddings such as GloVe, Word2Vec or FastText, so you start from better representations and then add further layers on top of them in the architecture. This is extremely important for increasing accuracy. Finally, use metrics to measure improvement, and use rankings to check which articles are predicted as most interesting. A minimal sketch of one way to handle the class imbalance itself follows below.
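As that sketch (entirely my own illustration, not part of the original answer): weighting the rare class during training is a common alternative to down-sampling or clustering the negatives. The label array and the commented-out Keras fit call are placeholders.

import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# placeholder labels: 0 = Uninteresting, 1 = Interesting
y_train = np.array([0] * 50000 + [1] * 200)

weights = compute_class_weight(class_weight="balanced",
                               classes=np.array([0, 1]), y=y_train)
class_weight = {0: weights[0], 1: weights[1]}
print(class_weight)  # the rare class gets a weight of roughly 125

# when fitting the CNN (model and x_train are assumed to exist already):
# model.fit(x_train, y_train, epochs=10, class_weight=class_weight)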
H: How to do Feature Scaling for these ranges [0,1] and [-1,1]? I want to rescale the features of my data to be between [0,1] and [-1,1]. Is there a clear-cut way that works every time for these ranges? I think the below equation works for [0,1], but when it is described people say it generally works, so I am not certain it works every single time.
AI: What you said is right: the min-max equation $$ x' = \frac{x - \min{x}}{\max{x} - \min{x}} $$ always normalizes the data within the range [0,1]. Now, we can generalize it. To normalize in $[-1,1]$ you can use: $$ x'' = 2\frac{x - \min{x}}{\max{x} - \min{x}} - 1 $$ In general, you can always get a new variable $x'''$ in $[a,b]$: $$ x''' = (b-a)\frac{x - \min{x}}{\max{x} - \min{x}} + a $$
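A small sketch of the general formula in code, with made-up sample values; scikit-learn's MinMaxScaler(feature_range=(-1, 1)) implements the same mapping if you prefer a library call.

import numpy as np

def rescale(x, a, b):
    """Min-max rescale an array into the interval [a, b]."""
    x = np.asarray(x, dtype=float)
    return (b - a) * (x - x.min()) / (x.max() - x.min()) + a

data = np.array([3.0, 7.0, 10.0, 15.0])  # made-up sample values
print(rescale(data, 0, 1))   # approximately [0, 0.333, 0.583, 1]
print(rescale(data, -1, 1))  # approximately [-1, -0.333, 0.167, 1]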
H: What are the consequences of not freezing layers in transfer learning? I am trying to fine-tune some code from a Kaggle kernel. The model uses pretrained VGG16 weights (via 'imagenet') for transfer learning. However, I notice there is no freezing of layers, as is recommended in a keras blog. One approach would be to freeze all of the VGG16 layers and use only the last 4 layers in the code during compilation, for example: for layer in model.layers[:-5]: layer.trainable = False Supposedly, this will use the imagenet weights for the top layers and train only the last 5 layers. What are the consequences of not freezing the VGG16 layers? from keras.models import Sequential, Model, load_model from keras import applications from keras import optimizers from keras.layers import Dropout, Flatten, Dense img_rows, img_cols, img_channel = 224, 224, 3 base_model = applications.VGG16(weights='imagenet', include_top=False, input_shape=(img_rows, img_cols, img_channel)) add_model = Sequential() add_model.add(Flatten(input_shape=base_model.output_shape[1:])) add_model.add(Dense(256, activation='relu')) add_model.add(Dense(1, activation='sigmoid')) model = Model(inputs=base_model.input, outputs=add_model(base_model.output)) model.compile(loss='binary_crossentropy', optimizer=optimizers.SGD(lr=1e-4, momentum=0.9), metrics=['accuracy']) model.summary()
AI: I think that the main consequences are the following: Computation time: If you freeze all the layers but the last 5, you only need to backpropagate the gradient and update the weights of those last 5 layers. Compared with backpropagating through and updating the weights of all the layers of the network, this means a huge decrease in computation time. For this reason, if you unfreeze the whole network, you will only be able to run fewer epochs in the same amount of time than if you were to update only the last layers' weights. Accuracy: Of course, by not updating the weights of most of the network you are only optimizing in a subset of the parameter space. If your dataset is similar to any subset of the imagenet dataset, this should not matter a lot, but, if it is very different from imagenet, then freezing will mean a decrease in accuracy. If you have enough computation time, unfreezing everything will allow you to optimize in the whole parameter space, allowing you to find better optima. To wrap up, I think that the main point is to check if your images are comparable to the ones in imagenet. In this case, I would not unfreeze many layers. Otherwise, unfreeze everything but get ready to wait for a long training time.
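For reference, a rough sketch of the frozen-base variant discussed above, mirroring the code from the question; the choice of later unfreezing only the last four layers of the base for fine-tuning is an illustrative assumption of mine, not something the question or answer prescribes.

from keras import applications, optimizers
from keras.models import Model, Sequential
from keras.layers import Flatten, Dense

base_model = applications.VGG16(weights='imagenet', include_top=False,
                                input_shape=(224, 224, 3))

# freeze the whole convolutional base so only the new head is trained
for layer in base_model.layers:
    layer.trainable = False

add_model = Sequential()
add_model.add(Flatten(input_shape=base_model.output_shape[1:]))
add_model.add(Dense(256, activation='relu'))
add_model.add(Dense(1, activation='sigmoid'))

model = Model(inputs=base_model.input, outputs=add_model(base_model.output))
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])

# later, to fine-tune: unfreeze the last few layers of the base and recompile
# (recompiling is needed in Keras for the trainable change to take effect)
for layer in base_model.layers[-4:]:
    layer.trainable = True
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.SGD(lr=1e-5, momentum=0.9),
              metrics=['accuracy'])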
H: Role derivative of sigmoid function in neural networks I am trying to understand the role of the derivative of the sigmoid function in neural networks. First I plot the sigmoid function and its derivative at all points, computed from the definition using Python. What is the role of this derivative exactly? import numpy as np import matplotlib.pyplot as plt def sigmoid(x): return 1 / (1 + np.exp(-x)) def derivative(x, step): return (sigmoid(x+step) - sigmoid(x)) / step x = np.linspace(-10, 10, 1000) y1 = sigmoid(x) y2 = derivative(x, 0.0000000000001) plt.plot(x, y1, label='sigmoid') plt.plot(x, y2, label='derivative') plt.legend(loc='upper left') plt.show()
AI: The use of derivatives in neural networks is for the training process called backpropagation. This technique uses gradient descent in order to find an optimal set of model parameters that minimizes a loss function. In your example you must use the derivative of a sigmoid because that is the activation that your individual neurons are using.
The loss function
The essence of machine learning is to optimize a cost function such that we can either minimize or maximize some target function. This is typically called the loss or cost function. We typically want to minimize this function. The cost function, $C$, associates some penalty based on the resulting errors when passing data through your model, as a function of the model parameters. Let's look at the example where we try to label whether an image contains a cat or a dog. If we have a perfect model, we can give the model a picture and it will tell us if it is a cat or a dog. However, no model is perfect and it will make mistakes. When we train our model to be able to infer meaning from input data we want to minimize the amount of mistakes it makes. So we use a training set: this data contains a lot of pictures of dogs and cats, and we have the ground truth label associated with each image. Each time we run a training iteration of the model we calculate the cost (the amount of mistakes) of the model. We will want to minimize this cost. Many cost functions exist, each serving its own purpose. A common cost function is the quadratic cost, defined as $C = \frac{1}{N} \sum_{i=1}^{N}(\hat{y}_i - y_i)^2$. This is the squared difference between the predicted label and the ground truth label, averaged over the $N$ images that we trained on. We will want to minimize this in some way.
Minimizing a loss function
Indeed, most of machine learning is simply a family of frameworks which are capable of determining a distribution by minimizing some cost function. The question we can ask is "how can we minimize a function"? Let's minimize the following function $y = x^2-4x+6$. If we plot this we can see that there is a minimum at $x = 2$. To do this analytically we can take the derivative of this function and set it to zero: $\frac{dy}{dx} = 2x - 4 = 0 \Rightarrow x = 2$. However, oftentimes finding a global minimum analytically is not feasible. So instead we use some optimization techniques. Here as well many different methods exist, such as: Newton-Raphson, grid search, etc. Among these is gradient descent. This is the technique used by neural networks.
Gradient Descent
Let's use a famous analogy to understand this. Imagine a 2D minimization problem. This is equivalent to being on a mountainous hike in the wilderness. You want to get back down to the village, which you know is at the lowest point, even if you do not know in which direction it lies.
All you need to do is continuously take the steepest way down, and you will eventually get to the village. So we will descend down the surface based on the steepness of the slope. Let's take our function $y = x^2-4x+6$ and determine the $x$ for which $y$ is minimized. The gradient descent algorithm first says to pick a random value for $x$. Let us initialize at $x=8$. Then the algorithm will do the following iteratively until we reach convergence: $x^{new} = x^{old} - \nu \frac{dy}{dx}$ where $\nu$ is the learning rate, which we can set to whatever value we like. However, there is a smart way to choose it. Too big and we will never reach our minimum value, and too small and we will waste so much time before we get there. It is analogous to the size of the steps you want to take down the steep slope. Small steps and you will die on the mountain, you'll never get down. Too large of a step and you risk overshooting the village and ending up on the other side of the mountain. The derivative is the means by which we travel down this slope towards our minimum. $\frac{dy}{dx} = 2x - 4$ $\nu = 0.1$ Iterations: $x^{new} = 8 - 0.1(2 * 8 - 4) = 6.8 $ $x^{new} = 6.8 - 0.1(2 * 6.8 - 4) = 5.84 $ $x^{new} = 5.84 - 0.1(2 * 5.84 - 4) = 5.07 $ $x^{new} = 5.07 - 0.1(2 * 5.07 - 4) = 4.45 $ $x^{new} = 4.45 - 0.1(2 * 4.45 - 4) = 3.96 $ $x^{new} = 3.96 - 0.1(2 * 3.96 - 4) = 3.57 $ $x^{new} = 3.57 - 0.1(2 * 3.57 - 4) = 3.25 $ $x^{new} = 3.25 - 0.1(2 * 3.25 - 4) = 3.00 $ $x^{new} = 3.00 - 0.1(2 * 3.00 - 4) = 2.80 $ $x^{new} = 2.80 - 0.1(2 * 2.80 - 4) = 2.64 $ $x^{new} = 2.64 - 0.1(2 * 2.64 - 4) = 2.51 $ $x^{new} = 2.51 - 0.1(2 * 2.51 - 4) = 2.41 $ $x^{new} = 2.41 - 0.1(2 * 2.41 - 4) = 2.32 $ $x^{new} = 2.32 - 0.1(2 * 2.32 - 4) = 2.26 $ $x^{new} = 2.26 - 0.1(2 * 2.26 - 4) = 2.21 $ $x^{new} = 2.21 - 0.1(2 * 2.21 - 4) = 2.16 $ $x^{new} = 2.16 - 0.1(2 * 2.16 - 4) = 2.13 $ $x^{new} = 2.13 - 0.1(2 * 2.13 - 4) = 2.10 $ $x^{new} = 2.10 - 0.1(2 * 2.10 - 4) = 2.08 $ $x^{new} = 2.08 - 0.1(2 * 2.08 - 4) = 2.06 $ $x^{new} = 2.06 - 0.1(2 * 2.06 - 4) = 2.05 $ $x^{new} = 2.05 - 0.1(2 * 2.05 - 4) = 2.04 $ $x^{new} = 2.04 - 0.1(2 * 2.04 - 4) = 2.03 $ $x^{new} = 2.03 - 0.1(2 * 2.03 - 4) = 2.02 $ $x^{new} = 2.02 - 0.1(2 * 2.02 - 4) = 2.02 $ $x^{new} = 2.02 - 0.1(2 * 2.02 - 4) = 2.01 $ $x^{new} = 2.01 - 0.1(2 * 2.01 - 4) = 2.01 $ $x^{new} = 2.01 - 0.1(2 * 2.01 - 4) = 2.01 $ $x^{new} = 2.01 - 0.1(2 * 2.01 - 4) = 2.00 $ $x^{new} = 2.00 - 0.1(2 * 2.00 - 4) = 2.00 $ $x^{new} = 2.00 - 0.1(2 * 2.00 - 4) = 2.00 $ $x^{new} = 2.00 - 0.1(2 * 2.00 - 4) = 2.00 $ $x^{new} = 2.00 - 0.1(2 * 2.00 - 4) = 2.00 $ And we see that the algorithm converges at $x = 2$! We have found the minimum.
Applied to neural networks
The first neural networks only had a single neuron which took in some inputs $x$ and then produced an output $\hat{y}$. A common activation is the sigmoid function $\sigma(z) = \frac{1}{1+\exp(-z)}$, giving $\hat{y} = \sigma(w^Tx + b) = \frac{1}{1+\exp(-(w^Tx + b))}$ where $w$ is the associated weight for each input $x$ and we have a bias $b$. We then want to minimize our cost function $C = \frac{1}{2N} \sum_{i=1}^{N}(\hat{y}_i - y_i)^2$.
How to train the neural network?
We will use gradient descent to train the weights based on the output of the sigmoid function, using some cost function $C$ and training on batches of data of size $N$. $C = \frac{1}{2N} \sum_{i=1}^{N} (\hat{y}_i - y_i)^2$ Here $\hat{y}_i$ is the predicted class obtained from the sigmoid function and $y_i$ is the ground truth label.
We will use gradient descent to minimize the cost function with respect to the weights $w$. To make life easier we will split the derivative using the chain rule as follows: $\frac{\partial C}{\partial w} = \frac{\partial C}{\partial \hat{y}} \frac{\partial \hat{y}}{\partial w}$. Here $\frac{\partial C}{\partial \hat{y}} = \hat{y} - y$, and we have that $\hat{y} = \sigma(w^Tx + b)$, where the derivative of the sigmoid function is $\frac{\partial \sigma(z)}{\partial z} = \sigma(z)(1-\sigma(z))$; thus $\frac{\partial \hat{y}}{\partial w} = \sigma(w^Tx + b)\left(1 - \sigma(w^Tx + b)\right)x$. This sigmoid derivative is exactly the curve you plotted: it is the term that backpropagation multiplies the error by when computing the weight updates. So we can then update the weights through gradient descent as $w^{new} = w^{old} - \eta \frac{\partial C}{\partial w}$ where $\eta$ is the learning rate.
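To make the role of the derivative concrete, here is a small sketch of a single sigmoid neuron trained with gradient descent, where $\sigma(z)(1-\sigma(z))$ shows up directly in the weight update. The toy data, learning rate and epoch count are all made-up choices of mine, not part of the original answer.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy 1-D data: points above 1.5 belong to class 1 (illustrative only)
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])

w = np.zeros(1)
b = 0.0
eta = 1.0  # learning rate

for epoch in range(5000):
    z = X @ w + b
    y_hat = sigmoid(z)
    # dC/dw = (y_hat - y) * sigmoid'(z) * x, with sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))
    delta = (y_hat - y) * y_hat * (1.0 - y_hat)
    w -= eta * (X.T @ delta) / len(y)
    b -= eta * delta.mean()

print(np.round(sigmoid(X @ w + b), 2))  # probabilities drift towards [0, 0, 1, 1]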
H: What data treatment/transformation should be applied if there are a lot of outliers and features lack normal distribution? I am solving for a regression use case using tensorflow's DNNRegressor. For EDA purposes, I referred to this post and used pandas boxplot to plot my numerical predictors and target variable (here, pid demand) and scatter_matrix for plotting the distributions, and here are the results: predictor_target_boxplot; features_label_pdf_scatter_matrix. I need help in interpreting these two plots, specifically on these fronts: How come the boxplot shows so many points beyond the whiskers (~10%)? Can there be so many outliers in a dataset? How do I handle those outliers? Based on the second plot (feature, label pdf), should I normalize my features to exhibit a Gaussian distribution? If so, why?
AI: These comments aren't all mine; I asked on a Slack forum. The boxplot is shouting at you: skewness, and also high dispersion. You cannot ask much more of a boxplot than location, dispersion and skewness. Also check out the term heteroscedasticity (it completely suits your case). Try switching the transformation to a log scale or lower. Also, your EDA shouldn't depend only on boxplots, as prices are involved here; one of the remedies would be a Box-Cox transformation.
https://www.differencebetween.com/difference-between-dispersion-and-vs-skewness/
http://www.statsmakemecry.com/smmctheblog/confusing-stats-terms-explained-heteroscedasticity-heteroske.html
To see how to detect and rectify it, have a look at:
https://datascienceplus.com/how-to-detect-heteroscedasticity-and-rectify-it/
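A minimal sketch of the log and Box-Cox transformations mentioned above, applied to made-up skewed demand values; the data and parameters are illustrative, not taken from the question.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
demand = rng.lognormal(mean=2.0, sigma=1.0, size=1000)  # skewed, strictly positive

log_demand = np.log1p(demand)            # simple log transform
bc_demand, lam = stats.boxcox(demand)    # Box-Cox estimates lambda from the data
print(f"Box-Cox lambda: {lam:.2f}")
print(f"skew before: {stats.skew(demand):.2f}, after log: {stats.skew(log_demand):.2f}, "
      f"after Box-Cox: {stats.skew(bc_demand):.2f}")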
H: DBSCAN - Space complexity of O(n)? According to Wikipedia, "the distance matrix of size $\frac{(n^2-n)}{2}$ can be materialized to avoid distance recomputations, but this needs $O(n^2)$ memory, whereas a non-matrix based implementation of DBSCAN only needs $O(n)$ memory." $\frac{(n^2-n)}{2}$ is basically the triangular matrix. However, it says that a non-matrix based implementation only requires $O(n)$ memory. How does that work? Regardless of what data structure you use, don't you always have to have $\frac{(n^2-n)}{2}$ distance values? It would still be $O(n^2)$ space complexity, no? Is there something I'm missing here? I'm working with a huge dataset and I would really like to cut down on memory usage.
AI: You can run DBSCAN without storing the distances in a matrix. This has the drawback that each time you visit a point you have to recalculate all the relevant distances, which requires more time. However, the space complexity stays $O(n)$, since the only things you have in memory at any single time are the positions of the $n$ points, their various labels, the neighbors of the current point, and the neighbors of a particular neighbor when that neighbor turns out to be a core point.
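As a practical illustration (the data and parameters are made up), scikit-learn's DBSCAN avoids materializing the full distance matrix when you pick a tree-based neighbor index; memory then stays manageable as long as eps keeps the per-point neighborhoods small, since the implementation still stores each point's neighbor list.

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
X = rng.random((20_000, 3))  # only the n points themselves are held in memory

# 'ball_tree' (or 'kd_tree') queries neighbors per point instead of
# building an n x n pairwise distance matrix
db = DBSCAN(eps=0.05, min_samples=10, algorithm='ball_tree').fit(X)
n_clusters = len(set(db.labels_)) - (1 if -1 in db.labels_ else 0)
print(n_clusters, "clusters found")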