H: Weights and bias relative to preprocessed X I am currently using sklearn's preprocessing.scale on my X data (subtracting the mean and dividing by the standard deviation) before feeding it into a perceptron, so as to prevent the data from diverging to infinity or collapsing to 0. My perceptron returns the weights + bias after the network has been trained: X = preprocessing.scale(X) After processing the X and Y data through my perceptron I am returned the weights. From these weights I can calculate the line of best fit: ls = cp.linspace(cp.min(X), cp.max(X)) best_fit = w[1]+w[0]*ls where w[1] is the bias. This best_fit line is accurate, but it is relative to the preprocessed X rather than the original X, which is what I would like to plot. What is the technique to make these weights relative to the original X values, if that is possible? AI: If $$y = w_1 + w_0 \left(\frac{x-\mu_1}{\sigma_1} \right),$$ then expanding gives $$y = \left(w_1-\frac{w_0\,\mu_1}{\sigma_1}\right)+ \left(\frac{w_0}{\sigma_1}\right)x,$$ i.e. the slope in the original units is $w_0/\sigma_1$ and the intercept is $w_1 - w_0\mu_1/\sigma_1$.
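A minimal sketch of that back-transformation, assuming you switch from the one-shot preprocessing.scale to a fitted StandardScaler (so the mean and standard deviation are available as attributes) and that X is a 1-D array and w comes from the trained perceptron as in the question:

from sklearn.preprocessing import StandardScaler
import numpy as np

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X.reshape(-1, 1))   # same effect as preprocessing.scale

# ... train the perceptron on X_scaled, obtaining w[0] (slope) and w[1] (bias) ...

mu, sigma = scaler.mean_[0], scaler.scale_[0]
slope_orig = w[0] / sigma
intercept_orig = w[1] - w[0] * mu / sigma

ls = np.linspace(np.min(X), np.max(X))
best_fit = intercept_orig + slope_orig * ls          # now plotted against the original X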
H: How can a GRU perform as well as an LSTM? I think I understand both types of units in terms of just the math. What I don't understand is, how is it possible in practice for a GRU to perform as well as or better than an LSTM (which is what seems to be the norm)? I don't intuitively get how the GRU is able to make up for the missing cell content. The gates seem to be pretty much the same as an LSTM's gates, but with a missing part. Does it just mean that the cell in an LSTM is actually nearly useless? Edit: Other questions have asked about the differences between GRU and LSTM. None of them (in my opinion) explain well enough why a GRU works as well as an LSTM even without the memory unit, only that the lack of a memory unit is one of the differences that makes a GRU faster. AI: GRU and LSTM are two popular RNN variants out of many possible similar architectures motivated by similar theoretical ideas of having a "pass through" channel where gradients do not degrade as much, and a system of sigmoid-based control gates to manage signals passing between time steps. Even with LSTM, there are variations which may or may not get used, such as adding "peephole" connections between previous cell state and the gates. LSTM and GRU are the two architectures explored so far that do well across a wide range of problems, as verified by experiment. I suspect, but cannot show conclusively, that there is no strong theory that explains this rough equivalence. Instead we are left with more intuition-based theories or conjectures: GRU has less parameters per "cell", allowing it in theory to generalise better from less examples, at the cost of less flexibility. LSTM has a more sophisticated memory in the form of separating internal cell state from cell output, allowing it to output features useful for a task without needing to memorise those features. This comes at the cost of needing to learn extra gates which help map between state and features. When considering performance of these architectures in general, you have to allow that some problems will make use of these strengths better, or it may be a wash. For instance, in a problem where forwarding the layer output between time steps is already a good state representation and feature representation, then there is little need for the additional internal state of the LSTM. In effect the choice between LSTM and GRU is yet another hyperparameter to consider when searching for a good solution, and like most other hyperparameters, there is no strong theory to guide an a priori selection.
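To make the "fewer parameters per cell" point concrete, here is a small Keras sketch; the layer sizes are arbitrary, and exact counts vary slightly across Keras versions and bias configurations, but the roughly 4-gate-blocks-vs-3-gate-blocks ratio holds:

from keras.models import Sequential
from keras.layers import LSTM, GRU

# same input dimension (32) and same number of units (64) for both cells
lstm = Sequential([LSTM(64, input_shape=(None, 32))])
gru = Sequential([GRU(64, input_shape=(None, 32))])

# LSTM: 4 gate blocks -> roughly 4 * (32*64 + 64*64 + 64) weights
# GRU:  3 gate blocks -> roughly 3 * (32*64 + 64*64 + 64) weights
print(lstm.count_params(), gru.count_params())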
H: How to move for loop away and go pure Pandas? I'm working with huge data sheet, and start learning Pandas, but i hit this challenge i have a loop and trying to move everything from my loop into Pandas but i not all i can find a way around. panda_dataframe = pd.read_sql(sql=sql, con=mysql_cnx, index_col='UUID') logging.debug('__setupProducts() - after mysql query : run time {time}'.format(time=datetime.datetime.now() - start_time)) logging.debug('__setupProducts() - Product found to handle: {count}'.format(count=len(panda_dataframe))) panda_dataframe['Price'] = panda_dataframe['Price'].apply(lambda x:float(x/100)) panda_dataframe['PriceNext'] = panda_dataframe['PriceNext'].apply(lambda x:float(x/100)) panda_dataframe['CostPrice'] = panda_dataframe['CostPrice'].apply(lambda x:float(x/100)) panda_dataframe['CostPriceReal'] = panda_dataframe['CostPriceReal'].apply(lambda x:float(x/100)) panda_dataframe['InStoreStock'] = panda_dataframe['InStoreStock'].apply(lambda x:int(x)) logging.debug('__setupProducts() - restructer dataframe to right types : run time {time}'.format(time=datetime.datetime.now() - start_time)) for product_uuid, product in panda_dataframe.iterrows(): logging.info('Product: {title} loading and prepare...'.format(title=product['Title'])) try: product_data = store_remote_stock_dataframe.get_group(product_uuid) product_data_onstock = product_data.loc[product_data['Stock'] > 0, ['Stock', 'CostPriceReal', 'CostPrice', 'Expected', 'DistributorUUID', 'Country']] product_data_outstock = product_data.loc[product_data['Stock'] <= 0, ['Stock', 'CostPriceReal', 'CostPrice', 'Expected', 'DistributorUUID', 'Country']] product_data = None if len(product_data_onstock) > 0: stock_cost_price = product_data_onstock.sort_values(by=['CostPrice'], ascending=True).iloc[0,:] stock_cost_real_price = product_data_onstock.sort_values(by=['CostPriceReal'], ascending=True).iloc[0,:] elif len(product_data_outstock) > 0: stock_cost_price = product_data_outstock.sort_values(by=['CostPrice'], ascending=True).iloc[0,:] stock_cost_real_price = product_data_outstock.sort_values(by=['CostPriceReal'], ascending=True).iloc[0,:] else: stock_cost_price = None stock_cost_real_price = None stock_cost_price = stock_cost_price.drop(['CostPriceReal','CostPrice']) if stock_cost_price is not None else None stock_cost_real_price = stock_cost_real_price.drop(['CostPriceReal','CostPrice']) if stock_cost_real_price is not None else None except: stock_cost_price = None stock_cost_real_price = None products.append({ 'uuid' : product_uuid, 'title' : product['Title'], 'price' : product['Price'], 'price-next' : product['PriceNext'], 'price-cost' : product['CostPrice'], 'price-cost-real' : product['CostPriceReal'], 'overwrites' : product['Overwrites'], 'distributor-stock' : { 'cost-price' : { 'distributor' : stock_cost_price['DistributorUUID'] if stock_cost_price is not None else None, 'stock' : stock_cost_price['Stock'] if stock_cost_price is not None else 0, 'expected' : stock_cost_price['Expected'] if stock_cost_price is not None else -1, 'country' : stock_cost_price['Country'] if stock_cost_price is not None else None, }, 'cost-price-real' : { 'distributor' : stock_cost_real_price['DistributorUUID'] if stock_cost_real_price is not None else None, 'stock' : stock_cost_real_price['Stock'] if stock_cost_real_price is not None else 0, 'expected' : stock_cost_real_price['Expected'] if stock_cost_real_price is not None else -1, 'country' : stock_cost_real_price['Country'] if stock_cost_real_price is not None else None } }, 'stock' : 
{ 'store' : int(product['InStoreStock']) if product['InStoreStock'] is not None else 0, }, 'manufacturer' : manufacturers[product['ManufacturerUUID']]['_id'] if product['ManufacturerUUID'] in manufacturers else None, 'category' : categorys[product['CategoryUUID']]['_id'] if product['CategoryUUID'] in categorys else None, }) What I wish is to move my "try" code so it runs before the for loop, because then I could remove the for loop and continue without it entirely. I hope somebody can help me get better at Pandas so I can use its full power :) AI: I always avoid for loops when operating on pandas rows - iterating row by row is slow and inefficient. If possible, write small functions (def func(x): ...) and then apply them to the relevant column of the dataframe: df['col1'].apply(func). This pushes the per-element work into pandas instead of an explicit Python loop over the rows, which is usually much faster and cleaner.
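A minimal sketch of that advice, using made-up values and a couple of the column names from the question; note that for plain arithmetic a vectorised column expression avoids even the per-element apply call:

import pandas as pd

df = pd.DataFrame({'Price': [199, 2500], 'InStoreStock': [3.0, None]})

# option 1: wrap the per-value logic in a function and apply it to the column
def to_currency(x):
    return float(x) / 100

df['Price'] = df['Price'].apply(to_currency)

# option 2: for simple arithmetic/casts, operate on the whole column at once
df['InStoreStock'] = df['InStoreStock'].fillna(0).astype(int)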
H: Pandas apply return: Must have equal len keys and value when setting with an iterable I have my code where i want to apply a function on and overwirte the inputs based on the return data, here is my apply section panda_dataframe[[ 'distributor-stock-cost-price-real-distributor', 'distributor-stock-cost-price-stock', 'distributor-stock-cost-price-expected', 'distributor-stock-cost-price-country', 'distributor-stock-cost-price-real-distributor', 'distributor-stock-cost-price-real-stock', 'distributor-stock-cost-price-real-expected', 'distributor-stock-cost-price-real-country' ]] = panda_dataframe[['uuid']].apply(lambda x:__getDistributorStockData(x)) And then i got my function i apply to my dataframe. def __getDistributorStockData(x): in_product_uuid = x out_cost_price_dist = None out_cost_price_stock = None out_cost_price_expected = None out_cost_price_country = None out_cost_price_real_dist = None out_cost_price_real_stock = None out_cost_price_real_expected = None out_cost_price_real_country = None try: product_data = store_remote_stock_dataframe.get_group(in_product_uuid) product_data_onstock = product_data.loc[product_data['Stock'] > 0, ['Stock', 'CostPriceReal', 'CostPrice', 'Expected', 'DistributorUUID', 'Country']] product_data_outstock = product_data.loc[product_data['Stock'] <= 0, ['Stock', 'CostPriceReal', 'CostPrice', 'Expected', 'DistributorUUID', 'Country']] if len(product_data_onstock) > 0: stock_cost_price = product_data_onstock.sort_values(by=['CostPrice'], ascending=True).iloc[0,:] stock_cost_real_price = product_data_onstock.sort_values(by=['CostPriceReal'], ascending=True).iloc[0,:] out_cost_price_dist = stock_cost_price['DistributorUUID'] out_cost_price_stock = stock_cost_price['Stock'] out_cost_price_expected = stock_cost_price['Expected'] out_cost_price_country = stock_cost_price['Country'] out_cost_price_real_dist = stock_cost_real_price['DistributorUUID'] out_cost_price_real_stock = stock_cost_real_price['Stock'] out_cost_price_real_expected = stock_cost_real_price['Expected'] out_cost_price_real_country = stock_cost_real_price['Country'] elif len(product_data_outstock) > 0: stock_cost_price = product_data_outstock.sort_values(by=['CostPrice'], ascending=True).iloc[0,:] stock_cost_real_price = product_data_outstock.sort_values(by=['CostPriceReal'], ascending=True).iloc[0,:] out_cost_price_dist = stock_cost_price['DistributorUUID'] out_cost_price_stock = stock_cost_price['Stock'] out_cost_price_expected = stock_cost_price['Expected'] out_cost_price_country = stock_cost_price['Country'] out_cost_price_real_dist = stock_cost_real_price['DistributorUUID'] out_cost_price_real_stock = stock_cost_real_price['Stock'] out_cost_price_real_expected = stock_cost_real_price['Expected'] out_cost_price_real_country = stock_cost_real_price['Country'] else: stock_cost_price = None stock_cost_real_price = None except: stock_cost_price = None stock_cost_real_price = None return [ out_cost_price_dist, out_cost_price_stock, out_cost_price_expected, out_cost_price_country, out_cost_price_real_dist, out_cost_price_real_stock, out_cost_price_real_expected, out_cost_price_real_country ] My big issue is everything run fin, i have try it out on a single row, and its work fine, but its return inside a array, and now i want to apply it on every single row but i got this error. 
Traceback (most recent call last): File "test.py", line 12, in <module> pricerule_calculate_run = PriceruleCalculate.run(method='business') File ".....", line 940, in run __setupProducts(productUUID) File ".....", line 507, in __setupProducts ]] = panda_dataframe[['uuid']].apply(lambda x:__getDistributorStockData(x), axis=1) File "/usr/local/lib/python3.6/site-packages/pandas/core/frame.py", line 2514, in __setitem__ self._setitem_array(key, value) File "/usr/local/lib/python3.6/site-packages/pandas/core/frame.py", line 2544, in _setitem_array self.loc._setitem_with_indexer((slice(None), indexer), value) File "/usr/local/lib/python3.6/site-packages/pandas/core/indexing.py", line 599, in _setitem_with_indexer raise ValueError('Must have equal len keys and value ' ValueError: Must have equal len keys and value when setting with an iterable Can anyone help me find my mistake? I'm still very new to Pandas and I'm in the middle of optimizing my data handling. AI: I found the issue: I need to return a pd.Series() instead of a plain list: return pd.Series([ out_cost_price_dist, out_cost_price_stock, out_cost_price_expected, out_cost_price_country, out_cost_price_real_dist, out_cost_price_real_stock, out_cost_price_real_expected, out_cost_price_real_country ])
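For context, a minimal self-contained sketch of the same pattern (the column names here are made up): returning a plain list from the applied function triggers the "Must have equal len keys and value" error on assignment, while returning a pd.Series gives pandas one value to align with each target column:

import pandas as pd

df = pd.DataFrame({'uuid': ['a', 'b']})

def expand(row):
    # a plain `return [1, 2]` here reproduces the ValueError;
    # a Series is aligned element by element with the target columns
    return pd.Series([1, 2])

df[['col1', 'col2']] = df[['uuid']].apply(expand, axis=1)
print(df)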
H: Formal definition for parameter setting in data mining context While reading this material on decision trees, I came across the following statement: The construction of decision tree classifiers does not require any domain knowledge or parameter setting, and therefore is appropriate for exploratory knowledge discovery. I have a couple of doubts related to this statement. The first is the definition of parameter setting in this context, and the second is why being free of parameter setting makes the method apt for exploratory knowledge discovery. AI: The first part, about parameter setting, refers to the fact that you don't need to specify a parametric form for the model up front, which you do have to do in, say, linear regression, which takes the form $\hat{y} = \theta_0 + \theta_1 x$, where $\theta_0, \theta_1$ are the parameters. Decision trees instead carve the n-dimensional feature space into regions for future classification (or regression) based on splitting criteria learned from the data. Because no parametric form or domain-specific settings have to be chosen beforehand, the method can be applied directly to unfamiliar data, which is what makes it convenient for exploratory knowledge discovery. If you are familiar with statistical inference, a good analogy would be good old parametric vs non-parametric tests.
H: feature names in LogisticRegression() I want to know the feature names that a LogisticRegression() model has used, along with their corresponding weights, in scikit-learn. I can access the weights using coef_, but I do not know how to pair them with their corresponding feature names. AI: I made a small scenario: from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.linear_model import LogisticRegression from sklearn.metrics import accuracy_score max_features = 100 tfidf = TfidfVectorizer(max_features=max_features) # simple toy corpus texts_train = ['positive sample', 'again positive', 'negative sample', 'again negative'] target_train = [1,1,0,0] texts_test = ['negative', 'positive'] target_test = [0,1] texts_train1 = tfidf.fit_transform(texts_train) texts_test1 = tfidf.transform(texts_test) classifier = LogisticRegression() classifier.fit(texts_train1, target_train) predictions = classifier.predict(texts_test1) print('accuracy (simple):', accuracy_score(target_test, predictions)) tfidf.get_feature_names() ['again', 'negative', 'positive', 'sample'] classifier.coef_ array([[ 0. , -0.56718183, 0.56718183, 0. ]]) The weights line up positionally with the feature names, which makes sense!
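To make the pairing explicit, you can zip the vectorizer's feature names with the coefficient row, shown here against the answer's own example (for a multiclass model there is one coefficient row per class in coef_):

for name, weight in zip(tfidf.get_feature_names(), classifier.coef_[0]):
    print(name, weight)
# again     0.0
# negative -0.567...
# positive  0.567...
# sample    0.0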
H: What makes you confident in your results? At what point do you think you can present your work to tech-illiterate superiors? I understand that the models are only as good as the data you get, and bad design can generate really bad data. Nonrandom sampling, unbalanced/incomplete designs, and confounding can make data analysis really hard. At what point should one be confident that they ran a useful model? Do you just do a cross-validation with a training/test dataset and call it a day? Obviously "all models are wrong, some are useful", but at some point the tradeoffs of excluding too many parameters by LASSOing, or of strange transformations made just to get BIC down, become glaring. tl;dr: at the end of the day, what makes you go "I did the right thing for my company/project, and this should work"? AI: Hey, welcome to the site! What you are saying is right: data science hasn't reached the stage where there are standard procedures for this (and it's unclear whether it will in the near future). But we do have some general go-to methods, such as: Forecasting: ETS, ARIMA, SARIMA, etc. Prediction: linear regression, random forest, GLM, neural networks, etc. Classification: logistic regression, random forest, etc. At a more granular level it is hard to generalize, as every business problem is different and no single method solves all of them. To answer your next question, how do you gain confidence that the outcome is good enough: I assume you have heard of RMSE, MAPE and similar metrics for prediction, and of the confusion matrix for classification. We use these metrics to assess a model's performance, and choosing the right one matters. For example, if you are classifying whether a given cell is cancerous and you have 100 records of which 90 are non-cancer cells and 10 are cancer cells, a model can report high accuracy while correctly identifying only about half of the cancer cells; in such scenarios you cannot rely on accuracy and need to use the F1 score or similar. As for whether all models are useful: true, not every model you build goes to production. You choose the best one, productionize it, and re-train it on a schedule (daily, weekly, monthly, depending on business requirements). Would I call it a day once validation is complete? I wouldn't. I would go to the subject matter expert, present the results and ask for their insights; if both are in line, I would do beta testing on some actual data and then productionize it. Now to address your last question: there is no universal standard saying a model is good or bad; if it works for you and your business, then it is a good model. To convince your managers and subject matter (data) experts, you need to dig deep into the data, try all the different scenarios and ask as many questions as possible. Try to understand the data very well, so you can answer business questions with data-supported answers (which is only possible when you are well versed with the data). Since they know the business very well, their questions will be framed in business terms, so you need to be ready for such scenarios by understanding both the business and the data. Finally, I know the feeling you describe: I tried a lot of things and nothing worked. But you shouldn't be unhappy, because you now understand which approaches lead to unsuccessful results (the classic anecdote being Edison trying a thousand different filament materials before finding one that worked for the light bulb). Similarly, all the methods you have tried are steps towards the solution. My rule of thumb is: did I try something different or new every day? A crucial part of this process is maintaining clear documentation at each and every step, which will come in handy later. Nothing in R&D is a waste; it is just another experiment, and you are building a solid base for the bright future of your company.
H: Should I remove features that occur very rarely to build a model? I am trying ML techniques in language processing. I have 3000 short texts, and I extract features (words and phrases) from all of them and build a vocabulary. I end up with 6000 features, and most of them occur only once or twice. So, for example, from the texts: 0: One text here 1: Another text there I get One Another Text Here There Target 0 1 0 1 1 0 True 1 0 1 1 0 1 False So if the word "one" occurred only once, I still get it as a column, and it is 0 for all the other 2999 texts. Should I drop these columns? Or should I use a different technique? This number of columns causes me problems, because it takes a lot of time to build a classifier. AI: Short answer: yes, if they occur so rarely, they can only lead to overfitting, so it's better to ignore them as features. Longer answer: usually one maps all those single-occurrence tokens onto one shared "rare/unknown" feature, and that's the way I suggest you proceed. So if you have two features that each appear only once, you can merge them into one feature that appears twice. But in any case, 3000 short texts are too few to train an NLP model that works well. To mitigate the lack of data, you can use pre-trained word embeddings like the ones available here. Doing so, you can also keep most of the single-occurrence words/features, because in those word embeddings they already have a defined semantic meaning.
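If you build the bag-of-words with scikit-learn (an assumption; the question doesn't say which tool is used), the rare columns can be dropped at vectorization time with the min_df argument:

from sklearn.feature_extraction.text import CountVectorizer

texts = ["One text here", "Another text there"]   # stand-in for the 3000 documents

# min_df=2 keeps only tokens that appear in at least 2 documents,
# so words occurring once never become columns in the first place
vec = CountVectorizer(min_df=2)
X = vec.fit_transform(texts)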
H: Counting the occurrence of each string in a pandas dataframe column I'm working with a data set of movies which has various info on them. One of the columns contains the various genres a movie may belong to, like so: What I would like to do is count how often each genre occurs in that column; for the above example a corresponding series would look like this (I created the series myself): How can I extract this information from the original dataframe using pandas? AI: Something equivalent to this should work (the function pandas.Series.str.count in this link can make it even easier). You can split on the delimiter: # using your sample df[['class1', 'class2', 'class3', ...]] = df['genre'].str.split('|', expand=True) Alternatively (but iterating over the dataframe): l = list(df['genre'].values) l = "|".join(l) genres = l.split('|') If you then need to remove the genre column, add a drop: df = df.drop('genre', axis=1) And then you can use value_counts(). But the first approach assumes every movie has the same number of genres, so apply a check first and proceed accordingly. Or you can also try using Counter from the collections module (after splitting the strings, just update each key's count). Example Counter dict: >>> from collections import Counter >>> Counter(['apple','red','apple','red','red','pear']) Counter({'red': 3, 'apple': 2, 'pear': 1})
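Putting the Counter suggestion together with the split, a minimal sketch (assuming the genre column holds '|'-separated strings, as in the question; the example values are made up):

from collections import Counter
import pandas as pd

df = pd.DataFrame({'genre': ['Action|Adventure', 'Action|Comedy', 'Comedy']})

counts = Counter()
for genres in df['genre'].dropna():
    counts.update(genres.split('|'))

print(pd.Series(counts))   # Action 2, Adventure 1, Comedy 2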
H: How can I make a prediction in a regression model if a category has not been observed already? I'm researching a regression model to predict a target value that has four features, all of which are categorical. The categories are not fixed, e.g. one is a customer identifier. How could my model handle making a prediction for a customer identifier it hasn't seen before, based on the remaining features it has been trained with? I have considered having a model for each of the features that could predict which category label is most similar based on the other three remaining features (or multiple similar category labels could be used, taking an average of the target values for those). My only concern with this method is that it's not very scalable, since I'm going to want to extend the model with more and more categorical features. Is there some technique that could create an 'unknown' label for each of the features so the model can handle this case, or would the prediction likely be completely inaccurate? AI: Technically, you can't. That is one of the limitations of regression models; they are really only effective for values/ranges that they have seen before. Your use of categorical values makes it even more complex. But even with continuous variables, it is not recommended to use regression models for these "unknown" values.
H: How do NLP tokenizers handle hashtags? I know that tokenizers turn words into numerics, but what about hashtags? Are tokenizers designed to handle hashtags, or should I be filtering the "#" out prior to tokenizing? What about the "@" symbol? AI: The answer depends on what you want to do with the hashtags/words and also on what tokenizer you are using. Consider this example tweet: Hi, we need you! #Hi #Weneedyou If you use the Treebank or WordPunct tokenizers, the output will be: ['Hi', 'we', 'need', 'you', '!', '#', 'Hi', '#', 'Weneedyou'] However, if you use the Whitespace tokenizer, the result is: ['Hi', 'we', 'need', 'you!', '#Hi', '#Weneedyou'] Similar discrepancies can be found in terms such as can't or pre-order, for instance. Additionally, you need to consider what counts as a token for your task at hand. In my example, Hi makes sense either with or without #. However, Weneedyou without the hash is just a poorly written word. Maybe you want to keep the hashtags in your corpus or maybe not, but this depends on what you want to do with them. So first you need to know how your tokenizer handles these cases, and then decide whether to remove the # beforehand or afterwards, i.e. whether to keep hashtags in your corpus or only (sometimes weird) words. The case of @ is exactly the same: you can keep it, remove it, or maybe delete the whole @user instance if you don't want to keep user names in your corpus. As I said, it all depends on your task. PS: In case you want to play around with different tokenizers, try this.
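If you want to check this yourself, a quick sketch with two of NLTK's tokenizers (just the two classes named in the answer):

from nltk.tokenize import WordPunctTokenizer, WhitespaceTokenizer

tweet = "Hi, we need you! #Hi #Weneedyou"
print(WordPunctTokenizer().tokenize(tweet))   # splits '#' off: ..., '#', 'Hi', '#', 'Weneedyou'
print(WhitespaceTokenizer().tokenize(tweet))  # keeps '#Hi' and '#Weneedyou' intact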
H: What is the minimum/suggested sequence length for training an LSTM? My dataset consists of short videos of 4/5 time-steps (frames) each, and the problem is classifying these videos (multi-label classification). The idea is to use an LSTM, but I'm wondering whether the sequence length is long enough. What is the suggested sequence length? Could 4/5 time steps be enough? P.S. Could you please post links to scientific articles supporting your claims? AI: LSTMs tend to run into problems beyond roughly 500 time steps. From the original paper by Hochreiter and Schmidhuber: "LSTM can learn to bridge minimal time lags in excess of 1000 discrete time steps." Example: say your video is 60 seconds long. I would start by encoding 5 frames per second using a CNN (use a ResNet; SENets reportedly perform better but I haven't read the paper), taking the layer just before the final affine layer that feeds into the softmax, to encode your images. Use these encodings as inputs to a GRU (hence 300 time steps for this GRU) and use its final state to make the class prediction (use a separate loss for each label?)
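A rough Keras sketch of the pipeline the answer describes, adapted to the asker's 5-frame clips; ResNet50 stands in for the suggested encoder, and n_labels, the frame size and the GRU width are placeholders, not values from the question:

from keras.models import Sequential
from keras.layers import TimeDistributed, GRU, Dense
from keras.applications import ResNet50

n_labels = 10                               # placeholder: number of labels
cnn = ResNet50(weights='imagenet', include_top=False, pooling='avg')
cnn.trainable = False                       # use the CNN purely as a frame encoder

model = Sequential([
    TimeDistributed(cnn, input_shape=(5, 224, 224, 3)),  # 5 frames per clip
    GRU(128),                                            # summarise the short sequence
    Dense(n_labels, activation='sigmoid'),               # sigmoid + binary loss for multi-label
])
model.compile(loss='binary_crossentropy', optimizer='adam')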
H: Ideal gas data needed I want to do a quick computation in R that involves estimating the ideal gas constant from experimental data. Ideally I'd have data from a monatomic gas at different pressures, volumes, temperatures and moles (or just pressure and volume, keeping the others fixed). Anyone have an idea of where I might find a data set like this? An R data frame would be nice, but I'll take excel, tab delimited, or whatever. AI: The National Institute of Standards and Technology (NIST) has ideal gas datasets here
H: Use a dataframe of word vectors as input feature for SVM I have a dataframe with a bunch of columns (words). df arg1 predicate 0 PERSON be 1 it Pick 2 details Edit 3 title Display 4 title Display I used a pretrained word2vec model to create a new df with all words replaced by vectors (1-D numpy arrays). get updated_df updated_df = df.applymap(lambda x: self.filterWords(x)) def filterWords(self, x): model = gensim.models.KeyedVectors.load_word2vec_format('./model/GoogleNews-vectors-negative300.bin', binary=True) if x in model.vocab: return model[x] else: return model['xxxxx'] updated_df print: arg1 \ 0 [0.16992188, -0.48632812, 0.080566406, 0.33593... 1 [0.084472656, -0.0003528595, 0.053222656, 0.09... 2 [0.06347656, -0.067871094, 0.07714844, -0.2197... 3 [0.06640625, -0.032714844, -0.060791016, -0.19... 4 [0.06640625, -0.032714844, -0.060791016, -0.19... predicate 0 [-0.22851562, -0.088378906, 0.12792969, 0.1503... 1 [0.018676758, 0.28515625, 0.08886719, 0.213867... 2 [-0.032714844, 0.18066406, -0.140625, 0.115722... 3 [0.265625, -0.036865234, -0.17285156, -0.07128... 4 [0.265625, -0.036865234, -0.17285156, -0.07128... I need to train a SVM(sklearn Linear SVC) with this data. When I pass the updated_df as X_Train, I get clf.fit(updated_df, out_df.values.ravel()) array = np.array(array, dtype=dtype, order=order, copy=copy) ValueError: setting an array element with a sequence What is the right way of passing this as the input data to the classifier? My y_train is fine. If I get a hash of the words to create the updated_df like below, it works fine. updated_df = df.applymap(lambda x: hash(x)) But I need to pass the word2vec vectors to establish a relationship between the words. I am new to python/ML and appreciate the guidance. Editing with the current status based on Theudbald's suggestion: class ConcatVectorizer(object): def __init__(self, word2vec): self.word2vec = word2vec # if a text is empty we should return a vector of zeros # with the same dimensionality as all the other vectors self.dim = len(word2vec.itervalues().next()) print "self.dim = ", self.dim def fit(self, X, y): print "entering concat embedding fit" print "fit X.shape = ", X.shape return self def transform(self, X): print "entering concat embedding transform" print "transform X.shape = ", X.shape dictionary = {':': 'None', '?': 'None', '': 'None', ' ': 'None'} X = X.replace(to_replace=[':','?','',' '], value=['None','None','None','None']) X = X.fillna('None') print "X = ", X X_array = X.values print "X_array = ", X_array vectorized_array = np.array([ np.concatenate([self.word2vec[w] for w in words if w in self.word2vec] or [np.zeros(self.dim)], axis=0) for words in X_array ]) print "vectorized array", vectorized_array print "vectorized array.shape", vectorized_array.shape return vectorized_array model = gensim.models.KeyedVectors.load_word2vec_format('./model/GoogleNews-vectors-negative300.bin', binary=True) w2v = {w: vec for w, vec in zip(model.wv.index2word, model.wv.syn0)} etree_w2v_concat = Pipeline([ ("word2vec vectorizer", ConcatVectorizer(w2v)), ("extra trees", ExtraTreesClassifier(n_estimators=200))]) rf.testWordEmbClassifier(etree_w2v_concat) def testWordEmbClassifier(self, pipe_obj): kb_fname = 'kb_data_3.csv' test_fname = 'kb_test_data_3.csv' kb_data = pd.read_csv(path + kb_fname, usecols=['arg1', 'feature_word_0', 'feature_word_1', 'feature_word_2', 'predicate']) kb_data_small = kb_data.iloc[0:5] kb_data_out = pd.read_csv(path + kb_fname, usecols=['output']) kb_data_out_small = kb_data_out.iloc[0:5] print 
kb_data_small pipe_obj.fit(kb_data_small, kb_data_out_small.values.ravel()) print pipe_obj.predict(kb_data_small) self.wordemb_predictResult(pipe_obj, test_fname, report=True) AI: In my opinion, scikit-learn raises an error because updated_df is composed of 2 features (columns) whose cells hold lists/arrays. Therefore, a given observation x_i looks like: x_i = [arg1_i, predicate_i] = [[vector_arg1_i], [vector_predicate_i]]. Scikit-learn can't handle this format of input features. There are multiple ways to train a supervised machine learning model after Word2Vec text processing. A common one is to sum or to average the columns arg1 and predicate in order to obtain the following structure for each observation x_i: x_i = [(arg1_i + predicate_i) / 2] = [(vector_arg_i + vector_predicate_i) / 2] More explanations, and a gentle comparison between Word2Vec and CountVectorizer feature engineering approaches for text classification: http://nadbordrozd.github.io/blog/2016/05/20/text-classification-with-word2vec/
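A minimal sketch of that averaging idea against the question's setup; df (with 'arg1' and 'predicate' word columns), the loaded gensim model and y_train are assumed to exist as in the question, and concatenating instead of averaging is the other common choice:

import numpy as np
from sklearn.svm import LinearSVC

def embed(word, model, dim=300):
    # fall back to a zero vector for out-of-vocabulary words
    return model[word] if word in model.vocab else np.zeros(dim)

# one fixed-length (300-d) vector per row: the average of the two word vectors,
# instead of a cell containing a whole array
X_train = np.vstack([
    (embed(row['arg1'], model) + embed(row['predicate'], model)) / 2.0
    for _, row in df.iterrows()
])

clf = LinearSVC()
clf.fit(X_train, y_train)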
H: binary decision tree hyperplane parallel to axis? I'm reading the random forest paper. Binary decision trees use a single feature at each nonterminal node. A test point is assigned to the left or right branch by its value of that feature. Geometrically this corresponds to assigning the point to one side of the hyperplane that is parallel to one axis of the feature space. Can someone explain why the hyperplane is parallel to the axis? Say the feature is represented by the $x$ axis, and that the decision is $x < 5$ vs $x > 5$. To me it seems like the hyperplane should be perpendicular to the $x$ axis, so that $x = 4$ is on one side and $x = 6$ is on the other. What's wrong with my understanding? Thanks. AI: Your understanding is fine. The hyperplane is $x=5$. In $3$ dimensions, it is a plane that is parallel to the $y$-$z$ plane, which can be written as $x=0$. It is parallel to the $y$-axis and the $z$-axis, which is all the quoted sentence means. You are right that it is perpendicular to the $x$-axis: the $x$-axis is the normal direction for the plane $x=5$.
H: Setting "missing" distance values to zero when training a neural network Not sure if missing values is the right name to use here. I want to train a DNN on data given by a sensor. The sensor gives the (x,y) coordinates of the founded objects. The sensor can keep track of up to 32 objects at once. If the sensor can't find 32 objects (which is always the case) it sets the x and y coordinates of the objects not found to zero. Could there be a problem training a neural network on this kind of data? The sensor is set on a car, the objects are other cars and the networks job is to predict the next move. Another problem is that when the sensor finds a new object, already existing/found object might change id. Any tips on that? I am thinking about making randomly permute the indices of the object since that should not make any difference? Are there any standard solutions to these kind of problems? Especially Setting the distance of non existing objects to zero. AI: The short answer is "it depends" Is zero better than one or infinity? It depends on the range of x-y coordinates that you use. It also depends on your output. If you go from (x,y) to something like z = x^2, then you're placing your "null" at a local minimum. So, if it's a car, I'm assuming you're using a forward-facing camera. Do you really want your algorithm to see a bunch of "cars" bunched up at the origin? By the way, where is your origin? Upper left (like most image indexing)? Lower left (like a cartesian plane)? Center (if so, what direction is positive x and y?) As for re indexing, what you can do is look at the previous frame (or frames) where your 32 objects are indexed, along with their position and velocity. Then, look at the current frame and compare each object's position to the positions in the previous frame. Assign the objects in the current frame the same id as the object nearest it in the previous frame. You can break ties with velocity. following up on your comments: a neural network can learn any arbitrary function to any arbitrary precision. Eventually. If you want certain behavior at (0,0) that is dramatically different from behavior at (0+ε,0+ε), then your network will take a long time to converge. I suggest seeing what kind of results you get by converting nulls to zero and then comparing it to other techniques. Maybe replace null with the average of all the other cars, or set it immediately behind your ego vehicle (since cars primarily move forward, your network will probably put less importance on cars behind the ego car).
H: Text classification with neural network number of input neurons I am classifying documents; I have around 4000 of them that I am trying to categorise into 5 categories. I am using a bag-of-words model which equates to about 18,000 unique words (features), and therefore I have a neural network input layer with 18,000 inputs, which doesn't seem right. It is taking a huge amount of memory to try and train this network, and so much time that it will never converge! Is there a way of reducing the number of input neurons, seeing as a large portion of this data will be nulls? AI: Yes. What people usually do is map the unique tokens into a space of fixed dimensionality, obtaining what are called "word embeddings". Using pre-trained word embeddings such as GloVe is usually best practice: those vectors are trained on huge datasets like Wikipedia or Common Crawl. A nice property of these vectors is that, thanks to the way they are built, they also capture relations between words, a kind of semantics. This is definitely the way I suggest you start.
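A minimal sketch of that idea: it assumes you have downloaded a GloVe file (the file name below is one of the standard releases, an assumption) and that documents is your list of ~4000 texts; averaging the word vectors per document turns 18,000 sparse inputs into 100 dense ones:

import numpy as np

embeddings = {}
with open('glove.6B.100d.txt', encoding='utf8') as f:   # pre-trained 100-d GloVe vectors
    for line in f:
        parts = line.split()
        embeddings[parts[0]] = np.asarray(parts[1:], dtype='float32')

def doc_vector(text, dim=100):
    vecs = [embeddings[t] for t in text.lower().split() if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

X = np.vstack([doc_vector(doc) for doc in documents])   # shape (4000, 100)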
H: What is the meaning of semi-handcrafted features? I know handcrafted features are features created by a human-designed procedure. But what are semi-handcrafted features? AI: It means that part of the features are designed by a human first, and then a machine learning algorithm is run on them to extract new features. For example, a human specifies some features describing a set of pictures, and then a machine learning algorithm derives new features as non-linear combinations of those handcrafted ones.
H: Detecting Offensive Text Content in English and German I am working on an NLP project about classifying offensive text data in social media. By offensive I especially mean threats that one person makes to another. Some examples: "Stop doing this or else you will pay for it." "Just wait until you see what's coming" "I will break your legs next time I see you." As an initial approach, I considered semantic and syntactic keyword matching. However, doing this for this particular problem seemed harder, because threatening is an action and it is expressed in so many different ways. My main goal is classifying text as offensive or non-offensive using Machine Learning and Deep Learning algorithms. After weeks of online searching, I could not find a ready-to-use dataset. I considered manually labelling the data; however, I don't know where I should start. What would be the best approach for making progress in this task? I also plan to do this in both English and German. Also, below is a related article for a full understanding of the problem: Deep learning for detecting inappropriate content in text AI: The Toxic Comment Classification Challenge might be a good place to start. It contains a set of comments and 6 binary classifications indicating whether it's a toxic comment and of which type. I imagine this would be a sufficient start.
H: Homemade deep learning library: numerical issue with relu activation For the sake of learning the finer details of a deep learning neural network, I have coded my own library with everything (optimizer, layers, activations, cost function) homemade. It seems to work fine when benchmarking it on the MNIST dataset and using only sigmoid activation functions. Unfortunately I seem to get issues when replacing these with relus. This is what my learning curve looks like for 50 epochs on a training dataset of ~500 examples: Everything is fine for the first ~8 epochs and then I get a complete collapse to the score of a dummy classifier (~0.1 accuracy). I checked the code of the relu and it seems fine. Here are my forward and backward passes: def fprop(self, inputs): return np.maximum(inputs, 0.) def bprop(self, inputs, outputs, grads_wrt_outputs): derivative = (outputs > 0).astype(float) return derivative * grads_wrt_outputs The culprit seems to be the numerical stability of the relu. I tried different learning rates and many parameter initializers with the same result. Tanh and sigmoid work properly. Is this a known issue? Is it a consequence of the discontinuous derivative of the relu function? AI: I don't know exactly what the problem is, but maybe you could try checking the value of your gradients and see if they change a lot around the 8th epoch.
H: Is it effective to use one-hot encoding when its dimension is as large as thousands Here I try to construct a classifier using a DNN (deep neural network) whose inputs are portfolios. In essence, each portfolio contains several stocks which are labeled by their inner code, for example "1430" or "5560", etc. Since the inner code is a discrete numeric code, I prefer to use one-hot encoding to represent each portfolio. However, there are as many as a few thousand different inner codes, which means that the dimension of the one-hot code may also be that large. I wonder whether the dimension is too large for practical training, and whether there is any way to solve or ease this problem, such as using PCA afterwards. AI: In neural networks applied to natural language processing, each possible word (or sub-word) is normally handled as an individual token, and the vocabulary (the set of supported tokens) is typically around 32K. These tokens are typically one-hot encoded. Therefore, there is no inherent problem in having one-hot encoded vectors with thousands of components. However, the amount and variety of training data should support the dimensionality of the model for it not to overfit. Note that in NLP, these one-hot encodings are used as expected outputs of the model, over which to compute maximum likelihood (categorical cross-entropy) together with the output of the model (a softmax). Depending on how you are using this portfolio information, another option would be to have an embedding layer, which embeds discrete values into a continuous representation space (whose dimensionality is a hyperparameter of the model). This would be possible if this information is used as input to the model. In that case, you input an integer, which is used to index the embedding vector table.
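A hedged Keras sketch of the embedding-layer option; the vocabulary size, portfolio length and output head below are placeholders, not values from the question:

from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense

n_codes = 5000      # number of distinct inner codes (placeholder)
portfolio_len = 10  # stocks per portfolio, padded to a fixed length (placeholder)

model = Sequential([
    # each integer code is looked up in a trainable 32-d embedding table,
    # so the input is a short vector of integers, not thousand-dimensional one-hot vectors
    Embedding(input_dim=n_codes, output_dim=32, input_length=portfolio_len),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(1, activation='sigmoid'),   # placeholder binary head
])
model.compile(loss='binary_crossentropy', optimizer='adam')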
H: How to find similar time series? I've got a collection of yearly data (one value per year per category), and I'd like to find series that are most similar to one another. Example data is here. I don't know much about data science, but it seems like cosine similarity might be the way to go? If so, how do I account for the nil values in the datasets? AI: Since the time series are annual, the data points you have for each time series are limited and also quite distant (the values are 1 year apart), so I wouldn't use Dynamic Time Warping on your data. If you are interested in comparing the patterns, a very simple approach would be Pearson's correlation. Keep in mind that this compares not the actual values but the patterns (i.e. whether the values fluctuate similarly over the years; for example, the time series [1 2 3 4] has a higher correlation with [5 6 7 8] than with [1 1 2 2]). If you are interested in both values and pattern, I would use a distance-based metric: Euclidean distance, Manhattan distance, etc. I believe you will find this post interesting, where the mathematical background of similarity is explained. Also, Python implementations of several distance metrics (including cosine similarity) can be found in this blog post.
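A small sketch of the Pearson option in pandas, which also deals with the question's nil values: corr() computes each pairwise correlation using only the years where both series have data (the column layout and example values below are assumptions):

import pandas as pd
import numpy as np

# one column per category, one row per year; NaN marks the missing (nil) years
df = pd.DataFrame({
    'a': [1.0, 2.0, 3.0, 4.0],
    'b': [5.0, 6.0, np.nan, 8.0],
    'c': [1.0, 1.0, 2.0, 2.0],
}, index=[2014, 2015, 2016, 2017])

corr = df.corr(method='pearson')        # NaNs are dropped pairwise
print(corr['a'].drop('a').idxmax())     # category most similar to 'a'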
H: How to select the second point where derivative is greater than 1 in r? I'm looking for the exact value of epsilon to run the DBSCAN clustering algorithm. Here's the KNN distance plot. This chart has two flex points, and I need the second one. I'm using the following code: # evaluate kNN distance dist <- dbscan::kNNdist(iris, 4) # order result dist <- dist[order(dist)] # scale dist <- dist / max(dist) # derivative ddist <- diff(dist) / ( 1 / length(dist)) # get first point where derivative is higher than 1 knee <- dist[length(ddist)- length(ddist[ddist > 1])] How can I improve my code to get the second point where the derivative is higher than 1? AI: Check out the which function; it returns the indices where a condition holds, so you can pick the second one directly. For example: > a <- c(-0.123, 1.2, 0.3, 8, -0.678) > b <- which(a>1) [1] 2 4 > b[2] [1] 4
H: Correct Way of Displaying Features in Decision Tree I am creating a very basic decision tree, the dataset being as follows (columns 1 to 11 are features and column 12 is prediction, I am slicing away column 0 in processing phase as in code below): |--------------------+---+------+----+----+-----+-----+-----+-----+-------+-----+-------+------------| | domain | A | AAAA | MX | NS | SOA | TXT | CAA | SSL | ccTLD | TTL | WHOIS | mal_or_ben | |--------------------+---+------+----+----+-----+-----+-----+-----+-------+-----+-------+------------| | fedoraproject.org | 2 | 2 | 1 | 2 | 1 | 0 | 2 | 0 | 0 | 0 | 0 | 1 | |--------------------+---+------+----+----+-----+-----+-----+-----+-------+-----+-------+------------| | blackswanstore.com | 1 | 0 | 1 | 2 | 1 | 2 | 0 | 0 | 0 | 1 | 1 | 0 | |--------------------+---+------+----+----+-----+-----+-----+-----+-------+-----+-------+------------| | comcast.net | 1 | 0 | 1 | 2 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | |--------------------+---+------+----+----+-----+-----+-----+-----+-------+-----+-------+------------| | achren.org | 1 | 0 | 1 | 2 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | |--------------------+---+------+----+----+-----+-----+-----+-----+-------+-----+-------+------------| although when I am passing the dataset, I dont actually pass the top row with the column names, just: fedoraproject.org,2,2,1,2,1,0,2,0,0,0,0,1 blackswanstore.com,1,0,1,2,1,2,0,0,0,1,1,0 This is how I am preparing my decision tree: # read data from csv balance_data = pd.read_csv("training_data.csv", sep=',', header=None) # x is predictor variable and y is outcome X = balance_data.values[:, 1:12] Y = balance_data.values[:, 12] X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=200) y_train = y_train.astype('int') y_test = y_test.astype('int') # criterion as gini index clf_gini = DecisionTreeClassifier(criterion="gini", random_state=200, max_depth=5, min_samples_leaf=5) clf_gini.fit(X_train, y_train) # export gini criterion with open("graph.dot", "w") as f: f = tree.export_graphviz(clf_entropy, out_file=f) # plot using graphviz - needs pydot and graphviz installed and set to system path graph = Source(tree.export_graphviz(clf_entropy, out_file=None, feature_names=["A", "AAAA", "MX", "NS", "SOA", "TXT", "CAA", "SSL", "ccTLD", "TTL", "WHOIS"])) graph.format = "png" graph.render("graph_render", view=True) So, in order to pass the feature name, what I am doing is just passing the column names in order as in the csv input. Is this correct (I am very new to this) or am I missing something here? Thanks! AI: If you pass header=None, pandas.read_csv assumes that the first row contains data and names the columns '0' to '12'. Instead you should pass header=0 to specify that the column names are in the first row or equivalently skip the header argument. You can then still continue with X = balance_data.values[:, 1:12], because calling values returns a numpy array without the column names. Alternatively, you could also select your feature columns like so: feature_names = ['A','AAAA',....] X = balance_data[feature_names].values You can then pass the same list of feature_names to graphviz. Also note that you don't have to pass a numpy array to scikit-learn's functions. It can handle pandas DataFrames as well, so values is optional.
H: How to sort numbers using Convolutional Neural Network? Recently, in an interview I got this question: Design a convnet that sorts numbers. Operators are ReLU, Conv, and Pooling. E.g. input: 5, 3, 6, 2; output: 2, 3, 5, 6 I am not sure how can you sort a list of numbers using CNN. I know it is possible using RNN. Is it even possible? AI: I have a solution however I use a densely connected layer at the output to simplify the reshaping. If you can manipulate the sizes of this model such that you have 4 output parameters this should work as well. from __future__ import print_function import keras from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten from keras.layers import Conv1D, MaxPooling1D, Reshape from keras.callbacks import ModelCheckpoint from keras.models import model_from_json from keras import backend as K Preparing the data We will generate some random lists containing integers between [0,49], we will take a random permutation of the list and then take the first 4 values. We will then set our targets $y$ as the sorted rows of $x$. import numpy as np n = 100000 x_train = np.zeros((n,4)) for i in range(n): x_train[i,:] = np.random.permutation(50)[0:4] x_train = x_train.reshape(n, 4, 1) y_train = np.sort(x_train, axis=1).reshape(n, 4,) n = 1000 x_test = np.zeros((n,4)) for i in range(n): x_test[i,:] = np.random.permutation(50)[0:4] x_test = x_test.reshape(n, 4, 1) y_test = np.sort(x_test, axis=1).reshape(n, 4,) print(x_test[0][0].T) print(y_test[0]) [ 44. 36. 13. 0.] [ 0. 13. 36. 44.] The model I tried different combinations of parameters. This worked out not bad. input_shape = (4,1) model = Sequential() model.add(Conv1D(32, kernel_size=(2), activation='relu', input_shape=input_shape, padding='same')) model.add(Conv1D(64, (2), activation='relu', padding='same')) model.add(MaxPooling1D(pool_size=(2))) model.add(Reshape((64,2))) model.add(Conv1D(32, (2), activation='relu', padding='same')) model.add(MaxPooling1D(pool_size=(2))) model.add(Flatten()) model.add(Dense(4)) model.compile(loss=keras.losses.mean_squared_error, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy']) epochs = 10 batch_size = 128 # Fit the model weights. history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test)) Epoch 10/10 100000/100000 [==============================] - 6s 56us/step - loss: 0.9061 - acc: 0.9973 - val_loss: 0.5302 - val_acc: 0.9950 Results So for a new list of values, I get the predicted output. Then I determine which value in the original list is closest to each of these and replace them. I could have just rounded the predicted values, however this caused so +/-1 errors due to rounding the wrong way. test_list = [1,45,3,18] pred = model.predict(np.asarray(test_list).reshape(1,4,1)) print(test_list) print(pred) print([np.asarray(test_list).reshape(4,)[np.abs(np.asarray(test_list).reshape(4,) - i).argmin()] for i in list(pred[0])]) [1, 45, 3, 18] [[ 0.87599814 3.43058085 17.36335754 45.21624374]] [1, 3, 18, 45] And for the sequence you suggested as a test case test_list = [5,3,6,2] pred = model.predict(np.asarray(test_list).reshape(1,4,1)) print(test_list) print(pred) print([np.asarray(test_list).reshape(4,)[np.abs(np.asarray(test_list).reshape(4,) - i).argmin()] for i in list(pred[0])]) [5, 3, 6, 2] [[ 1.85080266 2.95598722 4.92955017 5.88561296]] [2, 3, 5, 6]
H: Data preparation for Regression Model Hi, I'm currently trying to predict whether an item will be successful in my store, that is, how much it is going to sell for in USD. My training dataset contains many features: Item name Item weight Item category Item country of origin Item sales overall Item sales per store Item rating Item price Etc.... Since I will be introducing a new item for sale, I know very little about this new item: Item name Item weight Item category Item country of origin Item price Not all the features present in the training/test data will be available when I am making predictions. Is this normal in ML? What is the rule of thumb when doing feature engineering for this kind of case? AI: Usually you can only predict with the variables you have trained on. But in a case like this, I would suggest you check the multicollinearity of the missing variables with the ones you will have. Maybe they are already highly correlated with the features you already have; in that case, you can just model using the available features. If that is not the case and a missing variable looks like a significant variable in your regression, then you might need to pick some other variables which can help you indirectly derive the missing features. For example, maybe the category helps you derive the rating with some accuracy.
H: Better way to break out a date? This piece of code should be self-explanatory and it's also a choke-point in my process. Any way to do this better? geoset['MonthInt'] = pd.to_datetime(geoset['Date received']).dt.month # Add a month geoset['YearInt'] = pd.to_datetime(geoset['Date received']).dt.year # Add a year geoset['DayInt'] = pd.to_datetime(geoset['Date received']).dt.day # Add a day of month geoset['DayOfWeekInt'] = pd.to_datetime(geoset['Date received']).dt.dayofweek # Add a day of week AI: Obvious optimisation would be converting 'Date received' to pandas datetime once, for example (if you need to retain original column): geoset['pddate_received'] = pd.to_datetime(geoset['Date received']) geoset['MonthInt'] = geoset['pddate_received'].dt.month # Add a month geoset['YearInt'] = geoset['pddate_received'].dt.year # Add a year geoset['DayInt'] = geoset['pddate_received'].dt.day # Add a day of month geoset['DayOfWeekInt'] = geoset['pddate_received'].dt.dayofweek # Add a day of week
H: sklearn CountVectorizer token_pattern -- skip token if pattern match I apologize if this question is misplaced -- I'm not sure if this is more of a re question or a CountVectorizer question. I'm trying to exclude any would be token that has one or more numbers in it. >>> from sklearn.feature_extraction.text import CountVectorizer >>> import pandas as pd >>> docs = ['this is some text', '0000th', 'aaa more 0stuff0', 'blahblah923'] >>> vec = CountVectorizer() >>> X = vec.fit_transform(docs) >>> pd.DataFrame(X.toarray(), columns=vec.get_feature_names()) 0000th 0stuff0 aaa blahblah923 is more some text this 0 0 0 0 0 1 0 1 1 1 1 1 0 0 0 0 0 0 0 0 2 0 1 1 0 0 1 0 0 0 3 0 0 0 1 0 0 0 0 0 What I want instead is this: aaa is more some text this 0 0 1 0 1 1 1 1 0 0 0 0 0 0 2 1 0 1 0 0 0 3 0 0 0 0 0 0 My thought was to use CountVectorizer's token_pattern argument to supply a regex string that will match anything except one or more numbers: >>> vec = CountVectorizer(token_pattern=r'[^0-9]+') but the result includes the surrounding text matched by the negated class: aaa more blahblah stuff th this is some text 0 0 0 0 0 1 1 0 0 0 1 0 2 1 0 1 0 0 3 0 1 0 0 0 Also, replacing the default pattern (?u)\b\w\w+\b obviously messes with the tokenizer's normal function which I want to preserver. What I really want is to use the normal token_pattern, but apply a secondary screening of those tokens to only include those that have strictly letters in them. How can this be done? AI: Found this SO post which says to use the following regex: \b[^\d\W]+\b/g yielding the following: >>> vec = CountVectorizer(token_pattern=r'\b[^\d\W]+\b') >>> X = vec.fit_transform(docs) >>> pd.DataFrame(X.toarray(), columns=vec.get_feature_names()) aaa is more some text this 0 0 1 0 1 1 1 1 0 0 0 0 0 0 2 1 0 1 0 0 0 3 0 0 0 0 0 0 What I needed in my regex were the \b word boundary characters of which I was not aware of. That does make this a misplaced question as it has nothing to do with data science or that discipline's tools (sklearn).
H: Improving LSTM Time-series Predictions I have been getting poor results on my time series predictions with an LSTM network. I'm looking for any ideas to improve the model. The above graph shows the true data vs. the predictions. The true data is a smooth zig-zag shape, ranging from 0 to 1. However, the predictions rarely reach 0 or 1: the distribution of the predictions is centered around 0.5, whereas the values in the true data set are evenly distributed. Here is the LSTM model built in keras: model = Sequential() model.add(Dropout(0.4, input_shape=(train_input_data_NN.shape[1], train_input_data_NN.shape[2]))) model.add(Bidirectional(LSTM(30, dropout=0.4, return_sequences=False, recurrent_dropout=0.4), input_shape=(train_input_data_NN.shape[1], train_input_data_NN.shape[2]))) model.add(Dense(1)) model.compile(loss='mae', optimizer='adam') How do I get the predictions to be more similar to the true data? AI: OK, it turns out I was calculating the outputs wrongly: they were not computed consistently across the entire data set. I am getting better results after fixing the output calculations. It can still be improved, but it's a great start.
H: How to find relation between N components and predict the value of any one component using the predicted relation? I am new to machine learning and trying to learn by practicing. I have a situation where I am reading a set of N data streams. Each of the N streams has an independent state at any moment in time. I want to use these N streams to find a relation between them. If I am not receiving one or more of the streams, I should be able to use the learned relation between them to predict the missing data. What type of machine learning algorithm can be used? AI: As I understand it, your task can be formulated as an imputation task; see the link below for further details. Also, before you proceed, look up the concepts around missing data. In order to fill in missing data you'll have to build N distinct models using the available observations. Each of the N models will have N-1 inputs. Note that because more than one component can be missing, your models should themselves be able to handle missing inputs. To my knowledge, neural networks cannot handle missing data out of the box. Tree-based models, such as random forests or GBMs, are more suitable for this kind of task. http://www.stat.columbia.edu/~gelman/arm/missing.pdf
H: Outlier detection with non-normal distribution What are some techniques that I can use for anomaly detection given a non-normal distribution? I have fewer than twenty available observations. AI: I would suggest a nearest neighbors approach. This technique is non-parametric, meaning it does not assume your features follow any given distribution. The degree to which a novel instance is classified as anomalous can be set through some p-value estimation. These techniques are computationally expensive; however, given your small dataset, they may be well suited. Check out: Learning Minimum Volume Sets http://www.stat.rice.edu/~cscott/pubs/minvol06jmlr.pdf Anomaly Detection with Score functions based on Nearest Neighbor Graphs https://arxiv.org/abs/0910.5461 New statistic in P-value estimation for anomaly detection http://ieeexplore.ieee.org/document/6319713/ You can also use more rudimentary anomaly detection techniques such as a generalized likelihood ratio test. But, this is kind of old-school.
H: Need help in deriving Policy Evaluation (Prediction) Policy Evaluation is computing the state-value function for an arbitary policy $\pi$.(suton & barto book). Now \begin{equation} v_{\pi} = E_{\pi}[\;G_{t}\,|\;S_{t}=s] \qquad\qquad\qquad\qquad(4.1)\\\ = E_{\pi}[R_{t+1}+\gamma G_{t+1} |S_{t}=s]\qquad\qquad\quad(4.2)\\ = E_{\pi}[R_{t+1}+\gamma v_{\pi}(S_{t+1})| S_{t}=s]\qquad\qquad(4.3)\\ = \sum_{a} \pi(a|s) \sum_{s^{'},r} p(s^{'},r|s,a)[r+\gamma v_{\pi}(s^{'})]\quad(4.4) \end{equation} Can someone help me to figure out from (4.3) to (4.4). And If equation (4.4) is bellman's equation for state value function, then how is it different from below equation \begin{equation} v_{\pi} = \sum_{a\in A} \pi(a|s)[R_{s}^{a}+\gamma \sum_{s'\in S} P_{ss^{'}}^{a}v_{\pi}({s^{'})}] \end{equation} AI: Going from step (4.3) to (4.4) is turning the expectation (dependent on following policy $\pi$) into a more concrete calculation. To do this, you must resolve your random variables ($R_{t+1}$ and $S_{t+1}$) into the specific values from the finite sets of $\mathcal{R}$ and $\mathcal{S}^+$ - where individual set members are noted $r$ and $s$. Recall that $\pi(a|s)$ is the probability of taking action $a$ given state $s$, so by taking a sum of any function depending on $(s,a)$ e.g. $\sum_{a} \pi(a|s)F(s,a)$ you will get the expected value of $F(s,a)$ starting from state $s$ and following policy $\pi$. So you could write (4.3b) as: $$v(s) = \sum_{a} \pi(a|s)\mathbb{E}_{\pi}[R_{t+1}+\gamma v_{\pi}(S_{t+1})| S_{t}=s, A_t=a]\qquad\qquad(4.3b)\\$$ Similarly, now you have written the expectation when given $s$ and $a$ (so you have all the possible cases of $a$ available to work with), you can use the transition and reward probabilities to fully resolve the expectation and substitute the random variables ($R_{t+1}$ and $S_{t+1}$) for their MDP model distributions described in $p(s',r|s,a)$, leading to equation (4.4). E.g. $$v(s) = \sum_{a} \pi(a|s)\sum_{s',r} p(s',r|s,a)\mathbb{E}_{\pi}[R_{t+1} + \gamma v_{\pi}(S_{t+1})| S_{t+1}=s', R_{t+1}=r]\qquad\qquad(4.3c)\\$$ $$\qquad = \sum_{a} \pi(a|s)\sum_{s',r} p(s',r|s,a)(r + \gamma v_{\pi}(s'))\qquad\qquad(4.4)\\$$ Your last equation is a slightly different formulation of the same result, where: $R_s^a$ is the expected reward when taking action $a$ in state $s$. It is equal to $\sum_{s',r} p(s',r|s,a)r$. It is also equal to $\mathbb{E}[R_{t+1}|S_t=s, A_t=a]$, by definition and independently of $\pi$. $P_{ss'}^a$ is the state transition probability - the probability of ending up in state $s'$ when taking action $a$ in state $s$. It is equal to $\sum_r p(s',r|s,a)$ given $s'$ Technically using $R_s^a$ loses some information about the dynamics of the underlying MDP. Although not anything important to reinforcement learning, which deals with maximising expected reward, so in some ways it is more convenient to already characterise the MDP with expected rewards.
H: How to set parameters to search in scikit-learn GridSearchCV I want to use scikit-learn's GridSearchCV to optimise a BaggingClassifier that uses a support vector classifier (SVC). I want the grid search to search over parameters for both the BaggingClassifier and the SVC. I have tried this setup: svc_pipe = Pipeline([ ('svc', SVC(probability=True)), ]) pipe = Pipeline([ ('bag', BaggingClassifier(svc_pipe, no_estimators=50)), ]) params = { 'bag__bootstrap_features' : [True, False], 'bag__svc__kernel': ['linear', 'rbf'], 'bag__svc__decision_function_shape': ['ovo', 'ovr'] } rnd_search = GridSearchCV(pipe, param_grid=params) but I get this error: ValueError: Invalid parameter svc for estimator BaggingClassifier(base_estimator=Pipeline(memory=None, steps=[('svc', SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, decision_function_shape='ovr', degree=3, gamma='auto', kernel='rbf', max_iter=-1, probability=True, shrinking=True, tol=0.001, verbose=False))]), bootstrap=True, bootstrap_features=True, max_features=1.0, max_samples=1.0, n_estimators=50, n_jobs=-1, oob_score=False, verbose=0, warm_start=False). Check the list of available parameters with `estimator.get_params().keys()`. Can someone show me what I have done wrong? AI: There is a typo in pipe, no_estimators should be n_estimators. To address your problem, if you run the following piece of code: for param in rnd_search.get_params().keys(): print(param) This will show you how the parameters are passed to different parts of the pipeline, the parameters of interest are: bag__base_estimator__svc__kernel bag__base_estimator__svc__decision_function_shape So you were almost there, you were just missing base_estimator__ in the svc pipeline parameters. All you need to do is change the svc parameters like so: params = { 'bag__bootstrap_features' : [True, False], 'bag__base_estimator__svc__kernel': ['linear', 'rbf'], 'bag__base_estimator__svc__decision_function_shape': ['ovo', 'ovr'] }
H: Can colors be detected using Neural Nets? How do I represent a color as an activation value within a neuron? (This might be off-topic.) I want to detect colors using neural nets. To my knowledge, any commonly used activation function will push the values into the range 0-1, which doesn't seem useful to me for representing a color. (I started wondering about this after noticing how Windows 10 picks an accent color when the theme changes; or is that just simple averaging over the pixel values of the channels?) So is this an impossible task for neural nets? AI: Normally color spaces are not considered to be one-dimensional. Given the three types of human cone cells, the most natural approach is probably not to use a single neuron but three. If the color input is encoded as RGB values, then one neuron would take the red channel, one the green, and one the blue, each typically scaled to the range 0-1. The 0-1 range is therefore not a limitation; it is just a normalised encoding of the channel intensities.
H: Series data structure in pandas In the overview page of the pandas documentation the Series data structure is described as 'homogeneously-typed'.
Dimensions | Name | Description
1 | Series | 1D labeled homogeneously-typed array
2 | DataFrame | General 2D labeled, size-mutable tabular structure with potentially heterogeneously-typed columns
However it is possible to create Series objects with multiple data types.
pd.Series(data=[1,2,3,4,5,'x'], index=['a','b','c','d','e','f'])
# a 1
# b 2
# c 3
# d 4
# e 5
# f x
# dtype: object
So what could be the meaning of 'homogeneously-typed' mentioned in the pandas documentation? AI: If you have multiple different types in a Series, say int and string, all of the data is upcast to a single common dtype, here object (as you can see from your example: the Series reports dtype: object). 'Homogeneously-typed' refers to that single dtype per Series: every element is stored under one dtype, even if that dtype is the generic object type. A DataFrame, by contrast, can hold a different dtype in each column, which is why it is described as potentially heterogeneously-typed.
H: What distance should I use for edge weights in the TextRank algorithm I found this Python implementation on GitHub with 400+ stars which uses Levenshtein distance between nodes. But the original paper (page 4) says: Next, all lexical units that pass the syntactic filter are added to the graph, and an edge is added between those lexical units that co-occur within a window of N words So the question is: is Levenshtein distance legitimate for this algorithm, or is it better to rewrite it with windowed edges? Intuitively, Levenshtein distance should not work, because it measures spelling similarity and says nothing about how words co-occur or how important they are. AI: As the library author said, using Levenshtein distance was just an experiment and might not work (and it really doesn't), yet people keep using the broken library without questioning it ¯\_(ツ)_/¯. So yes, follow the paper: build edges from co-occurrence within a window of N words (and optionally weight them by co-occurrence counts) rather than by Levenshtein distance.
H: Area Under Curve with probability I'm coming across a metric for model evaluation which I had never seen before and I don't know how to research it further (since I don't know its proper name). I'm using someone else's code, whose goal is to perform cross-validation to choose the best tree-based algorithm for a binary classification. It is probably worth saying that the classes are highly skewed (93% / 7%). The metric which is used is the following: the classifier is trained and then the probability associated with each test element is computed.
probas = probas[:,list(clf.classes_).index(1)]
Then, these probabilities are ordered from highest to lowest and put on the x-axis; the y-axis is the cumulative sum of the entries associated with each probability. Then, they compute the area under the obtained curve, as in:
joint = zip(probas, truth)
joint = sorted(list(joint), key=lambda x:x[0], reverse=True)
probas = [x[0] for x in joint]
truth = [x[1] for x in joint]
# Calculate accumulated number of true labels at each probability point.
# Also calculate Area Under Curve (AUC) (higher is better model)
truth_cumulative = np.cumsum(truth) / np.sum(truth)
area = np.trapz(truth_cumulative, dx=1) / len(truth)
Can anyone give me an intuition of what this metric is about and a pointer to some resources to better understand it? Thanks.
AI: In your code the Area Under the Curve (AUC) is used to calculate the area under the Cumulative Distribution Function (CDF). Let's go through the code to see how this is done. However, in the last section, I do not agree with the two lines used to calculate the area. If all the labels have a ground truth label of 1, and we have a perfect classifier where the probability of being Class 1 is always 100%, we should have an AUC of 1. However, we will get 50%. I will show this later. You should note that the way this code is written will only work for a binary classifier, with labels 0 and 1. We will first calculate the probability of each instance being in class 0 and class 1 by
probs = forest.predict_proba(X_train)
Then we will extract the second column, which contains the probability of each instance being in Class 1.
probas = probs[:,list(forest.classes_).index(1)]
Then we will join the probabilities of each instance with its ground truth label, and sort the pairs based on their probability. The instances with a probability of 1 will be at the start of the list and those with a probability of 0 will be at the end of the list. This is done using
joint = zip(probas, truth)
joint = sorted(list(joint), key=lambda x:x[0], reverse=True)
Then we will separate the probabilities and the ground truth labels into separate lists using
probas = [x[0] for x in joint]
truth = [x[1] for x in joint]
The blue line is the probabilities and the ground truth labels are the orange line. This is where the problems start. The next two lines do not include the probabilities but only the ground truth labels.
truth_cumulative = np.cumsum(truth)
truth_cumulative = truth_cumulative / np.sum(truth)
This means that if we had all class 1, each with probability 1, we would get an AUC of 0.5. This does not make any sense. For example, if I force it by doing this
probas[100:100000] = 1
truth[100:100000] = 1
there will only be a few 0 values left. Then after the ordering and cumulative sum we get the plot as follows, which is obviously wrong. We need to do exactly what is written in the comment but not done through the code: we will get the true positive rate at each false positive rate.
Much like what is described below for the Receiver Operating Characteristic (ROC) curve. This can be done with
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt

# Training Set
probas_train = forest.predict_proba(X_train)
fpr_t, tpr_t, thresholds_t = roc_curve(y_train, probas_train[:, 1])

# Testing Set
probas_ = forest.predict_proba(X_test)
fpr, tpr, thresholds = roc_curve(y_test, probas_[:, 1])
roc_auc = auc(fpr, tpr)

train, = plt.plot(fpr_t, tpr_t, label = 'Train')
test, = plt.plot(fpr, tpr, label = 'Test')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()

print('Area Under the Curve (AUC): ', roc_auc)
Training Set Area Under the Curve (AUC): 0.998140359403 Testing Set Area Under the Curve (AUC): 0.937729144675 Receiver Operating Characteristic (ROC) This was invented during WWII as a means to detect aircraft using radar. For example, if we have a sensor which requires some kind of threshold to detect planes, we can determine the true positive rate (TPR) and false positive rate (FPR) which result from our experiment. We then plot these with the false positive rate on the x-axis and the true positive rate on the y-axis. $TPR = \frac{\text{True Positives}}{\text{Positives}}$ $FPR = \frac{\text{False Positives}}{\text{Negatives}}$ If the sensor is perfect we will always have $0$ false positives and a $100\%$ true positive rate. This results in a curve which looks like the blue curve. If the sensor was completely random and garbage, then we would have a random guess, which is the diagonal line. A good classifier will be as close to the blue line as possible. Note that a line below the diagonal is also fine: you have a consistent classifier, you should just invert its decision. Intuitively, consider a sensor which reads a value; if the value is above a threshold $\theta$ we call it a detection. If we lower the threshold of detection, we should expect more detections; however, if the true positive rate and the false positive rate increase by equal amounts, then we are moving along the diagonal line. This is a bad classifier. Here we have 2 sensors, red and yellow, and we want to decide which sensor is better. Just looking at the curves is not very effective because it's hard to see which one is definitely higher. So we can devise a metric named the Area Under the Curve (AUC), which is exactly what the name says. Then we can say the curve with the higher AUC is the better one.
H: Including biased data in training model My friend is in the business of getting cats to the top of mountains. He currently uses a set of heuristics to decide which cats are most likely to get to the top. For the ones he likes, he feeds, at great expense, in the hope that they will be strong enough to get to the top. For the ones he doesn't like, he thinks it not worth his time / money to feed, and he's happy and surprised if they get to the top anyway. From looking at the data, we know that on average, the ones he doesn't feed get to the top less often. I am training a classifier to help him decide which to feed. Does it make sense to include rows in the training data where he fed them and where he didn't? When I excluded the rows where he did not feed them, the model did worse, which I did not expect. Forgive the example. It's contrived, but I think retains all of the characteristics of the actual situation. AI: That's a wonderful question, and beautifully posed. I love how you capture the essence of the issue in such a clean way. Unfortunately, the answer is that your data set does not have enough information to help you decide which cats you should feed. Without a properly controlled experiment, you don't have a way to infer causality; you can't rule out the possibility of confounding factors. Suppose that 70% of the cats he feeds make it to the top, and 20% of the cats he doesn't feed make it to the top. Let me make up four alternative explanations for why that might happen: Hypothesis #1: There's only one kind of cat in the world. The characteristics of the cat are irrelevant. If you feed them, they will have a 70% chance of making it to the top; if you don't, they'll have only a 20% chance. This is true for all cats, regardless of their attributes. Hypothesis #2: There are two kinds of cats in the world: the strong, and the weak. The strong have a 70% chance of making it to the top of the mountain, regardless of whether you feed them or not. The weak have a 20% chance of making it to the top, regardless of whether you feed them. You can tell the two kinds of cats apart by their characteristics, and your friend happens to like the strong ones and dislike the weak ones. Hypothesis #3: There are two kinds of cats in the world: the athletes, and the champion nappers. The former have a 70% chance of making it to the top of the mountain if you feed them, or a 50% chance if you don't. The latter have a 40% chance of making it to the top if you feed them, or a 20% chance if you don't. You can tell the two apart by their characteristics, and your friend happens to like the athletes and dislike the nappers. Hypothesis #4: There are two kinds of cats in the world: the hungry, and the overweight. The former have a 70% chance of making it to the top of the mountain if you feed them (they're pretty fit, but they already have all the food they need, so why bother exploring?), or a 100% chance if you don't (hunger drives them to greater heights). The latter have a 21% chance of making it to the top, if you feed them, or a 20% chance, if you don't (either way, they probably won't make it, because they are so overweight and out of shape). You can tell the two apart by their characteristics, and your friend happens to like the hungry ones and dislike the overweight ones. All four hypotheses are equally consistent with the data. The data set is useless at helping you distinguish between them. 
Yet each hypothesis leads to a different prediction about which cats to feed: the first two say it doesn't matter, the third says you should feed the ones your friend likes, and the fourth says you should feed the ones your friend dislikes. No amount of statistics or machine learning is going to enable you to make a decision based solely on the data. To distinguish between these possibilities, you either need some kind of prior that will give you a basis for choosing among the models that are consistent with the data (perhaps based on your domain knowledge), or you need to conduct a controlled experiment (where the choice of whether to feed a cat is randomized, rather than being selected based on whether your friend likes the cat). What if we have a prior that says that feeding a cat can only help its chances of making it to the top, and not hurt its chances? Well, that's an example of the first of two options I suggest in the last paragraph. However, it's still not informative enough to let us decide which cats to feed. The problem is that we need to tell whether feeding the cats your friend likes improves their odds more than it improves the odds of feeding the cats your friend doesn't like. We just don't have any information on that. The chances could be 70% vs 69% for cats he likes and 90% vs 20% for cats he doesn't (i.e., cats he like have a 70% chance if fed or a 69% chance if not fed, etc.); or it could be 70% vs 0% for cats he likes and 21% vs 20% for cats he doesn't. In the former case, you should feed the cats he doesn't like, as they see a 70 percentage point improvement from feeding, while the ones he likes see only a 1 p.p. improvement. In the latter case, you should feed the cats he does like, because they see a 70 p.p. improvement, while the ones he dislikes see only a 1 p.p. improvement. You just can't distinguish between these two possibilities from the dataset you have. So your prior would need to be considerably more detailed than that, before you can make any headway. If you'd like to learn more about the subject and under what circumstances we can draw conclusions about causality despite the lack of a controlled experiment, you might start by reading Judea Pearl's work.
H: How to estimate GridSearchCV computing time? If I know the time of a given validation with set values, can I estimate the time GridSearchCV will take for n values I want to cross-validate? AI: You could fit your model/pipeline (with default parameters) to your data once and see how long it takes to train. Then you would multiply that by how many times you want to train the model through grid search. E.g. suppose you want to use a grid search to select the hyperparameters a, b and c of your pipeline.
params = {'a': [1, 2, 3, 4, 5],
          'b': [1, 2, 3, 4],
          'c': [1, 2, 3]}
cv = GridSearchCV(pipeline, params)
By default this should run a search for a grid of $5 \cdot 4 \cdot 3 = 60$ different parameter combinations. The default cross-validation is a 3-fold cv so the above code should train your model $60 \cdot 3 = 180$ times. If you set n_jobs=-1, GridSearchCV runs in parallel across your processors (by default it uses a single process), so depending on your hardware you can divide the number of iterations by the number of processing units available. Let's say for example I have 4 processors available; each processor should then fit the model $180 / 4 = 45$ times. Now, if on average my model takes $10$ seconds to train, I'm estimating around $45 \cdot 10 / 60 = 7.5$ min training time. In practice it should be closer to $8$ min due to overhead. Finally, because some parameters heavily affect the training time of the algorithm, I would suggest using the max_iter argument whenever available so that your estimation doesn't fall far off. Please note: As of July 2021, the default number of folds is 5. From the sklearn documentation: Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
H: Are CNNs insensitive to rotations and shifts in images? Can CNNs predict well if they are trained on canonical-like images but tested on a version of the images that is shifted a little? I tried it using the MNIST dataset and found the contrary: the accuracy on the shifted test set was very low compared to MLPs. AI: If you use max-pooling layers, they may be insensitive to small shifts, but not by much. If you want your network to be invariant to transformations such as translations, shifts, rotations or other customary transformations, you have two solutions, at least as far as I know: Increasing the size of the data set (for example by augmenting it with shifted and rotated copies of the training images) Using spatial transformers Take a look at What is the state-of-the-art ANN architecture for MNIST and Why do convolutional neural networks work. Thanks to one of our friends, another way is to use transfer learning after data augmentation.
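As a sketch of the first option (data augmentation), the following uses Keras' ImageDataGenerator; x_train, y_train and model are assumed to be already loaded/compiled, the augmentation ranges are illustrative, and some argument names differ slightly between Keras versions (e.g. samples_per_epoch vs steps_per_epoch).

from keras.preprocessing.image import ImageDataGenerator

# x_train: (n, 28, 28, 1) MNIST images, y_train: labels (assumed already loaded)
datagen = ImageDataGenerator(rotation_range=15,      # random rotations up to 15 degrees
                             width_shift_range=0.1,  # horizontal shifts up to 10% of width
                             height_shift_range=0.1) # vertical shifts up to 10% of height
datagen.fit(x_train)

# model is an already-compiled Keras CNN (assumed)
model.fit_generator(datagen.flow(x_train, y_train, batch_size=64),
                    steps_per_epoch=len(x_train) // 64,
                    epochs=10)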
H: Finding a kernel with feature transformation Suppose we have the feature transformation $\Phi(x) = [1, x_1, x_2, x_1x_2]$. Now we want to find the kernel corresponding to $\Phi$. What I have done is write the kernel as the inner product of the feature maps: $$ K(x, y) = \Phi(x) \cdot \Phi(y)\\ K(x, y) = 1\cdot 1 + x_1y_1 + x_2y_2 + x_1x_2y_1y_2 \\ K(x, y) = 1 + \sum_{i=1}^{N}x_iy_i + \prod_{i=1}^{N}x_iy_i $$ with $N=2$, since $x, y$ are in $\Bbb{R}^2$. Is that kernel valid? AI: Yes, it is valid. A kernel is just an inner product in some feature space, and here you have constructed $K$ explicitly as the inner product $\Phi(x)\cdot\Phi(y)$ of a well-defined feature map, so it is a valid (positive semi-definite) kernel by construction.
H: How does "linear algebraic" weight training function work? This answer shows that linear and polynomial function weights can be trained using this matrix operation: $w = (X^TX)^{-1}X^Ty$ Therefore, algorithms such as gradient descent are not necessary for these functions. By my understanding, gradient descent for linear regression model finds perfect derivative for each weight so that cost function is minimal. Before asking for connections between gradient descent and equation above, let's separate the equation in smaller steps: $X = [1,2,3,4,5,6,7,8,9,10,11,12]$ $y=[2.3,2.33,2.29,2.3,2.36,2.4,2.46,2.5,2.48,2.43,2.38,2.35]$ Let's turn $X$ vector to matrix by adding column of 1's which will be used to train the bias value: \begin{bmatrix} 1 & 1 \\ 2 & 1 \\ 3 & 1 \\ 4 & 1 \\ 5 & 1 \\ 6 & 1 \\ 7 & 1 \\ 8 & 1 \\ 9 & 1 \\ 10 & 1 \\ 11 & 1 \\ 12 & 1 \end{bmatrix} Transpose: $X^T$: \begin{bmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 9 & 10 & 11 & 12 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{bmatrix} Matrix multiplication $X^TX$: \begin{bmatrix} 650 & 78 \\ 78 & 12 \end{bmatrix} Inverse matrix operation $(X^TX)^{-1}$: \begin{bmatrix} 0.00699301 & -0.04545455 \\ -0.04545455 & 0.37878788 \end{bmatrix} Matrix multiplication $(X^TX)^{-1}X^T$: array([[-0.03846154, -0.03146853, -0.02447552, -0.01748252, -0.01048951, -0.0034965 , 0.0034965 , 0.01048951, 0.01748252, 0.02447552, 0.03146853, 0.03846154], [ 0.33333333, 0.28787879, 0.24242424, 0.1969697 , 0.15151515, 0.10606061, 0.06060606, 0.01515152, -0.03030303, -0.07575758, -0.12121212, -0.16666667]]) Matrix vector multiplication $(X^TX)^{-1}X^Ty$: \begin{bmatrix} 0.01174825 & 2.30530303 \end{bmatrix} This seems to be the perfect slope for minimizing cost. But I'm unable to understand how does it exactly work. It seems like that the algorithm such as gradient descent would've took extremely large amount of iterations to estimate perfect slope and bias value. How/why does this equation exactly work? Is it somehow related to differentiation? Could it be compared to algorithms such as gradient descent? If so, how? Is it possible to use this equation with sigmoid functions? AI: Your goal is to find a $w$ such that $$Xw \approx y$$ and way to model this problem is to minimize the objective function: $$\min_w\|Xw-y\|^2.$$ Differentiating with respect to $w$ and equate it to zero gives us $$2X^T(Xw-y)=0$$ $$X^TXw-X^Ty=0$$ $$X^TXw=X^Ty$$ if $X^TX$ is invertible, then we have $$w=(X^TX)^{-1}(X^Ty)$$ Remark: We tend to avoid computing inverse and prefer gradient-based method. Complexity of the normal equation method is cubic.
H: What is the term for data that is too sparse to represent the underlying data model? I am giving a presentation on Data Science, and I want to talk about the idea that data that is not "big" enough is a big barrier for Machiene Learning. Looking online, there are concepts like overfitting and underfitting, but I am more looking to talk about data that, even if fitted optimally, would still not actually be a good model for the system. Is there a good term to use for this? AI: Small sample size is probably the concept you're looking for. A common failure in statistics is trying to draw conclusions from data that isn't "big" enough to accurately represent the underlying distribution. It's worth noting that "small data" increases your chances of overfitting - which is another way of saying that your model is weak to outliers and noise. It's also worth noting that bootstrapping, simulation, and duplication might help (but not always). There may be some other niche term for this, but if you relate it to basic stats there's a high probability everyone in the room will understand what you're talking about.
H: Does the phenomenon of over-fitting of data vary with training algorithms? Suppose I have a dataset which I want to train using a neural network and an SVM. Is it possible that with my dataset, after training, the neural network is overfit while the SVM is not? Can a dataset be overfit for one training algorithm and not be overfit for another training algorithm? Is it even possible? Or if it is not overfit for one training algorithm, can we assume that it will not be overfit for other training algorithms too? AI: A model is over-fitting if it makes good predictions on the training set but bad predictions on new data. This is generally a good indication that the model used is too complex. The complexity of the model is often quantified as the number of free parameters. These are the parameters that need to be set in order to fit the data. More parameters allow more flexibility in what can be expressed but also increase the chance of over-fitting. The type of model also restricts what functions can be learned. For example, linear models can only learn linear functions. The simple answer to your question is yes. For example, an SVM with few parameters might fit the data well while an ANN with many parameters might over-fit on the same data. A fair comparison, however, compares models with the same number of parameters. But the answer remains yes. One model might be better suited than another to fit the intrinsic structure of the data. For example, on the same data, a decision tree with 100 nodes might over-fit, a linear model might under-fit and an ANN might work perfectly. It all depends on the underlying structure that you want to model.
H: How can I calculate a rolling window sum in pandas across this MultiIndex dataframe? I can't work out how to get the moving annual sum from this data: > revenue > txdate 2014-01-31 2014-02-28 2014-03-31 2014-04-30 .... > user_id > 1 0 10 165 0 > 2 265 265 200 250 > 3 770 985 1235 900 > .... Previously I would have tried something like this and adjusted until it worked: df.groupby(level='practice_id').apply(lambda x: pd.rolling_sum(x, 12)) but it's deprecated and I'm not getting my head around the 0.18 changes to rolling despite reading the docs, and I'm not sure that the shape of the data is helpful (it's close to what needs to be inserted in a db table). The original data format is as follows: > txdate user_id tx_amount > 2014-01-01 2 5 > 2014-01-02 2 5 > 2014-01-02 3 30 > 2014-01-03 3 15 > 2014-01-02 2 10 I reshaped with the following cmd: > df.set_index('txdate').groupby('user_id').resample('M').agg({'revenue': np.sum}) I'm thinking I might need to reverse the order of operations. AI: If anyone else comes looking, this was my solution: # find last column last_column = df.shape[1]-1 # grab the previous 11 columns (also works if there aren't that many in the df) wanted = df.iloc[:, (last_column-11):last_column] # calculate the rounded moving annual total mat_calc = round(wanted.sum(axis=1)/len(wanted.columns), 2) Probably not the most pandastic solution, but it works well.
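With the post-0.18 API, the same moving annual total can also be computed with the rolling method; this is a sketch that assumes df is the user_id-by-month-end matrix shown above (rolling operates along the index, hence the transposes), and that a 12-period sum, not an average, is what is wanted.

# df: rows = user_id, columns = month-end dates, values = monthly revenue (as above)
mat = df.T.rolling(window=12, min_periods=1).sum().T

# The last column now holds each user's moving annual total up to the latest month.
latest_mat = mat.iloc[:, -1].round(2)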
H: Accuracy value constant even after different runs I am using the neural network toolbox of Matlab to train a network. My code is as follows:
x = xdata.';
t = target1';
% Create a Pattern Recognition Network
hiddenLayerSize = 10;
net = patternnet(hiddenLayerSize);
net.input.processFcns = {'removeconstantrows','mapminmax'};
net.output.processFcns = {'removeconstantrows','mapminmax'};
net.layers{2}.transferFcn = 'softmax';
net.divideFcn = 'dividerand'; % Divide data randomly
net.divideMode = 'sample'; % Divide up every sample
net.divideParam.trainRatio = 60/100;
net.divideParam.valRatio = 20/100;
net.divideParam.testRatio = 20/100;
net.trainFcn = 'trainscg'; % Scaled conjugate gradient
net.performFcn = 'mse';
net.performParam.regularization = 0.5;
%net.performParam.normalization = 0.01;
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ... 'plotregression', 'plotfit', 'plotconfusion'};
% Train the Network
[net,tr] = train(net,x,t);
% Test the Network
y = net(x);
e = gsubtract(t,y);
tind = vec2ind(t);
yind = vec2ind(y);
percentErrors = sum(tind ~= yind)/numel(tind);
performance = perform(net,t,y)
% Recalculate Training, Validation and Test Performance
trainTargets = t .* tr.trainMask{1};
valTargets = t .* tr.valMask{1};
Now I am supposed to get different accuracies with different runs, since the sampling (division of the dataset into train, test and validation sets) is random. But I am getting a constant accuracy (89.7%). The variable 'xdata' contains only those features selected by a feature selection algorithm. Is there any reason why my accuracy value is constant? I have trained an SVM too with the same dataset; there too I am getting a stable accuracy over multiple runs (94%). The output y contains 2 values. What do those values signify? AI: If you have fixed the random number generator seed before training, for example with rng(1) (it does not appear in the code you posted, but the constant result suggests it is set somewhere), then the data is split the same way on every run, and that is why you are getting the same accuracy values. Try removing that line; the data will then be split randomly, and you will get a different accuracy on each run, depending on how the split is done. A model which generalizes well should be robust to the choice of seed. Regarding the output y: since your target t has two rows (one per class) and the output layer uses a softmax transfer function, the two values in each column of y are the network's scores for the two classes; the predicted class is the row with the larger value, which is what vec2ind(y) extracts.
H: Different accuracy for different rng values While tuning the SVM classification model in Matlab, I came across the rng function in Matlab, in which the seed (which stabilizes the random shuffling of the data in the algorithm) is changed. When the function called is rng(1) I get one accuracy value (99%). When it is changed to rng(2) I get another value (57%). So there is a huge change in accuracy, as is visible. What does this mean? Am I training it wrong? The train and test set correct rates (in %) that I am getting with different runs without changing rng are (train, test): (96, 82.8) (94.6, 95.3) (96, 85.9) (96, 90) (95, 95) AI: The accuracies obtained with the two seeds differ hugely (99% vs 57%). So maybe the model fitted on the rng(1) split has overfitted your dataset: that particular random split happens to give a test set the model handles well, while another split does not. So there is a huge change in accuracy as visible. What does this mean? Am I training it wrong? The huge change is most likely due to overfitting, possibly combined with a small or unrepresentative test set. (Also, judge the model through validation curves, and then fit a model which balances bias and variance.)
H: What are the 'hottest' future areas of Machine Learning and Data Science? What would you identify as the most important areas of machine learning and data science in the future? What kind of business do you see companies focused on ML doing in the future? To give some background to the question, I'm asking this as someone hugely interested in ML at a conceptual level but with no real experience in it, save for some online courses. My intention is to learn the necessary math, coding, etc. and, as I'm a very entrepreneurial person, eventually found my own ML company. I've found that the field is vast and thus I feel one of the biggest challenges is to select a specific area (I guess method would be a better word here) to focus on and build expertise. AI: There is a brilliant answer by Yann LeCun recently on a Quora session: What are some recent and potentially upcoming breakthroughs in deep learning? As he says, adversarial training, proposed by Ian Goodfellow, is one of the hottest areas. Apart from that, I think Memory Networks and RNNs are widely used right now and can solve various problems up ahead. Coming to what companies might do, I see a lot of text analytics companies. Considering the amount of consumer data we have now, there is a great need to be able to understand and analyze it. Computer vision using ML or deep learning is fast catching up: using cameras for face detection, object tracking, fraud detection and tracking number plates is a huge field for companies to handle. Also, I think this might help you prepare. To catch up with developments you may have overlooked, refer to this, which might give you insight into what could develop in the future.
H: How exactly does a validation data-set work work in machine learning? With typical machine learning you would usually use a training data-set to create a model of some kind, and a testing data-set to then test the newly created model. For something like linear regression after the model is created with the training data you now have an equation that you would use to predict the outcome of the set of features in the testing data. You would then take the prediction that the model returned and compare that to the actual data in the testing set. How would a validation set be used here? With nearest neighbor you would use the training data to create an n-dimensional space that has all the features of the training set. You would then use this space to classify the features in the testing data. Again you would compare these predictions to the actual value of the data. How would a validation set help here as well? AI: Machine learning models output some sort of function; for example, a decision tree is a series of comparisons that results in a leaf node, and that leaf node has some associated value. This decision tree predicts survival chance on the Titanic, for example: (Image by Steven Milborrow, from Wikipedia) This can be used exactly like the equation generated by linear regression. You feed the new data points into the model, you get predictions out, and then you compare those predictions to the actual values. The comparison you do is applying a "cost function," and what cost function you use is determined by your application. For example, linear regression can be done with different cost functions; typically, people use "ordinary least squares" which tries to minimize mean squared error, but other possibilities exist, like minimizing the absolute value of the error. When the result is a percentage, then something like a log probability loss function is typically a better choice than mean squared difference between the actual and predicted probability. Even k-nearest neighbors generates a function; it splits the space into regions where the k-nearest neighbors are the same set, and then has a flat value for that region. It ends up very similar to the result generated by a decision-tree based method. There's also a terminology point here: typically, people talk about the training set and the test set, or the training set, the validation set, and the test set. The point of the validation set is different from that of the test set; the test set is used to determine out-of-sample error (that is, how much the model has overfit and what its real world error is likely to be like) whereas the validation set is used to determine what hyperparameters minimize the expected test error. One could fit ten different models on 80% of the data, determine which one has the lowest error on another 10% of the data, and then finally estimate real-world error on the last 10% of the data.
H: Similar output from random forest and neural network I used a training data set to train both a random forest and a neural network (one hidden layer). Then I compared how both systems perform on a test data set. Interestingly, both turned out to have about the same prediction probabilities. The forest classified 86% of the data correctly into category 0 or 1, the network achieved 85%. As there is 75% of the data in category 0, I am not too happy with the result and hoped for something better. I then analyzed which samples were classified incorrectly. It turned out that there is a large overlap: 86 percent of the data which was classified incorrectly by the forest was also classified wrongly by the network. I then compared the probabilities which were attributed to those test samples by the forest and the network. The probabilities were also almost always comparable, so that when the forest was sure that sample A belongs to category 0, then the network was also sure that 0 is the right category. Also, if the probabilities of the forest were around 50% for both categories, the network was also "unsure" which category was correct. Is such an outcome somehow obvious? Is there an explanation? AI: I think your neural network is probably learning roughly the same decision rules as your random forest: both models are fit to the same informative features, so they tend to be confident on the same easy examples and uncertain on the same ambiguous ones, which is why the misclassified samples and the predicted probabilities overlap so much. I would not call it obvious, because it is hard to say exactly what a neural network is learning, but it is a common outcome: when the signal in the data is limited, quite different model families often converge to similar predictions, and the remaining errors are the examples that are genuinely hard to separate with the available features.
H: Regression in Keras I was trying to implement a regression model in Keras, but am unable to figure out how to calculate the score of my model, i.e., how well it performed on my dataset. import numpy as np import pandas as pd from keras.models import Sequential from keras.layers import Dense from keras.wrappers.scikit_learn import KerasRegressor from sklearn.cross_validation import cross_val_score, KFold from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline ## Load the dataset dataframe = pd.read_csv("housing.csv", delim_whitespace=True,header=None) dataset = dataframe.values X_train = dataset[:400,0:13] Y_train = dataset[:400,13] X_test = dataset[401:,0:13] Y_test = dataset[401:,13] ##define base model def base_model(): model = Sequential() model.add(Dense(14, input_dim=13, init='normal', activation='relu')) model.add(Dense(7, init='normal', activation='relu')) model.add(Dense(1, init='normal')) model.compile(loss='mean_squared_error', optimizer = 'adam') return model seed = 7 np.random.seed(seed) scale = StandardScaler() X_train = scale.fit_transform(X_train) X_test = scale.fit_transform(X_test) clf = KerasRegressor(build_fn=base_model, nb_epoch=100, batch_size=5,verbose=0) clf.fit(X_test,Y_test) res = clf.predict(X_test) ## line below throws an error clf.score(Y_test,res) Please tell me how can I get the score for my model and what mistake am I doing in the above code. AI: The syntax is not exact, you should pass the features X_test and the true labels Y_test to clt.score (the method performs the prediction on itself, no need to do it explicitly). score = clf.score(X_test, Y_test) You can also use other metrics available in the metrics module of sklearn. For example, from sklearn.metrics import mean_squared_error score = mean_squared_error(Y_test, clf.predict(X_test)) from sklearn.metrics import mean_absolute_error score = mean_absolute_error(Y_test, clf.predict(X_test)) Just some other remarks on your code that are not directly related to the question: you should not call clf.fit on the test data, you should instead fit on the training data and use the test set to compute the score to check the generalization of your model you should fit StandardScaler only on the training data and use X_test = scale.transform(X_test) to apply the same transformation on the test set
H: Import Orange 2.7 canvas in Orange 3 I have a saved canvas I created in Orange 2.7. I just installed Orange 3 and opening the old canvas gives multiple "UnknownWidgetDefinition" errors. I gather that the naming/location of some widgets has changed between versions. Is there a way to import the old canvas? AI: Not only the naming has changed, also the individual widget settings and layouts are completely different. Some Orange 2 widgets are not even available in Orange 3 yet, or have been consolidated into other ones. I'm afraid Orange 2 workflows are just not compatible with Orange 3.
H: Calculate cosine similarity in Apache Spark I have a DataFrame with the IDF of certain words computed. For example (10,[0,1,2,3,4,5],[0.413734499590671,0.4244680552337798,0.4761400657781007, 1.4004620708967006,0.37876590175292424,0.48374466516332]) .... and so on Now, given a query Q, I can calculate the TF-IDF of this query. How do I calculate the cosine similarity of the query with all documents in the DataFrame (there are close to a million documents)? I could do it manually in a map-reduce job by using the vector multiplication Cosine Similarity (Q, document) = Dot product(Q, document) / ||Q|| * ||document|| but surely Spark ML must natively support calculating the cosine similarity of a text? In other words, given a search query, how do I find the closest cosine similarities to the query TF-IDF from the DataFrame? AI: There's a related example to your problem in the Spark repo here. The strategy is to build a RowMatrix in which each document's TF-IDF vector is a column and then use its columnSimilarities() method, which returns the pairwise cosine similarities between columns, i.e. between documents. Extract the entries which correspond to your query document and sort them. That will give the indices of the most-similar documents. Depending on your application, all of this work can be done pre-query.
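For the query-vs-all case, a simpler route than the all-pairs columnSimilarities() one is to L2-normalise the TF-IDF column and the query vector, after which the cosine similarity reduces to a dot product. This is only a sketch: the column name "features", the DataFrame df and the normalised query vector q_norm are assumptions about your setup.

from pyspark.ml.feature import Normalizer
from pyspark.sql.functions import udf
from pyspark.sql.types import DoubleType

# df is assumed to have a vector column "features" holding each document's TF-IDF
normalizer = Normalizer(inputCol="features", outputCol="norm", p=2.0)
normed = normalizer.transform(df)

# q_norm: the query's TF-IDF vector, already L2-normalised (a local ml.linalg vector)
cosine = udf(lambda v: float(v.dot(q_norm)), DoubleType())

top_matches = (normed
               .withColumn("cosine_sim", cosine("norm"))
               .orderBy("cosine_sim", ascending=False)
               .limit(10))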
H: Reason for better performance of variants of SGD when local minimas of Neural Nets are equivalent? From some neural net article I read that if you scale up the Neural Net architecture the differnce in different local minimas in the loss surface diminishes. Essentially all local minimas become equivalent. If that is the case then why do different variants of SGD (like Adagrad, ADAM etc.) works better than plain SGD? I believe reason for using these variants of SGD is to solve "bad" local minima problem, but if all local minimas are more or less the same then what is objective of using these variants? AI: I believe reason for using these variants of SGD is to solve "bad" local minima problem That is not accurate. The variants are mostly about accelerating steps of gradient descent when faced with shallow or rapidly changing gradients, or with gradients that need adaptive learning rates because some parts of the network get stronger gradient signals than others. They do this by weighting or adjusting the step sizes in each dimension, based on additional knowledge to the current gradients, such as the history of previous gradients. Shapes like saddle points or curving gullies, in the cost function "landscape" can cause difficulties for basic SGD. Take a saddle point as an example - there is no "bad" minima, a saddle point can be quite high up in a cost function. But the gradient values can be very low, even if they pick up again if you can take steps away from the saddle point. The trouble for SGD is that using just the gradient is likely to make updates oscillate up and down the steep parts of the saddle, and not move in the shallow direction away from the saddle point. Other difficult shapes can cause similar problems. For a visualisation of the difference in behaviours of some of the optimisers at a saddle point, take a look at this animation (I'd like to find a creative-commons variant of this and include in the answer), and a blog that references it. In addition, for deep learning network, there is a problem that gradients can "explode" or "vanish" as you work back through the layers. Using ReLU activation or similar can help with that, but is not always possible (think of RNNs which need sigmoid activations inside LSTM modules). An optimiser like RMSprop deals with this by normalising gradients used by the weight update steps based on recent history. Whilst SGD could more easily get stuck and fail to update the lower layer weights (weights closer to the input) - either not updating them much at all, or taking too large steps.
H: PCA algorithm problems - Python I have implemented the PCA algorithm and I understand it quite well, but I still have some questions. My code is below and it's a very simple implementation.
import numpy as np
x = np.loadtxt('CCPP', delimiter=',')
row, column = x.shape
# Mean normalization
for i in range(column):
    x[:,i] = (x[:,i] - x[:,i].mean()) / (x[:,i].max() - x[:,i].min())
sigma = x.transpose().dot(x) / row
u, s, v = np.linalg.svd(sigma, 0)
z = x.dot(u[:,:3]) ## new features
new_x = z.dot(u[:,:3].transpose()) ##reconstruction
First Question As you can see above my sigma variable is x.transpose().dot(x) / row It's giving me an nxn matrix (n is the number of features). But sigma's formula is $$\Sigma = \frac{1}{n} \sum_{i=1}^n x^{(i)} {x^{(i)}}^T$$ Why is there a summation symbol in the formula? I mean, if I use this sigma formulation then sigma is going to be a number, not a matrix. I have to get an nxn matrix, right? So is my sigma implementation correct? Or am I missing something about the formula? Second Question When we are reconstructing X (at the bottom of the code), should new_x equal my first X? I mean, I reduced the dimension of the data set, then I reconstructed it; the original dataset and the reconstructed dataset must be the same, right? This is my second question. Third Question This one is easy. Should I use data compression for each of my datasets which have 1000, 100000 or more features? I mean, can I always use it? Is it a good choice to use it every time? AI: Regarding the first question: in the formula, $x^{(i)}$ denotes a single example written as a column vector, so $x^{(i)} {x^{(i)}}^T$ is an outer product and produces an $n \times n$ matrix, not a scalar (the scalar would be the inner product ${x^{(i)}}^T x^{(i)}$, with the transpose on the other side). The summation adds up these $n \times n$ matrices over all training examples, and dividing by the number of examples gives the covariance matrix. Your vectorised implementation is correct, because for a data matrix whose rows are the examples, x.transpose().dot(x) equals the sum of the outer products of the rows. Written as an explicit loop it would be:
sigma = np.zeros((column, column))
for i in range(row):
    sigma += np.outer(x[i], x[i])
sigma = sigma / row
which gives the same result as x.transpose().dot(x) / row. Second Question Because you reduced the dimensionality and discarded the components with the smallest variance, some information is lost, so new_x will not be exactly equal to the original x; it is only an approximation, and the approximation improves the more components you keep. Third Question When to use PCA depends on the domain problem. The point of dimensionality reduction is to get a new data set which is not as hard to process, but which loses some information. So if your "result" / "time to process" trade-off is already good without it, I don't think you need to use it; it is not automatically a good choice every time.
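As a quick check of the equivalence claimed above (vectorised X^T X / m versus the averaged sum of outer products), here is a small sketch on random placeholder data:

import numpy as np

m, n = 100, 4
X = np.random.randn(m, n)            # stand-in for the mean-normalised data

sigma_vectorised = X.T.dot(X) / m
sigma_loop = sum(np.outer(X[i], X[i]) for i in range(m)) / m

print(np.allclose(sigma_vectorised, sigma_loop))   # True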
H: Example of binary classifier with numerical features using deep learning I would like to get a better understanding of deep learning. Browsing the web I find applications in speech recognition and hand-written digits. However, I would be interested in some guidance on how to apply this in the classical setting: binary classifier, numerical features (each sample is a numerical vector of $K$ entries, no 2D pixels or such). I am doing my own experiments choosing learning rates, number of hidden neurons and so on, but I would be happy to see an application by somebody more experienced. The software that I use offers weight initialization using Restricted Boltzmann Machines (RBMs). I wonder whether this is useful in this context and whether the other special techniques that one encounters in the literature (convolutional NN) are useful here too. Could anybody share a blog post, a paper or personal experience? AI: I used binary classification for sentiment analysis of texts. I converted sentences into vectors with an appropriate vectorizer and classified them with a one-vs-rest classifier. In another approach, the words were converted into vectors and I used a CNN-based approach to classify them. Both gave comparable results when tested on my datasets. If your samples are already numerical vectors, there are good standard approaches for binary classification you can try; a plain feed-forward network with a sigmoid output is a natural deep-learning baseline. On Binary Classification with Single-Layer Convolutional Neural Networks is a good read for classification using CNNs for starters. It is one of the first blogs I read to gain more knowledge about this and doesn't require many prerequisites (I am assuming you know the basics of convolutions and neural networks).
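For the "numerical vector of K entries" setting, a minimal feed-forward sketch in Keras could look like the following; the layer sizes, dropout rate and training settings are arbitrary starting points, the data is a placeholder, and the epochs argument is named nb_epoch in older Keras versions.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout

K = 20                                    # number of numerical features (assumed)
X = np.random.randn(1000, K)              # placeholder data
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # placeholder binary labels

model = Sequential()
model.add(Dense(64, input_dim=K, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2)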
H: Error in model.fit() method in Keras I was building a model for a classification problem in Keras for which I used the KerasClassifier, the wrapper scikit-learn. Below is the code for the same. import pandas as pd import numpy as np from sklearn.cross_validation import train_test_split from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import accuracy_score,roc_auc_score from keras.models import Sequential from keras.layers import Dense,Dropout from keras.wrappers.scikit_learn import KerasClassifier from sklearn.grid_search import GridSearchCV # In[3]: def cleanPeople(people): people = people.drop(['date'],axis=1) people['people_id'] = people['people_id'].apply(lambda x : x.split('_')[1]) people['people_id'] = pd.to_numeric(people['people_id']).astype(int) fields = list(people.columns) cat_data = fields[1:11] bool_data = fields[11:] for data in cat_data: people[data] = people[data].fillna('type 0') people[data] = people[data].apply(lambda x: x.split(' ')[1]) people[data] = pd.to_numeric(people[data]).astype(int) for data in bool_data: people[data] = pd.to_numeric(people[data]).astype(int) return people # In[4]: def cleanAct(data, train=False): data = data.drop(['date'],axis = 1) if train: data = data.drop(['outcome'],axis=1) data['people_id'] = data['people_id'].apply(lambda x : x.split('_')[1]) data['people_id'] = pd.to_numeric(data['people_id']).astype(int) data['activity_id'] = data['activity_id'].apply(lambda x: x.split('_')[1]) data['activity_id'] = pd.to_numeric(data['activity_id']).astype(int) fields = list(data.columns) cat_data = fields[2:13] for column in cat_data: data[column] = data[column].fillna('type 0') data[column] = data[column].apply(lambda x : x.split(' ')[1]) data[column] = pd.to_numeric(data[column]).astype(int) return data # In[5]: people = pd.read_csv("people.csv") people = cleanPeople(people) act_train = pd.read_csv("act_train.csv") act_train_cleaned = cleanAct(act_train,train=True) act_test = pd.read_csv("act_test.csv") act_test_cleaned = cleanAct(act_test) # In[6]: train = act_train_cleaned.merge(people,on='people_id', how='left') test = act_test_cleaned.merge(people, on='people_id', how='left') # In[8]: output = act_train['outcome'] X_train, X_test, y_train, y_test = train_test_split(train,output, test_size=0.2, random_state =7) input_len = len(X_train) print(input_len) # In[9]: def base_model(optimizer='rmsprop', init='normal', dropout_rate =0.0): model = Sequential() model.add(Dense(100, input_dim = input_len, activation='relu', init=init)) model.add(Dropout(dropout_rate)) model.add(Dense(50, activation = 'relu', init = init)) model.add(Dropout(dropout_rate)) model.add(Dense(10, activation = 'relu', init = init)) model.add(Dropout(dropout_rate)) model.add(Dense(1, activation = 'sigmoid', init = init)) model.compile(loss = 'binary_crossentropy', optimizer = optimizer, metrics =['accuracy']) return model # In[10]: seed = 7 np.random.seed(seed) model = KerasClassifier(build_fn = base_model) # In[12]: #grid_parameters optimizers = ['rmsprop', 'adam'] init = ['normal', 'uniform'] dropout_rate = [0.0, 0.2, 0.5] epochs = [100, 150, 200] batches = [10,20,30] param_grid = dict(optimizer = optimizers, init=init, dropout_rate = dropout_rate, nb_epoch = epochs, batch_size=batches) # In[ ]: validator = GridSearchCV(estimator=model, param_grid= param_grid) validator.fit(X_train, y_train) print(validator.best_score_) print(validator.best_params_) The following code thrown this error when I ran it on my workstation. 
Traceback (most recent call last): File "../src/script.py", line 137, in model.fit(X_train, y_train) File "/opt/conda/lib/python3.5/site-packages/Keras-1.0.6-py3.5.egg/keras/wrappers/scikit_learn.py", line 148, in fit history = self.model.fit(X, y, **fit_args) File "/opt/conda/lib/python3.5/site-packages/Keras-1.0.6-py3.5.egg/keras/models.py", line 429, in fit sample_weight=sample_weight) File "/opt/conda/lib/python3.5/site-packages/Keras-1.0.6-py3.5.egg/keras/engine/training.py", line 1036, in fit batch_size=batch_size) File "/opt/conda/lib/python3.5/site-packages/Keras-1.0.6-py3.5.egg/keras/engine/training.py", line 963, in _standardize_user_data exception_prefix='model input') File "/opt/conda/lib/python3.5/site-packages/Keras-1.0.6-py3.5.egg/keras/engine/training.py", line 108, in standardize_input_data str(array.shape)) Exception: Error when checking model input: expected dense_input_1 to have shape (None, 1757832) but got array with shape (1757832, 52) When I trained the model in scikit-learn, there was no such error. Please help! AI: In your base_model function, the input_dim parameter of the first Dense layer should be equal to the number of features and not to the number of samples, i.e. you should have input_dim=X_train.shape[1] instead of input_dim=len(X_train) (which is equal to X_train.shape[0]).
H: Is deep learning a must in a Data Science MSc programme? I am reading the programme outline of this two-year MSc in Data Science and I found that it has no deep learning content (as in many other european ones). I am no expert but as far as I've seen I think that DL is going to be a heavy weight of ML algorithms for a long time. Do you think it is a bad idea to take a strong focus on classical models (e.g. bayesian) instead of teaching DL in a Data Science/ML MSc? AI: No, it's not problematic. Most data scientists do not need or use deep learning. Deep learning is very popular right now, but that does not mean it's widely used. Deep learning can lead to substantial overfitting on small to medium datasets (I'm arbitrarily going to say that means less than 2 GB), which are the sizes that most people have. Deep learning is primarily used for object recognition in images, or text/speech models. If you're not doing either of these two things, you probably don't need to use DL.
H: What is affine transformation in regard to neural networks? I have been reading a paper recently on Highway Neural Networks and found the following: $y=H(x,W_H)$ $H$ is usually an affine transform followed by a non-linear activation function, but in general it may take other forms. After Googling about affine transforms I can't say I fully understand what they mean. Can somebody please elaborate? AI: It is a linear transformation followed by a translation, i.e. a map of the form $y = Wx + b$. Affine maps preserve lines and parallelism: for example, lines that were parallel before the transformation are still parallel afterwards. Scaling, rotation, reflection and translation are all examples. With regard to neural networks, it is simply the input multiplied by the weight matrix plus the bias vector, which is exactly what a fully-connected (dense) layer computes before its activation function is applied.
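A tiny numpy illustration of this, with arbitrary shapes chosen only for the example:

import numpy as np

x = np.random.randn(4)        # input vector
W = np.random.randn(3, 4)     # weight matrix
b = np.random.randn(3)        # bias (the translation part)

y = W.dot(x) + b              # the affine transform H(x, W_H), before the non-linearity
h = np.maximum(0.0, y)        # e.g. followed by a ReLU activation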
H: Cross-validation strategy I have a regression problem and I am in doubt about how I can calculate RMSE in my life-cycle. I deal with time-series and for every prediction, I want to look N points in the future. It is apparent how to calculate RMSE for a single iteration. My question is how to calculate RMSE for N predictions of N points to get a meaningful prediction performance metric. I guess, I can average RMSE of all iterations though as I said I am not sure at all if this would reflect actual performance. AI: The natural choice would be the total squared error across the N predicted values, averaged across all examples. This is the simple extension of mean squared error from the univariate case. If you're using multivariate linear regression, this is in fact what you want to optimize in order to get the maximum likelihood estimate of the parameters as well.
H: How to understand equations in research papers? I was searching for a latent class logit model for conjoint analysis and found a paper which has equations for this model. I have co-workers who know how to decipher the meaning of these equations and write the algorithms in any language, from scratch. One of my co-workers told me that he understood what is written in this paper and can write the algorithm in R. My co-workers mostly have master's degrees in engineering. I graduated in accounting and fortunately landed in this analytics job. I am very keen to learn this art/science of deciphering equations in research papers. Can anyone suggest which discipline this is, or recommend online courses or books that can help me learn it? I will be indebted for life. Thanks! AI: Study multiclass logistic regression; it's similar and has many tutorials. Good books include Larry Wasserman's All of Statistics and An Introduction to Statistical Learning. The way to understand research papers is simply to read more of them. Whenever you encounter something you don't understand, follow the references or look it up in one of the aforementioned books.
H: Training data from different sources I am working on a binary classification problem. My data contains 100K samples from two different sources. When I perform training and testing on data from the first source I can achieve classification accuracy of up to 98%, and when I perform training and testing on data from the second source I can achieve up to 99%. The problem is that when I mix both of them, the classification accuracy goes down to 89%. Any idea how to perform the training to achieve high accuracy, knowing that one of my features is related to the source? AI: It seems that you have a domain adaptation problem: the samples from the two sources behave differently. I suggest reading Frustratingly Easy Domain Adaptation. As the name hints, the solution it proposes is easy and popular (about 800 citations to date), and the paper is also a good survey of other directions. I understand that the classifier that you ran on the entire dataset was trained on it. How well do the classifiers trained on the single sources perform on the other source? How many of the samples belong to the first source? Will you have an indication at production time of the source of each sample?
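The feature-augmentation trick from the paper cited above can be sketched in a few lines; X1, y1 and X2, y2 are assumed to be numpy arrays holding the samples and labels from the two sources, and any standard classifier can then be trained on the augmented features.

import numpy as np

def augment(X, source, n_sources=2):
    """EasyAdapt-style augmentation: [shared copy | one block per source]."""
    n, d = X.shape
    out = np.zeros((n, d * (n_sources + 1)))
    out[:, :d] = X                                   # shared block
    out[:, d * (source + 1): d * (source + 2)] = X   # source-specific block
    return out

# X1, y1 and X2, y2 are the samples/labels from the two sources (assumed)
X_aug = np.vstack([augment(X1, source=0), augment(X2, source=1)])
y_aug = np.concatenate([y1, y2])
# Train any standard classifier on (X_aug, y_aug).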
H: What is the significance of model merging in Keras? I have learned that Keras has a functionality to "merge" two models according to the following: from keras.layers import Merge left_branch = Sequential() left_branch.add(Dense(32, input_dim=784)) right_branch = Sequential() right_branch.add(Dense(32, input_dim=784)) merged = Merge([left_branch, right_branch], mode='concat') What is the point in mergint NNs, in which situations is it useful? Is it a kind of ensemble modelling? What is the difference between the several "modes" (concat, avg, dot etc...) in the sense of performance? AI: It is used for several reasons, basically it's used to join multiple networks together. A good example would be where you have two types of input, for example tags and an image. You could build a network that for example has: IMAGE -> Conv -> Max Pooling -> Conv -> Max Pooling -> Dense TAG -> Embedding -> Dense layer To combine these networks into one prediction and train them together you could merge these Dense layers before the final classification. Networks where you have multiple inputs are the most 'obvious' use of them, here is a picture that combines words with images inside a RNN, the Multimodal part is where the two inputs are merged: Another example is Google's Inception layer where you have different convolutions that are added back together before getting to the next layer. To feed multiple inputs to Keras you can pass a list of arrays. In the word/image example you would have two lists: x_input_image = [image1, image2, image3] x_input_word = ['Feline', 'Dog', 'TV'] y_output = [1, 0, 0] Then you can fit as follows: model.fit(x=[x_input_image, x_input_word], y=y_output]
H: How can you determine the growth rate you need to achieve a certain customer base? Say you have 12K active customers every month on your platform in August 2016; how can you determine at what rate you need to grow every month to achieve a certain total (say, 1.5 million) by next year, August 2017? Further, how can you generate different growth scenarios? Say, grow aggressively during the first quarter and then plateau. Are there any time series or optimization tools related to this kind of problem? AI: A growth rate is exponential growth: you multiply your userbase by a certain factor, which is above 1 in the case of growth and below 1 in the case of decline. This effect compounds because the userbase being multiplied by the same factor keeps getting bigger. Let's call this multiplier a. We apply this 12 times, once for every month. We are interested in: $$12000 \cdot a^{12} = 1500000$$ This equals: $$a^{12} = 125$$ We can take the 12th root on both sides: $$a = 1.495$$ This means you would need to grow approximately 50% every month. For growing faster first and then plateauing you can set up similar equations, although a bit more involved. You have to express the growth at the first level in terms of the other level, because otherwise there are two unknowns, which makes for infinitely many solutions. Let's say for the first 3 months the growth b is five times as high as the growth a of the last 9 months. This means $b = 5(a-1)+1$. This leads to the following equation. First we have three months of growth rate b, which means after three months our userbase is $12000\cdot b^3$, and after that we have nine months of growth a, which leads to the final userbase being $12000\cdot b^3\cdot a^9$. Since $b=5(a-1)+1$ this comes down to $12000\cdot (5(a-1)+1)^3\cdot a^9$, which according to Wolfram Alpha gives a of around 1.2785, which means b is around 2.3925. This means roughly 139% growth for each of the first three months and then 27.85% growth for each of the remaining nine months.
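If you prefer to compute this rather than use Wolfram Alpha, a small Python sketch (the scenario equation is solved numerically with SciPy; the numbers mirror the example above):

current, target, months = 12000.0, 1500000.0, 12
a = (target / current) ** (1.0 / months)
print(a)                                   # ~1.495, i.e. roughly 50% monthly growth

from scipy.optimize import brentq
f = lambda a: current * (5 * (a - 1) + 1) ** 3 * a ** 9 - target
print(brentq(f, 1.0001, 1.49))             # ~1.2785 for the plateau scenario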
H: Assigning numerical IDs to variable values in a data file I have the problem that I got a huge source data file which is showing text for all variable values instead of numerical IDs. So for example, I would like to have the variable gender coded as 1 and 2 instead of "female" and "male" written out. And equally the same for 200 other variables of which some have up to hundreds of distinct variable values. Therefore, doing this manually is not really an option here. Could anybody please point me to a solution or hint within R, SPSS or Python how I can assign numerical IDs to each distinct variable value? I thought this would be a problem other people face more commonly as well, but I have found nothing of this kind at all. Thank you for any help! AI: SPSS has an AUTORECODE command which will do the whole job with one command. for example: AUTORECODE vr1 to vr100 /into Kvr1 to Kvr100/PRINT. This will take text variables vr1 to vr100 and recode them into new numerical variables Kvr1 to Kvr100 in which each textual category in the old variable is now automatically numbered in the new variable, with the textual category now used as a value label. The PRINT sub-command will show you in the output window a list of all the number codes chosen for text categories in each variable. Please note - using the TO convention (as in "vr1 to vr100") only works when the variables are consecutively ordered in the file. If they are not, you have to name them separately.
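If you end up doing this in Python instead, pandas can automate it in the same spirit; a rough sketch (the file name is hypothetical, and factorize numbers categories from 0 rather than 1). In R, as.integer(factor(x)) achieves much the same per column.

import pandas as pd

df = pd.read_csv('data.csv')
mappings = {}
for col in df.select_dtypes(include='object').columns:
    df[col], labels = pd.factorize(df[col])     # distinct text values become 0..k-1
    mappings[col] = dict(enumerate(labels))     # keep the code-to-text mapping for reference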
H: How can I look up classes of ImageNet? After downloading the imagenet urls (link), I see that it is a single 1.1 GB text file which starts like this: n00004475_6590 http://farm4.static.flickr.com/3175/2737866473_7958dc8760.jpg n00004475_15899 http://farm4.static.flickr.com/3276/2875184020_9944005d0d.jpg n00004475_32312 http://farm3.static.flickr.com/2531/4094333885_e8462a8338.jpg n00004475_35466 http://farm4.static.flickr.com/3289/2809605169_8efe2b8f27.jpg n00004475_39382 http://2.bp.blogspot.com/_SrRTF97Kbfo/SUqT9y-qTVI/AAAAAAAABmg/saRXhruwS6M/s400/bARADEI.jpg n00004475_41022 http://fortunaweb.com.ar/wp-content/uploads/2009/10/Caroline-Atkinson-FMI.jpg n00004475_42770 http://farm4.static.flickr.com/3488/4051378654_238ca94313.jpg n00004475_54295 http://farm4.static.flickr.com/3368/3198142470_6eb0be5f32.jpg n00005787_13 http://www.powercai.net/Photo/UploadPhotos/200503/20050307172201492.jpg n00005787_32 http://www.web07.cn/uploads/Photo/c101122/12Z3Y54RZ-22027.jpg I'm pretty sure n00004475_6590 is the class of the image. How can I look up what this means (in natural language)? AI: The first part before the underscore can be entered in this URL: http://www.image-net.org/api/text/wordnet.synset.getwords?wnid=n00004475
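For example, a quick way to query that endpoint from Python (assuming the endpoint is still available at that address):

import requests

wnid = 'n00004475'   # the part of the image ID before the underscore
url = 'http://www.image-net.org/api/text/wordnet.synset.getwords?wnid=' + wnid
print(requests.get(url).text)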
H: First steps with Python and scikit-learn I believe I have a simple if not trivial question. I have a background in statistics and I tend to use Stata and R quite a bit. I am interested in learning Python. I have used it for a while now and recently came into contact with scikit-learn. I am trying to reproduce a simple example.

from sklearn import tree
features = [[140, 1], [130, 1], [150, 0], [170, 0]]
labels = [0, 0, 1, 1]
clf = tree.DecisionTreeClassifier()
clf = clf.fit(features, labels)
print clf.predict([[150, 0]])

As you can see, the tiny script tries to predict - by means of a decision tree - whether an object with the properties [150, 0] is likely of type 1 or 0. I run the script and get the following error:

File "clf_decision_tree.py", line 6
    print clf.predict([[150, 0]])
        ^
SyntaxError: invalid syntax

I really don't get what is wrong... Can you help me out? Best /R PS: I'm not sure if Cross Validated or Stack Overflow is the better place to ask. Let me know. Thanks. AI: In Python 3 the print function must have parentheses, so print(clf.predict([[150, 0]])) will work
H: Large Graphs: NetworkX distributed alternative I have built some implementations using NetworkX (a graph Python module) native algorithms in which I output some attributes which I use for classification purposes. I want to scale this to a distributed environment. I have seen many approaches like neo4j, GraphX, GraphLab. However, I am quite new to this, thus I want to ask: which of them would make it easy to apply graph algorithms locally (e.g. node centrality measures), preferably using Python? To be more specific, which available option is most closely related to NetworkX (easy installation, premade functions/algorithms, ML-wise)? AI: Good, old and unsolved question! Distributed processing of large graphs, as far as I know (speaking as a graph guy), has 2 different approaches: with knowledge of Big Data frameworks or without it. The SNAP library from Jure Leskovec's group at Stanford is originally in C++ but also has a Python API (please check whether you need to use the C++ API or whether Python does the job you want to do). Using SNAP you can do many things on massive networks without any special knowledge of Big Data technologies. So I would say it is the easiest option. Using Apache GraphX is wonderful only if you have experience in Scala, because there is no Python interface for it. It comes with a large stack of built-in algorithms including centrality measures. So it is the second easiest in case you know Scala. A long time ago when I looked at GraphLab it was commercial. Now I see it has gone open source, so maybe you know better than me, but from my out-dated knowledge I remember that it does not support a wide range of algorithms, and if you need an algorithm which is not there it might get complicated to implement. On the other hand it uses Python, which is cool. After all, please check it again as my knowledge is from 3 years ago. If you are familiar with Big Data frameworks and working with them, Giraph and Gradoop are 2 great options. Both do fantastic jobs but you need to know some Big Data architecture, e.g. working with a Hadoop platform. PS 1) I have used plain NetworkX and multiprocessing to process the DBLP network with 400,000 nodes in a distributed fashion and it worked well, so you need to know HOW BIG your graph is. 2) After all, I think the SNAP library is a handy thing.
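To give an idea of the NetworkX + multiprocessing route mentioned in the PS, here is a rough sketch that computes closeness centrality in parallel by splitting the node set across worker processes (the random graph is a stand-in for your own; on platforms without fork-based multiprocessing you would need to pass the graph to the workers explicitly):

import networkx as nx
from multiprocessing import Pool

G = nx.erdos_renyi_graph(10000, 0.001)      # replace with your graph

def closeness_for(nodes):
    return {n: nx.closeness_centrality(G, u=n) for n in nodes}

chunks = [list(G.nodes())[i::4] for i in range(4)]   # 4 roughly equal chunks of nodes
with Pool(4) as pool:
    parts = pool.map(closeness_for, chunks)
centrality = {n: c for part in parts for n, c in part.items()}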
H: Similarity measure for multivariate time series with heterogeneous length and content I am interested in clustering N multivariate time series of T 'values' each (different lengths) using Python. Each variable has many trends and values which are a mix of numeric and nominal. A sample $T_{i}$ in the dataset has the following format:

TimeStamp | Sensor0 | Sensor1| Sensor2
2015-02-05 11:30|<Min | On | off
2015-02-05 11:31|<Min | on | off
2015-02-05 11:32| Action2 | 10 | 0.0001
2015-02-07 11:33| Action2 | 10 | 0.00012
2015-02-07 11:34| Action2 | 10 | 0.00012
2015-02-07 11:35| Action2 | 20 | 0.00015

Another sample $T_{j}$ in the dataset has the following format:

TimeStamp | Sensor0 | Sensor1| Sensor2
2015-10-05 11:30| Action2 | 11 | off
2015-10-05 11:31| Action1 | 11 | off
2015-10-05 11:32| Action2 | NAN | 0.0001
2015-10-07 11:33| Action3 | NAN | 0.00012
2015-10-07 11:34| <Min | 10 | 0.00012
2015-10-07 11:35| <Min | 15 | on

For the missing values (not numeric): they were not collected by the sensors, so my idea was to replace them by minimum values, given that all values are strictly positive. Otherwise, they would be considered as missing values, in which case the problem would be one of finding a similarity measure that can compare non-numeric values (off, on, ...) and numeric values. I am wondering if there is a similarity / distance measure already existing in the literature to compare such multivariate time series, with heterogeneous lengths, and whether this kind of problem has already been formulated in papers, books or elsewhere for R and Python. Thanks for your advice. AI: Try this recent paper: Consistent Algorithms for Clustering Time Series. Your question is very much a current research topic. Here's an older but excellent paper which talks about the fundamentals: Generalized Feature Extraction for Structural Pattern Recognition in Time-series Data.
H: Resources for data science applications in finance/banking What are good resources for data science applications in finance/banking ? I want to know about the following: Trends in finance/banking area What creative companies do in finance/banking area with data science Major open data sources in finance/banking in Canada I subscribed to several data science news but none of them focus on finance/banking. I need daily/weekly news and reports for my internship. This is my job and I will be evaluated based on this so it is very important for me to get enough information in time. Any suggestions? Thank you very much! AI: Take a look at O'Reilly free Ebooks There are a couple of resources for banking/finance/fintech. There are some sites from which you can work on for learning more about predictive modelling in this sector: 1.) https://inclass.kaggle.com/c/name-that-loan-open/data - Kaggle dataset for prediction of interest rates 2.) https://datahack.analyticsvidhya.com/contest/practice-problem-loan-prediction-iii/ - Dataset on Analytics Vidhya for loan prediction 3.) There is also a twitter handle with the name @StatCan_eng posting datasets related to Canada specifically.
H: How to set class weights for imbalanced classes in Keras? I know that there is a possibility in Keras with the class_weights parameter dictionary at fitting, but I couldn't find any example. Would somebody be so kind as to provide one? By the way, in this case is the appropriate practice simply to weight up the minority class proportionally to its underrepresentation? AI: If you are talking about the regular case, where your network produces only one output, then your assumption is correct. In order to force your algorithm to treat every instance of class 1 as 50 instances of class 0 you have to: Define a dictionary with your labels and their associated weights

class_weight = {0: 1., 1: 50., 2: 2.}

Feed the dictionary as a parameter:

model.fit(X_train, Y_train, nb_epoch=5, batch_size=32, class_weight=class_weight)

EDIT: "treat every instance of class 1 as 50 instances of class 0" means that in your loss function you assign a higher value to these instances. Hence, the loss becomes a weighted average, where the weight of each sample is specified by class_weight and its corresponding class. From the Keras docs: class_weight: Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only).
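If you do not want to pick the weights by hand, a common heuristic is to weight each class inversely proportionally to its frequency (this mirrors scikit-learn's "balanced" mode); a small sketch, where y_train is your integer label array and model/X_train are as above:

import numpy as np

counts = np.bincount(y_train)
class_weight = {i: len(y_train) / (len(counts) * c) for i, c in enumerate(counts)}
model.fit(X_train, y_train, nb_epoch=5, batch_size=32, class_weight=class_weight)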
H: Why do internet companies prefer Java/Python for data scientist jobs? I often see job descriptions for data scientists asking for Python/Java experience and disregarding R. Below is a personal email I received from the chief data scientist of a company I applied to through LinkedIn. X, Thanks for connecting and expressing interest. You do have good Analytics Skills. However, all our data scientists must have good programming skills in Java/Python as we are an internet/mobile organisation and everything we do is online. While I respect the decision of the chief data scientist, I am unable to get a clear picture as to what the tasks are that Python can do that R cannot do. Would anyone care to elaborate? I am actually keen to learn Python/Java, provided I get a bit more detail. Edit: I found an interesting discussion on Quora: Why is Python a language of choice for data scientists? Edit2: Blog from Udacity on Languages and Libraries for Machine Learning AI: So you can integrate with the rest of the code base. It seems your company uses a mix of Java and Python. What are you going to do if a little corner of the site needs machine learning; pass the data around with a database, or a cache, drop to R, and so on? Why not just do it all in the same language? It's faster, cleaner, and easier to maintain. Know any online companies that run solely on R? Neither do I... All that said, Java is the last language I'd do data science in.
H: Create multiple matrices from 2 bigger ones in R I have 2 matrices, A (1000x21) and B (1000x7). Matrix A has individuals(=1000) in the rows and their consumption in 21 days at the columns. Matrix B has the SAME individuals(=1000) in the rows and some weights for each day of the week(=7) in the columns. What I would like to have in the end is 1000 (with the dimension of 2x21) matrices (one for each individual), lets call them $X_{i}$. In the first row of each $X_{i}$ I would like to have the consumption of the individual $i$ each of the 21 days (this will come from matrix A), and at the second row of the $X_{i}$ I would like to have the respective weight of that day (this will come from matrix B). So matrix A looks like $[cons_{1,1} \ cons_{1,2} \ ... \ cons_{1,21} \\ \ cons_{2,1} \ cons_{2,2} \ ... \ cons_{2,21} \\ . \\ . \\ . \\ \ cons_{1000,1} \ cons_{1000,2} \ ... \ cons_{1000,21}] $ Matrix B looks like $[weight_{1,1} \ weight_{1,2} \ ... \ weight_{1,7} \\ \ weight_{2,1} \ weight_{2,2} \ ... \ weight_{2,7} \\ . \\ . \\ . \\ \ weight_{1000,1} \ weight_{1000,2} \ ... \ weight_{1000,7}]$ And I would like the matrix $X_{i}$ to be like $[cons_{i,1} \ cons_{i,2} \ ... \ cons_{i,21} \\ \ weight_{i,1} \ weight_{i,2} \ ... weight_{i,k}]$ Any ideas how to do this in R in a loop ? AI: This will extract the rows from matrix A and B. matrix(c(A[x,],rep(B[x,],times=3)),nrow=2,byrow=T) If you want to get them into a list (recommend) single<-lapply(c(1:1000), function(x) matrix(c(A[x,],rep(B[x,],times=3)),nrow=2,byrow=T)) To put them all in diagonal matrix with adiag from magic package d<-adiag(single[[1]]) for(i in 2:1000){ d<-adiag(d,single[[i]]) } I could make it work without the loop (anyone any suggestions?)
H: Good explanation for why regularisation works I am trying to understand regularisation for logistic regression currently, but I am not sure I get it. I understand the issue of overfitting when there are relatively too many features, and I get that you would like to limit the impact of some of these superfluous features, but what you are doing in regularisation is impacting all features. And this brings me to my next point, which is that by putting a limit on weights, you can still construct the same hyperplane. As long as you have the same ratio between the weights $w_i$ for $i \in \{1,2,..,n\}$, the "angle" of the hyperplane would still be the same, and then you can simply "adjust" its position by the intercept $w_0$. Many lecturers and authors use the complex polynomial example with additional terms $f(x_i)$ that make the decision boundary non-linear to illustrate that using regularisation will make this boundary more linear, and thus less prone to overfitting, but then again, you could plot $f(x_i)$ on a separate axis, making the decision boundary a hyperplane, and then you could fit the exact same decision boundary with smaller weights, as long as the ratios are conserved. Thus, if you can make the exact same hypothesis with a smaller sum of squares of weights $\sum w_i^2$ (that is, with a smaller regularisation term), what is the point of regularisation? Or in other words, as there obviously exists empirical evidence showing that regularisation works, how does it work? Is there a good proof for it, or some good (and less vague) intuition? AI: The "same angle hyperplane" does not have the same cost. It is the same decision boundary as you describe it, but the raw activations $W^TX$ scale with the norm of the weights. In effect, with higher weights in the same ratio (i.e. without any regularisation effect), the classifier will be more confident in all of its decisions. That means the classifier will be more sensitive to getting as many observations as possible in the training set on the "right" side of the boundary. In turn this makes it sensitive to noise in the observations. Your estimated probability for being in the positive class is: $$p(y=1|X) = \frac{1}{1+e^{-W^TX}}$$ This includes $w_0$ and a fixed value 1 for $x_0$. If you take the midpoint, the decision line where $W^TX$ is zero (and the output is at the threshold value 0.5), that defines your decision hyperplane in $X$ space. When $W$ has the same factors but a larger norm, with $w_0$ compensating to make the same hyperplane, then $X$ values that were on the decision hyperplane still give the threshold value of 0.5. However, $X$ values away from the hyperplane will deviate more strongly. If instead of 0 you had $W^TX=1.0$ and doubled the weights keeping the same hyperplane, you would get $W^TX=2.0$ for that example, which changes your confidence from 0.73 to 0.88. The usual cost function without regularisation for logistic regression with example vectors $X_j$ and targets $y_j$ is: $$J = - \sum_{\forall j} \left[ y_j\log\left(\frac{1}{1+e^{-W^TX_j}}\right) + (1 -y_j)\log\left(1 - \frac{1}{1+e^{-W^TX_j}}\right)\right]$$ The cost is more sensitive to distances from the hyperplane for larger weight values. Looking at the example of the imaginary item (with 0.73 or 0.88 confidence): when the categorisation is correct (i.e. y=1), the cost would improve by 0.19 for that example if the weights doubled. When the categorisation is wrong (y=0), the cost would worsen by 0.81. 
In other words, for higher weights with the same weight ratio, the same miscategorisations are punished more than correct categorisations are rewarded. When training, the weights will converge to a specific balanced weight vector for the minimum cost, not to a specific ratio that forms a "best decision hyperplane". That's because the hyperplane does not correspond to a single value of the cost function. You can demonstrate this effect. Train a logistic regression classifier - without any regularisation, to show it has nothing to do with that. Take the weight vector and multiply it by some factor, e.g. 0.5. Then re-train starting with those weights. You will end up with the same weights as before. The cost function minimum clearly defines specific weight values, not a ratio. When you add regularisation, that changes the cost and how the weights will converge. Higher regularisation in effect makes the classifier prefer a boundary with lower confidence in all its predictions; it penalises "near misses" less badly because the weights are forced down where possible. When viewed as a hyperplane, the boundary will likely be different.
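A tiny numpy sketch of that intuition (synthetic data, not a full retraining demo): scaling a fixed weight vector leaves the decision boundary unchanged but pushes every predicted probability towards 0 or 1, and the cross-entropy cost changes with the scale, so the cost minimum picks out one particular norm rather than a ratio.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.hstack([np.ones((200, 1)), rng.normal(size=(200, 2))])        # bias column + 2 features
w = np.array([0.2, 1.0, -0.5])                                       # some weight vector
y = (X @ w + rng.normal(scale=0.5, size=200) > 0).astype(float)      # noisy labels

for scale in (0.5, 1.0, 2.0, 5.0):
    p = sigmoid(X @ (scale * w))                                     # same boundary, different confidence
    cost = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    print(scale, round(p.min(), 3), round(p.max(), 3), round(cost, 3))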
H: Is there any book for modern optimization in Python? I was reading Modern Optimization with R (Use R!) and wondering if a book like this exists for Python too? To be precise, something that covers stochastic gradient descent and other advanced optimization techniques. Many thanks! AI: You should be able to translate code written in one language -- even pseudo-code -- to another, so I see no reason to avoid books for R. If you want one specifically for Python, there's Machine Learning in Action by Peter Harrington. One of scikit-learn's core committers is releasing a book in October: Introduction to Machine Learning with Python: A Guide for Data Scientists.
H: Input for individual perceptron in input layer in MLP For a better understanding of neural networks I started an implementation of a Multi Layer Perceptron. For now I have implemented a single perceptron that solves the XOR problem. From this point I want to start building the MLP, but I'm not sure I correctly understand the MLP structure. Assume I have a data instance with 3 attributes and 2 classes, and 5 input perceptrons. Should I give every perceptron the same input, and should every output of an input perceptron go to every hidden layer perceptron's input? I attach a neural net pic. AI: With MLPs you can follow a set of simple rules: The number of neurons in your input layer equals the number of features / data instances you have. A single neuron takes only one input (one feature). The output from one neuron can go to multiple neurons in the next layer, which may or may not be fully connected. May the force be with you ;)
H: Measuring performance of different classifiers without class type in data To measure the performance of classification algorithms on a dataset that has an attribute for class type, I divide my dataset into training and test samples and then create a confusion matrix with False Positive, False Negative, True Positive and True Negative counts. Since there is a class type attribute (e.g. Yes or No), the confusion matrix is pretty easy to calculate. Now suppose that I have a dataset that lacks a class type attribute and all samples are of the class Yes. How can I measure the performance of different classification algorithms using this kind of dataset? AI: Normal classification will not work in this case; classifiers learn functions that are able to separate the training examples. In your case you only have one class, which will not work for the classical approaches. There is a machine learning task called one-class classification, see this Wikipedia page. In your case PU Learning (Positive and Unlabeled) seems most appropriate. Evaluating the performance is extremely difficult without having some negative examples as well, however. See this question and answer on the Stats Stack Exchange.
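As a concrete starting point for one-class classification, scikit-learn ships a One-Class SVM; a minimal sketch, where X_yes holds your all-"Yes" samples and X_new the data you want to score (keep in mind the caveat above: without any negative examples the choice of nu and the evaluation remain largely guesswork):

from sklearn.svm import OneClassSVM

clf = OneClassSVM(nu=0.05, kernel='rbf', gamma='auto')   # nu ~ assumed fraction of outliers
clf.fit(X_yes)
pred = clf.predict(X_new)                                # +1 = looks like "Yes", -1 = does not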
H: Problems with accuracy_score in sklearn I am learning Python and trying myself out at machine learning. I am reproducing a super simple example - based on the infamous iris dataset. Here it goes:

from sklearn import datasets
iris = datasets.load_iris()
X = iris.data
y = iris.target
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .5)
from sklearn import tree
nordan_tree = tree.DecisionTreeClassifier()
nordan_tree.fit(X_train, y_train)
from sklearn.metrics import accuracy_score

I get the following error message:

Traceback (most recent call last):
  File "tree3.py", line 17, in <module>
    print(accuracy_score(y_test, predictions))
NameError: name 'predictions' is not defined

I don't get it. As far as I understand, predictions is the vector containing all the predictions produced with DecisionTreeClassifier? What am I doing wrong? AI: You have not defined the variable predictions anywhere. You will need to get them from your classifier somehow. You have fit your nordan_tree on your training data; now you can use the fitted nordan_tree to generate the predictions, for example like this:

predictions = nordan_tree.predict(X_test)

Then your line of:

print(accuracy_score(y_test, predictions))

should work.
H: XGBoost increase the error when changing evaluation function I have changed the eval function of XGBoost to rmsle and the optimisation increase the error after the iteration [2] instead of decreasing it. If I change to the default eval function, RMSE, this does not happen. This is the code of RMSLE used: def evalerror(preds, dtrain): # this is compatible with DMatrix labels = dtrain.get_label() assert len(preds) == len(labels) labels = labels.tolist() preds = preds.tolist() terms_to_sum = [(math.log(labels[i] + 1) - math.log(max(0,preds[i]) + 1)) ** 2.0 for i,pred in enumerate(labels)] return 'error', (sum(terms_to_sum) * (1.0/len(preds))) ** 0.5 This is the parameters of XGBoost used: param = {'bst:max_depth':1, 'bst:eta':0.025, 'silent':False, 'objective':'reg:linear','eval_metric':'rmse' } bst = xgb.train( param, d_train, num_rounds,early_stopping_rounds=20, evals=eval_list, verbose_eval=True, feval=evalerror) and this is the evaluation: [0] eval-error:0.836219 train-error:0.835095 Multiple eval metrics have been passed: 'train-error' will be used for early stopping. Will train until train-error hasn't improved in 20 rounds. [1] eval-error:0.809301 train-error:0.806747 [2] eval-error:0.792647 train-error:0.78908 [3] eval-error:0.803355 train-error:0.798805 [4] eval-error:0.803261 train-error:0.79835 [5] eval-error:0.809352 train-error:0.804283 [6] eval-error:0.810453 train-error:0.805126 [7] eval-error:0.811059 train-error:0.805646 [8] eval-error:0.815261 train-error:0.809722 [9] eval-error:0.820237 train-error:0.814521 [10] eval-error:0.823378 train-error:0.817408 [11] eval-error:0.824981 train-error:0.81868 [12] eval-error:0.826607 train-error:0.820176 [13] eval-error:0.827813 train-error:0.821358 [14] eval-error:0.827625 train-error:0.821007 [15] eval-error:0.823347 train-error:0.816547 [16] eval-error:0.824362 train-error:0.81752 [17] eval-error:0.82529 train-error:0.818321 [18] eval-error:0.824621 train-error:0.817463 [19] eval-error:0.824103 train-error:0.816766 [20] eval-error:0.814759 train-error:0.807234 [21] eval-error:0.807961 train-error:0.800186 [22] eval-error:0.808398 train-error:0.800246 Stopping. Best iteration: [2] eval-error:0.792647 train-error:0.78908 It may be the case that I need to adjust my objective function to this evaluation metric? AI: If your goal is to minimize the RMSLE, the easier way is to transform the labels directly into log scale and use reg:linear as objective (which is the default) and rmse as evaluation metric. This way XGBoost will be minimizing the RMSLE direclty. You can achieve this by setting: dtrain = DMatrix(X_train, label=np.log1p(y_train)) where np.log1p(x) is equal to np.log(x+1). When you want to make your prediction in the original space, you will need to compute the inverse transform of np.log1p, that is to say np.expm1: predictions = np.expm1(bst.predict(dtest)) If you are just interested into monitoring the RMSLE through the training of your XGBoost which actually is minimizing the RMSE, then you should expect to see the RMSLE behave a little strangely as it is not what you are minimizing.
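Putting the pieces together, a rough sketch of that approach (X_train, y_train, X_valid, y_valid, X_test are placeholders, and newer XGBoost versions use 'reg:squarederror' instead of 'reg:linear'):

import numpy as np
import xgboost as xgb

d_train = xgb.DMatrix(X_train, label=np.log1p(y_train))
d_valid = xgb.DMatrix(X_valid, label=np.log1p(y_valid))
param = {'max_depth': 1, 'eta': 0.025, 'objective': 'reg:linear', 'eval_metric': 'rmse'}
bst = xgb.train(param, d_train, num_boost_round=1000,
                evals=[(d_train, 'train'), (d_valid, 'eval')],
                early_stopping_rounds=20)
predictions = np.expm1(bst.predict(xgb.DMatrix(X_test)))   # back to the original scale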
H: Similarity between border_mode and zero padding in Keras In Keras, border_mode = 'valid' doesn't zero pad the input. Thus, we subsequently get an output feature map that is not the same size as the input. Likewise, setting border_mode = 'same' gives output feature maps that are the same size as the input. My question is: if we set border_mode = 'same', should we also perform zero padding using the zero padding layer? AI: I think I found the answer to this here: https://github.com/fchollet/keras/issues/1984. If the stride is 1, border_mode = 'same' does the job of padding to ensure that the output feature maps are the same size as the input.
H: Splitting data in scikit-learn I know how to split the dataset into train and test sets using train_test_split, but is there any way that I can split the dataset into three different sets, i.e. "train set", "test set" and "validation set"? An example should be enough. AI: train_test_split is just a utility function around ShuffleSplit, which in turn just randomly assigns each sample to either train or test, taking the desired probability into account. You can do that however you'd like, and there's no real reason to use that specific function. It's not too hard to come up with some code that does that for three values or N values, if you would rather avoid calling train_test_split twice.
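For example, one way to get a 60/20/20 split is simply to call it twice (X and y are your full data; the second test_size is 0.25 because a quarter of the remaining 80% is 20% of the total):

from sklearn.model_selection import train_test_split   # sklearn.cross_validation in older versions

X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25)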
H: Which programming languages should I use to integrate different analysis software packages? I am currently writing a package to streamline data analysis for a research lab. There are several different analysis software packages that we use, based out of unix, matlab, and (rarely used) python. A typical data set is about 250GB (raw), and requires at least 4 different preprocessing steps before analysis. The finished product typically ends up taking up about 1TB. The goal of my package is to allow the user to pick and choose which existing package to use for each step before running the analysis, and then the program will execute it without further user intervention. Since the goal is to integrate these different packages, written in different languages, I decided to write the program in bash to make it easy to call the actual analysis scripts no matter what language they are written in. The program is starting to come along, but it is getting very complex because of the various idiosyncratic expectations and conventions of each analysis package. I realize bash may not be the most suitable language for complex tasks, but I like that it's easy to call scripts in different languages from there, and that it's relatively simple. The program also does a lot of file handling, which bash is good at. On the other hand, I hear it's also very slow, and it gets clunky when things get more complicated. I'm wondering if bash is the best choice for this task. Does anyone have suggestions for other languages, or combinations of languages, that might be better suited to my needs? I should note that I am a self-taught programmer and this is my first real programming challenge. I am mostly familiar with bash, matlab, R, and a little bit of python, but I'd like to learning new things too (C maybe?). Also, this is all going to run on unix. AI: If you are mostly stitching up together calls into other software, like unix utilities (awk, grep, sed ...), python, and matlab scripts, bash is just fine or possibly even best for the job to construct simple pipelines and workflows. It's easy in bash to read user input, store it in variables, then launch other software depending on the set variables. It's perfectly fast enough for that, and nothing else gets any easier. If you, however, were to use bash for preprocessing itself, like looping through files line by line, packing and unpacking tab-separated values into arrays etc., that would be excruciatingly slow and not recommended.
H: What techniques to use for image matching I have a database with around 30,000 pictures. All of them are of different objects. They are all from a certain perspective; the pictures themselves are the same size but the objects vary in size. I want to build a system that you can query with a new picture, and that will return its nearest neighbor, given that it is similar enough. The queried images will look relatively similar to the originals: there might be some horizontal and/or vertical translations, a bit different lighting, and sometimes there will be a sticker in a different place. Some queried objects will not be in the set, and that needs to be returned as well. What are good techniques to try, and what would be their downsides? Getting multiple pictures of each object is infeasible. Here are some ideas; I'm wondering if there is more to try:
Euclidean distance on the raw data (very sensitive but fast)
Use traditional keypoint matching; linear matching is very slow unfortunately
Use a (denoising) autoencoder for a lower dimensional feature representation, and linearly match in this encoded space (smallest Euclidean distance, at least a faster linear search)
Learn a siamese network for linear matching (don't know how fast this works but it seems slow too)
Learn a deep binary autoencoder onto 28 bits, which allows for very quick narrowing of the search space before doing one of the previous methods, by using these bits as a memory mapping to a list of candidate solutions
Any other ideas? AI: Hashing is the way to go if you want fast -- constant time -- retrieval of nearest neighbors. Here's a recent example using neural networks to learn a binary hash: Deep Learning of Binary Hash Codes for Fast Image Retrieval (code) (slides). You want to avoid doing all-pairs computations like correlations.
H: forecast monthly shipment (time series) for 300 products individually in R I can build an ARIMA model with a regressor to forecast monthly shipments for one product, but I have 300 products and each of them needs a monthly forecast. My question is: instead of building 300 models, is there another approach in R to deal with the goal I am trying to accomplish? Thank you! AI: It is difficult to answer this question with the limited information provided here. Nonetheless, I have the following recommendations. Do you have a (data-backed) reason to believe that you do not need 300 different models? For example, are the shipments of different products correlated? Alternatively, can you instead model the problem at higher levels, like product categories? In either case, you can take a look at hierarchical time series modeling. This journal article by Rob Hyndman, author of the forecast package in R, can get you started on hierarchical time series modeling.
H: Transition from clustering to classification? To date, I have done several ad-hoc text clustering projects which use combinations of topic modeling, k-means, and other algorithms. Basically, the point of these projects was to produce themes for different events based on associated text. The themes were named manually after the appropriate levels of clustering were determined, and are now stored in a csv in the following format:

event_id majortheme minortheme majortheme_id minortheme_id
12 Job Failure TWS Issue 1 major1minor1
14 Job Failure TWS Issue 1 major1minor1
15 Job Failure Job Abend 1 major1minor2
16 Access Issue Unable to Login 2 major2minor1
17 Access Issue Unable to Connect 2 major2minor2

I want to transition from clustering to classification (from descriptive to prescriptive analytics), i.e. being able to take new events (with new event_ids) and classify them based on previous clusterings. This would be sort of an iterative training dataset, as new classifications would be added to previous clusterings, after verifying that the model hasn't gone completely awry. Using Python, what is the best approach to implement this sort of classification pipeline? Is it as simple as saving my initial clustering results and then just using that data as a training set going forward? Then saving the test predictions to the original training dataset, and so on? AI: In order to build a model to make predictions you need a labeled training set, that is, a training set in which each training example is assigned a class label. Training sets are usually labeled by human experts that use their domain knowledge to manually classify the examples in the training set. You have already done that, as described in your first paragraph. However, sometimes this process is expensive. In order to decrease cost, semi-supervised learning is sometimes applied. In semi-supervised learning, and under some assumptions, a small amount of labeled data, together with a large amount of unlabeled data, is used to build a predictive model. The unlabeled data are assigned labels during the training process. This seems to match your idea in your last paragraph, and it is already implemented in Python. I think that if you found clear clusters in your data it may work very well. Worth a try at least.
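As a sketch of what that could look like with scikit-learn's semi-supervised module (old_features/old_labels come from your existing clusterings, new_features are the unlabelled new events, and -1 marks "unknown"; the kernel choice and a dense feature matrix are assumptions you would need to adapt):

import numpy as np
from sklearn.semi_supervised import LabelSpreading

X = np.vstack([old_features, new_features])
y = np.concatenate([old_labels, -np.ones(len(new_features), dtype=int)])

model = LabelSpreading(kernel='knn', n_neighbors=10)
model.fit(X, y)
new_labels = model.transduction_[len(old_labels):]   # labels proposed for the new events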
H: Algorithm selection for controlling a model vehicle I've built an autonomous sailing robot (https://github.com/kolosy/ArduSailor). Turns out, the problem of piloting it is fairly complex, and my procedural approach to solving it hasn't worked well (or at all). I think that an ML-based approach may be better, and I'm trying to figure out the right algorithm to use. I'm viewing it as an optimization problem of sorts - I've got a small set of parameters: Position (lat, lon) Orientation (in 9 DOF) Speed Wind speed & direction Distance to waypoint Heading to waypoint Sail position (basically winch orientation, a single value between 0 and 180) Rudder position (same as above) If I'm thinking about this right, I need to vary my rudder and winch over time in response to my current position, orientation and wind direction to minimize the difference between my orientation and the heading to the waypoint, and minimize the distance to the waypoint. My approach right now, is to train an ANN (using this code) by recording a manual run through my course. Is this the right approach and algorithm? Is there a better / more suitable way of thinking about this? AI: Please, do not use ANN, because what you are looking for actually is SLAM method. It is a set of algorithms working together to localize your ship in an environment and mapping it simultaneously. You can find an introduction on the Wikipedia: https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping This is what autonomous cars are using.
H: Need to prepare the data for a Link Analysis project? I've a dataset with the following schema:

Transaction_ID - Unique ID
Product - ID of purchased product
Department - ID of the department that sells the product
Product_Type - The purchased product type
Date - The date of purchase
Quantity - The number of units purchased

I need to do a link analysis project to analyze some consumption patterns of the products and answer questions of the form: "If product B is purchased then the customer will also take product A". I will use Scala/Python to do the link analysis over the dataset, but the examples I have seen are datasets with direct links, like the "Flight Data" project where the schema is: ID, Origin, Destination. My question is: do I need to prepare my dataset to do the link analysis (are there some best practices for this?), or can I analyze the dataset with its current structure? Many thanks! Sorry for my inexperience on this topic! AI: One option is that you could make a bipartite graph from your TLOG and then implement some link analysis. Depending on your requirements (volume of the data) there are different frameworks that you can use. One which is quite popular for not so large data is networkx (manual), where you can find already implemented algorithms for link analysis and link prediction. Maybe the community would be able to help you more if you try to be more specific about what kind of link analysis you want and what kind of problem you are trying to solve (does it perhaps have to do with supervised or unsupervised learning?).
H: Query regarding neural network model I used the Neural Network Toolbox in MATLAB to train my data. I used four training algorithms: Scaled Conjugate Gradient (SCG), Gradient Descent with momentum and adaptive learning back-propagation (GDX), Resilient Back Propagation (RBP) and Broyden-Fletcher-Goldfarb-Shanno quasi-Newton back propagation (BFG). I have fixed the seeds at different points and obtained the accuracy. This is what I get: The first column contains the size of the feature set. I have added features and increased the size of the feature set to analyse the performance. Initially I ranked the features and then took the top 8 features as one set, the top 16 features as the next set, and so on. The first number before the '-' is the performance of the algorithm on the training set; the second number after the '-' is the accuracy on the testing set. The train and test sets have been divided into 60% and 20% respectively; the other 20% is the validation set. The learning algorithms have each been run with the same seed values to fix the accuracy. I have fixed the seed to obtain each of the results, by the way: I have used rng(1), rng(10), rng(158), rng(250) and averaged the results to obtain one single pair of train-test accuracies, and I have done this for each pair. As you can see I am getting the same accuracy for all feature set sizes for each of the individual training algorithms. The same data shows perturbation in SVM when I change the set size. What does this mean? AI: To debug this case, I suggest you try the following steps: Reduce the features step-by-step until you end up using just 1 feature and see whether the accuracy changes or not. Add a sine wave and random noise to the feature set and see whether it affects any of these optimization algorithms. Re-evaluate how you selected or derived these features; check if these are highly correlated. Are your classification targets highly imbalanced? If so, then under/over sample them to achieve a more balanced training set. Then check the performance of your algorithm after training on this balanced dataset. As already highlighted by Jan van der Vegt, it's extremely odd that changing the number of features from 8 to 40 has no impact on test set accuracy.
H: Neural Network data type conversion - float from int? In case of image classification with 0-255 pixel values of integer type, is it necessary/recommended to convert the values into float with Neural Networks? This conversion seems me unnecessary (moreover, float slows down matrix operations) but several implementations follow this practice. Why? AI: Technically with most languages you could pass in integer features for the input layer, since the weights will be floats, and multiplying a float by an integer will give you a float. Also, you don't usually care about partial derivatives of the input data, so it doesn't matter that the values are discrete. However: For all weights and neuron activations, if you are using a method based on backpropagation for training updates, then you need a data type that approximates real numbers, so that you can apply fractional updates based on differentiation. Best weight values are often going to be fractional, non-whole numbers. Non-linearities such as sigmoid are also going to output floats. So after the input layer you have matrices of float values anyway. There is not much speed advantage multiplying integer matrix with float one (possibly even slightly slower, depending on type casting mechanism). So the input may as well be float. In addition, for efficient training, the neural network inputs should be normalised to a specific roughly unit range (-1.0 to 1.0) or to mean 0, standard deviation 1.0. Both of these require float representation. If you have input data in 0-255 range - float or not - you will usually find the network will learn less effectively. There are exceptions to these rules in some architectures and learning algorithms, where perhaps an integer-based input would work, but for the most common NN types, including MLP and "deep" feed-forward networks, it is simpler and easier to use float data type.
H: Feeding R agnes object into cutree I'm using agnes to group terms that frequently appear with one another in a set of documents. I get a dendrogram which verifies that things are working as expected. From this, I'd like to to retrieve the cluster of each member. I'm using cutree to cut the tree at a specified height, but it is returning one extra cluster, I'm not sure why. length(flex75$order.lab) [1] 132 length(cutree(flex75, h = 1.5)) [1] 133 It occurs for varying heights, even up to h = max(flex75$height) and all clusters = 1. So I cannot bind the labels to their clusters. EDIT: I found the issue to be when I reorder flex75$height to be in ascending order (I was previously receiving an error that: the 'height' component of 'tree' is not sorted (increasingly)). I was also reordering the $order and $order.lab components using the $height order, and turns out that height is of length n-1 and order/order.lab is of length n. My question now is: What is the relationship between the agnes object's $height vector, and the $order/$order.lab vectors? Can I permute the heights, as I have, and feed that object into cutree? Should any of the object's other properties be correspondingly changed? AI: I found an answer in a related post here. Another user was getting a similar error about ordering height, but theirs came with the suggestion to apply as.hclust() first. I converted the agnes object to an hclust and passed that directly into cutree. That seemed to solve the problem.
H: logistic regression - why the exponent (log ratio) is linear I am new to data science and stats in general. I have read numerous articles to understand logistic regression. I have some idea of why it works and how it fits a scattered plot of 1s and 0s when the target variable is binary. However, one piece of the puzzle I still don't understand is how someone arrived at ln(p/(1-p)) = B0 + B1X1 + ... I see all articles assume that this is the link function and then go on talking about how it solves our regression problem for a binary variable. But how did this link function come about? AI: p is a probability, so it is strictly between 0 and 1. So ln(p/(1-p)) is:
for p = 0: ln(0/1) = -Inf
for p = 1: ln(1/0) = +Inf
So now you've rescaled the probability to +/- Inf. In the GLM framework you can use practically any function that scales probability to +/-Inf (see any GLM textbook, or https://stats.stackexchange.com/questions/20523/difference-between-logit-and-probit-models). How did it come about? Well, mathematicians and statisticians realised they needed a function with the above properties, had a think, came up with a few, decided on the ones that had tractable asymptotic properties, explored how they worked with real data and decided the sensible ones were logit (as above), probit (see references) and a few others. Of course you should always test your model assumptions against your data, and the choice of link is just another one of those assumptions, just like assuming linearity in a covariate.
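To make the connection concrete, you can also run the algebra in the other direction: exponentiating both sides of the link and solving for p recovers the familiar sigmoid, $$\frac{p}{1-p} = e^{\beta_0 + \beta_1 x_1}, \qquad p = \frac{e^{\beta_0 + \beta_1 x_1}}{1 + e^{\beta_0 + \beta_1 x_1}} = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1)}},$$ so a model that is linear on the log-odds scale is exactly a model whose predicted probabilities are squashed back into (0, 1).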
H: Shared weights in convolutional neural network In a convolutional neural network, the weights are shared within a feature map. What about two different feature maps? How are they made different (so that we don't learn the same thing again)? Q: What exactly in the training algorithm makes the weights differ across different feature maps? For example, if I define 2 feature maps, does the network guarantee that the weights are different in feature map A and feature map B? AI: By randomizing your initial weights, the gradients will flow differently through the network. If all the feature maps converged to the same weight set, there would be much less to learn and your loss would be higher than when learning different features. Let's say they start very, very close to each other; then making them more different from each other makes the network more expressive. Due to the nature of backpropagation these weights will diverge as opposed to converge (in general).
H: Lack of activation function in output layer at regression? I supposed that the output layer should have certain kind of activation function (preferably linear or tanh) for regression, but I recently read that in case of regression this is not necessary. This seems me reasonable, after all, the node(s) of the output layer produce(s) numeric values themselves. Which solution is better: with or without activation func? AI: Activation "linear" is identical to "no activation function". The term "linear output layer" also means precisely "the last layer has no activation function". Whether you use one or the other term might be down to how your NN library implements it. You may also see it described either way around in documents, but it is exactly the same thing mathematically: $$a^{out}_j = b^{out}_j + \sum_{i=1}^{N^{hidden}} W_{ij}a^{hidden}_i$$ Where $a$ values are activation, $b$ are biases, $W$ is weight matrix. For a regression problem with a mean squared error objective, this is the most common approach. There is nothing stopping you using other activation functions. They might help you if they match the target variable distribution. About the only rule is that your network output should be able to cover possible values of the target variable. So if the target variable is always between -1.0 and 1.0, with higher density around 0.0, perhaps tanh could also work for you.
H: Which Algorithm or combination of Algorithms to use to develop supervised Video Event detection? I have to develop a video event detection tool for a ticketing counter. The tool must take photographs of persons who jump over the gate without taking tickets. I have a set of videos in which people jump over the gate. With that data in hand, how can I implement a video event detection tool? I am new to video analytics. I don't know where to start. Where can I find some good tutorials about supervised video event detection? AI: I recommend using a 3D convolutional neural network. Traditional 2D convolutional neural networks excel at identifying objects in images, and 3D convolution is used for identifying objects or movements also across multiple frames (i.e., time). TensorFlow offers this capability.
H: Market Basket Analysis - Data Modelling Imagine that I've the following dataset:

Customer_ID Product_Desc
1 Jeans
1 T-Shirt
1 Food
2 Jeans
2 Food
2 Nightdress
2 T-Shirt
2 Hat
3 Jeans
3 Food
4 Food
4 Water
5 Water
5 Food
5 Beer

I need to model consumer behaviour and predict which products are associated. To do that, I think a good strategy will be to build the relationships first and then count the occurrences (I don't know if anyone has a better idea). The first step is to derive these relationships:

Jeans-T-Shirt-Food
Jeans-Food-Nightdress-T-Shirt-Hat
Jeans-Food
Food-Water
Water-Food-Beer

How can I do this? With Apache Pig or with Spark? Many thanks!!! AI: You may use groupByKey or combineByKey in Spark.
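A rough sketch of the Spark route in PySpark, assuming a headerless CSV with the two columns in the order above ("FILE" is a placeholder path):

rows = sc.textFile("FILE").map(lambda line: line.split(','))
pairs = rows.map(lambda f: (f[0], f[1]))          # (Customer_ID, Product_Desc)
baskets = pairs.groupByKey().mapValues(list)      # one product list per customer
baskets.take(2)                                   # e.g. [('1', ['Jeans', 'T-Shirt', 'Food']), ...]

Counting how often each product combination occurs can then be done on top of these per-customer lists.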
H: Encoding features in sklearn Suppose I have a dataset of size (10000, 45). One of the features in the dataset is activity_type, in which the values vary from 1 to 15, as shown below:

import pandas as pd
df = pd.read_csv('actTrain.csv')
df['activity_type'].head()

The output of the above code is:

0    1
1    1
2    2
3    1
4    3
Name: activity_type, dtype: int64

Will encoding the activity_type in the above code using OneHotEncoder in sklearn improve the model in any way? Is it necessary to encode that feature? And if yes, which one should I choose: LabelEncoder or OneHotEncoder? AI: LabelEncoder converts strings to integers, but you have integers already. Thus, LabelEncoder will not help you anyway. When you use your column with integers as it is, sklearn treats it as numbers. This means, for example, that the distance between 1 and 2 is 1, and the distance between 1 and 4 is 3. Can you say the same about your activities (if you know the meaning of the integers)? What are the pairwise distances between, for example, "exercise", "work", "rest", "leisure"? If you think that the pairwise distance between any pair of activities is 1, because those are just different activities, then OneHotEncoder is your choice.
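If you decide the codes are purely categorical, a quick way to one-hot encode them in pandas (this assumes the 1-15 codes carry no order; if they are genuinely ordinal, leaving them as integers can be fine):

import pandas as pd

dummies = pd.get_dummies(df['activity_type'], prefix='activity')
df = pd.concat([df.drop('activity_type', axis=1), dummies], axis=1)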
H: How to train neural networks with large sized data sets? I have a dataset size of ~500000 with input dimension 46. I am trying to use Pybrain to train the network but the training is extremely slow for the whole dataset. Using batches of 50000 data points, each batch takes more than 2 hours for training. What are caveats of optimizing the network design so that the training is faster? AI: Here are some of the things that influence your training speed: Number of weights in your network Speed of your CPU Package you are using (mostly engine it is working on, in PyLearn this is Theano) If all your data fits in memory or you are reading from disk in between batches With regards to network design the only thing you can really do is make the network more shallow to reduce the number of weights. To reduce the number of epochs there might be other options like adding residual connections but that will not decrease the training time of 1 epoch. Without more information it is unclear where the bottleneck is, but 20 hours for one epoch seems a bit high. The easiest and biggest improvement you will be able to get is to use a good GPU, which should be possible using pylearn since it is built on top of Theano.
H: Unexpected Error Message in RStudio while using the 'twitteR' Package I installed the R package twitteR in RStudio for accessing Twitter data for a particular handle. But whenever I try to access the Twitter data it shows me an unexpected message:

In doRppAPICall("search/tweets", n, params = params, retryOnRateLimit = retryOnRateLimit, : 100 tweets were requested but the API can only return 0

Any thoughts on that? I tried different parameters, handles and functionalities but still get the same issue. Here's my code for the particular functionality:

my_tweets <- searchTwitter('@jabhij', geocode='20.593684, 78.96288, 2000km', n = 100)
my_tweets_df <- do.call("rbind", lapply(my_tweets, as.data.frame))
View(my_tweets_df)

Any thoughts on that will be very helpful. Cheers! :) AI: It's because of the whitespace in the parameter for geocode. It should be:

my_tweets <- searchTwitter('@jabhij', geocode="20.593684,78.96288,2000km")
H: Is there a big difference between data science, big data and databases? Is there a big difference between data science, big data and databases? I am confused about these three; can anyone help me out of this confusion? AI: Well, they are absolutely different things, but they are somehow linked. I am going to go through each of them.

Data Base
Think of a database (DB from now on) as a computer whose only purpose is to store data accessible to be read. By data, and focusing only on SQL-like DBs, I mean basically tables of information like an Excel file, with columns and rows. You can think of a SQL DB as an ecosystem of Excel tables which share some common field. So basically a DB is a hardware infrastructure which allows a given amount of information to be written and read (in the very beginning they were plain computers; of course with the rise of the internet specialized hardware appeared). You can build your own DB on your personal computer.

Big Data
"An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed ... for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes." P.S. Laplace
Laplace did not think it over very deeply before formulating his sentence; obviously if God had come to him and given him what he wanted, sooner or later he would have realized that all that information was indeed useless to him. Where could he store it all? Where should he start reading? What could he do with such an amount of information that he could never finish computing it? If he could read everything, what should he calculate first? These are all questions Big Data tries to answer and find a solution to. Big Data appeared together with the huge websites the internet gave birth to, such as Amazon or Google. At some point they needed to store so much information that it was impossible to store it on a single computer, not even a big one, so they needed to use a set of computers, for which the previous standard technologies for DBs did not work anymore. This fact was also the seed for NoSQL DBs. More about Big Data and NoSQL here: http://www.kdnuggets.com/2016/07/seven-steps-understanding-nosql-databases.html

Data Science
Finally, Data Science is a statistical science whose aim is to extract order out of chaos, like any other science; however, whereas the rest of the sciences are each focused on a single "narrow" piece of knowledge, such as biology or chemistry, data science is a multidisciplinary science that can face problems from a broad range of origins. Examples would be marketing- or business-oriented problems, cosmology, etc. So data science uses mathematical and computer science algorithms to provide some useful information out of a disordered set of data. And here is where the link with Big Data comes in; actually it is in the question posed before: What can he do with such an amount of information that he could never finish computing it? So Data Science and Big Data are a very usual marriage in most IT companies nowadays, and in more specific fields day after day. But data science is applying a set of mathematical algorithms to data (like applying a calculation in an Excel file to create a new row), and big data is the technology to hold a huge amount of Excel files (I use the words "Excel files" here just to make it easier to grasp).
H: Apply GroupByKey to identify the Frequently Products Purchase Together I've a data set with this fields: Transaction_ID Customer_ID Department Product_ID And I'm trying to obtain a tuple with the products associated with each customer transaction. Like: Transaction_1 -> Product_ID 1, Product_ID 2, Product_ID 3 Transaction_2 -> Product_ID 1, Product_ID 2, Product_ID 4 .... I've this code: But It not return the dataset as I want: case class transactions (Transaction_ID: String, Customer_ID: String, Department: String, Product_ID: String) def csvToMyClass(line: String) = { val split = line.split(',') transactions(split(0),split(1),split(2),split(3)) } val csv = sc.textFile("FILE").map(csvToMyClass) csv.take(10) csv.saveAsTextFile("PATH/output.csv") How can I obtain the list of products associated group by Transaction_ID?? AI: Since you are using Spark 1.6, I'd rather do these kind of transformations with DataFrame as it's much easier to manipulate. You'll need to use SQLContext implicits for this : import sqlContext.implicits._ // not need in spark-shell Now, let's create some dummy data just to follow the code snippet that you have provided : scala> val data = sc.parallelize(Seq("1,2,3,4", "2,3,4,5", "1,3,4,5", "1,6,6,7")) // data: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[13] at parallelize at <console>:27 Back to your code, we define a case class Transactions and the CSV converter : scala> case class Transactions(Transaction_ID: String, Customer_ID: String, Department: String, Product_ID: String) // defined class Transactions scala> def csvToMyClass(line: String) = { val split = line.split(','); Transactions(split(0), split(1), split(2), split(3)) } // csvToMyClass: (line: String)Transactions We will use implicits now to convert our data into a DataFrame after converting it to Transactions : scala> val df = data.map(csvToMyClass).toDF("Transaction_ID", "Customer_ID", "Department", "Product_ID") // df: org.apache.spark.sql.DataFrame = [Transaction_ID: string, Customer_ID: string, Department: string, Product_ID: string] Let's take a look at the DataFrame : scala> df.show // +--------------+-----------+----------+----------+ // |Transaction_ID|Customer_ID|Department|Product_ID| // +--------------+-----------+----------+----------+ // | 1| 2| 3| 4| // | 2| 3| 4| 5| // | 1| 3| 4| 5| // | 1| 6| 6| 7| // +--------------+-----------+----------+----------+ Now all we have to do is a simple group by and perform a collect_list aggregation on the first dataframe : scala> val df2 = df.groupBy("Transaction_ID").agg(collect_list($"Product_ID")) // df2: org.apache.spark.sql.DataFrame = [Transaction_ID: string, collect_list(Product_ID): array<string>] We can check the content of our new DataFrame df2 : scala> df.groupBy("Transaction_ID").agg(collect_list($"Product_ID")).show // +--------------+------------------------+ // |Transaction_ID|collect_list(Product_ID)| // +--------------+------------------------+ // | 1| [4, 5, 7]| // | 2| [5]| // +--------------+------------------------+ I hope that this answers your question. Note: If you wish to know what's the difference between RDD and DataFrames, I advice you to read Databrick's blog entry about it here.