H: About 1M rows of data. Should I restrict myself to few columns as well?
I'm trying to build a predictive model from about 1 million rows of data. My goal is to predict a certain numerical value.
I have the intuition that I should use very few numerical binary columns so I don't get data points that are too separated, a.k.a., the curse of dimensionality. Is this true? Besides, is it the same for numeric columns?
AI: Did you try it?
In general, I believe the curse of dimensionality is way overrated. As a rule of thumb, the curse of dimensionality refers to having more columns than rows. I doubt that you have 1M columns...
A robust algorithm, such as Logistic Regression, can often deal with sparse data. I sometimes get the feeling that this algorithm doesn't get the credit it deserves. Especially in production (due to high bias and low variance), I prefer it whenever possible. Nothing worse than an algorithm that fails in production... |
H: Elements of Statistical Learning - question on p. 12
I am starting to work through Elements of Statistical Learning, and right off the bat I am coming across things that I don't understand. I would be grateful for any help from this community. Please let me know if this is not the appropriate forum to post these questions (in which case, if you're feeling extra-nice, please point me to the correct forum).
On page 12, the authors present the familiar expression for linear regression:
$\hat{Y} = X^T\beta$
The authors then state:
Here we are modeling a single output, so $\hat{Y}$ is a scalar; in general $\hat{Y}$ can be a $K$-vector, in which case $\beta$ would be a $p \times K$ matrix of coefficients. In the $(p+1)$-dimensional input–output space, $(X, \hat{Y})$ represents a hyperplane. If the constant is included in $X$, then the hyperplane includes the origin and is a subspace;
Questions:
Is the input-output space only $p+1$ dimensional assuming that $Y$ is a scalar, rather than a $K$-vector? If $Y$ is a $K$-vector, would the input-output space be $p+K$ dimensional?
What does the statement that "$(X, \hat{Y})$ represents a hyperplane" mean? Assuming that $X$ is a single column vector representing the fact that there is only one input variable, could you help me visualize what the hyperplane would look like?
Thank you in advance!
AI: You are correct: the $p+1$ count refers to the single-output case $K=1$ again. With $p$ inputs producing one output, the joint input–output space has $p+1$ dimensions (for a $K$-vector output it would be $p+K$ dimensional). The linear function $X^T\beta$ that you fit maps the $p$-dimensional input space onto a $p$-dimensional subspace of that $(p+1)$-dimensional space, and that subspace is exactly a hyperplane. Try plotting $z = 3x + 2y$ in a graphing calculator for a quick intuition of what a single output from $p=2$ inputs feels like. |
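If you prefer Python to a graphing calculator, here is a quick sketch with numpy/matplotlib (my own illustration, not from the book) that draws that plane:
import numpy as np
import matplotlib.pyplot as plt

# The points (x, y, z) with z = 3x + 2y form a 2-dimensional plane (a hyperplane)
# sitting inside the 3-dimensional input-output space: p = 2 inputs, 1 output.
x, y = np.meshgrid(np.linspace(-5, 5, 50), np.linspace(-5, 5, 50))
z = 3 * x + 2 * y

ax = plt.figure().add_subplot(projection='3d')
ax.plot_surface(x, y, z, alpha=0.6)
ax.set_xlabel('x'); ax.set_ylabel('y'); ax.set_zlabel('z')
plt.show()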
H: How to save a knn model?
I need to save the results of a fit of the SKlearn NearestNeighbors model:
knn = NearestNeighbors(10)
knn.fit(my_data)
How do you save the trained knn to disk using Python?
AI: import pickle

knn = NearestNeighbors(10)
knn.fit(my_data)

# It's important to open the file in binary mode
knnPickle = open('knnpickle_file', 'wb')

# pickle.dump(object_to_save, destination_file)
pickle.dump(knn, knnPickle)

# close the file
knnPickle.close()

# load the model from disk
loaded_model = pickle.load(open('knnpickle_file', 'rb'))

# NearestNeighbors has no predict(); query the fitted model with kneighbors()
distances, indices = loaded_model.kneighbors(X_test)
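Alternatively (my suggestion, not from the linked article), scikit-learn's documentation recommends joblib for persisting estimators, since it handles models containing large numpy arrays more efficiently:
import joblib

# save and reload the fitted model
joblib.dump(knn, 'knnpickle_file.joblib')
loaded_model = joblib.load('knnpickle_file.joblib')
distances, indices = loaded_model.kneighbors(X_test)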
refer: https://www.geeksforgeeks.org/saving-a-machine-learning-model/ |
H: May I use the same data in several time series intervals?
I am playing with RNNs / LSTMs for a classification task in predicting financial data.
I have a time series going many years back, and I am planning to divide it into a number of shorter time intervals that will be used to obtain a classification.
For example, divide the data into random-length intervals from 30 minutes to 24 hours, with a binary response 'higher' or 'lower' that answers whether the value goes higher or lower in the next hours after the interval.
My question is if I can "reuse" the data, as in for example for a certain 24 hour period create the following intervals: 24 hours, 12 hours, 6 hours, 3 hours and 1 hour and train a model on all of these examples. If I do so, obviously the same data will be reused with different responses.
Or is it better to create a number of models; for example a separate model for 24 hour intervals than the 3 hour intervals?
AI: I don't see a problem in "reusing" observations. It's just new/different information coming in. The question is more whether the model will train/perform well with a short lookback or time lag. Why not use a long(er) sequence of data to predict (e.g.) the movement in 3-hour steps? I wonder why you don't use all the data you have to train the LSTM and update the data stepwise. More data is usually better! |
H: Autoencoder gets ~0% accuracy / doesn't train at all
So I wanted to get into the topic of 'Autoencoder', and just tested how well it would work on random vectors of size 200.
model = keras.models.Sequential()
model.add(keras.layers.Dense(50, activation='tanh'))
model.add(keras.layers.Dense(200))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
model.fit(vectors, vectors, epochs=50, batch_size=100)
Loss is somehow negative and continuously decreases. Accuracy is pretty much 0%, with values looking like x.xxxxxxxe-07.
The architecture is supposed to be input=200, hidden-layer=50, output=200.
It really doesn't learn anything. What did I overlook, or what is the actual right approach here?
AI: A few things. Firstly I don't think what you are trying to do is very meaningful. An autoencoder tries to find patterns in your data and take advantage of them to be able to compress and reconstruct the data. You are trying to use it with random data, which means there will be no patterns for the autoencoder to learn. I suggest you try using MNIST or another dataset.
Another thing is that your last activation function, your loss function, your metric and possibly your data do not work together. It is hard to tell what is right since I don't know what random numbers you generated. Probably your numbers are not binary, and you would want:
model.add(keras.layers.Dense(200))
model.compile(optimizer='adam', loss='mse')
Or if your random numbers are binary then you would want:
model.add(keras.layers.Dense(200, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
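To see the same architecture actually learn something, here is a minimal end-to-end sketch on MNIST (my own example, assuming tf.keras; pixels are scaled to [0, 1], so sigmoid plus binary cross-entropy is a reasonable pairing):
import numpy as np
from tensorflow import keras

# Load MNIST and flatten each 28x28 image into a 784-dimensional vector in [0, 1]
(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0

# 784 -> 50 -> 784 autoencoder, analogous to the 200 -> 50 -> 200 setup above
model = keras.models.Sequential([
    keras.layers.Dense(50, activation='tanh', input_shape=(784,)),
    keras.layers.Dense(784, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(x_train, x_train, epochs=5, batch_size=100)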
Lastly, using accuracy as a metric does not make sense for an autoencoder. Accuracy is only useful when you are performing classification. |
H: Extracting email id gives error
I am extracting email ids and storing them into a new column variable, but I am getting the issue:
import re
def email_extract(comments):
    comments1 = re.findall(r'[\w\.-]+@[\w\.-]+', comments)
    return comments1
data["email_id"] = data.COMMENTS.apply(lambda x: email_extract(x))
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-33-d9b73bdc4f8e> in <module>()
----> 1 data["email_id"] = data.COMMENTS.apply(lambda x: email_extract(x))

C:\ProgramData\Anaconda4\lib\site-packages\pandas\core\series.py in apply(self, func, convert_dtype, args, **kwds)
   3192             else:
   3193                 values = self.astype(object).values
-> 3194             mapped = lib.map_infer(values, f, convert=convert_dtype)
   3195
   3196             if len(mapped) and isinstance(mapped[0], Series):

pandas/_libs/src\inference.pyx in pandas._libs.lib.map_infer()

<ipython-input-33-d9b73bdc4f8e> in <lambda>(x)
----> 1 data["email_id"] = data.COMMENTS.apply(lambda x: email_extract(x))

<ipython-input-32-97f3705d1972> in email_extract(comments)
      2 def email_extract(comments):
      3     #re_pattern = re.compile(r'[\w\.-]+@[\w\.-]+')
----> 4     comments1 = re.findall(r'[\w\.-]+@[\w\.-]+', comments)
      5     return comments1

C:\ProgramData\Anaconda4\lib\re.py in findall(pattern, string, flags)
    221
    222     Empty matches are included in the result."""
--> 223     return _compile(pattern, flags).findall(string)
    224
    225 def finditer(pattern, string, flags=0):

TypeError: expected string or bytes-like object
How can I fix this issue?
AI: You may have missing data (e.g. np.nans) in your COMMENTS field. This will throw an error in your email_extract function.
Try to filter the problematic rows out, e.g. with
filtered_data = data[~data.COMMENTS.isna()] #purge problematic comments
and applying the email extraction to the now "clean" column:
filtered_data.COMMENTS.apply(lambda x: email_extract(x)) |
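Alternatively (my own suggestion, untested on your data), you can keep every row and just coerce missing comments to empty strings, so re.findall always receives a string:
# NaNs become "", for which re.findall simply returns an empty list
data["email_id"] = data.COMMENTS.fillna("").apply(email_extract)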
H: MinMaxScaler when LSTM predictions fall outside of training range?
I am using MinMaxScaler on my training set and applying the transformations to my test set and inverse_transform to my model’s outputs. If this were, say, a stock prediction problem, my training set may have values between 1-200, but in the most recent time steps, the values are toward the high end of that range.
How can I handle future time-steps that should be predicted to land above 200 (since activation function outputs values between 0-1)? I guess my question is, is there a way to manually (or through some “rule of thumb”) specify a maximum value for the training set? So in this case, would it be possible to use MinMaxScaler to use 0-500 as the min-max values to transform to 0-1, that way values in my test set or predictions can be inverse_transformed accordingly? Is this an out-of-the-box scikit-learn feature? What’s the most efficient way to implement this, especially if this has to be done independently for multiple features?
AI: Inverse-transforming with MinMaxScaler should be capable of producing something outside of the training data's range. It seems that, in your use case, using a final activation that lands in $[0,1]$ might not be appropriate. Even if you transform the training data to land in, say, $[0,0.7]$, applying a sigmoid or some-such on the final layer seems to lack motivation.
As to the question of whether it can be done: yes, just with a little roundabout thinking. You can't specify what you want the scaler to think your data's max and min are, but you can specify the output range you want (parameter feature_range), which amounts to the same thing. |
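As a rough sketch of that idea (my own illustration): if you believe values up to about 500 are plausible, map the observed training maximum of 200 to 200/500 = 0.4 via feature_range, so an inverse transform of values near 1 lands around 500:
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_train = np.array([[1.0], [150.0], [200.0]])   # observed range roughly 1-200

# Scale the observed data into [0, 0.4] instead of [0, 1]
scaler = MinMaxScaler(feature_range=(0, 200 / 500))
scaler.fit(X_train)

print(scaler.transform([[200.0]]))        # ~0.4
print(scaler.inverse_transform([[1.0]]))  # ~498.5, i.e. roughly the assumed max of 500
For independent per-feature maxima you would need a separate scaler per column (for example via a ColumnTransformer).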
H: How to predict NaN (missing values) of a dataframe using ARIMA in Python?
I have a dataframe df_train of shape (11808, 1) that looks as follows:
Datum Menge
2018-01-01 00:00:00 19.5
2018-01-01 00:15:00 19.0
2018-01-01 00:30:00 19.5
2018-01-01 00:45:00 19.5
2018-01-01 01:00:00 21.0
2018-01-01 01:15:00 19.5
2018-01-01 01:30:00 20.0
2018-01-01 01:45:00 23.0
2018-01-01 02:00:00 20.5
2018-01-01 02:15:00 20.5
and a second df nan_df of shape (3071, 1) that looks as follows:
Datum Menge
2018-05-04 00:15:00 nan
2018-05-04 00:30:00 nan
2018-05-04 00:45:00 nan
2018-05-04 01:00:00 nan
2018-05-04 01:15:00 nan
2018-05-04 01:30:00 nan
2018-05-04 01:45:00 nan
2018-05-04 02:00:00 nan
2018-05-04 02:15:00 nan
The nan values in the nan_df need to be predicted using time series forecasting.
What I have done:
The code below divides the df df_train and runs the ARIMA model on that to predict the values for the test set
import pandas as pd
from pandas import datetime
import matplotlib.pyplot as plt
from statsmodels.tsa.arima_model import ARIMA
from sklearn.metrics import mean_squared_error
def parser(x):
    return datetime.strptime(x, '%m/%d/%Y %H:%M')
df = pd.read_csv('time_series.csv',index_col = 1,parse_dates =[1], date_parser = parser)
df = df.drop(['Unnamed: 0'],axis=1)
df_train = df.dropna()
def StartARIMAForecasting(Actual, P, D, Q):
    model = ARIMA(Actual, order=(P, D, Q))
    model_fit = model.fit(disp=0)
    prediction = model_fit.forecast()[0]
    return prediction
NumberOfElements = len(df_train)
TrainingSize = int(NumberOfElements * 0.7)
TrainingData = df_train[0:TrainingSize]
TrainingData = TrainingData.values
TestData = df_train[TrainingSize:NumberOfElements]
TestData = TestData.values
#new arrays to store actual and predictions
Actual = [x for x in TrainingData]
Predictions = list()
#in a for loop, predict values using ARIMA model
for timepoint in range(len(TestData)):
    ActualValue = TestData[timepoint]
    Prediction = StartARIMAForecasting(Actual, 3, 1, 0)
    print('Actual=%f, Predicted=%f' % (ActualValue, Prediction))
    Predictions.append(Prediction)
    Actual.append(ActualValue)
Error = mean_squared_error(TestData, Predictions)
print('Test Mean Squared Error (smaller the better fit): %.3f' % Error)
# plot
plt.plot(TestData)
plt.plot(Predictions, color='red')
plt.show()
Now, I wanted to do the same to predict the nan values in the nan_df, this time using the entire df_train dataframe and I did it as follows:
X = df_train.copy().values
nan_df = df.iloc[11809:, :].values
real = [x for x in X]
nan_Predictions = list()
#in a for loop, predict values using ARIMA model
for timepoint in range(len(nan_df)):
    nan_ActualValue = nan_df[timepoint]
    nan_Prediction = StartARIMAForecasting(real, 3, 1, 0)
    print('real=%f, Predicted=%f' % (nan_ActualValue, nan_Prediction))
    nan_Predictions.append(nan_Prediction)
    real.append(nan_ActualValue)
When I do this, I get the following error:
Traceback (most recent call last):
File "<ipython-input-42-33f3e242230d>", line 4, in <module>
nan_Prediction = StartARIMAForecasting(real, 3, 1, 0)
File "<ipython-input-1-043dac0dd994>", line 17, in StartARIMAForecasting
model_fit = model.fit(disp=0)
File "C:\Users\kashy\Anaconda3\envs\py36\lib\site-packages\statsmodels\tsa\arima_model.py", line 1157, in fit
callback, start_ar_lags, **kwargs)
File "C:\Users\kashy\Anaconda3\envs\py36\lib\site-packages\statsmodels\tsa\arima_model.py", line 946, in fit
start_ar_lags)
File "C:\Users\kashy\Anaconda3\envs\py36\lib\site-packages\statsmodels\tsa\arima_model.py", line 562, in _fit_start_params
start_params = self._fit_start_params_hr(order, start_ar_lags)
File "C:\Users\kashy\Anaconda3\envs\py36\lib\site-packages\statsmodels\tsa\arima_model.py", line 539, in _fit_start_params_hr
if p and not np.all(np.abs(np.roots(np.r_[1, -start_params[k:k + p]]
File "C:\Users\kashy\Anaconda3\envs\py36\lib\site-packages\numpy\lib\polynomial.py", line 245, in roots
roots = eigvals(A)
File "C:\Users\kashy\Anaconda3\envs\py36\lib\site-packages\numpy\linalg\linalg.py", line 1058, in eigvals
_assertFinite(a)
File "C:\Users\kashy\Anaconda3\envs\py36\lib\site-packages\numpy\linalg\linalg.py", line 218, in _assertFinite
raise LinAlgError("Array must not contain infs or NaNs")
LinAlgError: Array must not contain infs or NaNs
So, I would like to know how can I predict the nan values in the nan_df?
AI: You may apply Wolfram Language to your project. There is a free Wolfram Engine for developers and with the Wolfram Client Library for Python you can use these functions in Python.
I will first create some data (too few rows were provided in the OP) in a pandas.DataFrame, using a Python WolframLanguageSession to simulate an ARIMAProcess with RandomFunction.
Imports
import pandas as pd
import iso8601
from wolframclient.evaluation import WolframLanguageSession
from wolframclient.language import wl, wlexpr
Start WolframLanguageSession
wolfSession = WolframLanguageSession();
A simulation can be run by
print(
wolfSession.evaluate(
wl.RandomFunction(
wl.ARIMAProcess(1.4, [-0.9], 1, [-0.01, -0.08], 2.6),
[0, 3]
)('Values')
)
)
[1.1054178694529107, 1.860340990531042, 1.5519448249118848, 5.088452598965132]
Run a simulation of 100 steps (n) with dates in 15 minute intervals as in example data snip.
n=100;
df = pd.DataFrame(
{
'Datum' : pd.date_range('2018-01-01 00:00', periods=n, freq=pd.offsets.Minute(15)),
'Menge' : wolfSession.evaluate(
wl.RandomFunction(
wl.ARIMAProcess(1.4, [-0.9], 1, [-0.01, -0.08], 2.6),
[0, n-1]
)('Values')
)
},
);
The result can be visualised with DateListPlot by Exporting in one of the supported Raster Image Formats or Vector Graphics Formats.
wolfSession.evaluate(
wl.Export(
'<path with image filename>',
wl.DateListPlot(wl.Query(wl.Values,wl.Values)(df), PlotTheme='Detailed')
)
)
Now that there is some data, a forecast can be performed. First, TimeSeriesModelFit will be used to fit an ARIMA process. Then TimeSeriesForecast will predict future values.
The Wolfram Engine interprets a pandas.DataFrame as a Dataset object. TimeSeriesModelFit expects a TimeSeries or a list of time-value pairs. Therefore, using Query the conversion is made to a list of time-value pairs.
ts_model=wolfSession.evaluate(
wl.Query(
wl.RightComposition(
wl.Values,
wl.Function(wl.TimeSeriesModelFit(wl.Slot(1),'ARIMA'))
),
wl.Values
)(df)
);
print(wolfSession.evaluate(ts_model('BestFit')))
ARIMAProcess[1.4524650139005593, [-0.9099324923212446], << 1 >>, [-0.07171874225371022], 2.507600357444524]
20 steps forward can be simulated with
ts_forecast=wolfSession.evaluate(wl.TimeSeriesForecast(ts_model,[20]));
print(wolfSession.evaluate(ts_forecast('Values')))
[69.49895300256293, 73.9505213906962, 71.35235968644419, 75.16897645534839, 73.14857786048489, 76.43946920329194, 74.89744525567367, 77.75304796344956, 76.60710728838431, 79.10230095679928, 78.28430817717482, 80.48109139973985, 79.93463198084233, 81.88433817573274, 81.56270217242249, 83.30783423643538, 83.17234688189897, 84.74809624199086, 84.76673571338941, 86.20224006662474]
ts_forecast is a TemporalData object whose properties include 'Dates' and 'Values'. These can be used to convert it into a Python pandas.DataFrame for further processing in Python.
df_forecast = pd.DataFrame(
{
'Datum' : list(map(
lambda d: iso8601.parse_date(d),
wolfSession.evaluate(
wl.Map(
wl.Function(wl.DateString(wl.Slot(1),'ISODateTime')),
ts_forecast('Dates')
)
)
)),
'Menge' : wolfSession.evaluate(ts_forecast('Values'))
},
);
print(df_forecast.iloc[:3,:])
Datum Menge
0 2018-01-02 01:00:00+00:00 69.498953
1 2018-01-02 01:15:00+00:00 73.950521
2 2018-01-02 01:30:00+00:00 71.352360
Further processing can also be continued with the Wolfram Engine. For example, 95% confidence interval bands for the forecast.
conf = .95;
quant = wolfSession.evaluate(wl.Quantile(wl.NormalDistribution(), 1 - (1 - conf) / 2));
errors = wolfSession.evaluate(wl.Sqrt(ts_forecast('MeanSquaredErrors')));
error_bands = wolfSession.evaluate(
wl.TimeSeriesThread(
wl.Function([wl.Dot([1, -quant], wl.Slot(1)), wl.Dot([1, quant], wl.Slot(1))]),
[ts_forecast, errors]
)
);
wolfSession.evaluate(
wl.Export(
'<path with image filename>',
wl.DateListPlot(
[wl.Query(wl.Values,wl.Values)(df), error_bands, ts_forecast],
PlotStyle=[wl.Automatic, wl.Gray, wl.Automatic],
Filling=[wl.Rule(2,[3])],
PlotTheme='Detailed'
)
)
);
Terminate the session
wolfSession.terminate();
Hope this helps. |
H: Test dataset with categorical variable value not present in train dataset & transformer
I want to replace values of a categorical variable ( named 'six' ) by the mean of my target variable ( named 'target' ).
I am fitting a transformer that does just that on a train dataset df and then transforming the test dataset df2.
How do I deal with a value appearing solely in the test dataset?
When fitted on the train dataset, the transformer received no mean of the target variable for that value.
For example :
myarray = np.array([ [ 1 , 1 , 3 , 'v' , 'a' , 'x' , 0 ] ,
[ 2 , 2 , 2 , 'v' , 'b' , 'y' , 1 ] ,
[ 4 , 5 , 1 , 'w' , 'c' , 'z' , 1 ] ,
[ 2 , 1 , 9 , 'w' , 'c' , 'x' , 1 ] ,
[ 1 , 0 , 4 , 'w' , 'b' , 'y' , 1 ] ,
[ 2 , 2 , 3 , 'v' , 'b' , 'z' , 0 ] ] )
colnames = [ 'one', 'two', 'three' , 'four' , 'five' , 'six' , 'target' ]
df = pd.DataFrame( myarray , columns = colnames )
myarray2 = np.array([ [ 2 , 7 , 3 , 'v' , 'a' , 'x' , 0 ] ,
[ 9 , 2 , 2 , 'v' , 'a' , 'y' , 0 ] ,
[ 4 , 5 , 1 , 'w' , 'c' , 'k' , 1 ] ] )
colnames2 = [ 'one', 'two', 'three' , 'four' , 'five' , 'six' , 'target' ]
df2 = pd.DataFrame( myarray2 , columns = colnames2 )
df is my train dataset, df2 my test dataset.
We can see that variable 'six' has the value k, which does not exist in the train dataset.
Next :
df[ 'target' ] = df[ 'target' ].astype( 'float64' )
Next ( my homemade transformer ) :
class Cat2TargetMean( BaseEstimator , TransformerMixin ) :
    def __init__( self , col2trans , tgt_col ) :
        self._col2trans = col2trans
        self._tgt_col = tgt_col

    def fit( self, X, y = None ) :
        self._dic_col_p = {}
        for col in self._col2trans :
            p = X.groupby( col ).mean()[ self._tgt_col ]
            self._dic_col_p.update( { col : p.to_dict() } )
        return self

    def transform( self , X , y = None ) :
        for col , dic_p in self._dic_col_p.items() :
            X.replace( { col : dic_p } , inplace = True )
        return X
Then :
tsf = Cat2TargetMean( [ 'four' , 'five' , 'six' ] , 'target' )
tsf.fit( df )
tsf.transform( df )
tsf.transform( df2 )
Result :
one two three four five six target
0 2.0 7.0 3.0 0.333333 0.0 0.5 0.0
1 9.0 2.0 2.0 0.333333 0.0 1 0.0
2 4.0 5.0 1.0 1.000000 1.0 k 1.0
'k' value of column 'six' has not been transformed.
AI: I usually replace unseen and NaN values with the global target mean.
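A minimal sketch of that idea applied to your transformer (my own adaptation, untested on your exact code): store the global target mean in fit and let transform map unseen categories to it:
    def fit( self, X, y = None ) :
        self._global_mean = X[ self._tgt_col ].mean()
        self._dic_col_p = {}
        for col in self._col2trans :
            p = X.groupby( col ).mean()[ self._tgt_col ]
            self._dic_col_p.update( { col : p.to_dict() } )
        return self

    def transform( self , X , y = None ) :
        for col , dic_p in self._dic_col_p.items() :
            # .map() leaves unseen categories as NaN; fill those with the global mean
            X[ col ] = X[ col ].map( dic_p ).fillna( self._global_mean )
        return X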
There are also already implemented transformers for target encoding that you could use that gives you some options such as smoothing: scikit contrib - target encoder. |
H: How does one individually extract elements from a tensor?
Let us assume that I have a tensor in tensorflow with 2 elements.
How do I extract these individual elements and perform calculations on them?
def multiplication(a):
    return a[0]*a[1]

def main():
    sess=tf.Session
    a=tf.constant([[2,3]])
    answer= multiplication(a)
    print(sess.run(answer))
    #answer should be 6
AI: First recognize that your input array has shape (1,2), so a[0]*a[1] will not work.
Following is the code that is working for me-
def mul(a):
    return a[0][0]*a[0][1]

a = tf.constant([[2,3]])
b = tf.contrib.eager.py_func(func = mul, inp=[a], Tout = tf.int32)

with tf.Session() as sess:
    sess.run(b)
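A simpler alternative (an assumption on my side, still TF 1.x graph mode as in your snippet): tensors support numpy-style indexing directly, so py_func isn't strictly needed here-
a = tf.constant([[2, 3]])
answer = a[0, 0] * a[0, 1]   # index the (1, 2) tensor directly

with tf.Session() as sess:
    print(sess.run(answer))  # 6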
If you want to pass custom functions, check this link- https://www.tensorflow.org/api_docs/python/tf/py_function |
H: Removing duplicates and keeping the last entry in pandas
I have a table and I'm trying to remove all the duplicates and keep the rows that have the latest datestamp.
Here is the table:
email address orgin_date new_opt_in_date datestamp
[email protected] 1/1/1900 1/1/1900 3/15/2016
[email protected] 1/1/1900 1/1/1900 3/15/2016
[email protected] 2/15/2015 3/5/2015 6/6/2017
[email protected] 2/15/2015 3/5/2015 7/6/2018
[email protected] 2/15/2015 3/5/201 7/6/2018
[email protected] 2/15/2015 3/5/201 5/6/2018
I'm trying to keep only the data that has the most recent datestamp, and the output will look like this:
email address orgin_date new_opt_in_date datestamp
[email protected] 1/1/1900 1/1/1900 3/15/2016
[email protected] 2/15/2015 3/5/2015 6/6/2017
[email protected] 2/15/2015 3/5/201 7/6/2018
I used this:
df.drop_duplicates(keep = False)
or this one:
df1 = df.drop_duplicates(subset=['emailaddress', 'orgin_date', 'new_opt_in_date', 'datestamp'], keep='first')
print(df1)
but nothing works
AI: You can see from the documentation of the method that you can change the keep argument to be "last".
In your case, as you only want to consider the values in one of your columns (datestamp), you must specify this in the subset argument. You had tried passing all column names, which is actually the default behaviour. Now we can use this (along with the correct value for the keep argument) to get this:
For example, a dataframe with duplicates:
In [1]: import pandas as pd
In [2]: df = pd.DataFrame({'datestamp': ['A0', 'A0', 'A2', 'A2'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['B0', 'B1', 'B2', 'B3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index=[0, 1, 2, 3]).T
In [3]: df
Out[3]:
datestamp B C D
0 A0 B0 B0 D0
1 A0 B1 B1 D1
2 A2 B2 B2 D2
3 A2 B3 B3 D3
Now we drop duplicates, passing the correct arguments:
In [4]: df.drop_duplicates(subset="datestamp", keep="last")
Out[4]:
datestamp B C D
1 A0 B1 B1 D1
3 A2 B3 B3 D3
By comparing the values across rows 0-to-1 as well as 2-to-3, you can see that only the last values within the datestamp column were kept. |
H: How to club the orders in such a way that maximum number of items are common amongst them?
Consider the following data set:
The above table shows the quantity of each item used in the orders SO1 SO2 etc. I need to club the orders in such a way that maximum number of items are common amongst them.
For example:
SO1, SO3 and SO5 can be clubbed together for 8 common items (Items 1, 2, 3, 5, 6, 8, 9, 11), and SO2 and SO4 can be clubbed together because 9 items are common (Items 1, 3, 4, 5, 6, 8, 9, 10, 11).
The approach I followed was:
I found out the number of combinations possible as in (SO1 SO2), (SO1 SO3), (SO1 SO4)……(SO1 SO2 SO3), (SO1 SO2 SO4),….(SO1 SO2 SO3 SO4),…(SO1 SO2 SO3 SO4 SO5). For n number of SOs I would be getting around 2^n -n -1 combinations.
Later I compared the data in each combination for equality.
Based on the number of matches, I thought I would be able to select the combinations. But this process would become cumbersome for 300 shop orders with around 2000 items. And it is taking a lot of time to compute as well.
AI: Have you tried using frequent itemset mining?
It finds the most frequent item combinations.
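For example, a minimal sketch with the mlxtend library (hypothetical data, since your table isn't reproduced here):
import pandas as pd
from mlxtend.frequent_patterns import apriori

# One row per shop order (SO), one boolean column per item
orders = pd.DataFrame(
    [[True, True, False],
     [True, True, True],
     [True, False, True]],
    index=['SO1', 'SO2', 'SO3'],
    columns=['Item1', 'Item2', 'Item3'],
)

# Item sets present in at least ~60% of the orders
frequent = apriori(orders, min_support=0.6, use_colnames=True)
print(frequent)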
The key idea is that when item A is not frequent, you don't need to further explore any combinations including A. |
H: Is the number of iterations in gradient tree boosting just the number of trees?
I have been searching for a while and I just can't find any indication. When people talk about iterations in algorithms like XGBoost, LightGBM or CatBoost, do they mean how many decision trees, i.e. base learners, will be built? I.e. does XGBoost with m=100 mean the algorithm will build a total of 100 base learners, each calculating and optimizing towards the residual of the previous prediction?
Or is it more like 1 epoch in deep learning?
AI: Your first interpretation is correct. One base learner will be added per boosting iteration/round and that is probably what people are referring to when talking about iterations.
From wiki:
One natural regularization parameter is the number of gradient
boosting iterations M (i.e. the number of trees in the model when the
base learner is a decision tree).
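In code terms (a hedged illustration using xgboost's scikit-learn API; X_train and y_train are placeholders):
from xgboost import XGBRegressor

# n_estimators is the number of boosting iterations/rounds,
# i.e. the number of trees added one after another
model = XGBRegressor(n_estimators=100)
model.fit(X_train, y_train)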
Iterations take place in other parts of the algorithm, for instance in the gradient descent, but I don't think that is what is discussed if it is simply referred to as "iterations".
This was really helpful when I was trying to understand GBMs |
H: What is this called by gathering the meaning from a sentence?
What would this process of gathering the meaning of a sentence be called? What would the segments derived from the sentence be called?
"John and Derrek both love cake"
-> John loves cake
-> Derrek loves cake
"John was mad that the weather was rainy today"
-> John was mad
-> weather was rainy today
AI: It is called relation extraction; it is a subdomain of Natural Language Processing. Often, you first have to apply Named Entity Recognition or concept extraction to extract the entities or concepts. You can then extract the relations between the entities with several approaches. |
H: If a neural network is a universal function approximator, is the irreducible error 0?
How to marry the fact that (most) neural networks with a single hidden layer are universal function approximators with the fact that in the bias-variance decomposition we consider there to be an irreducible error?
Can the irreducible error be 0? (Then it would be inappropriately named.)
AI: The answer to the title question is "no"; the contexts are a little different.
The bias-variance tradeoff holds in the context that $y=f(X)+\varepsilon$, with $\varepsilon$ having mean 0 and variance $\sigma^2$; and $\sigma^2$ turns out to be the irreducible error, so named because it is a lower bound on the expected squared-error. So to answer your last question, the irreducible error can be zero, if there is no noise in the data.
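For reference, the standard decomposition at a fixed point $x$ (expectation over training sets and noise) is:
$$\mathbb{E}\big[(y - \hat{f}(x))^2\big] = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2} + \underbrace{\mathbb{E}\Big[\big(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\big)^2\Big]}_{\text{variance}} + \underbrace{\sigma^2}_{\text{irreducible error}}$$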
That neural networks are universal function approximators refers to the actual output function itself (many linear functions with, say, ReLU activators), and not to the fitting procedure. Notice too that the universality has restrictions: the approximation is only guaranteed on a compact domain, and the error in approximation can be pushed down by using enough neurons in the hidden layer.
So what the universality really implies, w.r.t. the bias-variance tradeoff, is that it is possible to push the bias to zero (or as close as desired, by using enough neurons). Of course, this will generally result in a badly overfit model.
wikipedia - Bias-variance decomposition |
H: How to find optimal number of trees in random forest using Grid search in R?
From the code below, I am getting the optimal mtry. What is this mtry? And how should I find the optimal number of trees to assign to the Random Forest algorithm so that it gives high accuracy?
Any comment will be highly appreciated ! Thanks in Advance.
classifier = train(form = Survived ~ ., data = training_set_scaled, method = 'rf')
classifier
classifier$bestTune
AI: mtry is the parameter indicating how many of the features are checked in each split decision. http://topepo.github.io/caret/train-models-by-tag.html#random-forest , see method 'rf'
In Random Forest, usually more trees give more stable results, and overfitting due to number of trees is rare. Moreover, since the trees are built independently, you could just fit many trees then take subsets to get smaller models. But, see e.g. hyperparameter tuning in mlr for how to perform grid, random, Bayesian, and other hyperparameter searches, or grid in caret or random in caret. |
H: Survival regression with major event that won't happen
I would like to do some survival regression about the duration before the "death" of an individual. The final purpose is to know, given an individual, how long it should take before he'll most likely "die" (probability of the survival function to be less than 0.1 for instance).
My problem here is that I have, in my training set, a variable that influences a lot my target variable, but is not available for the test set (and won't happen in real life).
Let's say my training data is the following:
id status poison_time death_time sex
0 1 90 92 f
1 0 90 150 f
2 1 90 91 f
3 1 60 130 m
4 0 60 150 m
5 1 60 62 m
With :
status = 1 for a dead person and 0 for a censored data
poison_time : time corresponding to the injection of a poison
death_time : time of the death or last follow-up
sex : sex of the individual (not relevant here, imagine a bunch of useful variables)
I can't just ignore the influence of poison_time: although for some individuals, the poison won't be as effective (individual with id 3, or individuals that ended up right-censored). It has a real impact on death_time.
In my test data the poison is not injected, but I still would like to have a good idea of "how long should it take before an individual most likely die", knowing my other variables (sex, etc.)
Is it possible to still have relevant results with such corrupted data as a training set?
AI: If I understand your problem correctly, I think it's possible¹, but you have to do some extra work and you may be limited with what models you can use.
First, you have a time-varying dataset, so that must be handled correctly. Your poison comes long after birth, and that's important to model. Otherwise you are biasing your model. Ex: Suppose everyone is poisoned at age 90, if they live that long. Then just the act of being poisoned is a signal of living a long time, when in reality, it will hasten the death of the subject. Give this a read too, for many more arguments why it's important to model time-varying covariates.
Okay, so I took your dataset, and I modelled it into a time-varying dataset (see below). Notice:
A subject could have more than 1 rows (in your case, everyone will have at most two rows, one for prior to poisoning, one for after)
Each row for a subject is mutually exclusive (see start and stop columns). The status is True iff they died at the end of that interval.
poisoned is a boolean flag for activating after they were poisoned.
start poisoned sex stop id status
0 0 0 f 90 0 False
1 90 1 f 92 0 True
2 0 0 f 90 1 False
3 90 1 f 150 1 False
4 0 0 f 90 2 False
5 90 1 f 91 2 True
6 0 0 m 60 3 False
7 60 1 m 130 3 True
8 0 0 m 60 4 False
9 60 1 m 150 4 False
10 0 0 m 60 5 False
11 60 1 m 62 5 True
The above matrix has the exact same information, but presented differently.
Okay, so we now need to pick a model that can handle time-varying datasets. Let's take a step back and ask "after fitting a model on the training data, can I "transport" that to the testing data (which has no poisonings)?" I argue that, yes, this is valid, and will be more valid you have individuals who died before being poisoned. What your model will do is understand the risk factors associated with your variables given poisonings / no poisoning. Hence, when you move to your test data, the risk factors are already learned and prediction can be done. Here's an analogy: I'm studying the student exam scores on Canadian and American students, and have other variables like lunch, grade, background etc and possibly interactions between them. The objective of "fitting" is to learn the relationship of lunch on exam scores given the other variables. So when I focus my model on only Canadian students in the test data, my model is still valid.
(However, if I had no Canadian students in my training data, then the model's validity can be questioned if I wish to focus on Canadian students in the test data. This is why I hope there are subjects in your training data who died before being poisoned).
We can fit the time-varying dataset using Cox's proportional hazard model. The (linear) model's output looks like:
---
coef exp(coef) se(coef) z p -log2(p) lower 0.95 upper 0.95
poisoned 17.60 4.41e+07 2670.86 0.01 0.99 0.01 -5217.18 5252.38
sex -0.82 0.44 1.25 -0.66 0.51 0.97 -3.28 1.64
---
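A sketch of how such a time-varying fit could be produced with the lifelines library (my assumption; the answer does not depend on any particular package, and sex would need to be numerically encoded first):
from lifelines import CoxTimeVaryingFitter

# long_df is the time-varying table above, with sex encoded as 0/1
ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col='id', event_col='status', start_col='start', stop_col='stop')
ctv.print_summary()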
From your description, it sounds like your test data is not time varying. That's fine, it still fits into the time-varying framework: simply each test subject has a single row. You can choose to leave the poisoned variable out, or just leave it as 0 throughout. The latter is probably easier, especially if you have interactions.
¹ I could be missing some important gotcha that renders this all useless. Generally, working with time-varying data is easy to mess up causality/data leakage. |
H: ValueError: Expected 2D array, got scalar array instead using predict method
I am trying to get a predicted value instead of whole features for a particular level using predict method.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#Importing Dataset
dataset = pd.read_csv('C:/Users/Rupali Singh/Desktop/ML A-Z/Machine Learning A-Z Template Folder/Part 2 - Regression/Section 7 - Support Vector Regression (SVR)/Position_Salaries.csv')
print(dataset)
X = dataset.iloc[:, 1:2].values
Y = dataset.iloc[:, 2].values
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
sc_Y = StandardScaler()
X = sc_X.fit_transform(X)
Y = sc_Y.fit_transform(Y.reshape(-1,1))
#Fitting SVR model to dataset
from sklearn.svm import SVR
regressor = SVR(kernel='rbf')
regressor.fit(X,Y)
#Visualizing the dataset
plt.scatter(X, Y, color = 'red')
plt.plot(X, regressor.predict(X), color = 'blue')
plt.show()
# Predicting a new Result
Y_pred = regressor.predict(6.5)
print(Y_pred)
This is my dataset, here I am trying to predict value only for level 6
Position Level Salary
0 Business Analyst 1 45000
1 Junior Consultant 2 50000
2 Senior Consultant 3 60000
3 Manager 4 80000
4 Country Manager 5 110000
5 Region Manager 6 150000
6 Partner 7 200000
7 Senior Partner 8 300000
8 C-level 9 500000
9 CEO 10 1000000
This is the error message I am getting:
File "C:/Users/Rupali Singh/PycharmProjects/Machine_Learning/SVR.py", line 34, in <module>
Y_pred = regressor.predict(6.5)
File "C:\Users\Rupali Singh\PycharmProjects\Machine_Learning\venv\lib\site-packages\sklearn\svm\base.py", line 322, in predict
X = self._validate_for_predict(X)
File "C:\Users\Rupali Singh\PycharmProjects\Machine_Learning\venv\lib\site-packages\sklearn\svm\base.py", line 454, in _validate_for_predict
accept_large_sparse=False)
File "C:\Users\Rupali Singh\PycharmProjects\Machine_Learning\venv\lib\site-packages\sklearn\utils\validation.py", line 514, in check_array
"if it contains a single sample.".format(array))
ValueError: Expected 2D array, got scalar array instead:
array=6.5.
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
I would be really grateful for any kind of help.
AI: Try:
Y_pred = regressor.predict(np.array([6.5]).reshape(1, 1))
Scikit-learn does not work with scalars (just a single value). It expects a 2D array of shape $(n\times m)$, where $n$ is the number of observations and $m$ is the number of features; both are 1 in your case. |
H: Relating two data sets at the row level where table one may correlate to zero or more rows in another
I'm just looking for some high level recommendations on libraries, design patterns, or algorithms here. I obviously don't expect you to build a model for me.
I would like to predict the monetary impact that an outage will have on the electric grid.
I have data of historical outages that I could train a model on. This basically consists of the equipment-ID that was on outage, with a start and end timestamp.
An outage may cause zero or more constraints. A constraint is where a line becomes loaded to the extent that power must be flowed an alternate way. I already have data on which constraint-IDs occurred at what start and end timestamp, as well as the economic impact of that particular occurrence of that constraint. Note that a constraint may exist without any outages, so there is a bit of a loose relationship. Also, outages may occur without any constraints occurring, or it may take a combination of outages to cause a particular constraint. Generally speaking though, there is a moderate to strong relationship between outages and constraints.
I presume that there should be an algorithm of sorts, that can deduce which outages generally correlate to what constraints after they have been seen to overlap throughout history.
To recap, an outage may cause a constraint. I would like to feed the model outage data and constraint data, so that if I have planned outage data, it can predict if zero or more constraints are X% likely to occur.
I prefer to use Python, but am open to R.
How would you go about modeling a problem like this? In simplified terms, how do you correlate a row in one table to zero or more rows in another where there is no direct mapping? Most solutions I have seen correlate at the column level as opposed to the row.
Edit:
After some thinking about this, I've come up with what I think is a solution. Make each row with a timestamp for that particular hour of the day. One hot encode all outages for that hour. Also, create a column for each potential constraint, with the monetary impact as the value (Scaled to a 0-1 scale).
The thing that seems crazy to me, is that this data set is going to end up being thousands of columns wide if I do it this way. Are super-wide data sets like this common in ML? - or, is there a better approach?
AI: I would like to predict the monetary impact that an outage will have on the electric grid.
It may help to think of your problem as two problems:
Given an outage (or set of outages), predict the number of constraints that will result
Given a constraint, predict the monetary impact
You would need to build two regressive models - one for each of the problems. It's difficult to give advice about which specific techniques or algorithms to use without a better understanding of the domain and of your dataset.
It sounds like you have a set of timestamped historical outages and a set of timestamped historical constraints. To me, your description implied that there was no authoritative mapping between outages and constraints, so the first wrinkle may be assigning blame for each outage to one or more constraints. This likely requires domain knowledge that most SO contributors won't have.
Another wrinkle is that you only have date and time information about outages and constraints. If the temporal information explains a lot of the variance in number of outages and cost of those outages, then no problem. Otherwise, it's going to be difficult to build accurate models.
If you go with the approach I suggested above, then you have a timeseries forecasting problem on your hands. Traditional algorithms for this type of problem include auto-regressive models like ARMA, ARIMA, SARIMA, etc. You can read up on those to see if they seem suitable for your problem.
Alternatively, Facebook recently released an open-source forecasting library Prophet designed to be easy-to-use and to work well out-of-the-box. It's super easy to try out, so you could give it a shot. |
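For instance, a minimal hedged sketch (the package is imported as fbprophet in older releases and prophet in newer ones; the data here is a placeholder):
import pandas as pd
from prophet import Prophet  # older versions: from fbprophet import Prophet

# Prophet expects a frame with columns 'ds' (timestamp) and 'y' (value to forecast)
history = pd.DataFrame({
    'ds': pd.date_range('2019-01-01', periods=200, freq='H'),
    'y': range(200),
})

m = Prophet()
m.fit(history)

future = m.make_future_dataframe(periods=24, freq='H')  # forecast the next 24 hours
forecast = m.predict(future)
print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail())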
H: Does increasing kernel size in a CNN result in higher accuracy on the training set?
In a convolutional neural network, does increasing the size of kernel always result in better training set accuracy? For example, if I use 5x5 kernels in a CNN instead of 3x3 ones, will it always generate better training accuracy?
Increasing kernel size means effectively increasing the total number of parameters. So, it is expected that the model has a higher complexity to address a given problem. So it should perform better at least for a particular training set. Or will it be harder to learn a bigger kernel?
I am not concerned about the validation set accuracy here.
AI: I'd say there is no direct relation between the kernel size and the accuracy.
If you start using a larger kernel you may start losing details in some smaller features (where 3x3 would detect them better), and in other cases, where your dataset has larger features, the 5x5 may start detecting features that 3x3 misses.
So, I'd say "no".
Anyway, if you add a second convolutional layer on top of your first you'd start having something that's close to larger features. E.g. your small kernels in the first layer would detect small features but the second layer would detect features which are composed of several features from the previous layer. |
H: Identifying users based on smartphone data (Google)
I've heard that Google has now a technique to identify users based on the smartphone touch input (how the user is using the phone). I have found nothing about that online. Is there a paper or some article available explaining how this is done (e.g. what features are used)?
AI: Even I was unable to find a paper on such a topic by Google. But I can discuss some features which Google might be using to uniquely identify its users based on smartphone data.
Fingerprint scanning: One of the preliminary methods of scanning users as the fingerprint is unique to every individual. Also, most devices are now equipped with a fingerprint scanner.
Home location: The Google account also keeps a track of the user's home location which can't be directly used to identify a user, but could be an important feature.
Data Personalization: Suppose a user has a habit of viewing videos, blogs and websites related to the top tag "artificial intelligence". This personalized information could be used to track down a user.
Data personalization could be applied to apps, videos and websites with which the user interacts. This data will be super complex and probably unique to each individual.
Device Usage: Every individual likes to personalize and use his/her device in their own way. Android 9 had this feature of tracking apps which are most used by the user and also at which times. This is another great factor for identification.
Preliminary device details: IP address, the device's model, Android version, account password are also stored with the Google account.
The above features could be brought together to uniquely identify users as they provide information in different dimensions regarding the user.
The weaker features ( Data personalization, device usage ) can be used with the powerful features ( IP address, passwords ) to uniquely identify a particular user.
All the above information was not mentioned in any paper.
Hope this helps. :-) |
H: Change values of a particular column to value_count()
Suppose I have the following data set. I need to replace the values of a particular column by their value_counts(). I saw a few posts where it is done for the entire data set; I need to do it for a particular column.
data = pd.DataFrame({'Item 1': ['apple', 'potato', 'cheese', 'banana', 'cheese', 'banana', 'cheese', 'potato', 'egg'],
'Item 2': [1,5,2,7,8,4,9,0,3],
'Item 3': ['a','b','a','c','d','c','e','e','e']})
How do i replace the values of column 'Item 1' by the value_counts()?
I tried the following code.
data.apply(lambda x: x.map(x.value_counts()))
But this applies to the whole dataframe; I need to do it for only one column.
The resulting value should be of the form -
'Cheese':3
'Potato':2
AI: Try this:
# Do value_count outside of loop to avoid doing it multiple times
vc_item1 = data['Item 1'].value_counts()
data['Item 1'].apply(lambda x: vc_item1[x])
But if understand your comment correctly you want both key and value in each cell:
vc = data['Item 1'].value_counts()
data['Item 1'].apply(lambda x: str(x) + ':' + str(vc[(x)])) |
H: Convert .cbf to lossless image format with defined "dynamic range"
We collected electron diffraction data and converted the data into .cbf (Crystallographic Binary File). Those are about 10'000 files. Now I would like to convert the .cbf into an image format like .tif but with a specified dynamic range.
I tried it with fabIO which is a python module. But I could not figure out how to successfully convert .cbf into .tif. .edf worked but it's not the desired format. Also I don't know how to pick the dynamic range.
Maybe it is possible to use Adxv in the command line? I've been using it to export .tif but was only able to do so one by one.
What follows is my python script to convert .cbf into .edf:
import os
from glob import glob
import fabio
path = '/Users/myuser/Documents/Crystals'
result = [y for x in os.walk(path) for y in glob(os.path.join(x[0], '*.cbf'))]
i = 0
text = ''
while i < len(result):
    if i == 4000:
        image = fabio.open(result[i])
        image.convert("edf").save("4000.edf")
    i += 1
Do you know how to use fabIO in order to export .tif with specific dynamic range? Or do you maybe know how to use Adxv in the command line in order to process many images? Or maybe you have another solution?
AI: import os
from glob import glob
import fabio
path = '/Users/myuser/Documents/Crystals'
result = [y for x in os.walk(path) for y in glob(os.path.join(x[0], '*.cbf'))]
i = 0
command = ''
while i < len(result):
    path = os.path.dirname(result[i])
    filename = os.path.splitext(result[i])[0]+'.tif'
    command = '/Applications/Adxv.app/Contents/MacOS/Adxv -smv32bits -sa -overload 500 '+result[i]+' '+filename
    os.system(command)
    i += 1
In the end I figured it out using Adxv in stand-alone mode (-sa), with -overload N as the dynamic range. |
H: Unable to use readline() for function() input
I recently started learning R and was trying to make a script that would give specific values of a probability distribution. Ideally, in this script, there would be a list of probability distributions with an associated number. Users would be able to enter a number and it would print out a message while giving them a bunch of variables relevant to the specific probability distribution to enter.
This is my code:
DistriName <- readline(prompt = "Enter Distribution: ")
DistriName <- as.numeric(DistriName)
function(DistriName) {
if(DistriName = 1) {
print("You have selected the binomial distribution")
} else {
print("I am not familiar with the distribution. Sorry.")
}
}
For some reason, it keeps giving me this error:
Error: unexpected '}' in " }"
Error: unexpected '}' in "}"
I have checked multiple times for any missing opening brackets, but could not find anything.
AI: Try
if (DistriName == 1)
(two equals signs, ==, mean "is equal" in a logical comparison)
H: How to store a trained Random Forest model in python or matlab as a matrix
Is it possible to store a random forest model in Python (or MATLAB), and then use that trained model in a C program?
I am trying to do this because I am making a myoelectric prosthetic, and you can only write to the chip we are using (PIC32MK0512MCF064) with C code.
My first thought on how to attempt this was to store the model as a matrix, but I don't know if that is even possible.
Any clues? Anything that could help would be greatly appreciated.
AI: Not sure about a matrix, but here's a repo for porting sklearn models into other languages, including C.
https://github.com/nok/sklearn-porter |
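A rough sketch of how that porting looks (hedged; check the repo's README for the exact, current API):
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn_porter import Porter

X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(n_estimators=10).fit(X, y)

# Transpile the fitted forest to C source code, which can then be
# compiled for the target microcontroller
porter = Porter(clf, language='c')
print(porter.export(embed_data=True))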
H: Tool/dataset for matching first names and nicknames
I'm trying to identify the same individuals in a large dataset where sometimes the individuals may be listed by their full first name (e.g., "Michael Douglas") and sometimes by its nickname (e.g., "Mike Douglas"). Does anyone know of a dataset that has (at least) English first names and their corresponding common nicknames? Or a tool that will do such matching for you? I searched for a while and couldn't find anything like this.
AI: Maybe this helps. I could not check the content, but it says that it provides a csv mapping names and nicknames.
https://github.com/carltonnorthern/nickname-and-diminutive-names-lookup |
H: How is Stochastic Gradient Descent used like Mini Batch gradient descent?
As I know, Gradient Descent has three variants which are:
1- Batch Gradient Descent: processes all the training examples for each iteration of gradient descent.
2- Stochastic Gradient Descent: processes one training example per iteration. Hence, the parameters are being updated even after one iteration in which only a single example has been processed.
3- Mini Batch gradient descent: which works faster than both batch gradient descent and stochastic gradient descent. Here, b examples where b < m are processed per iteration.
But in some cases they use Stochastic Gradient Descent and they define a batch size for training which is what I am confused about.
Also, what about Adam, AdaDelta & AdaGrad, are they all mini-batch gradient descent or not?
AI: But in some cases they use Stochastic Gradient Descent and they define
a batch size for training, I am confused about that?
If you are defining a batch size, then you are performing mini-batch gradient descent.
Stochastic Gradient Descent (SGD) and Mini-Batch Gradient Descent are often used interchangeably in many places. The idea is that if the batch size is 1, then it's purely SGD.
If the batch size is equal to the number of samples in the dataset, then it's Batch Gradient Descent.
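As a quick illustration with the Keras fit() API (assuming an already-compiled model and training arrays x_train, y_train):
model.fit(x_train, y_train, batch_size=1)             # stochastic gradient descent
model.fit(x_train, y_train, batch_size=32)            # mini-batch gradient descent
model.fit(x_train, y_train, batch_size=len(x_train))  # (full) batch gradient descent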
Also, what about Adam, AdaDelta, AdaGrad, are they all mini-batch
gradient descent? or not?
Adam, Adadelta and AdaGrad differ in how the parameters are updated. The main concept of GD is the calculation of the gradient of the loss function, which is fundamental to all the above optimizers.
The difference between SGD, mini-batch and batch GD is in how many samples from the dataset the gradient of the loss function is calculated on.
After the gradients are calculated, the different optimizers mentioned use modified methods to update the parameters using the calculated gradients.
Finally,
SGD and mini-batch/batch GD are methods that determine how many samples the gradients are calculated on.
AdaGrad, Adadelta etc. update the parameters differently based on the calculated gradients. |
H: Linear regression with white Gaussian noise
I am new to machine learning , so this question may sound fundamental. My task is to estimate the parameter vector of the equation with the least squares method:
$y = \theta_0 + \theta_1 x + \theta_2 x^2 + \eta$
where $\eta$ corresponds to white Gaussian noise with mean 0 and variance 0.1.
Also, I have been given the prior values of the parameter vector, say [-1, 0.3, 0.6]. I have to generate the N points of the training set.
Should the regressor be like this:
$[1 \quad 0.1 \quad (0.1)^2]$ (for $x = 0.1$)
And should I calculate the a priori results by adding noise?
AI: $\eta$ is an error term with mean 0 and variance $\sigma^2$. The error, or residual, is how the true $y$ differs from the estimated function.
Do you mean the thetas when you say "parameter vector"?
In this case, the estimated function is:
$\hat{y} = -1 + 0.3x + 0.6x^2$.
So if you want to generate the training set, you need to generate $x$ points scattered evenly around the quadratic function (mean of error = 0) with a variance of 0.1. |
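A small NumPy sketch of that generation plus the least-squares fit (my own illustration; N and the x range are arbitrary):
import numpy as np

theta_prior = np.array([-1.0, 0.3, 0.6])
N = 100
x = np.linspace(0.0, 2.0, N)

# Regressor matrix: one row [1, x, x^2] per training point
X = np.column_stack([np.ones(N), x, x**2])

# White Gaussian noise with mean 0 and variance 0.1 (std = sqrt(0.1))
eta = np.random.normal(0.0, np.sqrt(0.1), size=N)
y = X @ theta_prior + eta

# Least-squares estimate of the parameter vector from the generated training set
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(theta_hat)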
H: What's a classifier capable of predicting a variable number of classes
I want to solve what I understand as a classification problem regarding tagging. Let's say an Entity can have 0 or more tags and I want to be able to predict which tags (if any) an entity might get
I would like the classifier to output 0 or more tags.
I have never run into this kind of structure with variable length in input data (many tags) and output (0, one or more tags for each entity) before and I'm not even sure what terminology / words to search for.
AI: This is an interesting problem. What is your xtrain? I guess it boils down to a multi-label problem. In a simple setup, you would train on the presence of a certain label (present or not). So for n tags/labels, you would train n+1 models (the +1 being the "no label" case).
I'm not really into multi-label problems, but I guess this is the way to go. |
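A possible sketch in scikit-learn terms (my own illustration; X is a placeholder feature matrix for the entities): binarize the variable-length tag lists into one binary column per tag and fit one classifier per tag with a one-vs-rest wrapper:
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

tags = [['python', 'ml'], [], ['ml']]            # 0 or more tags per entity
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(tags)                      # shape (n_entities, n_tags)

clf = OneVsRestClassifier(LogisticRegression())
clf.fit(X, Y)

pred = clf.predict(X)                            # rows of 0/1 flags per tag;
# an all-zero row means "no tags predicted" for that entity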
H: Why using a frozen embedding layer in an LSTM model
I'm studying this LSTM mode:
https://www.kaggle.com/paoloripamonti/twitter-sentiment-analysis
They use a frozen embedding layer, which uses a predefined matrix containing, for each word, a 300-dimensional vector that represents the meaning of the word.
As you can see here:
embedding_layer = Embedding(vocab_size, W2V_SIZE, weights=[embedding_matrix], input_length=SEQUENCE_LENGTH, trainable=False)
The embedding layer is frozen which means that the weights are not changed during training.
Why is this done?
AI: The embedding matrix which is used in the initialization of the Embedding layer is highly trained on a large corpus of text. The training and the data are so huge that the embedding has learnt a type of association between words.
A pretrained embedding like Word2Vec will produce vectors for words like school and homework which are similar to each other in the embedding space.
Many such associations are learnt after rigorous training mostly on high-end machines and precisely calculated parameters.
Why is the Embedding layer set to trainable=false?
As mentioned in the code, we have given a pretrained embedding matrix to the Embedding layer through the weights= argument. As the word suggests, it's "pretrained" and requires no additional training.
We can enjoy the benefits of such an embedding by keeping it untrainable.
Additional training in the context of our task may result in unusual behaviour of the Embedding layer and could also distort the learned associations.
In some cases, the Embedding layer is kept trainable. |
H: Training textblob with 16k rows of labeled data won't work (only few are working)
I've got labeled data in a csv which looks like:
title,type
Women Jacket A,Clothes
Mens Running Shoes B,Shoes
Children backpack,Bags
and a script:
from textblob.classifiers import NaiveBayesClassifier

with open('train_small.csv', 'r') as fp:
    cl = NaiveBayesClassifier(fp, format="csv")

print(cl.classify("Womens running shoe"))
If I use train_small.csv with 3 categories and 32 rows, I get the printed result, correct and immediate.
If I use my full training set (train.csv, 16k rows, 10 categories) I won't get a result. Today, I ran my laptop for 6 hours and then had to shut it down by pressing the power button, since it was at a load of 60 and unresponsive.
System is:
Operating System: Kubuntu 19.04
Kernel Version: 5.0.0-15-generic
OS Type: 64-bit
Processors: 4 × Intel® Core™ i5-7200U CPU @ 2.50GHz
Memory: 7.7 GiB of RAM
Not great, but is it completely insufficient?
Do I need to use another library / algorithm or just different hardware?
AI: Oh Wow, textblob's default setting can't deal with even a reasonable amount of data.
TL;DR: You do need better control over what the library does.
The class NaiveBayesClassifier, or more exactly its superclass NLTKClassifier, uses a default feature_extractor. In this case it uses basic_extractor which, apart from stemming the words in some way (not relevant for us here), does the following:
features = dict(((u'contains({0})'.format(word), (word in tokens))
for word in word_features))
Once for each document (sample).
Ouch! That will create a Python dictionary for every single document, and store in these dictionaries a contains(word) flag for every word in the vocabulary. On 16k rows (documents/samples) that is very likely to be way too much memory.
On Ubuntu you likely have swap space, which is extra disk space used as memory; your OS, figuring out it is out of memory, starts swapping pages of memory onto this swap space. Yet swap space is super slow compared to actual memory, therefore the computation takes forever. And, in the end, it is this swap space and this slowness that keeps the algorithm running forever, instead of overheating your CPU or making the machine completely frozen and out of memory.
Better approach to the problem
ML techniques do not understand words or strings in general, they only understand numbers. textblob hides this fact from you because it performs the feature extraction behind the scenes. That works for toy problems but will likely impair the solution of any more complex problem (be it because of problem complexity or amount of data, or both).
You need to take control of the feature extraction yourself. A Naive Bayes classifier is a rather simple algorithm and will work well with a plethora of extraction techniques. Term Frequency (simply counting the number of each distinct word in every document) will work well enough as long as you do not use expensive python dictionaries to hold the frequencies in memory. But once you get control of your feature extraction there are many more techniques to use: TF-IDF, n-grams. I find sklearn's explanation of text features very enlightening. But I digress.
Unfortunately nltk (the library under textblob) does not make things any easier. It requires its training sets to have a dictionary interface. In other words, without a good deal of hacks nltk cannot train on big amounts of data.
Option 1: Hook into nltk directly
Let's assume that your file is named data.csv and that we will try to extract features reading the file line by line. First, let me generate a data file:
import nltk
nltk.download('gutenberg') # just in case
from nltk.corpus import gutenberg
corpus = gutenberg.words('melville-moby_dick.txt')
with open('data.csv', 'w') as f:
for i in range(10**4): # 10k
f.write(' '.join([w for w in corpus[i*3:i*3+3] if w != ','])
+ ',' + str(i%2) + '\n')
f.close()
This was only generation of some data, nothing too relevant to the problem at hand. I do not have your file so I just generate a random 10k rows CSV using the words from Moby Dick text at random.
Onto the actual code. We will still use texblob's Naive Bayes but will not allow it to construct its huge list of dictionaries. Instead, we will hook into it directly:
from textblob.classifiers import NaiveBayesClassifier
import csv
cls = NaiveBayesClassifier('')
class FeatureDict(object):
def __iter__(self):
with open('data.csv') as f:
for row in csv.reader(f):
yield {w: w for w in row[0].split()}, row[1]
cls.train_features = FeatureDict() # this is passed to nltk
cls.classify('bang Boom!')
It is a hack but runs in constant memory.
Option 2: switch to sklearn
sklearn's vectorizers will perform the operation within NumPy (or within the NumPy arrays inside pandas' data frames), which is memory efficient - especially if compared with thousands of python dictionaries.
With sklearn one will need to do this in two steps: one for data vectorization and another for the prediction. make_pipeline joins the steps. Assuming the same data prepared in data.csv we can do:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
import pandas as pd
data = pd.read_csv('data.csv', names=['doc', 'label'])
# some words are badly formatted, sklearn has issues with them
data = data.dropna()
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(data['doc'], data['label'])
model.predict(['buggy ho!'])
The sklearn solution is slightly more elegant but may blow memory at some point (500k samples perhaps). In that case, one would need to hack sklearn itself to do things in constant memory. |
H: Feature Selection with one-hot-encoded categorical data
I have a dataset with 400+ columns. Almost 90% of these are categorical data with One-Hot-Encoding (OHE). I'm using the dataset for a classification problem.
My professors asked me to perform feature selection using sequential forward selection (mlxtend).
Is there really a point of doing this since it is also very time consuming? Is it logical to remove categorical data? If so, what would be the k_features number for sfs that I should use? Or is the method(sfs) even suited for this?
AI: Sequential forward selection appears to be a greedy search algorithm if I am not mistaken? It appears you initially fit all possible one variable models, and then choose the one variable model that is highest performing. You then attempt to add a second variable to the highest performing one variable model from the prior step, by fitting all possible two variable models. You choose the highest performing two variable model, and if it is superior to the best one variable model, you move to the two variable model. The process continues, trying all three variable models, then four (if the best three variable model is superior to the best two)... until you choose the best k features or until the model stops improving.
The best k seems quite arbitrary and if there is a way to let the algorithm continue until there is no improvement then that is what I would do. However, here are some initial impressions of this approach that I have:
I suppose that a problem with this is that you have a large chance of not finding the most optimal model (local solution). A candidate variable may work particularly well with other variables that have yet to be included in future steps, or you may not even get the chance to use potentially useful predictors if the algorithm terminates beforehand.
However, the largest problem with this method as I see it is just how expensive it is. You state that you have 400+ columns. Say you have 415 variables. Then, in the first pass you will have to fit 415 models, then on the second pass, 414, etc...and this is not even including possible hyperparameter tuning and cross validation. This is a huge amount of models, and to be honest this is often the problem with wrapper based methods of feature selection in particular. The majority of them end up fitting a large amount of models for potentially marginal, if any, gain in model performance. The problem is made worse when you have hyperparameters that need to be tuned, and so I find the tradeoff between performance gains and the computational time spent using these methods to not be worthwhile, unless you want the absolute best model.
In your case, Peter's advice in the comments is the route I would take purely because it is way faster and often good enough. Almost every modern ML method offers regularization in some way that will explicitly feature select for you, aka embedded feature selection (by not using unhelpful predictors at all), or strongly limit their influence via shrinkage. Ridge/LASSO/Elastic Net mentioned by Peter is a terrific suggestion. Other methods such as those based on trees also have embedded feature selection and may work well in your case, considering the dimension of your dataset.
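As a hedged sketch of that embedded route (X and y stand for your one-hot matrix and labels), an L1-penalised logistic regression shrinks the coefficients of unhelpful one-hot columns to exactly zero, and SelectFromModel keeps only the surviving columns:
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel

# X: one-hot-encoded feature matrix, y: class labels (placeholders)
lasso_lr = LogisticRegression(penalty='l1', solver='liblinear', C=0.1)
selector = SelectFromModel(lasso_lr).fit(X, y)

X_reduced = selector.transform(X)      # only the columns with non-zero coefficients
kept_mask = selector.get_support()     # boolean mask over the original 400+ columns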
H: Image features (spectral bands) other than RGB for image analysis
What features can I derive from the image, other than its RGB, to help me analyze an image?
Actually, I'm looking for some papers, etc about this. But I couldn't find.
I know I can use some features like its lightness, etc. But I don't want a single value as lightness, I want a 2d array just like R or G or B.
AI: I think what you are looking for is called a colour space. There are multiple available, and each of them was created to solve a certain problem.
The RGB colour space is the most common I guess, since that is close to the way the computer displays show information (although they do apply some transformations before lighting the pixels). It is also similar to the way we see light (we have red, green and blue receptors).
Light is additive. If you keep adding colours, you get to white. Paint is subtractive. That leads to some differences in how colours mix and led to the creation of the CMYK standard for printing. It represents colours in terms of cyan, magenta, yellow and black.
Perhaps more important for you is the HSB color space. It is rather nicely shaped, as a 3D cylinder. H stands for hue, the colour nuance, S stands for saturation, how pronounced is the colour and B stands for brightness. HSL is a similar color space, where the vertical axis is lightness instead of brightness. I find these two color spaces quite useful in analysis.
The RG chromaticity space is used to analyse the color itself and discard the brightness information. You basically represent the amounts of red and green from a normalized color (a color where the sum of r, g and b is 1). This way, you operate in a simpler, 2D space, and analyze the color information. See [1], [2].
The CIE colorspace is based on measurements of our color perception. It tries to mimic the way our photoreceptors react to color. The original CIE colorspace, known as CIEXYZ, was published in 1931. It has been revised in 1976 to form the CIELab colorspace, which is perceptually uniform. This means that it accounts for our non-linear perception of color, making sure that the same increase produces the same perceptual difference across the color space. See [3], [4], [5].
The TSL colorspace (tint, saturation, lightness) was proposed in [6] for face detection.
There are many other colorspaces, but these are the ones I found most useful. There are usually conversions across different color spaces.
A publication by Logvinenko, 2014 might help with understanding the geometry of a color space and choosing a colorspace that helps your analysis [7].
Also, have a look at [8]. |
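If you want those channels as 2D arrays in practice, here is a small OpenCV sketch (the file name is a placeholder); HSV is OpenCV's name for the hue/saturation/brightness idea above:
import cv2

img_bgr = cv2.imread('image.jpg')                 # OpenCV loads images in BGR order

hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)    # hue, saturation, value (brightness)
lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)    # CIELab

h, s, v = cv2.split(hsv)                          # each channel is a 2D array, just like R, G or B
l, a, b = cv2.split(lab)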
H: Embedding Values in word2vec
Are the embedding values for a particular word using word2vec Skipgram model the weights of the first layer or the softmax output of the function?
Does the embedding value change according to the training corpus?
AI: The word embeddings are the weights of the first layer i.e. the embedding layer and not the softmax output of the function. The embedding values represent a vector which gives the location of the word with respect to other words in a high dimensional vector space. And yes, the embedding values change according to the training corpus. However, if you are using a given language (for example English) and have a large amount of training data the final values of the vectors will turn out to be pretty close even with training corpus of different contexts. |
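A small gensim sketch (toy corpus, gensim 4.x API assumed) makes this concrete: the vector you read out for a word is a row of the trained input-layer weight matrix, and retraining on a different corpus yields different values.
from gensim.models import Word2Vec

sentences = [['the', 'cat', 'sat'], ['the', 'dog', 'barked']]    # toy training corpus

model = Word2Vec(sentences, vector_size=50, sg=1, min_count=1)   # sg=1 -> skip-gram

vector = model.wv['cat']   # the embedding of 'cat': one row of the first-layer weights
print(vector.shape)        # (50,)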
H: Creating a Object Detection model from scratch using Keras
I have a dataset containing 330 images which contain guns. Along with the images, I have a text file associated with each image file which contains,
The number of objects ( guns ) in the image.
Coordinates for bounding boxes around the gun in the image.
I need to train a model which takes an image as an input and outputs 4 integer values which are the coordinates for the bounding box ( vertices of the bounding box ).
For training an object detection model, should the image be kept as an input and the coordinates as the output of the model? Should there be Convolution layers for feature extraction and then FC layers for learning the features for producing 4 outputs ( coordinates of the bounding box )?
Is this notion of the model architecture correct? Any other tips/suggestions?
I am creating this model entirely in TensorFlow Keras without using any of the pretrained stuff.
AI: Before I answer your question, let me tell you this: you can go on and train a model from scratch, but you will definitely end up using one of the object detection architectures, be it Mask R-CNN, Faster R-CNN, YOLO or SSD. Your problem is a simplified version of what these architectures are trying to solve. These are generic object detectors, some of which support more than 1k classes. You have a single-class detection problem.
Now back to your question.
For training an object detection model, should the image be kept as an input and the coordinates as the output of the model? Should there be Convolution layers for feature extraction and then FC layers for learning the features for producing 4 outputs ( coordinates of the bounding box )?
No, it is not that simple. Training an FCN to output 4 values as bounding boxes won't work.
All object detectors mentioned earlier are based on assumptions. Faster R-CNN, for example, generates proposals (regions); its assumption is that these regions are very likely to contain an object. It then does an additional step by classifying which class each region contains (you only have one class) and finally refining the output. The most important thing in Faster R-CNN is the region proposal network, which iterates through the final convolution layer in a sliding window fashion, generating proposals in different aspect ratios, for example 1:1, 1:2 and 2:1.
Why am I saying all this? Because I want you to understand, that the problem is not as easy as you think it is.
Further Reading:
https://medium.com/@tanaykarmarkar/region-proposal-network-rpn-backbone-of-faster-r-cnn-4a744a38d7f9
https://medium.com/diaryofawannapreneur/yolo-you-only-look-once-for-object-detection-explained-6f80ea7aaa1e |
H: Which comes first? Multiple Imputation, Splitting into train/test, or Standardization/Normalization
I am working on a multi-class classification problem, with ~65 features and ~150K instances. 30% of features are categorical and the rest are numerical (continuous). I understand that standardization or normalization should be done after splitting the data into train and test subsets, but I am not still sure about the imputation process. For the classification task, I am planning to use Random Forest, Logistic Regression, and XGBOOST (which are not distance-based).
Could someone please explain which should come first? Split > imputation or imputation>split? In case that split>imputation is correct, should I follow imputation>standardization or standardization>imputation?
AI: Always split before you do any data pre-processing. Performing pre-processing before splitting will mean that information from your test set will be present during training, causing a data leak.
Think of it like this, the test set is supposed to be a way of estimating performance on totally unseen data. If it affects the training, then it will be partially seen data.
I don't think the order of scaling/imputing is as strict. I would impute first if the method might throw off the scaling/centering.
Your steps should be:
Splitting
Imputing
Scaling
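A minimal sklearn sketch of that order for the numeric columns (X and y are placeholders; categorical columns would need their own encoder/imputer, e.g. via a ColumnTransformer). Putting the imputer and scaler inside a Pipeline guarantees their statistics come from the training fold only:
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# 1. split first
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

pipe = Pipeline([
    ('impute', SimpleImputer(strategy='median')),   # 2. impute
    ('scale', StandardScaler()),                    # 3. scale
    ('model', LogisticRegression(max_iter=1000)),
])

pipe.fit(X_train, y_train)          # medians and scaling factors learned from the training set only
score = pipe.score(X_test, y_test)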
Here are some related questions to support this:
Imputation before or after splitting into train and test?
Imputation of missing data before or after centering and scaling? |
H: Do we have to split our dataset into training & testing when using ARIMA model?
I am working on a project where I predict the total quantities sold at the ITEM/DAY level. As for the model, I decided to go with an ARIMA model (I'm using R). For guidance, I decided to follow the two tutorials below:
The first one here
The second one here
The thing that I am confused about is that, in the second one, they split the data into training and testing and they fit the model on the training set and did the evaluation on the test set (all that makes sense). However, in the first article, they didn't do the splitting.
Can anyone guide me through which is the correct approach and provide me with an explanation?
Thank you.
AI: Splitting in Train and Test sets or not depends from the purpose of your analysis. You can follow a statistical approach or a machine learning approach.
In the classical, statistical approach, you fit a model on the whole batch of data. Your goal here is to check the sign of the variables' parameters, and whether they are significant or not. Scientifically speaking, each of those parameters represents the test of an hypothesis.
In the machine learning approach, you want a model that is good at predicting data it has never seen before. You don't care whether a given variable has a positive or negative association with your dependent variable, you don't care whether your parameters are 95% significant or not, you just care that the model predicts the output as precisely as possible.
So, the answer to your question is: it depends! What do you need your model for? |
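If you go the machine-learning route, here is a minimal Python sketch of a hold-out evaluation (series stands for a pandas Series of daily quantities; the ARIMA order is arbitrary here, and the same idea carries over to R's auto.arima):
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

cutoff = int(len(series) * 0.8)                 # keep the last 20% as a test set
train, test = series[:cutoff], series[cutoff:]

fit = ARIMA(train, order=(1, 1, 1)).fit()
forecast = fit.forecast(steps=len(test))

mae = np.mean(np.abs(np.asarray(forecast) - np.asarray(test)))
print(mae)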
H: Seeking advice on knowledge discovery
Background Information
I work for a fire department in Florida and the fire chief posed a question to me: at any given moment in time during the calendar year 2018, how many fire trucks are busy, how many are available, how many are standing by and how many are out of service? Using the SQL Server database I was able to distill this information into tabular data.
Software available to me.
Also, because the data is in SQL Server I can parse out the date time stamp by hour, weekday, month, etc. Then Using Python or Excel I can generate statistical tests and univariate statistics and charts. I have modest experience with R. Therefore my tools are SQL Server, R, Python and Excel.
The data
For the first record, with primary key 21546912, at the time of January 1, 2018 00:02:31.800 there was 1 fire truck working, 9 fire trucks available for work and 0 standing by or out of service. For the entire year I have 104,179 observations showing, at each exact time stamp, how many are working, available, standing by or out of service.
I have enclosed 30 records as a comma separated values file below my question for you to see a sample.
The Question !
So what do I do with this data? I have already generated histograms showing the frequency of FireTrucksWorking, etc. Is there a statistical test such as regression that I can perform on this data? How do I bring meaning to this dataset? Are there any general patterns to the dataset that I can discover? Can I create a probability model out of this data such as; at 08:00 during the month of July there is x% probability of N fire trucks working ? Would you use queueing theory on this dataset?
I am open to all suggestions. At this point I have a large dataset but it is really just a jumble of numbers, how do I generate meaning out of these numbers?
This is cross-posted in Mathematics Exchange. But here I make the following revision.
I am looking for some type of knowledge discovery that will show me something about my data set the naked eye cannot see or the human brain cannot discern. For example: "there is a much higher likelihood of working during the hours of 08:00 to 12:00 than the rest of the day". "Are there patterns in my data I do not recognize"?
Do you have any thoughts on this?
Thirty sample records of data
PrimaryKey,DateTimeStamp,FireTrucksWorking,FireTrucksAvailableForWork,FireTrucksStandingBy,FireTrucksOutOfService
21546912,2018-01-01 00:02:31.800,1,9,0,0
21546921,2018-01-01 00:04:46.720,1,9,0,0
21546950,2018-01-01 00:09:39.400,1,9,0,0
21546951,2018-01-01 00:09:47.320,2,8,0,0
21546955,2018-01-01 00:11:16.780,3,7,0,0
21546959,2018-01-01 00:12:04.840,2,8,0,0
21546962,2018-01-01 00:12:09.030,3,7,0,0
21546963,2018-01-01 00:12:14.470,3,7,0,0
21546966,2018-01-01 00:12:17.790,3,7,0,0
21546967,2018-01-01 00:12:21.240,2,8,0,0
21546970,2018-01-01 00:12:40.240,2,8,0,0
21546973,2018-01-01 00:12:46.720,2,8,0,0
21546990,2018-01-01 00:14:24.610,2,8,0,0
21547002,2018-01-01 00:16:03.130,2,8,0,0
21547036,2018-01-01 00:19:59.450,1,9,0,0
21547043,2018-01-01 00:21:21.950,0,10,0,0
21547064,2018-01-01 00:24:50.470,0,10,0,0
21547065,2018-01-01 00:25:13.000,0,10,0,0
21547165,2018-01-01 00:43:31.130,0,10,0,0
21547344,2018-01-01 01:15:00.980,1,9,0,0
21547361,2018-01-01 01:16:58.320,1,9,0,0
21547383,2018-01-01 01:21:38.130,1,9,0,0
21547421,2018-01-01 01:30:42.250,0,10,0,0
21547436,2018-01-01 01:33:30.320,1,9,0,0
21547442,2018-01-01 01:33:48.470,1,9,0,0
21547449,2018-01-01 01:33:58.780,1,9,0,0
21547474,2018-01-01 01:37:30.550,1,9,0,0
21547484,2018-01-01 01:39:12.350,1,9,0,0
21547487,2018-01-01 01:40:41.290,0,10,0,0
21547510,2018-01-01 01:47:02.420,0,10,0,0
AI: There are a lot of different possibilities. It is a bit hard to get the message through here in the forum. But I try and focus on one thing that is relatively easy to implement:
One of the easiest things you can do is a linear regression. Your dependent variable (y) would be „fire trucks working“ (or the share of working ones). You can estimate this dependent on time. So you can (for example) generate indicator variables (=1 if something is true vs =0 if not) which indicate things like weekday or hour of the day. Use these indicators as explanatory (x) variables in the regression.
The result will be, that each indicator shows you the average truck use at some point in time. Say indicator A is „monday“, B is „16:00“, you will see how many trucks are on average dispatched in this time window. You can also easily see if the difference is significant or not.
In the past such models were used to estimate the number of callers in callcenters. The thing is called „dummy variable regression“ (at least in some statistic fields). https://en.m.wikipedia.org/wiki/Dummy_variable_(statistics)
Note that you observe effects from each „dummy“ variable in contrast to a baseline. Imagine you have truck use (y) and only 1 indicator/dummy, which is „monday“. Say the intercept of the model (the result with all dummies =0) is 5, meaning on average 5 trucks are employed. If the effect/coefficient of your dummy is 2, it means there are 2 more trucks employed (on average) on mondays. So each dummy/indicator (viz in case it is 1) shows you an effect in contrast to the base category (where all the dummies are 0).
Here is some script in Python which I guess refers to a similar problem, maybe it helps for orientation: https://stackoverflow.com/questions/50733014/linear-regression-with-dummy-categorical-variables
Also this may help: https://songhuiming.github.io/pages/2017/01/21/linear-regression-in-python-chapter-3-regression-with-categorical-predictors/
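To sketch the same idea directly in Python with statsmodels (the file name is a placeholder, the column names follow your sample data): C(...) builds the hour/weekday/month dummies for you, and the summary shows each coefficient together with its significance.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv('firetrucks.csv', parse_dates=['DateTimeStamp'])
df['hour'] = df['DateTimeStamp'].dt.hour
df['weekday'] = df['DateTimeStamp'].dt.day_name()
df['month'] = df['DateTimeStamp'].dt.month

# each dummy coefficient = average difference in working trucks vs. the baseline category
model = smf.ols('FireTrucksWorking ~ C(hour) + C(weekday) + C(month)', data=df).fit()
print(model.summary())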
Another note: In principle I would try to get many different dummies/indicators from the timestamp, e.g weekday, hour of day, special days or seasons (Christmas, winter etc). You need to try combinations which work for you and the model. But don‘t plug too many indicators into the model. Keep things easy in a first step and try to improve the model later.
In order to check the „quality“ of your model you can look at R2 (a measure that shows how well you are able to explain the data, which lies between 0 and 1). You can also randomly (!) set aside some 10-20% of the data (do not use them to estimate the model in the first place) to check how well your model can explain real truck use at some point in time.
I would use R for such a task (I find it more intuitive than Python). Here is some example code for R: https://stats.idre.ucla.edu/r/modules/coding-for-categorical-variables-in-regression-models/
Hope this helps: Good luck with your interesting project! |
H: What should I master better for professional data science in economics and finance?
First, excuse me for the noob and long question which is probably doesn’t even belong to here, I know there are several question been answered like this out there, but I think this is going to be up-to-date. Stack Overflow deleted my question and redirected me to here.
I study economics and finance at undergraduate level, and to be honest, I am not really into programming so far. However, I must admit you can't do really well nowadays without specific softwares and programming languages in economics/finance related fields.
According to my curriculum, I’ve encountered Matlab, some econometrics softwares, and of course MS Office, especially Excel with VBA. I have a rough framework in my mind, and please feel free to correct me if I am wrong. So as I experienced, for numerical calculations and doing the vast majority of math, Matlab, Octave and Mathematica exist. For econometrics, there are professional softwares like eViews, STATA, SPSS or the open source Gretl, and Tableau for data visualization. And last, we can use Excel to manage databases.
Long story short, my basic question would be that, are these above the best tools for doing the job ? Or should I switch to more professional tools – like real programming languages - to being better in solving mathematical problems, numerical calculations, econometrics, data science and exquisite, high-quality data visualization? What are the most desirable skills in the data science industry nowadays in economic/financial areas?
I heard that R is a quite trending statistical programming language in these days, and getting better and better each day - I already wrote some functions and visualizations in Rstudio. I also heard that SQL is also a better option to manage really massive data sets instead of Excel, but is SQL able to do every kind of stuff with data what can be done in Excel ? It seems to me Python is generally the number one language for data analysis, it’s flexible and usable on a broad scale. I find Python libraries - such as matplotlib, numpy, pandas, bokeh - extremely attractive. What about Julia , is this going to be the next R in the future ? To be honest, I am also still confused a little bit by such terms like data science, data analysis, data mining, machine learning, big data – are there any serious difference between these phrases?
From above, which one is that I should really focus on and master it ? Keep practicing on popular softwares, or switch to R , Python, Julia, SQL ? Maybe both of them? Again, we are talking about only graduate and undergraduate level of economics and finance, and related jobs. I don’t want to develop serious and complex softwares/applications, just quantitatively analyze stock prices, corporate and economic data, like annual reports, employments, GDP and so on.
Experienced data analysts, please guide me through the confusing forest of data analysis tools. I appreciate every kind of comment.
AI: welcome to the forum. I‘m a trained economist, I do a lot of econometrics, and I work in research. My opinion (and it is only opinion of course) is that you should focus on R in the first place and consider learning Python. Both is not so hard after all.
Why? R is for free and it offers a lot of support for econometrics (it is well regarded in the community). It also is well regarded in the ML community and you can work with high spec things such as Keras, LightGBM etc. So with R you can‘t be wrong. Many top research papers in economics are done with R. Stata is good for some things (like Panel), but there are so many things you cannot do with Stata. R is more powerful.
Why Python? Python and R are „similar“ wrt what you can do in econometrics/statistics. But Python offers more flexibility in many aspects (some might disagree but this is how I see it). In essence, Python is a thing that offers you a lot of possibilities on top of working with data. So have a look at it. Can be very beneficial for your career.
Data science is an extremely broad field. But if you focus on econometrics/statistics, many other things become relevant as well in the moment. Examples are: working with „big“ data and getting a structure into data (e.g. from online sources etc). I also think that neural nets and tree based things such as boosting will become more relevant in economics in the future. Actually outside the academic world, these methods are used quite a lot in the context of economic problems.
Final note: If you think that you cannot do both (R and Python) it would still be fine to have some knowledge/experience in both. You grow with your problems. Even if you have only little experience, you can advance fast if you know where/how to start. This can also be good wrt getting a good job.
P.S. For Python there are also some good packages/applications for economists: https://lectures.quantecon.org/py/ |
H: Is there a Feature selection process for ARIMA model?
I have a dataset representing sales per day for certain products. It contains 30000 observations and 6 features (target included). Since my task is to make a prediction about the number of pieces sold, I decided to use an ARIMA model (following this tutorial here).
I realized that the only preprocessing step for auto ARIMA is removing all the columns except target. (since auto ARIMA trains on the previous values).
So, my question is: is it common to not perform any feature selection and only keep the Target variable in ARIMA?
Thank you.
AI: In some sense, it is common to do feature selection before you fit the ARIMA model, or at the very least, it is natural (in my opinion).
The problem is that there seems to be little development in automatic feature selection techniques for statistical time series models that can use exogenous variables (like ARIMA). Thus, it is not clear as to how we can do feature selection. To make things worse, auto.arima doesn't do any feature selection on exogenous variables, it just uses AICc to find the most optimal order of your model (in a stepwise fashion in its default setting). If you include exogenous variables in your model, they will always be included in all models in the selection process.
Basically, one way to do variable selection would be to try all possible combinations of exogenous variables, use auto.arima to find the "best" orders based on AICc, record this model's AICc (recall that AICc penalizes models that have large amounts of fitted parameters that do not increase the model's likelihood by a justifiable amount), and then pick the absolute best model out of all combinations of exogenous variables. Kind of a pain, and possibly very time consuming.
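Here is a rough Python sketch of that brute-force idea (y and exog_df are placeholders; plain AIC stands in for AICc and the ARIMA order is fixed for brevity, whereas in practice you would re-run the automatic order search for every combination):
from itertools import combinations
from statsmodels.tsa.arima.model import ARIMA

candidates = ['price', 'promotion', 'holiday']   # hypothetical exogenous columns
best = (float('inf'), None)

for k in range(len(candidates) + 1):
    for subset in combinations(candidates, k):
        exog = exog_df[list(subset)] if subset else None
        res = ARIMA(y, exog=exog, order=(1, 1, 1)).fit()
        if res.aic < best[0]:
            best = (res.aic, subset)

print(best)   # lowest-AIC model and the exogenous variables it keeps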
I hope this helps. |
H: Multiclass classification with high number of classes, high number of features and small sample size
I am working on a biology related dataset with over 300K features, and I only have about 5K samples. I want my model to classify many classes. For this problem in particular the class is age. Each age, such as 10 or 35, will be an individual class. So roughly 80 classes (ranging from 10 to 90) are needed for this problem.
I immediately know that regularization is needed to shrink the number of features to prevent overfitting. I just don't know whether such a dataset can be treated as a multiclass classification problem with a large number of classes. If I need more data, how much data will be enough for the model to learn? Or are there any clever ways to approach this problem?
AI: You can try dimensionality reduction techniques such as PCA, which will reduce your number of features while keeping as much of the information describing your objects as possible.
Moreover, since age is a continuous variable, you may want to try regression instead of classification.
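A minimal sketch of both suggestions combined (X is the 5K x 300K feature matrix and age the continuous target; names and the number of components are placeholders):
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# with 300K columns this assumes the matrix fits in memory; otherwise look at TruncatedSVD/IncrementalPCA
model = make_pipeline(StandardScaler(), PCA(n_components=100), Ridge(alpha=1.0))

scores = cross_val_score(model, X, age, cv=5, scoring='neg_mean_absolute_error')
print(-scores.mean())   # mean absolute error in years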
H: What's the difference between Haar-like Features and Haar Cascades?
Are they the same in terms of their algorithm? Or do they differ in their respective detection methods?
AI: Haar-like features are, as the name says, features. They are basically some filters, just like the CNN filters. The main difference is that CNN learns the filters by training, while Haar filters are hand designed. These filters are called Haar-like features due to their similarity with Haar wavelets.
Haar cascades are a bunch of Haar-like features arranged into a classifier. |
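A tiny OpenCV sketch makes the distinction concrete (the image path is a placeholder): the XML file stores the hand-designed Haar-like features together with the learned cascade stages, and the classifier applies them stage by stage.
import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

img = cv2.imread('photo.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# bounding boxes (x, y, w, h) of windows accepted by every stage of the cascade
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(faces)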
H: Why does putting a 1/2 in front of the squared error make the math easier?
Per wiki, the mean squared error (MSE) looks like:
$$
\operatorname {MSE} ={\frac {1}{m}}\sum _{i=1}^{m}(y_{i}-{\hat y_{i}})^{2}
$$
The professor added a $1\over2$ in front of the formula and explained it a little bit. I am a little bit confused. How does putting a $1\over2$ in front of the squared error make the math easier?
AI: A major reason for using MSE is to optimize the parameters of a regression model. From calculus, you know how to find the minimum of a function by taking the derivative. That puts a "2" out in front, which is irritating to keep writing, so it is reasonable to put a "1/2" at the beginning so the derivative doesn't need a constant out front.
We get away with it because the minima of $f(x)$ and $\frac{f(x)}{2}$ are achieved at the same value(s) of $x$. |
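Concretely, for a single prediction the $1\over2$ cancels the 2 coming from the power rule:
$$
\frac{\partial}{\partial \hat y_{i}}\left[\tfrac{1}{2}(y_{i}-\hat y_{i})^{2}\right] = -(y_{i}-\hat y_{i}),
\qquad
\frac{\partial}{\partial \hat y_{i}}\left[(y_{i}-\hat y_{i})^{2}\right] = -2\,(y_{i}-\hat y_{i}).
$$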
H: In the context of image binary classification, is it necessary to divide dataset into true positive, false positive, true negative, false negative?
I am working through this course. It seems that the professor is not dividing the dataset into true positive, false positive, true negative, and false negative.
In the context of image binary classification, is it necessary to divide the dataset into true positive, false positive, true negative, and false negative?
AI: No it's not. The numbers of true positives, false positives, true negatives, false negatives, are something you can check from a confusion matrix when you evaluate the performance of your trained model on a test set. It's useful to know where your model is making more mistakes.
It's not something you can do before training. |
H: PRML Bishop - Doubt in Entropy Formula (Section 1.6, equation 1.94)
Please refer to the following excerpt from "Pattern Recognition and Machine Learning" Bishop.
We can understand this alternative view of entropy by considering a
set of N identical objects that are to be divided amongst a set of
bins, such that there are $n_i$ objects in the $i^{th}$ bin. Consider
the number of different ways of allocating the objects to the bins.
There are N ways to choose the first object, $(N - 1)$ ways to choose
the second object, and so on, leading to a total of N! ways to
allocate all N objects to the bins, where N! (pronounced ‘factorial N
’) denotes the product $N \times (N - 1) \times \ldots \times 2 \times 1$. However, we don’t wish to distinguish between rearrangements of
objects within each bin. In the $i^{th}$ bin there are $n_{i}!$ ways
of reordering the objects, and so the total number of ways of
allocating the N objects to the bins is given by:
$${\bf W} = \frac{N!}{\prod_{i}n_{i}!}$$ which is called multiplicity.
I am confused about the bold-faced part.
If bin 1 were to have n=1 object and bin 2 were to have 2 objects and so on, then the number of options for placing the objects in the $i^{th}$ bin would be like this (with each successive bin having only the remaining objects to choose from):
$$^{N}C_{1} * ^{N-1}C_{2} * ^{N-3}C_{3}* ^{N-6}C_{4} \cdot \cdot $$
Where am I missing the point?
P.S.: If each bin has only one object, then I can understand N!.
AI: You are not missing anything. You simply found another derivation of the same relationship.
We know that
$$^{n}C_{k} = \frac{n!}{k!(n-k)!} $$
If we expand the terms in your product,
$$^{N}C_{1} * ^{N-1}C_{2} * ^{N-3}C_{3}* ^{N-6}C_{4} \cdot \cdot $$
, we get
$$\frac{n!}{1!(n-1)!} \cdot \frac{(n-1)!}{2!(n-3)!} \cdot \frac{(n-3)!}{3!(n-6)!} \cdot \frac{(n-6)!}{4!(n-10)!} \dots $$
If we simplify the factorials,
$$\frac{n}{1!} \cdot \frac{(n-1)(n-2)}{2!} \cdot \frac{(n-3)(n-4)(n-5)}{3!} \cdot \frac{(n-6)(n-7)(n-8)(n-9)}{4!} \dots $$
And now it's obvious that this is
$$ \frac{n!}{\prod_i i!} $$ |
H: Changing categorical data to binary data is not reflected on the dataset
I am working through the Titanic competition. This is my code so far:
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
train = pd.read_csv("https://raw.githubusercontent.com/oo92/Titanic-Kaggle/master/train.csv")
test = pd.read_csv("https://raw.githubusercontent.com/oo92/Titanic-Kaggle/master/test.csv")
train['Sex'].replace(['female', 'male'], [0, 1])
train['Embarked'].replace(['C', 'Q', 'S'], [1, 2, 3])
# Fill missing values in Age feature with each sex’s median value of Age
train['Age'].fillna(train.groupby('Sex')['Age'].transform("median"), inplace=True)
linReg = LinearRegression()
data = train[['Pclass', 'Sex', 'Parch', 'Fare', 'Age']]
# implement train_test_split
x_train, x_test, y_train, y_test = train_test_split(data, train['Survived'], test_size=0.2, random_state=0)
# Training the machine learning algorithm
linReg.fit(x_train, y_train)
# Checking the accuracy score of the model
accuracy = linReg.score(x_test, y_test)
print(accuracy*100, '%')
This line previously looked like this: data = train[['Pclass', 'Parch', 'Fare', 'Age']], which ended up giving me an accuracy score of 19.5%. I realized that I didn't include sex so I went ahead and did this:
data = train[['Pclass', 'Sex', 'Parch', 'Fare', 'Age']]
Then, I got the following error:
ValueError: could not convert string to float: 'female'
Here I realized that the changes that I've done to my train['Sex'] and train['Age'] did not reflect on the training and the testing of the model, which seems to be the reason why my model performed at 19.5%. How do I get around this problem?
AI: Though you have converted the values into integers, you are not assigning the result back.
train['Sex'].replace(['female', 'male'], [0, 1])
train['Embarked'].replace(['C', 'Q', 'S'], [1, 2, 3])
should instead be:
train['Sex'] = train['Sex'].replace(['female', 'male'], [0, 1])
train['Embarked'] = train['Embarked'].replace(['C', 'Q', 'S'], [1, 2, 3])
train['Age'].fillna(train.groupby('Sex')['Age'].transform("median"), inplace=True)
This code works without assigning the value back just because you have used inplace=True. Otherwise you would have to assign it back, as I mentioned for "Sex" and "Embarked".
sklearn.preprocessing provides various utility methods for handling all such issues, like LabelEncoder and Imputer.
LabelEncoder will convert strings to integer values, whereas Imputer will replace missing values.
Sample code for your reference:
from sklearn.preprocessing import Imputer, LabelEncoder
from collections import defaultdict
data = train[['Pclass', 'Sex', 'Parch', 'Fare', 'Age']]
#If you want to convert all string features to integers
d = defaultdict(LabelEncoder)
data = data.apply(lambda x: d[x.name].fit_transform(x))
#Otherwise you can convert each feature's strings separately as mentioned below
encoder = LabelEncoder()
data['Sex'] = encoder.fit_transform(data['Sex'])
imputer = Imputer(strategy="median")
data = imputer.fit_transform(data) |
H: How do you determine the architecture of a model(keras)?
I have solved quite a few Kaggle playground problems lately, but I can't understand how to come up with a good enough model architecture which gets 0.9+ validation accuracy without overfitting.
Is there some formula, or is it a trial-and-error method for determining filters and neurons?
Also, I'm always using ReLU activation in hidden layers (I read that somewhere); when should other activations be used in hidden layers?
Can you lay down some guidelines that I should follow, considering I mostly work with image classification problems?
If you have some other tips as well, please share them in your answer.
AI: There is no theoretical understanding, that would take a problem and specify the optimal network architecture for you. So no, there is no formula, I'm afraid. What does exist, are strategies to arrive at good solutions.
Good strategies are to start simple and try to make iterative improvements. Define a single KPI (error metric) that you're trying to optimize, build the simplest model you can imagine that will hopefully work a little bit, then try to improve from there. You get improvements by trying to understand why your model does not perform optimally. You need to be able to distinguish between underfitting and overfitting to make the right modifications. Study the errors that your model makes on specific images and try to make improvements to address those mistakes with changes to the model.
I think you may find this course on Coursera interesting: Structuring Machine Learning Projects. |
H: How to know if a model is overfitting or underfitting by looking at graph
I just recently got my hands on TensorBoard, but can you tell me what features I should look for in the graphs (Accuracy and Validation Accuracy)?
And please do enlighten me about the concept of underfitting as well.
AI: Overfitting is a scenario where your model performs well on training data but performs poorly on data not seen during training. This basically means that your model has memorized the training data instead of learning the relationships between features and labels.
If you are familiar with the bias/variance tradeoff, then you can think of overfitting as a situation where your model has high variance, memorizing the random noise in the training set.
Overfitting is easy to diagnose with the accuracy visualizations you have available. If "Accuracy" (measured against the training set) is very good and "Validation Accuracy" (measured against a validation set) is not as good, then your model is overfitting.
Underfitting is the opposite counterpart of overfitting wherein your model exhibits high bias. This situation can occur when your model is not sufficiently complex to capture the relationship between features and labels (or if your model is too strictly regularized).
Underfitting is a bit harder to diagnose. If Accuracy and Validation Accuracy are similar but are both poor, then you may be underfitting.
Edit 1:
Strategies to avoid overfitting/underfitting
Recall that overfitting is caused by the model memorizing the training data instead of learning the more-general mapping from features to labels.
This commonly occurs when training a model with so many parameters that it can fit nearly any dataset. As von Neumann so eloquently put it, "With four parameters I can fit an elephant, and with five I can make him wiggle his trunk."
You can combat overfitting by reducing the complexity of your model (i.e. reducing the number of trainable parameters). The specifics of how this is accomplished vary depending on the learning algorithm and the domain.
For neural networks, you can use fewer layers (shallower networks), fewer neurons per layer, sparser connections between the layers (as in convolutional nets), or regularization techniques like dropout.
In the same vein, you can combat underfitting by increasing the complexity of your model. This has been the driving force behind the push for ever-deeper neural networks in recent years. With more layers, the network can learn more sophisticated relationships and perhaps perform well on difficult real-world tasks.
Of course, you can't recklessly add layers to a network and expect great performance. Training deep neural networks is hard, for a number of statistical and technical reasons (one of which is avoiding overfitting).
So to answer your question directly: If your network is overfitting, adding more layers will almost certainly make the problem worse, since you're increasing model complexity. If your network is underfitting, adding more layers can help, but it's rarely so straightforward. You need to think carefully about how you expect the network to operate and what strategies you can employ to ensure that it doesn't begin to overfit.
Edit 2:
P.S. If you're new to the field of machine learning, then it may be helpful to experiment with more intuitive models than neural nets while learning about over/underfitting and the bias/variance tradeoff.
I would recommend decision trees (or model trees for regression problems). Tree-based models are easy to interpret, and playing around with parameters like the max depth and minimum impurity decrease of a decision tree might help you gain some intuition about the relationship of model complexity to overfitting. |
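As a quick experiment along those lines (using a toy dataset shipped with sklearn), a depth-1 tree scores poorly on both sets (underfitting), while an unrestricted tree scores near-perfectly on training but noticeably worse on validation (overfitting):
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

for depth in [1, 5, None]:   # None lets the tree grow until the leaves are pure
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    print(depth, tree.score(X_train, y_train), tree.score(X_val, y_val))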
H: ValueError: Found input variables with inconsistent numbers of samples [43,19]
So, I've been trying to split my dataset into a 70-30 ratio using train_test_split in order to work things out with sklearn's PLS. However, I encountered an error that says:
# Read data
df = pd.read_csv(
filepath_or_buffer='colontumor.csv',
header=None,
sep=',')
df.tail()
X = df.iloc[:, :-1].values
# Split data to train and test on 70-30 ratio
X_train, X_test = train_test_split(
X, test_size=0.30, random_state=None)
pls = PLSRegression(n_components=2)
pls.fit(X_train, X_test)
Y_pred = pls2.predict(X_train)
and somehow encountered this error:
Line (17): pls.fit(X_train, X_test)
ValueError: Found input variables with inconsistent numbers of samples: [43, 19]
Is there any solution for this? I've been circling around for like hours now.
AI: The line that gives the error is:
pls.fit(X_train, X_test)
The second argument in fit should be your labels, the values you want your model to be able to predict. Instead you are inputting X_test, which contains the same features as X_train but has a different length, causing your error. You want a y_train with your target values, so the call should look like this:
pls.fit(X_train, y_train)
But to get these values you will need to get your labels from your dataframe. It is hard to know how without knowing the data, but this might give you an idea:
X = df.drop(['name of label column'], axis=1).values
y = df['name of label column'].values
# Split data to train and test on 70-30 ratio
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
pls = PLSRegression(n_components=2)
pls.fit(X_train, y_train) |
H: Automatically uses several cores on R
I am using a library called MFE to generate meta-features. However, I am working right now with several files and I notice that I am using only 1 core of my machine and taking too much time.
I have been trying to implement some libraries as I saw in another question:
library(iterators)
library(foreach)
library(doParallel)
This one, but me being dumb could not implement it ='(.
I just would like to put this snippet running in all my cores so I can make it faster:
## Extract general, statistical and model based measures
metafeatures(Species ~ ., iris, groups=c("general", "statistical", "model.based"))
So how would I do this? MFE library
[EDIT]:
What I am doing right now:
library(iterators)
library(foreach)
library(doParallel)
library(mfe)
# foreach
split = detectCores()
eachStart = 25
# set up iterators
iters = iter(rep(eachStart, split))
# set up combine function
cl = makeCluster(split)
registerDoParallel(cl)
result = foreach(nstart=iters) %do%
metafeatures(Species ~ ., iris, groups=c("general", "statistical", "model.based"))
stopCluster(cl)
result
SAMPLE OF EXPECTED RESULT
What I am expecting is this following output and using all cores from my CPU.
AI: First, you are not dumb. We are all learning here.
Second, I have looked at your code and I can now see the problem more clearly.
The code below works, but it doesn't give anything useful (nor does it lead to any speedup). I'll try to explain why.
library(iterators)
library(foreach)
library(doParallel)
library(mfe)
data = iris
# foreach
split = detectCores()
eachStart = 25
# set up iterators
iters = iter(rep(eachStart, split))
# set up combine function
cl = makeCluster(split)
registerDoParallel(cl)
result = foreach(nstart=iters, .packages = c("iterators", "mfe")) %dopar% {
metafeatures(Species ~ ., iris, groups=c("general", "statistical", "model.based"))
}
stopCluster(cl)
This will return a list of length equal to the number of cores on your machine, but all of the elements in the list will be exactly the same. You are getting no speedup (in fact, your code will run slower if you use this) because all you are doing is running the exact same piece of code n = (number of cores on your machine) number of times, namely this:
metafeatures(Species ~ ., iris, groups=c("general", "statistical", "model.based"))
You aren't iterating over anything in your foreach loop to give any benefit to running in parallel. The overall idea of running stuff in parallel (in this context, anyway) is to run the exact same piece of code (i.e. a function) on each worker node, but with different inputs to the function's arguments in each call. In the context of a for loop, you can think of this as a single iteration of the for loop being sent to a single worker (splits, as you defined them in your code) on your computer. The benefit of course is that if you have many cores on your computer, different iterations of your for loop can be computed at the exact same time (vs. on a single core machine, you would have to work on a single iteration of the for loop one at a time). It is important to note that this all implies that each iteration in your for loop is independent of all other iterations and that the order in which you compute iterations does not matter (i.e., your algorithm is NOT sequential).
Here is one example that I can think of where you could consider running in parallel: good old k-fold cross validation. Each resulting training set receives the exact same pre processing but all you are changing is what folds make up the training and test sets.
This could be written like this:
library(foreach)
library(doParallel)
library(caret)
library(e1071)
library(tidyverse)
data = iris
cl = makeCluster(detectCores())
registerDoParallel(cl)
#Generate three folds from our dataset
folds = createFolds(y = data$Species, k = 3)
#Train a support vector machine in parallel using foreach. Returns the predictions from the fitted model.
results = foreach(i = 1:length(folds), .packages = c("caret", "e1071", "tidyverse")) %dopar% {
training = data[-folds[[i]], ]
test = data[folds[[i]],]
#Pre processing, if desired
#Train the support vector machine. Parameters are just for example, ripped from a webpage that just so happened to be using the exact same dataset.
model = svm(Species~., data=training,
method="C-classification", kernal="radial",
gamma=0.1, cost=10)
tibble(Row = folds[[i]], Predicted.Class = predict(model, newdata = test))
}
#Bind all the data frames in the list "results"
processedResults = bind_rows(results) %>%
arrange(-Row)

stopCluster(cl)
Notice how we are iterating over different training/test splits (with these lines):
training = data[-folds[[i]], ]
test = data[folds[[i]],]
but the function itself remains the same. Each core will receive a different train/test set in each iteration of the for loop. In your code, you aren't changing arguments to your function and so there is no point in running your code in parallel.
Now, if I understand mfe correctly, it is generating some summary statistics from your dataset. This could maybe run in parallel (maybe some statistics can be computed in parallel? I doubt there is any speed to be gained here), but I highly doubt it is much faster due to the overhead associated with running stuff in parallel (and hence, why the package creator did not choose to implement such a feature).
You mentioned random forest and how you were able to run it in parallel easily. This is because a random forest is easily made parallel (it is "embarrassingly parallel"). Each individual tree can be fit on a single worker (because each tree is fit independently of all other trees in the forest on a bootstrapped training set). The order in which the trees are fit doesn't matter either, because we just average over all of the trees predictions to get our final predictions.
My two cents: running in parallel is only useful if the algorithm itself can be run in parallel. You can't take advantage of all of your cores if the algorithm itself cannot be "divided" and then "combined". And even then, if the algorithm is trivial to run on a single thread then the computational overhead associated with running stuff in parallel makes it generally worse in these cases. |
H: What are true positive, false positive, true negative, false negative in the given example?
Assume there are 100 images, 30 of them depict a cat, the rest do not.
A machine learning model predicts the occurrence of a cat in 25 of 30 cat images. It also predicts absence of a cat in 50 of the 70 no cat images.
In this case, what are the true positive, false positive, true negative and false negative?
AI: Assuming cat as a positive class,
Confusion Matrix:
TN | FP
FN | TP
True Positive(TP): Images which are cat and actually predicted cat
i.e. 25
True Negative(TN): Images which are not-cat and actually predicted not-cat
i.e. 50
False Positive(FP): Images which are not-cat and actually predicted as cat
i.e. 20
False Negative(FN): Images which are cat and actually predicted as not-cat
i.e. 5
Precision: TP/(TP+FP)
Recall: TP/(TP+FN)
Precision: 25/(25+20) = 0.55,
Recall: 25/(25+5) = 0.833 |
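If you want to double-check these numbers with code, here is a small sketch where the label arrays are simply constructed to reproduce the counts above (1 = cat, 0 = not-cat):
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

y_true = np.array([1] * 30 + [0] * 70)                       # 30 cat images, 70 not-cat
y_pred = np.array([1] * 25 + [0] * 5 + [1] * 20 + [0] * 50)  # 25 TP, 5 FN, 20 FP, 50 TN

print(confusion_matrix(y_true, y_pred))   # [[50 20], [ 5 25]] i.e. [[TN FP], [FN TP]]
print(precision_score(y_true, y_pred))    # 0.555...
print(recall_score(y_true, y_pred))       # 0.833...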
H: LDA for sentiment analysis
As far as I understand it, LDA works by assuming that a corpus was written by a set of topics ands words corresponding to that topic by a specific distribution. I'm however not enterely sure what the output of the LDA process is and more importantly, how one can (for example) model a corpus of political speeches to their sentiments.
AI: LDA is an unsupervised technique which identifies the sets of words that make up a topic.
This is done by assuming that there is a generative process that produces the sentences from topics, using the words.
To simplify things, an assumption on the number of topics is made, although ideally this should be learnt from the data.
The output of LDA is the words that represent a topic and probability of a sentence/ text observation being a mixture of the N topics we initially assumed.
We will get a 1xN row vector for each text observation, giving the probability that it is a mixture of each of the N topics we initially assumed.
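A small gensim sketch with toy documents shows both outputs, the top words per topic and the per-document topic mixture. Note that LDA itself knows nothing about sentiment, so for political speeches you would still have to map the discovered topics to sentiments yourself (or combine the topic mixtures with a supervised or lexicon-based sentiment method):
from gensim import corpora, models

docs = [['economy', 'tax', 'growth'], ['war', 'peace', 'treaty'], ['tax', 'budget', 'growth']]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)

print(lda.print_topics())   # word distribution that defines each topic
print(lda[corpus[0]])       # topic mixture of the first document, e.g. [(0, 0.9), (1, 0.1)]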
H: In Affinity Propagation do the cluster centres minimize mean distance to all other points in the cluster?
For Affinity Propagation, do the cluster centers minimize the mean distance to all other points in the cluster?
AI: No.
AP maximizes responsibility, which is a combination of availability and affinity. Distances only serve as an indirect source to input affinities.
If you want to minimize the average distance to the cluster center, use PAM. It minimizes the sum of distances to the representatives, and that is equivalent because the number of points is constant.
H: B.Tech Project for final year of College
I am an Engineering student in NSIT, Delhi, India. In our last year of B.Tech degree we have a B.Tech project, aka, BTP. I am in ECE(Electronics and Communication Engineering) dept.
Usually, my ECE batch mates do their projects mainly on electronics, but I want to do mine in data science or data manipulation under other departments (COE or IT). I approached a COE mentor under whose supervision I would like to do the project. But I have been struggling to choose the right topic for the project.
There are a lot of suggestions online, but many of them are the kind of project students do just to complete the requirement, and they have already been done and executed by others. I want some decent suggestions for choosing the right project topic, as I am quite confused right now.
What student-level data science or, specifically, data manipulation projects can I do in my final year of college?
AI: Do machine learning on microcontrollers or other edge devices (link1, link2, link3, link4, link5). This is a hot topic, new emerging technologies, and it will make your profs happy from both electronics and IT departments.
There was a recent news article (link) where a team from the Fraunhofer IMC did a handwritten digits recognition on an Arduino Uno. They claim that they not only run a trained neural network (which would be easy) but actually train the network on Arduino.
However, they don't show their code. On their license page (link) they only say some blah-blah-blah about discussing possibilities with partners and customers and boast 30 years of experience in the development of microelectronic circuits. This is lame. I am sure that this can be done by an undergraduate student without 30 years of experience and that student will write a blog post about it.
If you develop a machine learning code that would do such tasks on an Arduino or even something smaller, like ATTINY85 that would be cool. Your model should be able to be trained on a microcontroller given the restrictions of its memory and computational capacity. Then upload your code on a github or a blog, put your link here. If your code is good, it will get lots of likes, you will showcase your abilities to your employers, and will put those Fraunhofer dinos who hide their "great discoveries" to shame. Besides, it is very useful because of emerging technologies with IoT, 5G, etc.
By the way, it does not have to be a neural network. Other models, like SVM, KRR would also be great. Also, it does not have to be Arduino. Other microcontrollers, like PIC or ST are also OK. |
H: Why does logistic function use e rather than 2?
The sigmoid function can be used as an activation function in machine learning.
$${\displaystyle S(x)={\frac {1}{1+e^{-x}}}={\frac {e^{x}}{e^{x}+1}}.}$$
If we substitute e with 2:
def sigmoid2(z):
return 1/(1+2**(-z))
x = np.arange(-9,9,dtype=float)
y = sigmoid2(x)
plt.scatter(x,y)
the plot looks similar.
Why does the logistic function use $e$ rather than 2?
AI: Since you are later on going to optimize the log likelihood, there is actually no big difference between $\log 2^x = x \cdot \log 2$ and $\log e^x = x$. You see, the difference is simply a constant factor.
Nevertheless one could argue to use $2^x$ instead of $e^x$ and also use $\log_2$ instead of $\log$ when it comes to the optimization step. In fact it is possible to use $2^x$ and also many other functions which show some desired properties.
Which are:
$\lim\limits_{x \rightarrow \infty}{f(x)}=1$
$\lim\limits_{x \rightarrow -\infty}{f(x)}=0$
$f(x) = -f(-x) + 1$ (point symmetry about $(0, 0.5)$)
Here is an example of suitable functions from wikipedia. |
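Another convenience of base $e$ appears when you differentiate the activation itself: with $S_2(x) = 1/(1+2^{-x})$ you carry an extra $\ln 2$ through every gradient, while the base-$e$ version stays clean:
$$
\frac{d}{dx}\,\frac{1}{1+e^{-x}} = S(x)\bigl(1-S(x)\bigr),
\qquad
\frac{d}{dx}\,\frac{1}{1+2^{-x}} = \ln 2 \; S_{2}(x)\bigl(1-S_{2}(x)\bigr).
$$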
H: Logistic Regression doesn't predict for the entire test set
I am working through Kaggle's Titanic competition. I am mostly done with my model but the problem is that the logistic regression model does not predict for all of 418 rows in the test set but instead just returns predictions for 197 rows. This is the error PyCharm gives:
Traceback (most recent call last):
File "C:/Users/security/Downloads/AP/Titanic-Kaggle/TItanic-Kaggle.py", line 37, in <module>
submission = pd.DataFrame({'PassengerId': test['PassengerId'], 'Survived': predictions})
File "C:\Users\security\Anaconda3\envs\TItanic-Kaggle.py\lib\site-packages\pandas\core\frame.py", line 392, in __init__
mgr = init_dict(data, index, columns, dtype=dtype)
File "C:\Users\security\Anaconda3\envs\TItanic-Kaggle.py\lib\site-packages\pandas\core\internals\construction.py", line 212, in init_dict
return arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype)
File "C:\Users\security\Anaconda3\envs\TItanic-Kaggle.py\lib\site-packages\pandas\core\internals\construction.py", line 51, in arrays_to_mgr
index = extract_index(arrays)
File "C:\Users\security\Anaconda3\envs\TItanic-Kaggle.py\lib\site-packages\pandas\core\internals\construction.py", line 328, in extract_index
raise ValueError(msg)
ValueError: array length 197 does not match index length 418
When I print(predictions) to confirm, this is what it gives:
[0 0 0 1 1 1 1 1 1 1 0 1 0 1 1 1 0 0 0 0 0 1 0 0 1 1 0 1 1 1 0 1 0 1 0 1 0
0 0 0 0 0 0 0 1 0 0 1 0 0 1 1 1 0 1 0 0 0 0 0 0 0 1 0 1 0 1 0 1 1 1 0 0 0
0 1 1 0 0 0 0 0 1 0 0 1 1 0 1 1 0 0 0 1 1 0 1 0 0 1 0 0 0 0 1 0 1 0 0 1 0
1 0 1 0 1 1 1 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 1 0 1 1 1 0 1
1 0 0 1 1 0 1 0 0 0 1 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 1 0 1 0 0 1 0 0 0 0 0
0 1 0 0 1 1 0 1 1 0 0 0]
This is my full code:
import pandas as pd
import warnings
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
warnings.filterwarnings("ignore", category=FutureWarning)
train = pd.read_csv("https://raw.githubusercontent.com/oo92/Titanic-Kaggle/master/train.csv")
test = pd.read_csv("https://raw.githubusercontent.com/oo92/Titanic-Kaggle/master/test.csv")
train['Sex'] = train['Sex'].replace(['female', 'male'], [0, 1])
train['Embarked'] = train['Embarked'].replace(['C', 'Q', 'S'], [1, 2, 3])
# Fill missing values in Age feature with each sex’s median value of Age
train['Age'].fillna(train.groupby('Sex')['Age'].transform("median"), inplace=True)
# Creating a new column called "HasCabin", where passengers with a cabin will get a score of 1 and those without cabins will get a score of 0
train['HasCabin'] = train['Cabin'].notnull().astype(int)
train['Relatives'] = train['SibSp'] + train['Parch']
logReg = LogisticRegression()
data = train[['Pclass', 'Sex', 'Relatives', 'Fare', 'Age', 'Embarked', 'HasCabin']]
# implement train_test_split
x_train, x_test, y_train, y_test = train_test_split(data, train['Survived'], test_size=0.22, random_state=0)
# Training the model with the Logistic Regression algorithm
logReg.fit(x_train, y_train)
predictions = logReg.predict(x_test)
submission = pd.DataFrame({'PassengerId': test['PassengerId'], 'Survived': predictions})
filename = 'Titanic-Submission.csv'
submission.to_csv(filename, index=False)
UPDATE
As per what the users have pointed out, I went ahead and tried to remedy my mistake (ignore the code repetition. I'll be solving that later):
import pandas as pd
import warnings
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
warnings.filterwarnings("ignore", category=FutureWarning)
train = pd.read_csv("https://raw.githubusercontent.com/oo92/Titanic-Kaggle/master/train.csv")
test = pd.read_csv("https://raw.githubusercontent.com/oo92/Titanic-Kaggle/master/test.csv")
train['Sex'] = train['Sex'].replace(['female', 'male'], [0, 1])
train['Embarked'] = train['Embarked'].replace(['C', 'Q', 'S'], [1, 2, 3])
train['Age'].fillna(train.groupby('Sex')['Age'].transform("median"), inplace=True)
train['HasCabin'] = train['Cabin'].notnull().astype(int)
train['Relatives'] = train['SibSp'] + train['Parch']
train_data = train[['Pclass', 'Sex', 'Relatives', 'Fare', 'Age', 'Embarked', 'HasCabin']]
x_train, x_validate, y_train, y_validate = train_test_split(train_data, train['Survived'], test_size=0.22, random_state=0)
test['Sex'] = test['Sex'].replace(['female', 'male'], [0, 1])
test['Embarked'] = test['Embarked'].replace(['C', 'Q', 'R'], [1, 2, 3])
test['Age'].fillna(test.groupby('Sex')['Age'].transform("median"), inplace=True)
test['HasCabin'] = test['Cabin'].notnull().astype(int)
test['Relatives'] = test['SibSp'] + test['Parch']
test_data = test[['Pclass', 'Sex', 'Relatives', 'Fare', 'Age', 'Embarked', 'HasCabin']]
logReg = LogisticRegression()
logReg.fit(x_train, y_train)
predictions = logReg.predict(test[test_data])
submission = pd.DataFrame({'PassengerId': test['PassengerId'], 'Survived': predictions})
filename = 'Titanic-Submission.csv'
submission.to_csv(filename, index=False)
As you can see, I tried to input the select test features into my algorithm
test_data = test[['Pclass', 'Sex', 'Relatives', 'Fare', 'Age', 'Embarked', 'HasCabin']]
...
predictions = logReg.predict(test[test_data])
Right now, I'm getting the following error:
Traceback (most recent call last):
File "C:/Users/security/Downloads/AP/Titanic-Kaggle/TItanic-Kaggle.py", line 29, in <module>
predictions = logReg.predict(test[test_data])
File "C:\Users\security\Anaconda3\envs\TItanic-Kaggle.py\lib\site-packages\pandas\core\frame.py", line 2914, in __getitem__
return self._getitem_frame(key)
File "C:\Users\security\Anaconda3\envs\TItanic-Kaggle.py\lib\site-packages\pandas\core\frame.py", line 3009, in _getitem_frame
raise ValueError('Must pass DataFrame with boolean values only')
ValueError: Must pass DataFrame with boolean values only
It's telling me that I need to pass boolean values into my algorithm but I don't understand why. There wasn't such a prerequisite when I was using the exact same data format while training the model.
AI: Your predictions are those for x_test, which was split out from train, but your submission's PassengerIds are those from test.
It appears you want to submit predictions for test, so you need to call logReg.predict on that instead of x_test. However, as @Peter notes in a comment, that will fail since the columns in test are not the same as in train (and therefore not the same as in x_train and x_test). Your feature encodings, null replacement, and engineering need to be done for test as well (but take care to use train's median when filling test's missing ages).
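A minimal sketch of that idea, continuing from the variables in your updated code (the exact preprocessing choices are assumptions rather than the only valid ones):
# Encode test with the same scheme as train (note 'S', not 'R', for Embarked)
test['Sex'] = test['Sex'].replace(['female', 'male'], [0, 1])
test['Embarked'] = test['Embarked'].replace(['C', 'Q', 'S'], [1, 2, 3])
# Impute missing test ages with the per-sex medians computed on *train*
age_medians = train.groupby('Sex')['Age'].median()
test['Age'] = test['Age'].fillna(test['Sex'].map(age_medians))
# The Kaggle test set may also contain a missing Fare; fill it from train as well
test['Fare'] = test['Fare'].fillna(train['Fare'].median())
test['HasCabin'] = test['Cabin'].notnull().astype(int)
test['Relatives'] = test['SibSp'] + test['Parch']
features = ['Pclass', 'Sex', 'Relatives', 'Fare', 'Age', 'Embarked', 'HasCabin']
# Pass the DataFrame of selected columns directly, not test[test_data]
predictions = logReg.predict(test[features])
submission = pd.DataFrame({'PassengerId': test['PassengerId'], 'Survived': predictions})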
H: Algorithms inherently supporting multilabel classification
In the documentation of sklearn, it says that several algorithms inherentrly support multilabel classification, such as RandomForest or MLP : https://scikit-learn.org/stable/modules/multiclass.html
Does it come from the implementation or from the structure of the models? Moreover, how does it work for these algorithms? Is it a One-vs-the-Rest strategy or something else?
Thanks.
AI: If the algorithm inherently supports multi-label classification, then it's usually an implicit feature of the algorithm rather than an implementation detail.
For example, MLPs inherently support multi-label classification because the output layer has a perceptron for each class, and each of these perceptrons outputs a probability for that class. The vector of outputs will predict an example's membership among all labels. Similarly, the leaves of a Random Forest (or any other tree-based algorithm) can contain arbitrary-length vectors that describe the probability of the example belonging to each label.
The one-vs-rest strategy is used to generalize binary classifiers (e.g. logistic regression) to multinomial problems. Multinomial problems are distinct from multi-label problems.
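As an illustration, scikit-learn's RandomForestClassifier accepts a 2-D indicator matrix of labels directly (a small synthetic sketch, not tied to any particular dataset):
import numpy as np
from sklearn.ensemble import RandomForestClassifier
X = np.random.rand(100, 5)
# Each row of Y is a set of binary labels, e.g. [sport, politics, finance]
Y = np.random.randint(0, 2, size=(100, 3))
clf = RandomForestClassifier(n_estimators=50).fit(X, Y)
print(clf.predict(X[:2]))  # returns a (2, 3) array of label indicators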
Edit:
Clarified my answer w.r.t. the differences between multinomial and multi-label problems. |
H: Concrete example to elaborate on the relationship between `linear function` and `affine function` in the context of machine learning
I am learning the deeplearningbook
Section 5.1.4 says:
so the mapping from parameters to predictions is still a linear
function but the mapping from features to predictions is now an affine
function
Can any one give a concrete example to elaborate on the relationship between linear function and affine function in the context of machine learning?
AI: There's a great answer on the Mathematics stack exchange about the distinction between linear and affine functions.
A linear function fixes the origin, whereas an affine function need not do so. An affine function is the composition of a linear function with a translation, so while the linear part fixes the origin, the translation can map it somewhere else.
In section 5.1.4, the book introduces linear regression:
$$\hat{y} = \pmb{\omega}^{T} \pmb{x} + b$$
In this example, the $\hat{y} = \pmb{\omega}^{T} \pmb{x}$ part is a linear function. It has to pass through the origin. The addition of a vertical translation $b$ causes $\hat{y}$ to become an affine transformation, since it no longer has to pass through the origin.
I think the authors point out this distinction because it's technically incorrect from a purely mathematical point-of-view to describe $$\hat{y} = \pmb{\omega}^{T} \pmb{x} + b$$ as a linear function of the input features $\pmb{x}$. It's actually an affine function of $\pmb{x}$, so "linear regression" may appear to be a misnomer if you have a strong background in linear algebra.
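A quick numerical illustration of the distinction (a toy sketch; the weights and points are made up):
import numpy as np
w, b = np.array([2.0, -1.0]), 3.0
linear = lambda x: w @ x       # fixes the origin: linear(0) == 0
affine = lambda x: w @ x + b   # translated by b: affine(0) == b != 0
x1, x2 = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
# Additivity holds for the linear map but fails for the affine one
assert np.isclose(linear(x1 + x2), linear(x1) + linear(x2))
assert not np.isclose(affine(x1 + x2), affine(x1) + affine(x2))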
For what it's worth, I don't think this is very helpful information to include in the main text of an introductory chapter on machine learning. If you still don't understand, then just keep reading! Don't let this block you from progressing through the book. |
H: How to Determine Specific Activation Function from keras' .summary()
I'm following a tutorial where a particular model is provided in .h5 format. Of course, I can call model.summary() on this model after loading it with load_model(), however the output looks like this:
Layer (type) Output Shape Param #
=================================================================
conv1d_1 (Conv1D) (None, 400, 32) 1568
_________________________________________________________________
batch_normalization_1 (Batch (None, 400, 32) 128
_________________________________________________________________
activation_1 (Activation) (None, 400, 32) 0
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 100, 32) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 100, 64) 32832
_________________________________________________________________
batch_normalization_2 (Batch (None, 100, 64) 256
_________________________________________________________________
activation_2 (Activation) (None, 100, 64) 0
_________________________________________________________________
max_pooling1d_2 (MaxPooling1 (None, 25, 64) 0
_________________________________________________________________
conv1d_3 (Conv1D) (None, 25, 128) 131200
_________________________________________________________________
batch_normalization_3 (Batch (None, 25, 128) 512
_________________________________________________________________
activation_3 (Activation) (None, 25, 128) 0
_________________________________________________________________
max_pooling1d_3 (MaxPooling1 (None, 6, 128) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 768) 0
_________________________________________________________________
dense_1 (Dense) (None, 80) 61520
_________________________________________________________________
batch_normalization_4 (Batch (None, 80) 320
_________________________________________________________________
activation_4 (Activation) (None, 80) 0
_________________________________________________________________
dense_2 (Dense) (None, 80) 6480
_________________________________________________________________
batch_normalization_5 (Batch (None, 80) 320
_________________________________________________________________
activation_5 (Activation) (None, 80) 0
_________________________________________________________________
dense_3 (Dense) (None, 2) 162
_________________________________________________________________
activation_6 (Activation) (None, 2) 0
My question: is there a way to find out which activation function was used? e.g. relu, softmax, sigmoid, etc. using existing keras methods?
AI: You can check the type of activation in the layer config:
model.layers[idx].get_config(), where idx is the index of the desired layer.
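For example, a loop like the following prints the activation of every layer that defines one (a sketch; the file name is hypothetical and assumes the model was saved as in the tutorial):
from keras.models import load_model
model = load_model('model.h5')  # hypothetical path
for layer in model.layers:
    config = layer.get_config()
    if 'activation' in config:
        print(layer.name, '->', config['activation'])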
H: What is the geometrical representation of the y value in a 2-variable perceptron plot?
This figure represents a perceptron model with a 2 dimensional feature vector input.
The hypothesis space of the perceptron is defined by this set:
$\{y | y = w\cdot x + b\}$
What is the geometrical representation of the y value in this perceptron plot? Is it the perpendicular distance from a point to the separating hyperplane?
AI: In the context of binary classification, $y$ indicates whether the point at $(x_1, x_2)$ belongs to the positive (1) or the negative (-1) class.
This piece of code is helpful for understanding this:
import numpy as np
import random
import matplotlib.pyplot as plt
def generate_data(no_points):
X = np.zeros(shape=(no_points, 2))
Y = np.zeros(shape=no_points)
for ii in range(no_points):
X[ii][0] = random.randint(1,9)+0.5
X[ii][1] = random.randint(1,9)+0.5
Y[ii] = 1 if X[ii][0]+X[ii][1] >= 13 else -1
return X, Y
X, Y = generate_data(100)
plt.scatter(X[:,0],X[:,1],c=Y)
in this figure, positive(1) class is shown as yellow, and negative(-1) class is shown as purple. |
H: Ideal score of a model on training and cross validation data
The question is a little bit broad, but I could not find any concrete explanation anywhere, hence I decided to ask the experts here.
I have trained a classifier model for binary classification task. Now I am trying to fine tune the model. With different sets of hyperparameters I am getting different sets of accuracy on my train and test set. For example:
(1) Train set: 0.99 | Cross-validation set: 0.72
(2) Train set: 0.75 | Cross-validation set: 0.70
(3) Train set: 0.69 | Cross-validation set: 0.69
These are approximate numbers. But my point is - for certain set of hyperparameters I am getting more or less similar CV accuracy, while the accuracy on training data varies from overfit to not so much overfit.
My question is - which of these models will work best on future unseen data? What is the recommendation in this scenario, shall we choose the model with higher training accuracy or lower training accuracy, given that CV accuracy is similar in all cases above (in fact CV score is better in the overfitted model)?
AI: Accuracy on the training data basically doesn't count. I don't quite want to say to ignore it, because a train/test accuracy of 100/70 seems different to me than a train/test accuracy of 71/70, but you're mostly not interested in performance on the training data.
Using a test set mimics a real application of machine learning. Think about Siri or Alexa. The goal is to predict speech that it hasn't heard. There's no way to know how it will perform on such speech, so the next-best approach is to use some data where you know the answer but hide it from your model. After you train the model, assess how it performs on data where it has not seen the answer. If the model is accurate, then that's a good sign about its ability to perform on real speech recognition tasks.
Training data is like the practice questions or homework problems, while test data is the exam. |
H: Combine Pandas DataFrame Rows Based on Matching Data and Boolean
I have a Pandas DataFrame with sales data and columns for year, ISO week, price, quantity, and organic [boolean]. Because each row represents a different location, dates are repeated. I would like to combine rows with matching year, ISO week, and organic. Ideally, the combined row would have the average price and sum of total quantity. Any help is much appreciated!
AI: I believe what you need is agg from pandas. You can pass a dictionary of the different aggregations you need for each column:
import pandas as pd
df = pd.DataFrame({'year':['2017','2018','2019','2019'],
'ISO Week':[1,2,3,3],
'Price':[5,10,15,20],
'quantity':[1,2,3,4],
'organic':[True, False, True, True]})
ISO Week Price organic quantity year
0 1 5 True 1 2017
1 2 10 False 2 2018
2 3 15 True 3 2019 #<------ combine
3 3 20 True 4 2019 #<------ combine
df.groupby(['year','ISO Week','organic'], as_index=False).agg({'Price':'mean', 'quantity':'sum'})
year ISO Week organic Price quantity
0 2017 1 True 5.0 1
1 2018 2 False 10.0 2
2 2019 3 True 17.5 7 |
H: Loss and val loss too big
I have a random vector with known pdf x and i'm trying to predict some behaviour y. I created a dataset containing (X,y), where X is a vector of D realizations of the VA X and y = (y1,y2). That is, for each instance I have a vector D of input and the vector y for output. I tried:
from tensorflow import keras
model = keras.models.Sequential([
keras.layers.Dense(300, input_shape=(D,), activation="relu"),
keras.layers.Dense(300, input_shape=(D,), activation="relu"),
keras.layers.Dense(100, input_shape=(D,), activation="relu"),
#keras.layers.Dense(10, activation="relu")
keras.layers.Dense(2)
])
model.summary()
model.compile(loss='mean_squared_error', optimizer="adam")
history = model.fit(X_train, y_train, epochs=30, validation_data= (X_val, y_val))
And in training:
Epoch 1/252
252/252 [==============================] - 0s 965us/sample - loss: 67222078.7937 - val_loss: 99721252.0000
Epoch 2/252
252/252 [==============================] - 0s 325us/sample - loss: 66736292.6984 - val_loss: 99545232.0000
Epoch 3/252
252/252 [==============================] - 0s 246us/sample - loss: 64592091.8095 - val_loss: 99156788.0000
...
Epoch 252/252
252/252 [==============================] - 0s 305us/sample - loss: 23.6345 - val_loss: 92329540.0000
80/80 [==============================] - 0s 143us/sample - loss: 29423635.9000
Loss and val_loss is too bad. Can anyone give me an insight?
AI: It could be normal to have such a large loss value if your predicted values are far from the ground truth values (because of the square in the mean squared error loss).
Try to print the predicted values and ground truth with tf.Print to check that.
You can normalize your ground truth values so that their range is not too big. For example it should be easier for the network to predict values between 0 and 10 than values between 0 and 1000. |
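A minimal sketch of that normalization idea, reusing the variable names from your code and scikit-learn's StandardScaler (just one possible choice of scaler):
from sklearn.preprocessing import StandardScaler
y_scaler = StandardScaler()
y_train_scaled = y_scaler.fit_transform(y_train)  # y_train has shape (n_samples, 2)
y_val_scaled = y_scaler.transform(y_val)
history = model.fit(X_train, y_train_scaled, epochs=30,
                    validation_data=(X_val, y_val_scaled))
# Undo the scaling to read predictions back in the original units
y_pred = y_scaler.inverse_transform(model.predict(X_val))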
H: Pandas dataframe groupby and then sum multi-columns sperately
I have a pandas dataframe as following:
df = pd.DataFrame({
'Date': ['2017-1-1', '2017-1-1', '2017-1-2', '2017-1-2', '2017-1-3'],
'Groups': ['one', 'one', 'one', 'two', 'two'],
'data1': range(1, 6),
    'data2': range(10, 15)})
Output:
Date Groups data1 data2
0 2017-1-1 one 1 10
1 2017-1-1 one 2 11
2 2017-1-2 one 3 12
3 2017-1-2 two 4 13
4  2017-1-3    two      5     14
I would like the output to look like this:
Date Groups sum of data1 sum of data2
0 2017-1-1 one 6 33
1  2017-1-2    two        9        27
I can groupby "Group" and agg.(sum) either data columns, but couldn't do 2 simultaneously. How can I do this?
Thanks.
AI: IIUC:
In [5]: (df.groupby("Groups", as_index=False)
.agg({"Date":"first", "data1":"sum", "data2":"sum"}))
Out[5]:
Groups Date data1 data2
0 one 2017-1-1 6 33
1 two 2017-1-2 9 27 |
H: What exactly is BatchNormalization() in Keras?
A month or two into building image classifiers, I have just been sandwiching BatchNormalization layers between Conv2D layers. I wonder what it does, but I have seen my model learn faster in the presence of these layers.
But I'm worried there might be a catch. I read somewhere that I don't need a dropout layer if I'm using batch normalization! Is that true?
And also tell me in which manner I should use this layer. In which kinds of problems should and shouldn't I use it?
Just write down anything you know about the layer that you think will help me!
AI: Batch Normalization is a layer that is put in between convolution and activation layers or sometimes after activation layers. It is used to normalize layer’s input to reduce the internal covariate shift problem.
This problem arises because the distribution of each layer's inputs changes during training as the parameters of the earlier layers change; since every layer depends on the outputs of the layers before it, each layer has to repeatedly adapt to a new input distribution.
Batch Normalization is a technique for training very deep neural networks that standardizes the inputs to a layer for each mini-batch. This has the effect of stabilizing the learning process and dramatically reducing the number of training epochs required to train deep networks.
It can be implemented during training by calculating the mean and standard deviation of each input variable to a layer per mini-batch and using these statistics to perform the standardization.
You can read more here, here or here.
Also, I can tell you that this layer has a good impact on the performance and the achieved scores of your model.
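As a usage sketch, one common ordering is Conv2D -> BatchNormalization -> Activation -> pooling (whether BN goes before or after the activation is still debated; the layer sizes below are made up):
from keras.models import Sequential
from keras.layers import Conv2D, BatchNormalization, Activation, MaxPooling2D
model = Sequential([
    Conv2D(32, (3, 3), padding='same', input_shape=(64, 64, 3)),
    BatchNormalization(),  # normalize the conv outputs per mini-batch
    Activation('relu'),
    MaxPooling2D((2, 2)),
])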
Hope this helps you understand the importance of BN. |
H: What is the standard terminology for the output matrix of a hidden layer in a CNN?
The output of a hidden layer $h_i$ in a convolutional neural network is (generally) a 3D grid of values. These values are the outputs of the neurons of layer $h_i$. Is there a standard way to refer to this matrix of values?
AI: It is called a feature map.
This image might be helpful for terminology: |
H: nltk's stopwords returns "TypeError: argument of type 'LazyCorpusLoader' is not iterable"
While trying to remove stopwords using the nltk package, the following error occurred:
from tqdm import tqdm
import nltk
from nltk.corpus import stopwords
preprocessed_reviews = []
for sentance in tqdm(final["Text"].values):
sentance = ' '.join(e.lower() for e in sentance.split() if e.lower() not in stopwords)
TypeError
Traceback (most recent call last)
<ipython-input-136-ac5c19fafd9c> in <module>()
---> 7 sentance = ' '.join(e.lower() for e in sentance.split() if e.lower() not in stopwords)
8 preprocessed_reviews.append(sentance.strip())
9
TypeError: argument of type 'LazyCorpusLoader' is not iterable
AI: There are a couple of items that could be improved in your code:
nltk.corpus.stopwords is a nltk.corpus.util.LazyCorpusLoader. You might want stopwords.words('english'), a list of stop English words.
It can cause bugs to update a variable as you iterate through it, for example sentance in your code.
In your code preprocessed_reviews is not being updated.
You might want to tokenize instead of str.split().
Here is a revision version:
import nltk
from nltk.corpus import stopwords
from tqdm import tqdm
reviews_raw = ('The quick brown fox jumps over the lazy dog', "I have been doing what I should not been doing")
reviews_processed = []
for sentence in tqdm(reviews_raw):
reviews_processed.append(' '.join(token.lower() for token in nltk.word_tokenize(sentence) if token.lower() not in stopwords.words('english')))
assert reviews_processed == ['quick brown fox jumps lazy dog', ''] |
H: Error when checking target: expected dense_1 to have 4 dimensions, but got array with shape (3456, 1)
I have trained a model for CNN and I am getting the error on the dense layer.
Model code:
def model(input_img):
conv1 = Conv2D(5, (3, 3), padding='same')(input_img) #28 x 28 x 32
conv1 = BatchNormalization()(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1) #14 x 14 x 32
relu1 = Activation('relu')(pool1)
drop1 = Dropout(rate = 0.5)(relu1)
conv2 = Conv2D(5, (3, 3), padding='same')(pool1) #14 x 14 x 64
conv2 = BatchNormalization()(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2) #7 x 7 x 64
relu2 = Activation('relu')(pool2)
drop2 = Dropout(rate=0.5)(relu2)
dense = Dense(2, activation='softmax')(drop2) # 28 x 28 x 1
return dense
Model Summary:
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) (None, 242, 242, 1) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 242, 242, 5) 50
_________________________________________________________________
batch_normalization_4 (Batch (None, 242, 242, 5) 20
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 121, 121, 5) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 121, 121, 5) 230
_________________________________________________________________
batch_normalization_5 (Batch (None, 121, 121, 5) 20
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 60, 60, 5) 0
_________________________________________________________________
activation_4 (Activation) (None, 60, 60, 5) 0
_________________________________________________________________
dropout_4 (Dropout) (None, 60, 60, 5) 0
_________________________________________________________________
dense_1 (Dense) (None, 60, 60, 2) 12
=================================================================
Total params: 332
Trainable params: 312
Non-trainable params: 20
But when I am trying to train the model
model_train = model.fit(train_X, train_ground, batch_size=batch_size,epochs=epochs,verbose=1,validation_data=(valid_X, valid_ground))
I get the following error
ValueError Traceback (most recent call last)
<ipython-input-51-0fec3a3d04b9> in <module>()
----> 1 model_train = model.fit(train_X, train_ground,
batch_size=batch_size,epochs=epochs,verbose=1,validation_data=(valid_X,
valid_ground))
2 frames
/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in fit(self,
x, y, batch_size, epochs, verbose, callbacks, validation_split,
validation_data, shuffle, class_weight, sample_weight, initial_epoch,
steps_per_epoch, validation_steps, **kwargs)
950 sample_weight=sample_weight,
951 class_weight=class_weight,
--> 952 batch_size=batch_size)
953 # Prepare validation data.
954 do_validation = False
/ usr/local/lib/python3.6/dist-packages/keras/engine/training.py in
_standardize_user_data(self, x, y, sample_weight, class_weight,
check_array_lengths, batch_size)
787 feed_output_shapes,
788 check_batch_axis=False, # Don't enforce the batch size.
--> 789 exception_prefix='target')
790
791 # Generate sample-wise weight values given the
`sample_weight` and
/usr/local/lib/python3.6/dist-packages/keras/engine/training_utils.py in
standardize_input_data(data, names, shapes, check_batch_axis,
exception_prefix)
126 ': expected ' + names[i] + ' to have ' +
127 str(len(shape)) + ' dimensions, but got array '
--> 128 'with shape ' + str(data_shape))
129 if not check_batch_axis:
130 data_shape = data_shape[1:]
ValueError: Error when checking target: expected dense_1 to have 4
dimensions, but got array with shape (3456, 1)
Any help will be greatly appreciated.
Thank you
AI: You need to flatten your tensor before feeding it to the Dense layer.
from keras.layers import Flatten
def model(input_img):
conv1 = Conv2D(5, (3, 3), padding='same')(input_img) #28 x 28 x 32
conv1 = BatchNormalization()(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1) #14 x 14 x 32
relu1 = Activation('relu')(pool1)
drop1 = Dropout(rate = 0.5)(relu1)
conv2 = Conv2D(5, (3, 3), padding='same')(pool1) #14 x 14 x 64
conv2 = BatchNormalization()(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2) #7 x 7 x 64
# Added this layer to flatten the input to Dense layer
flattened = Flatten()(pool2)
relu2 = Activation('relu')(flattened)
drop2 = Dropout(rate=0.5)(relu2)
dense = Dense(2, activation='softmax')(drop2) # 28 x 28 x 1
return dense
Also, I suppose your target data is categorical (ndim=2), hence the Dense with output=2? I'm just noting this because it can also pose a problem if your target data has ndim=1 but your current model expects target data of ndim=2.
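If your labels are currently a vector of class ids of shape (3456,) or (3456, 1), one option is to one-hot encode them so they match the Dense(2, softmax) output; a sketch, assuming the labels are the integers 0 and 1:
from keras.utils import to_categorical
train_ground_cat = to_categorical(train_ground, num_classes=2)  # shape (3456, 2)
valid_ground_cat = to_categorical(valid_ground, num_classes=2)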
H: is perceptron based implementation of Logical conjunction (AND) regression or classification
A perceptron model could be used to implement a Logical conjunction (AND) operator.
In this case, is the problem regression or classification?
AI: The AND operator gives you an answer of True or False; its output is binary in nature.
Hence, it is a classification problem.
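A minimal sketch with scikit-learn's Perceptron learning the AND truth table as a binary classification task (labels 0/1):
import numpy as np
from sklearn.linear_model import Perceptron
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # logical AND
clf = Perceptron(max_iter=100).fit(X, y)
print(clf.predict(X))  # expected: [0 0 0 1]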
H: Is the activation function the only difference between logistic regression and perceptron?
As far as I know, logistic regression can be denoted as:
$$
f(x) = \sigma(w \cdot x + b)
$$
A perceptron can be denoted as:
$$
f(x) = \operatorname{sign} (w \cdot x + b)
$$
It seems that the only difference between logistic regression and a perceptron model is the activation function.
Is this correct?
AI: TL;DR: Yes and No; they're both similar decision function models but there's more to each model than their main formulation
One could use the logit function as the activation function of a perceptron and consider the output a probability. Yet, that value would likely need a probability calibration.
As with most ML models several things are very similar from model to model, on the other hand varying tiny parameters can result in a different model. Let's take on both sides:
Logit Regression and Perceptron similarities
The logit function is used in logit regression for its properties of being an S-curve, by default valued between 0 and 1. The sign activation function in the perceptron is also shaped like an "S-curve" (with very rough edges - so mathematically not an S-curve by its definition but with similar properties) valued "between" -1 and 1. Another activation function often used with perceptron is hyperbolic tangent (tanh), which is another S-curve - very similar to the sign function but with a rounded shape (and also valued between -1 and 1). We can say that tanh is similar to sign because:
$$
\texttt{sign}(x) \approx \texttt{tanh}(kx) \qquad \text{for k >> 0}\\
or\\
\texttt{sign}(x) = \texttt{tanh}(kx) \qquad k \to \infty
$$
So it makes sense to compare tanh with logit as an analogy to compare sign with logit.
Now the logit function is (the one used in logistic regression that is - not the statistical $ln(x/(1- x))$ one):
$$
\texttt{logit}(x) = \frac{L}{1 + e^{-k(x - b)}}
$$
Where $L$, $k$ and $b$ are parameters that we can steer. (Since $b$ is a bias term, an unknown constant, it does not matter if we write $- b$ or $b$)
And the hyperbolic tangent is:
$$
\texttt{tanh}(x) = \frac{e^{2x} - 1}{e^{2x} + 1}
$$
But wait, if we set the parameters of the logit as $L = 1$, $k = 1$ and $b = 0$, then:
$$
2 \cdot \texttt{logit}(2x) - 1 = 2 \cdot \frac{1}{1 + e^{-2x}} -1
= \frac{2e^{2x}}{e^{2x} + 1} - 1
= \frac{2e^{2x}}{e^{2x} + 1} - \frac{e^{2x} + 1}{e^{2x} + 1}
= \frac{2e^{2x} - e^{2x} - 1}{e^{2x} + 1}
= \frac{e^{2x} - 1}{e^{2x} + 1} = \texttt{tanh}(x)
$$
So tanh, which is more-or-less a rounded sign, is a scaled and shifted special case of logit. Not only do the two models differ essentially by the activation function alone, but the activation functions themselves are very similar to each other.
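A quick numerical check of the identity above (a small NumPy sketch):
import numpy as np
logit = lambda x: 1 / (1 + np.exp(-x))  # L = 1, k = 1, b = 0
x = np.linspace(-5, 5, 101)
assert np.allclose(2 * logit(2 * x) - 1, np.tanh(x))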
Logit Regression and Perceptron differences
Log probabilities (this is probably the major difference and the most important one)
We looked at the logit function above but in reality the logit regression takes the logarithm of the logit instead of plain values for the probabilities. Or, more exactly, in statistics the logit is defined as the natural logarithm of what in most ML implementations is defined as a logit decision function. This is very different from the perceptron which performs the output directly from the activation function.
Regularization
Both models (logit regression and perceptron) are often trained with some form of gradient descent optimizer (be it SGD, have it momentum, or even something else). This training will optimize all parameters ($w$ in your representation or $k$ in mine for logit and the weights in a perceptron) but the model itself will also be given a bunch of hyper-parameters. And in terms of hyper-parameters things do differ:
The perceptron will be trained by optimizing the weights (including a bias) and can be regularized with $L_2$ or $L_1$ (or a combination of both). One may or may not add a bias term.
The actual model of the logistic regression will optimize the logarithm but there's more: most implementations will include a scaling hyper-parameter ($C$) which will multiply the log probabilities. One can use $C$ to regularize the model apart from $L_1$ and/or $L_2$.
Multi-class
The perceptron is always a binary classifier. One sets an output threshold and that works as the decision function (for the sign or tanh functions the threshold is often $0$). For multi-class classification one must build One-Vs-Rest or One-Vs-One groups of models.
Logistic Regression can be used as a binary classifier, and in that case multi-class classification can be done with One-Vs-Rest and One-Vs-One methods. But there exists a formulation of logistic regression for direct multi-class classification:
$$
\texttt{multinomial logit}(x|y) = \frac{e^{-k_y(x - b)}}{\sum^{K}_{k=0} e^{-k_k(x - b)}} \qquad \text{for } y \in \{0,1,\dots,K\}
$$
i.e. whether $x$ belongs to class $y$. This is performed for each class and the result can be passed to a softmax function. This multi-class formulation can be performed because logit regression deals with log probabilities instead of direct inputs (contrary to a perceptron). |
H: Is it a acceptable way to write a loss function in this form?
I found a loss function of a perceptron on a book is in this form
$$ L(w,b) = - \sum\limits_{x_i \in M}y_i(wx_i+b) $$
In the context of machine learning loss functions, is it acceptable to put $x_i$ on the bottom of a summation notation?
AI: It just means to sum over all $x_i$ in $M$. That is completely acceptable notation. |
H: Exponential Linear Units (ELU) vs $log(1+e^x)$ as the activation functions of deep learning
It seems ELU (Exponential Linear Units) is used as an activation function for deep learning. But its' graph is very similar to the graph of $log(1+e^x)$. So why has $log(1+e^x)$ not been used as the activation functions instead of ELU?
In other words what is the advantage of ELU over $log(1+e^x)$?
AI: ReLU and all its variants (except ReLU-6) are linear, i.e. $y = x$, for values greater than or equal to 0.
This gives ReLU and specifically ELU an advantage like:
Linearity means that the slope does not plateau or saturate when $x$ becomes larger. Hence, the vanishing gradient problem is solved.
Now, the graph $ y = log( 1 + e^x ) $ isn't linear for values > 0.
For large negative values, the graph produces values which are very close to zero. This is also found in sigmoid, where inputs of large magnitude produce a fully saturated activation. Hence, $y = \log(1 + e^x)$ can raise the same problems that sigmoid and tanh suffer from.
About ELU:
ELU has an exponential curve for negative values, namely $y = \alpha(e^x - 1)$. It does not saturate for moderately negative values but does saturate for large negative values.
See here for more information.
Hence, $y = \log(1 + e^x)$ is not used because of its early saturation for negative values and also its non-linearity for values > 0. This may produce problems and lose some of the benefits that ReLU and its variants exhibit.
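To see both points numerically (a sketch; softplus is the usual name for $\log(1 + e^x)$):
import numpy as np
softplus = lambda x: np.log1p(np.exp(x))
elu = lambda x, a=1.0: np.where(x >= 0, x, a * (np.exp(x) - 1))
x = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
print(softplus(x))  # approx [4.5e-05, 0.31, 0.69, 1.31, 10.00005] -> saturates for negatives, nonlinear for x > 0
print(elu(x))       # approx [-0.99995, -0.63, 0.0, 1.0, 10.0]    -> exactly x for x >= 0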
H: A substitute formula for MSE
I don't understand where this formula for Mean Squared Error is coming from.
How do we arrive at:
$$MSE = \frac{1}{m}||y' - y||_2^2$$
from:
$$MSE = \frac{1}{m}\cdot\sum_i(y'_{i} - y_{i})^2$$
(The source is deeplearningbook)
AI: We have $$\|x\|_2=\sqrt{\sum_{i=1}^n x_i^2}$$
Hence $$\|x\|_2^2=\sum_{i=1}^n x_i^2$$
Now let $x=y'-y$ and you obtain your formula. |
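A one-line numerical check of this identity (a NumPy sketch with made-up vectors):
import numpy as np
y_hat, y = np.array([1.0, 2.0, 3.0]), np.array([1.5, 1.0, 4.0])
m = len(y)
assert np.isclose(np.sum((y_hat - y) ** 2) / m,
                  np.linalg.norm(y_hat - y, 2) ** 2 / m)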
H: one-hot-encoding categorical data gives error
I am currently working on the Boston problem hosted on Kaggle. The dataset is nothing like the Titanic dataset. There are many categorical columns and I'm trying to one-hot-encode these columns. I've decided to go with the column MSZoning to get the approach working and work out a strategy to apply it to other categorical columns. This is a small snippet of the dataset:
Here are the different types of values present in MSZoning, so obviously integer encoding only would be a bad idea:
['RL' 'RM' 'C (all)' 'FV' 'RH']
Here is my attempt on Python to assign MSZoning with the new one-hot-encoded data. I do know that one-hot-encoding turns each value into a column of its own and assigns binary values to each of them so I realize that this isn't exactly a good idea. I wanted to try it anyways:
import pandas as pd
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
train = pd.read_csv("https://raw.githubusercontent.com/oo92/Boston-Kaggle/master/train.csv")
test = pd.read_csv("https://raw.githubusercontent.com/oo92/Boston-Kaggle/master/train.csv")
labelEncoder = LabelEncoder()
train['MSZoning'] = labelEncoder.fit_transform(train['MSZoning'])
train_OHE = OneHotEncoder(categorical_features=train['MSZoning'])
train['MSZoning'] = train_OHE.fit_transform(train['MSZoning']).toarray()
print(train['MSZoning'])
Which is giving me the following (obvious) error:
C:\Users\security\Anaconda3\lib\site-packages\sklearn\preprocessing\_encoders.py:392: DeprecationWarning: The 'categorical_features' keyword is deprecated in version 0.20 and will be removed in 0.22. You can use the ColumnTransformer instead.
"use the ColumnTransformer instead.", DeprecationWarning)
Traceback (most recent call last):
File "C:/Users/security/Downloads/AP/Boston-Kaggle/Boston.py", line 11, in <module>
train['MSZoning'] = train_OHE.fit_transform(train['MSZoning']).toarray()
File "C:\Users\security\Anaconda3\lib\site-packages\sklearn\preprocessing\_encoders.py", line 511, in fit_transform
self._handle_deprecations(X)
File "C:\Users\security\Anaconda3\lib\site-packages\sklearn\preprocessing\_encoders.py", line 394, in _handle_deprecations
n_features = X.shape[1]
IndexError: tuple index out of range
I did read through some Medium posts on this but they didn't exactly relate to what I was trying to do with my dataset as they were working with dummy data with a couple of categorical columns. What I want to know is, how do I make use of one-hot-encoding after the (attempted) step?
AI: First of all, I noticed you have loaded the same dataframe for both train and test. Change the code like this:
import numpy as np
import pandas as pd
train = pd.read_csv("https://raw.githubusercontent.com/oo92/Boston-Kaggle/master/train.csv")
test = pd.read_csv("https://raw.githubusercontent.com/oo92/Boston-Kaggle/master/test.csv")
At this point, one-hot encode each variable you want with pandas' get_dummies() function:
# One-hot encode a given variable
OHE_MSZoning = pd.get_dummies(train['MSZoning'])
It will be returned as a pandas dataframe. In my Jupyter Notebook it looks like this:
OHE_MSZoning.head()
You can repeat the same command for all the variables you want to one-hot encode.
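To attach the encoded columns back onto the dataframe and drop the original column, something like this works (a sketch; drop_first=True avoids the redundant dummy column if your downstream model is linear):
train = pd.concat([train.drop(columns=['MSZoning']),
                   pd.get_dummies(train['MSZoning'], prefix='MSZoning', drop_first=True)],
                  axis=1)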
Hope this helps, otherwise let me know. |
H: How to create feature representation?
Let's say I've a dataset with 800 rows(40 entries for each of 20 users). The entries are user session log ( columns are - browser, os, time, date etc for a specific session).
Now each user has unique id (1-20). Let's say user_id=1 is special one and I need to detect it whenever new data comes.
So for new data I need to predict whether that session is of user_id=1 or not.
My question is How to do that ?
One way I thought of is to make a feature representation for each of the 20 users, and whenever new data comes, take the distance of the data from each representation and pick the minimum distance.
But the problem is when I make a unique feature representation for each user, how to deal with columns like browser, os - because a user can have used multiple browsers in all those 40 sessions ?
AI: If I understand correctly, you want to create features? There are a few ways to do this. I will first talk about Label Encoding and One-Hot Encoding (the link is the first I found on Google). Which one to use depends on the model, as I will explain below.
Since we want to use categorical data in a model, most models behave better with numerical data. So let's convert categorical -> numeric:
import pandas as pd
from sklearn.preprocessing import LabelEncoder
df = pd.DataFrame({'user':['adam','adam','susie','jane'],
'browser':['chrome','firefox','chrome','opera']})
browser user
0 chrome adam
1 firefox adam
2 chrome susie
3 opera jane
label_encoder = LabelEncoder()
# convert browser to numeric
df['browser_encoded'] = label_encoder.fit_transform(df['browser'])
browser user browser_encoded
0 chrome adam 0
1 firefox adam 1
2 chrome susie 0
3 opera jane 2
In the "real world" you would drop browser column as its now encoded. I left it to show you what's happening.
Now notice how our data is now numeric. But now there is a problem. If you're using something like multiple linear regression, you've just assigned a higher weight to the Opera browser since it's a higher number. (2>1 and 2>0). But since this is categorical, and Opera isn't "weightier" than other browsers in our model, we need to solve this. This is where One Hot Encoding comes in to play.
pd.concat([df,pd.get_dummies(df['browser'])], axis=1)
browser user browser_encoded chrome firefox opera
0 chrome adam 0 1 0 0
1 firefox adam 1 0 1 0
2 chrome susie 0 1 0 0
3 opera jane 2 0 0 1
Now we have our data in columns. The value 1 appears when someone has used that browser for that record. See the first record, where the chrome column is 1? That's because adam used 'chrome' in the browser column.
There is one last thing when it comes to one-hot encoding! You don't want to fall into the dummy variable trap with certain models. The basic logic is: if you know the browser is not Chrome or Firefox, it must be Opera, and if you know the browser is Chrome, you know that it is not Firefox or Opera. So you only need N-1 columns. If you use all the columns, your model may not perform well. You need to drop one of the columns, and you can do that with drop_first=True.
pd.concat([df,pd.get_dummies(df['browser'], drop_first=True)], axis=1)
browser user browser_encoded firefox opera
0 chrome adam 0 0 0
1 firefox adam 1 1 0
2 chrome susie 0 0 0
3 opera jane 2 0 1
Another way to create features is to make them yourself. You mentioned that a user can use more than one browser. I'm making something up for the sake of a simple example - but maybe the number of browsers they use can be a feature? You can create one like so:
df['num_of_browsers_used'] = df.groupby('user')['browser'].transform('nunique')
browser user browser_encoded num_of_browsers_used
0 chrome adam 0 2
1 firefox adam 1 2
2 chrome susie 0 1
3 opera jane 2 1 |
H: Conditional Multivariate Gaussian Distribution - Section 2.3, equation 2.74
Please refer section 2.3 (page 86-87) in Pattern Recognition and Machine Learning - Bishop
$$\mathit{N}(\mathbf{x}|\mathbf{\mu}, \Sigma)$$
where
$
\mathbf{x} =
\begin{bmatrix}
x_{a} \\
x_{b}
\end{bmatrix}
$,
$
\mathbf{\mu} =
\begin{bmatrix}
\mu_{a} \\
\mu_{b}
\end{bmatrix}
$,
$
\mathbf{\Sigma} =
\begin{bmatrix}
\Sigma_{aa} && \Sigma_{ab}\\
\Sigma_{ba} && \Sigma_{bb}
\end{bmatrix}
$
The equation below expresses the Quadratic term of exponent in the Bivariate Gaussian probability distribution (ref. book eq. 2.70; ignoring $-\frac{1}{2}$):
$$(x - \mu)^{T}\Sigma^{-1}(x - \mu) = (x_a - \mu_a)^{T}\Lambda_{aa}(x_a - \mu_a)
+(x_a - \mu_a)^{T}\Lambda_{ab}(x_b - \mu_b)
+(x_b - \mu_b)^{T}\Lambda_{ba}(x_a - \mu_a)
+(x_b - \mu_b)^{T}\Lambda_{bb}(x_b - \mu_b)$$
Author states "..conditional distribution $p(x_{a} | x_{b})$ can be
evaluated from the the joint distribution $p(x) = p(x_{a}, x_{b})$ by
fixing $x_b$ to the observed value and normalizing the resulting
expression to obtain the valid probability distribution over $x_{a}$
.." and then filters out only the linear terms with $x_{a}$ and $x_{a}^{T}$
component. The result is presented in equation 2.74 as:
$$x_{a}^{T}\{\Lambda_{aa}\mu_{a} - \Lambda_{ab}(x_b - \mu_b)\}$$
I follow the steps, and see that above is due to $x_{a}^{T}$, $x_{a}$ in the first 2 terms of equation 2.70 - where "term(s)" refer to the components in 2.70 added using $+$ operator.
However, I seem to be getting additional elements in $x_a$, in excess of those present in 2.74, due to the linear $x_a$ factor in the $3^{rd}$ term (of equation 2.70). Excess term(s): $$(x_{b} - \mu_{b})^{T}\Lambda_{ba}x_a$$
Where am I going wrong? Please guide.
Note: I am taking $x_a$ too to be a linear component, whereas 2.74 is (seemingly) considering only $x_{a}^{T}$ as the linear term. It seems like I am losing track somewhere in the middle (perhaps in equation 2.71; the related extract is below):
$$ -\frac{1}{2}(x - \mu)^{T}\Sigma^{-1}(x - \mu) = -\frac{1}{2}{x^{T}\Sigma^{-1}x} + {x^{T}\Sigma^{-1}\mu} + constant \ldots eq. 2.71$$
...where 'const' denotes terms which are independent of ${x}$, and we have made use of the symmetry of $\Sigma$.
AI: \begin{align}&-\frac12(x - \mu)^{T}\Sigma^{-1}(x - \mu) \\&= -\frac12(x_a - \mu_a)^{T}\Lambda_{aa}(x_a - \mu_a) \tag{1}
\\&-\frac12(x_a - \mu_a)^{T}\Lambda_{ab}(x_b - \mu_b) \tag{2}
\\&-\frac12(x_b - \mu_b)^{T}\Lambda_{ba}(x_a - \mu_a) \tag{3}
\\&-\frac12(x_b - \mu_b)^{T}\Lambda_{bb}(x_b - \mu_b) \tag{4}\end{align}
Now let's focus on $(1)$, the linear part in $x_a$ are
$$-\frac12 (-x_a^T\Lambda_{aa}\mu_a - \mu_a^T\Lambda_{aa}x_a)=x_a^T\Lambda_{aa}\mu_a \tag{5}$$
Now focus on $(2)$, the linear part in $x_a$ is
$$-\frac12x_a^T\Lambda_{ab}(x_b-\mu_b) \tag{6}$$
Now focus on $(3)$, the linear part in $x_a$ is
$$-\frac12(x_b-\mu_b)^T\Lambda_{ba}x_a=-\frac12x_a^T\Lambda_{ab}(x_b-\mu_b) \tag{7}$$
There is no linear term of $x_a$ in $(4)$.
Adding $(5)$ to $(7)$, we have
\begin{align}x_a^T\Lambda_{aa}\mu_a-\frac12x_a^T\Lambda_{ab}(x_b - \mu_b)-\frac12x_a^T\Lambda_{ab}(x_b - \mu_b)&=x_a^T\Lambda_{aa}\mu_a-x_a^T\Lambda_{ab}(x_b - \mu_b)\\
&=x_a^T(\Lambda_{aa}\mu_a -\Lambda_{ab}(x_b-\mu_b))\end{align} |
H: How to divide 7 non-continuous result values into 5 legend categories?
I'm making a map which displays score results for river reaches. The score results are integers from 1 to 7.
Now I want to make a legend and color code my results. However, I don't want to use too many color shades as the human eye/brain is not really able to distinguish clearly between more than +/- 5 shades.
So my question is: how do I divide my 7 (non-continuous) score results into 5 categories for my legend?
I thought of the following options:
legend 1:
result = 1 - 2
result = 3
result = 4
result = 5
result = 6 - 7
legend 2:
result <= 1.4
1.4 < result <= 2.8
2.8 < result <= 4.2
4.2 < result <= 5.6
result > 5.6
My issue with the first legend is that I would be visually tricking my audience into thinking that the extreme values are more prevalent than they actually are.
My issue with the second legend is that my result values are not continuous, so in reality the second legend looks like this:
result = 1
result = 2
result = 3 - 4
result = 5
result = 6 - 7
So again, my data will be visually skewed and deceiving my audience.
What are proper ways to handle this type of situation? Or any similar situation for that matter (for instance for 9 or 11 or 13 result values)?
Thanks!
AI: You do not have to use shades of the same colour. You can keep 7 colours if they have sufficient contrast between them.
The "Gradient" and "Indexed" colour schemes of ColorData may be be used to select the 7 colours.
Some examples include.
In Wolfram Language
BarLegend[{#, {0, 7}}, Range@7,
LegendLabel -> #,
LegendLayout -> "Row"] & /@
{"BrightBands", "DarkBands", "Rainbow", "LightTemperatureMap", "TemperatureMap", "ThermometerColors"} //
Multicolumn[#, 3] &
BarLegend[{#, {0, 7}}, Range@7,
LegendLabel -> Last@ColorData[#, "AlternateNames"] <> " (" <> ToString@# <> ")",
LegendLayout -> "Row"] & /@
{96, 109, 110, 111} //
Multicolumn[#, 2] &
The ColorFunctionScaling and ColorFunction options will need to be set to use the scheme.
BarChart[Range@7,
ColorFunctionScaling -> False,
ColorFunction -> ColorData[{"TemperatureMap", {0, 7}}]
]
For a large number of discrete values a "Gradient" colour scheme can be used without restricting the number of contours.
For example
Plot[x, {x, 0, 41},
ColorFunctionScaling -> False,
ColorFunction -> ColorData[{#, {0, 41}}],
PlotLabel -> #,
Filling -> Axis,
PlotLegends -> Automatic,
ImageSize -> Medium
] & /@ {"TemperatureMap", "ThermometerColors"} //
Column
Hope this helps. |
H: How to predict specific user from session logs?
Let's say I have a dataset with 800 rows (40 entries for each of 20 users). The entries are user session logs (columns are - browser, os, time, date etc for a specific session).
Now each user has unique id (1-20). Let's say user_id=1 is special one and I need to detect it whenever new data comes.
So for new data I need to predict whether that session is of user_id=1 or not.
My question is: how can I do that?
One way I thought of is to make a feature representation for each of the 20 users and whenever new data comes in, take the distance of the data from each of feature and see the minimum distance.
However, the problem is that when I make a unique feature representation for each user, how do I deal with the columns like browser, os - because a user can have used multiple browsers in all those 40 sessions?
AI: There are certainly different ways to approach the problem, I am giving you what I think.
The main key is to create as many features as you can; well, it depends on what algorithm you end up using, but still, more features won't hurt. After all, you want to differentiate each user, so your feature space should be rich enough to distinguish them from each other. I think you are on the right track. For example, many features can be extracted from date (weekday, weekend, month, etc.), or from the browser you mentioned: if there are multiple browsers you can create a new feature num_browsers (the count of distinct browsers that user uses). You know your data best. Are sessions equally spaced in time or not? For instance, it could be that user_one exits a session faster than user_two; then the total time spent per session for each user could be another feature (an engagement factor). You mentioned time in your columns, but I do not know exactly what it represents. You can even take time_of_day as another feature (morning, mid-day, afternoon, evening, etc.).
You also need to clarify in your question whether the number of users is fixed for a specific period of time or constantly changing.
After addressing key points about data and features, I am thinking of two algorithmic approaches:
Similarity-based Method:
This is exactly what you mentioned, and it is a valid approach, perhaps the fastest and most straightforward one. As mentioned above, you can make new features like num_browsers for the multiple-browser entries a user has across those 40 sessions, which was your major concern. I am more concerned that you will naturally have a mix of numerical and categorical features, and I am not sure plain kNN (with a Euclidean metric) would work well in that case! It doesn't hurt to try, though, after encoding the categorical features and normalizing the continuous ones; but check this post, or another one, or at least be aware of the pitfalls!
Classification-based Method:
You could even turn this into a multiclass classification problem, i.e. classifying which user the new incoming session belongs to based on the features. The challenge is that the quality of your classifier strongly depends on your feature space. The choice of algorithm matters less (even kNN can be used again, but perhaps decision-tree ensembles, i.e. RandomForest, would be better). If you don't have a good set of distinctive features, a classifier cannot do magic! This is also true for kNN, but since kNN doesn't need training, you may get more reasonable results from it than from a trained classifier.
P.S. This method would be more suitable if the number of users is fixed. It would still be possible to train a classifier if the number of users is NOT fixed, but then you need to re-train whenever you have a new user or the user_ids change; otherwise your algorithm has not seen such a user and will likely miss it!
I would personally go with the Similarity-based Method and try to enhance my features as much as I can. |
H: What should I study to find optimal value of best feature combinations in machine learning?
I would like to do production optimization with machine learning and/or optimization problem.
My goal is not only to minimize the loss in a loss function to give the best y value. My ultimate goal is to find the optimal values of the best feature combination. In other words, I would like to find the best x values (feature values) that yield the best y (minimizing loss). For example, 30 (temp), 100 (pressure), 50 (weight) gave the best product y, i.e. the actual best set of x feature values.
Supervised machine learning techniques do not give me the best x values, but I would like to find that x. Therefore, my question is as below:
What is the field for finding optimal x values for optimal y values called? Are continuous and discrete optimization the kinds of methods that find such x values for y?
I found many studies but I am not sure if it is really about finding the x values (the optimal values of the best combination of features). Can someone please tell me what field I must study for this?
What kind of techniques should I use to find what I want?
AI: Welcome to DS.SE!
As you already mentioned, machine learning is not about finding the best values of features regarding a function. Also, loss functions usually measure the errors of predictions and not the quality of the feature values (e.g. " the best product y").
What you are looking for is called optimization or search algorithms but not learning or prediction. If the search space (i.e. the combination of all possible feature values) is not that large, you can just perform an exhaustive search (trying all combination). If the search space is large, then smarter approaches might be useful. These techniques need a function (usually called objective function) to optimize (finding the combination of input values that results in a min/max outcome).
In general, there are two approaches: mathematical modeling (e.g. linear programming) and metaheuristics (e.g. evolutionary algorithms and swarm intelligence). Mathematical models can be fast and guarantee finding the optimum when the objective function is well-defined and the underlying assumptions are met (you may need to simplify your constraints sometimes). If these conditions are not met, then metaheuristics can usually find a set of approximate solutions.
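For continuous features, one simple sketch is to treat a fitted model's prediction as the objective function and hand it to a general-purpose optimizer; here scipy.optimize.minimize is used, and the model, bounds, and starting point are placeholders/assumptions:
import numpy as np
from scipy.optimize import minimize
# 'model' is assumed to be any fitted regressor with a .predict method
def objective(x):
    return -model.predict(x.reshape(1, -1))[0]  # negate to maximize the predicted y
bounds = [(20, 40), (50, 150), (10, 100)]  # e.g. temp, pressure, weight ranges (made up)
x0 = np.array([30.0, 100.0, 50.0])
result = minimize(objective, x0, bounds=bounds, method='L-BFGS-B')
print(result.x)  # candidate feature values that maximize the predicted y
Note that gradient-based optimizers can stall on piecewise-constant models such as tree ensembles; in those cases a grid search or a metaheuristic is often the better fit.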
H: what is the difference between multilabel and multilabel-multiclass classification?
I am trying to classify news articles into their required category. However I am confused by the above(multilabel and multilabel-multiclass) terminologies. My dataset consists of 2 csv files. The first file contains content of the news article. The second file in that same corresponding row number consists of labels associated with it. Here is the sample entries in second file i.e. labels file:
So for row number 1, the first CSV file contains a news article which is labelled as sport and football. Similarly, for row number 2, the article in the first CSV has the labels sport and media. My question here is: is this dataset multi-label or multilabel-multiclass? What is the difference between them?
AI: Multiclass classification means a classification task with more than two classes; e.g., classify a set of images of fruits which may be oranges, apples, or pears. Multiclass classification makes the assumption that each sample is assigned to one and only one label: a fruit can be either an apple or a pear but not both at the same time.
Multilabel classification assigns to each sample a set of target labels. This can be thought as predicting properties of a data-point that are not mutually exclusive, such as topics that are relevant for a document. A text might be about any of religion, politics, finance or education at the same time or none of these.
Multioutput-multiclass (or multilabel-multiclass) classification means that a single estimator has to handle several joint classification tasks. This is both a generalization of the multi-label classification task, which only considers binary classification, as well as a generalization of the multi-class classification task. The output format is a 2d numpy array or sparse matrix.
As each object in your data can be assigned one or more labels at the same time, it is a simple multilabel classification problem.
Taken from : https://scikit-learn.org/stable/modules/multiclass.html |
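A sketch of how such label sets are usually prepared in scikit-learn (assuming the second CSV has been read into one list of labels per article):
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.ensemble import RandomForestClassifier
labels = [['sport', 'football'], ['sport', 'media']]  # one list per article
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)  # binary indicator matrix, one column per label
print(mlb.classes_)            # ['football' 'media' 'sport']
# X_features would be your vectorized article text (e.g. TF-IDF), shape (n_articles, n_features)
# clf = RandomForestClassifier().fit(X_features, Y)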
H: Machine learning model using keywords for binary decision
I have some experience from Uni with convolutional NNs and edge detection, but haven't explored the other types of machine learning models much.
I was wondering if there might be one that is suited to using certain textual keywords (for example, words parsed from internet news sources) in order to arrive at some binary, yes/no decision?
AI: The most obvious binary classification task coming from text data is sentiment analysis. Kaggle has plenty of datasets, such as this one on movie reviews.
Sentiment classifiers can be either very simple, like logistic regression or naive Bayes classifiers, or very complex, such as the ones based on RNNs and/or part-of-speech tagging models.
If you are interested in news specifically, similar models can be used to estimate news polarization. |
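For a concrete starting point, a minimal bag-of-words pipeline like the sketch below is often enough for a first yes/no classifier; the headlines and labels here are made up:

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Made-up headlines and binary targets (1 = yes, 0 = no)
headlines = [
    "stocks rally after earnings beat",
    "ceo resigns amid fraud probe",
    "company announces record profits",
    "regulator opens investigation into bank",
]
y = [1, 0, 1, 0]

# TF-IDF turns the keywords into numeric features; logistic regression makes the yes/no call
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, y)

print(model.predict(["profits beat expectations"]))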
H: Classification for 'not something' Neural Networks
This is a very basic question about neural networks in general.
How do you have a classification for 'not something'?
Example: Say that you have to determine the accuracy of a certain simulation by comparing it to the actual scenario (in this case, comparing simulated and actual galaxy images). Theoretically, I can make a CNN and classify these images as 'galaxies' and 'not galaxies'. But what kind of images do I need to use for 'not galaxies'? I only have images for true galaxies.
AI: Your problem resembles the learning task of 'one-class classification', otherwise known as anomaly detection. Essentially, you have a set of images all belonging to one class (galaxies), and you want a model to tell you if a new image is a galaxy or not, without providing any counter-examples to train with.
You should try a recently developed method called Deep Support Vector Data Description. In this method, a transformation is learned by a neural network to map the inputs to points contained in a small hypersphere in latent space. Then, examples falling inside the hypersphere can be considered 'galaxies', and outside can be considered 'not galaxies'.
You can read more about it here (quite technical), or just try to use the authors' PyTorch implementation here. |
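If you want a simpler baseline before committing to Deep SVDD, scikit-learn's One-Class SVM implements the same one-class idea on flattened images or pre-extracted image features; the feature matrix below is random placeholder data:

import numpy as np
from sklearn.svm import OneClassSVM

# Placeholder: rows are feature vectors extracted from your galaxy images
X_galaxies = np.random.rand(200, 64)

# nu roughly controls the fraction of training points allowed outside the boundary
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
ocsvm.fit(X_galaxies)

# +1 = looks like a galaxy, -1 = does not
X_new = np.random.rand(5, 64)
print(ocsvm.predict(X_new))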
H: Partial Derivative - Pattern Recognition Bishop section 2.3
Please refer to page $93$, Pattern Recognition book by Bishop.
This question is related to partial derivative of a term, the proof refers to appendix C.19 which states:
$$\frac{\partial({a^{T}x})}{\partial{x}} = \frac{\partial({x^{T}a})}{\partial{x}} = a$$
and then moves on to take partial of normal probability distribution w.r.t. $\mu$ to get (eqn 2.120):
$$\frac{\partial(x_{n} - \mu)^{T}\Sigma^{-1}(x_{n} - \mu)}{\partial{\mu}} = (x_{n} - \mu)$$
I am unable to see how we get this term. I tried checking elsewhere, but the explanation "consider $(x_{n}-\mu)$ to be a scalar and $(x_{n}-\mu)^{T}$ a vector" does not resonate with me: as per my understanding both are vectors (since $x_{n}$ is a vector), and it is their dot product that results in a scalar.
I tried solving it step-by-step:
Expand the term, ignoring $\Sigma^{-1}$ (contains no $\mu$ term) which gives:
$$\frac{\partial{(x_{n}^{T}x_{n} - x_{n}^{T}\mu - \mu^{T}x_{n} + \mu^{T}\mu)}}{\partial{\mu}}$$
As per my understanding $\frac{\partial}{\partial{\mu}}$ of: $x_{n}^{T}x_{n} = 0$; of $-x_{n}^{T}\mu = -\mu^{T}x_{n} = -\mu$ but, not sure about $\mu^{T}\mu$
Given the context, I would like help with these partials:
Is $\frac{\partial}{\partial{\mu}}(x_{n}^{T}x_{n}) = 0$ and $\frac{\partial}{\partial{\mu}}(-x_{n}^{T}\mu) = \frac{\partial}{\partial{\mu}}(-\mu^{T}x_{n}) = -\mu$?
What is $\frac{\partial}{\partial{\mu}}(\mu^{T}\mu)$?
I have some understanding of calculus, so please give just the minimum guidance that will help me derive equation 2.120.
Magnus & Neudecker will take time (450+ pages) and so I may have to gallop over another partial on page 94 $$ \frac{\partial{(x_{n}-\mu)^{T}\Sigma^{-1}(x_{n}-\mu)}}{{\partial{\Sigma}}}$$
AI: $$\frac{\partial}{\partial \mu} (x_n^Tx_n)=0, $$$$\frac{\partial}{\partial \mu}( -\mu^Tx_n)=-x_n$$
$$\frac{\partial }{\partial \mu}(\mu^T\mu) = 2\mu$$
Note that if $A$ is symmetric, then $$\frac{\partial }{\partial y}(y^TAy)=2Ay.$$
\begin{align}\frac{\partial }{\partial \mu}\left(-\frac12 (x_n-\mu)^T\Sigma^{-1}(x_n-\mu) \right)&=-\frac12\frac{\partial }{\partial \mu}\left( (\mu-x_n)^T\Sigma^{-1}(\mu-x_n) \right)\\
&=-\frac12\left(2\Sigma^{-1}(\mu-x_n) \right)\\
&= \Sigma^{-1}(x_n-\mu)\end{align} |
H: How to determine input shape in keras?
I am having difficulty finding where my error is while building deep learning models, but I typically have issues when setting the input layer input shape.
This is my model:
model = Sequential([
Dense(32, activation='relu', input_shape=(1461, 75)),
Dense(32, activation='relu'),
Dense(ytrain.size),])
It is returning the following error:
ValueError: Error when checking input: expected dense_1_input to have 3
dimensions, but got array with shape (1461, 75)
The array is the training set from the kaggle housing price competition and my dataset has 75 columns and 1461 rows. My array is 2 dimensional, so why are 3 dimensions expected? I have tried adding a redundant 3rd dimension of 1 or flattening the array before the first dense layer but the error simply becomes:
ValueError: Input 0 is incompatible with layer flatten_1: expected
min_ndim=3, found ndim=2
How do you determine what the input size should be and why do the dimensions it expects seem so arbitrary?
For reference, I attached the rest of my code:
xtrain = pd.read_csv("pricetrain.csv")
test = pd.read_csv("pricetest.csv")
xtrain.fillna(xtrain.mean(), inplace=True)
xtrain.drop(["Alley"], axis=1, inplace=True)
xtrain.drop(["PoolQC"], axis=1, inplace=True)
xtrain.drop(["Fence"], axis=1, inplace=True)
xtrain.drop(["MiscFeature"], axis=1, inplace=True)
xtrain.drop(["PoolArea"], axis=1, inplace=True)
columns = list(xtrain)
for i in columns:
if xtrain[i].dtypes == 'object':
xtrain[i] = pd.Categorical(pd.factorize(xtrain[i])[0])
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
for i in columns:
if xtrain[i].dtypes == 'object':
xtrain[i] = le.fit_transform(xtrain[i])
ytrain = xtrain["SalePrice"]
xtrain.drop(["SalePrice"], axis=1, inplace=True)
ytrain = ytrain.values
xtrain = xtrain.values
ytrain.astype("float32")
size = xtrain.size
print(ytrain)
model = Sequential(
[Flatten(),
Dense(32, activation='relu', input_shape=(109575,)),
Dense(32, activation='relu'),
Dense(ytrain.size),
])
model.compile(loss='mse', optimizer='adam')
model.fit(xtrain, ytrain, epochs=10, verbose=1)
Any advice would be incredibly helpful!
Thank you.
AI: The number of rows in your training data is not part of the network's input shape, because training feeds the network in batches of batch_size samples rather than the whole dataset at once. Accordingly, input_shape for the first layer does not include the batch dimension. You can read more on this here.
So, the input shape for your problem will be:
input_shape=(75, ) |
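Putting that together, a corrected version of the model could look like the sketch below, reusing the xtrain and ytrain arrays prepared above; a single output unit is used because the target is one sale price per sample:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    # 75 features per row; the 1461 rows are fed in batches, so they are not part of the shape
    Dense(32, activation='relu', input_shape=(75,)),
    Dense(32, activation='relu'),
    Dense(1),  # one predicted price per sample
])
model.compile(loss='mse', optimizer='adam')
model.fit(xtrain, ytrain, epochs=10, verbose=1)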
H: Which plot to use for data spanned on multiple years?
I have a dataset of the number of accidents that happen in each state. The dataset spans from 2001 to 2014.
Which plot should I use to represent this information - the number of accidents in each state per year - in one plot?
AI: For time data, you should often use a simple line chart, with time on the X axis and the time-dependent variable (here the number of accidents) on the Y axis. Since you have several states to plot, you can simply draw one line for each state. To differentiate them, you can use a visual variable: color, width, pattern... Color is probably the most relevant here. You can also write the name of each state near its line.
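If you are working in Python with pandas, a sketch could look like this (the column names year, state and accidents are assumptions about your dataset):

import pandas as pd
import matplotlib.pyplot as plt

# df has one row per (state, year) with the accident count
pivot = df.pivot(index="year", columns="state", values="accidents")

# One line per state, year on the x-axis
pivot.plot(figsize=(10, 6))
plt.ylabel("Number of accidents")
plt.show()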
H: How to automate the encoding process?
I am working on the Boston challenge hosted on Kaggle and I'm still refining my features. Looking at the dataset, I realize that some columns need to be encoded in binary, some encoded as ordinal integers (ranked on a scale of n) and some need to be one-hot-encoded. I've collected these columns and categorized them in distinct lists (at least based on my judgement of how their data should be encoded):
categorical_columns = ['MSSubClass', 'MSZoning', 'Alley', 'LandContour', 'Neighborhood', 'Condition1', 'Condition2',
'RoofStyle', 'RoofMatl', 'Exterior1st', 'Exterior2nd', 'MasVnrType', 'Foundation', 'Heating',
'Functional', 'GarageType', 'PavedDrive', 'SaleType', 'SaleCondition']
binary_columns = ['Street', 'CentralAir']
ranked_columns = ['LotShape', 'Utilities', 'LandSlope', 'ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond',
'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2', 'HeatingQC', 'KitchenQual', 'FireplaceQu',
'GareFinish', 'GarageQual', 'GarageCond', 'PoolQC', 'Fence', 'MiscFeature']
One fellow stackexchange user suggested that I use pandas.get_dummies() method to one-hot-encode categorical variables like MSZoning and attach it to a variable like this:
OHE_MSZoning = pd.get_dummies(train['MSZoning'])
I'd like to know how I can automate this process using functions and control-flow statements like for-loop.
AI: I'm the fellow Stackexchange user, hi! I wrote the function that iterates the one-hot encoding on all your categorical_columns:
def serial_OHE(df, categorical_columns):
    # iterate on each categorical column
    for col in categorical_columns:
        # one-hot encode, prefixing the new columns with the source column name
        # so that columns sharing category values do not clash
        OHE_sdf = pd.get_dummies(df[col], prefix=col)
        # drop the old categorical column from the original df
        df = df.drop(col, axis=1)
        # attach the one-hot encoded columns to the original dataframe,
        # keeping the column names (so no ignore_index here)
        df = pd.concat([df, OHE_sdf], axis=1)
    return df
So you can call it like this:
df = serial_OHE(df, categorical_columns)
Let me know if there are any problems.
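Note that pandas can also do all of this in a single call, which leaves the non-categorical columns untouched and prefixes each dummy column with its source column name:

df = pd.get_dummies(df, columns=categorical_columns)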
H: Transfer value of one column to another column into a new column based on condition
I have a dataframe with 2 relevant columns.
+-------------+---------------+
| Bezeichnung | Artikelgruppe |
+-------------+---------------+
| A | 1 |
| B | 2 |
| C | 3 |
| D | 4 |
+-------------+---------------+
I want to paste the value of Bezeichnung into a new column (new_col) for all values of the column Artikelgruppe that are equal to 0.
AI: This can be solved in a number of ways. One of them is:
df['new_col']=df['Bezeichnung'][df['Artikelgruppe']==0]
This results in a new column that takes the value of Bezeichnung wherever Artikelgruppe is 0, and NaN everywhere else. The NaN values can easily be replaced at any point.
+-------------+---------------+---------+
| Bezeichnung | Artikelgruppe | new_col |
+-------------+---------------+---------+
| A | 1 | NaN |
| B | 2 | NaN |
| C | 0 | C |
| D           | 4             | NaN     |
+-------------+---------------+---------+ |
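An equivalent one-liner with numpy, which makes the condition and the fill value explicit:

import numpy as np
df['new_col'] = np.where(df['Artikelgruppe'] == 0, df['Bezeichnung'], np.nan)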
H: How to scale exponential data for a regression problem?
I understand that I should be scaling features between (0, 1) before feeding them into a neural network. However, what happens if future data could be larger than my current training data? For instance, if I am training a RNN on time-series data to perform demand forecasting, the product I am forecasting may be in a growth stage that will yield higher demand numbers in the future months than the past months. Is there a way to normalize data but still allow for larger values in the future?
AI: Yes, there is. Instead of min-max scaling, which squeezes any distribution into the [0, 1] interval, you can standardize the variables (in statistics, the results are called z-scores). The formula is:
(x - mean(x)) / stdev(x)
This will zero-center the distribution, and scale all the variables to a standard deviation = 1.
I strongly prefer this technique to min-max scaling, since the latter is too sensitive to outlier observations and causes problems with unseen, out-of-scale datapoints.
Additionally, if your analysis allows for that, consider taking the log() of a data distribution that grows exponentially. That would coerce it to a more linear growth. (I'm not sure this is what you need, it's just a possibility.) |
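A minimal sketch of both ideas, assuming train and future are numpy arrays (or dataframes) holding your demand series:

import numpy as np

# Fit the scaling parameters on the training period only
mean, std = train.mean(axis=0), train.std(axis=0)
train_scaled = (train - mean) / std

# Future data may exceed the training range; the same mean/std are simply reused
future_scaled = (future - mean) / std

# Optional: log-transform an exponentially growing series before scaling
train_log = np.log1p(train)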
H: What happens when scikit-learn does a Lasso Model?
I have started an MLS course. As a beginner and non-mathematician it has been hard.
I am trying to understand the exercise about Lasso Models.
I have done Lasso models on R-cran, but this is my first time with Python.
I have a dataset of cars with seven variables. The exercise consists of building a Lasso model to predict the cars' gasoline consumption (the dependent variable), so $x$ is a table with the rest of the variables and $y$ is the consumption.
Then, if I launch Lasso on scikitlearn:
modelLasso = Lasso(alpha=0.1).fit(x, y)
I do not understand the result. Has it generated a prediction on $y$ of every row in the table?
If so, how can I access the model's array of predictions, and how do I use the model to predict the consumption for new $x$ values?
AI: Here is a little lasso example using the Boston Housing Data. The code also shows how to:
choose optimal alpha,
display data and predictions,
display estimated coefficients,
and how to make a prediction by hand.
# Import toy data
from sklearn.datasets import load_boston
import pandas as pd
# Load toy data
boston = load_boston(return_X_y=False)
# Make data a pandas data frame
boston = pd.DataFrame(boston.data)
# Print head of toy data
#print(boston.head())
# Define x,y (only take few columns for x)
y = boston[[12]] # choose column 12
x = boston[[10,11]] # only column 10,11
from sklearn.linear_model import Lasso
from sklearn.linear_model import LassoCV
lasso = Lasso(max_iter=10000)
# Perform lasso CV to get the best parameter alpha for regularization
# usually use scaled data mean = 0, sd = 1 by applying scale()
# I don't do this here for brevity
# https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.scale.html
lassocv = LassoCV(alphas=None, cv=10, max_iter=10000)
lassocv.fit(x, y.values.ravel())
# Fit lasso using the best alpha
lasso.set_params(alpha=lassocv.alpha_)
lasso.fit(x, y)
# Show the estimated coefficients/intercept
print('Intercept (Lasso): \n', lasso.intercept_)
print('Coefficients (Lasso): \n', lasso.coef_)
# Predict lasso / print first predicted value
preds = lasso.predict(x)
print('Lasso prediction: \n', preds[0])
# Print first x values
print('x-values: \n', x.head(1))
# Predict "by hand"
print(2.89333694+1.00035566*15.3 - 0.02439867*396.9)
Output is:
Intercept (Lasso):
[2.89333694]
Coefficients (Lasso):
[ 1.00035566 -0.02439867]
Lasso prediction:
8.514947307107153
x-values:
10 11
0 15.3 396.9
8.514946415000002 |
H: Y-axis output is not in order
I am currently attempting to plot some data based on user agent strings. The strings are too messy to place directly on a plot like the one shown below, so I plotted their index instead. Why is my plot output out of order? I would like my $y$-axis to be in ascending order.
For context, I am plotting 6 plots with a for loop and none of them are producing the y-axis in order.
Edit: I added 'uas_id' dplyr code with solution.
gbuas_large<-gbuas_large%>%
mutate(uas_id = sort(strtoi(rownames(gbuas_large))))%>%
select(user_agent_string,uas_id,everything())
col<-names(gbuas_focus[,5:10])
for(i in col) {
print(i)
ua_p<-ggplot(gbuas_focus
)+
geom_point(aes_string(
x=i,
y='uas_id'),
color="#70a6ff",
size=3
)+
labs(x='Variable of Score',
y='User Agent Strings',
title="Understanding Misclassifying Variables"
)+
facet_wrap(~classification)
print(ua_p)
}
AI: It is in order, but not in the order that you want.
It is currently treating the indices as a string.
To get your desired index as numerics, here is some code that should help you.
> c = c("1", "2", "13")
> c = sort(c)
> c
[1] "1" "13" "2"
> c = strtoi(c)
> c
[1] 1 13 2
> sort(c)
[1] 1 2 13 |
H: In the Keras Tokenizer class, what exactly does word_index signify?
I'm trying to really understand Tokenizing and Vectorizing text in machine learning, and am looking really hard into the Keras Tokenizer class. I get the mechanics of how it's used, but I'd like to really know more about it. So for example, there's this common usage:
tokens = Tokenizer(num_words=SOME_NUMBER)
tokens.fit_on_texts(texts)
tokens returns a word_index, which maps words to some number.
Are the words all words in texts, or are they maxed at SOME_NUMBER?
And are the dict values for word_index the frequency of each word, or just the order of the word?
AI: Are the words all words in texts, or are they maxed at SOME_NUMBER?
The word_index dictionary contains all words seen in texts, not just the top SOME_NUMBER; the num_words cap is only applied later, when you call texts_to_sequences or texts_to_matrix (which keep only the most common SOME_NUMBER-1 words).
And are the dict values for word_index the frequency of each word, or just the order of the word?
The values are each word's rank by frequency (1 = most frequent), not the frequency itself; the raw counts are stored separately in word_counts.
You can read more here in the documentation. |
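A quick way to see this behaviour yourself (a small sketch with made-up sentences):

from keras.preprocessing.text import Tokenizer

texts = ["the cat sat", "the dog sat", "the cat ran"]

tokens = Tokenizer(num_words=3)
tokens.fit_on_texts(texts)

# word_index lists every word seen, ranked by frequency (1 = most frequent),
# e.g. {'the': 1, 'cat': 2, 'sat': 3, 'dog': 4, 'ran': 5}
print(tokens.word_index)
# word_counts holds the actual frequencies
print(tokens.word_counts)
# the num_words cap only kicks in here: words with index >= num_words are dropped
print(tokens.texts_to_sequences(texts))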
H: How to scale or standardize data that is mostly 0 (ranges from 0-1)?
I am relatively new to data science and big data munging in general. I currently have various columns of data that range from $0-1$, but most of the values in each column are $0$. The data represents certain attributes about a customer, including the proportion of a certain customer's purchases in a particular category (so that the sum of each of these proportions is $1$), in addition to other data like the number of visits, the time between visits, etc. A set of sample rows could look like:
columns = [other_stuff,'C1','C2','C3','C4','C5','C6']
row = [1945,0.45, 0, 0, 0, 0.3, 0.25]
another_row = [438,0, 0.24, 0, 0.01, 0.5, 0.25]
A sample histogram of one of the "proportional" variables looks like:
As of now, I am struggling to find ways to scale the data to make it usable for clustering with other data with much different units/orders of magnitude. Should I:
Scale all the other variables to a 0-1 range using a Min-Max method and keep the "proportional" variables the same
Scale all variables using the variables' means and standard deviations, even though the "proportional variables" clearly do not follow a normal distribution (also cannot apply a log transformation since most of the values are indeed 0)
Keep everything as is, but perform dimensionality reduction on all variables and perform clustering based off the principal components (if Principal Component Analysis is used)
None of the above; use a completely different set of algorithms/methods
I am currently using option 3 with scikit-learn and Python 3.7. If there are packages in R that could also help, please throw them my way. Thank you.
AI: You don't specify, but assuming you are doing k-means clustering: because the algorithm measures similarity (more technically, distance) with Euclidean distance, the dimensions with the greatest magnitude will overwhelm those whose values are orders of magnitude smaller.
To overcome this I would normalize the data. The exact method is not such a big deal; use min-max scaling if you are comfortable with it. Note that scikit-learn's scalers operate column by column, so each feature is rescaled on its own range. You may only need to scale the variables that are on a vastly different scale, and can leave the proportional columns, which are already between 0 and 1, as they are.
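For instance, with scikit-learn (a sketch; the column names for the non-proportional variables are made up):

from sklearn.preprocessing import MinMaxScaler

other_cols = ['num_visits', 'days_between_visits']   # hypothetical column names
prop_cols = ['C1', 'C2', 'C3', 'C4', 'C5', 'C6']

# MinMaxScaler rescales each column on its own range
df[other_cols] = MinMaxScaler().fit_transform(df[other_cols])
# the proportional columns are already in [0, 1] and can be left as they are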
H: Binary encoding and its interpretation in Python
I have a column named Street that has 2 values: Paved and Gravel. Here is what print(train[binary_columns[0]].unique().tolist()) gives me:
['Pave', 'Grvl']
I want to encode these values in binary like this:
df['Street'] = df['Street'].replace(['Pave', 'Grvl'], [1, 0])
But I wonder if this is a good idea. Wouldn't the computer interpret this as Pave > Grvl? How does the computer differentiate between binary and integer encoding?
AI: Your categorical variable has two levels, so there is no actual difference between dummy-coding vs. simply entering the variable into the analysis. That is, to dummy code you would create one new variable with two values but your original variable is already one variable with two values. Dummy-coding is important for variables with more than two possible values. So, in this case the computer won't consider Pave > Grvl.
But if a variable has more than two levels, then you should use dummy variables.
For your data, you can use pandas.get_dummies() or sklearn's one hot encoder to achieve your result. |
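For a two-level column like Street, either option below gives an equivalent single 0/1 column (a small sketch; run one or the other, not both):

# Option 1: explicit mapping (0/1 is just a label here, not an ordering)
df['Street'] = df['Street'].map({'Grvl': 0, 'Pave': 1})

# Option 2: let pandas build the dummy and drop the redundant complement column
df = pd.get_dummies(df, columns=['Street'], drop_first=True)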
H: How to delete a row if a values in a column is not NaN
The given code displays only the rows without NaN values, but I want to keep only the rows with NaN values in a given column; everything else has to be removed.
df1 = df1[pd.notnull(df1['column_name'])]
AI: If I am correct, you are looking to keep only the NaN values. You can use pd.isnull() to check which rows are NaN and then only keep those. Try this:
df1 = df1[pd.isnull(df1['column_name'])] |
H: How to export the edited data to a new Excel file
The provided code imports the Excel file, keeps only the rows where column_name is NaN, and stores them in df2. Now, how do I save the result to another file?
df1 = pd.read_excel('file_name.xlsx')
df2 = df1[pd.isnull(df1['column_name'])]
df2
AI: You could simply use:
df2.to_excel("output.xlsx")
I suggest you take some time to look at the various functionalities pandas offers. It will be a great help to you in the future!
H: Standardization After PCA for Kmean clustering
I want to apply k-means clustering after PCA dimensionality reduction. I standardize the data with StandardScaler before the PCA, then I want to train k-means to find clusters. However, the variances of the PCA components may not be of the same order of magnitude.
Is it good practice to standardize the PCA components before the clustering?
AI: If the variables you are using for k-means clustering are on different scales, the variables with higher variance will dominate the algorithm by driving the convergence of the k centroids.
Is this something that you can allow, based on your research goals? If, instead, you want all the factors to have equal weight in the clustering, then you should scale them. |
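One convenient way to put the components on a comparable scale is PCA's whiten=True option, which rescales each component to unit variance before k-means sees it. A sketch, with illustrative numbers of components and clusters:

from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.pipeline import make_pipeline

# whiten=True divides each component by its singular value, so all retained
# components end up with comparable variance before clustering
pipeline = make_pipeline(StandardScaler(), PCA(n_components=5, whiten=True), KMeans(n_clusters=3))
pipeline.fit(X)
labels = pipeline.named_steps['kmeans'].labels_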
H: How to iterate over certain rows of a column and update their values based on a condition
I want to update the value of opening stock and closing stock by adding 20 when dcsDep2 < 3.028512.
Here is a snapshot of the dataset:
I tried with this code:
for(index_label,row_series) in dep1.iterrows():
while row_series["Closing Stock"]<3.028512:
dep1.at[index_label,"Closing Stock"]=row_series["Closing Stock"]+22
but no output is shown.
AI: You can use df.loc to make the selection and then set the values. Take a look at this example and modify according to your dataset.
In [1]: import pandas as pd
In [2]: df = pd.DataFrame({'a':[1,2,3], 'b':[0.1,0.3,0.9], 'c':[4,5,6]})
In [3]: df
Out[3]:
a b c
0 1 0.1 4
1 2 0.3 5
2 3 0.9 6
In [4]: df.loc[df['b'] < 0.5, ['a', 'c']] += 20
In [5]: df
Out[5]:
a b c
0 21 0.1 24
1 22 0.3 25
2 3 0.9 6
Also, in future try to provide a sample from your dataset as a table of values and not an image so that it becomes easier to reproduce an example. |
H: Features used by MAUI key phrase extraction tool
I have been trying out keyphrase extraction for a while, and I want to know what features MAUI (MAUI github) uses for training the keyphrase extractor.
Having read this, it is hard for me to understand every feature that she uses. Can someone give a brief overview of the features she is using, and some references for them, if any?
AI: The references are in the link that you give: here you can find references to two papers which describe the method:
Domain-Specic Keyphrase Extraction
KEA: Practical Automatic Keyphrase Extraction
FYI, it seems no neural nets are involved (the question is tagged with neural-network).
H: Correlation feature selection followed by regression
I have quarterly results data for a company with around 100 variables. In total, 60 quarters of results are available (60 records).
Sample data (only a few columns and 10 rows):
I would like to know the following:
which ML algorithms / methods should be used to find the most important variable(s) affecting the price movement
how much each variable contributes towards the price movement (positive or negative)
how the price movement will be affected if a variable's value is changed
Thanks!
AI: I think this is a case for linear regression with a lasso/ridge penalty. The lasso/ridge does „shrink“ features/variables, so that it is easy to see which features are important. Since you have 100 variables, you could opt for lasso, since lasso can also „automatically“ exclude features. Here is a lasso example in Python: https://datascience.stackexchange.com/a/53639/71442. |
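A sketch of that workflow with scikit-learn, assuming X is a dataframe holding the ~100 financial variables and y the price movement:

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV

# Standardize so the coefficients are comparable across variables
X_scaled = StandardScaler().fit_transform(X)

lasso = LassoCV(cv=5).fit(X_scaled, y)

# Non-zero coefficients are the selected variables; the sign gives the direction of the effect
coefs = pd.Series(lasso.coef_, index=X.columns).sort_values()
print(coefs[coefs != 0])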
H: XGBoost: Can the features in test data be a subset of the features used to train the model?
Is it a problem if the test data only has a subset of the features that are used to train the xgboost model? All my predictor variables (except 1) are factors, so one hot encoding is done before converting it into xgb.DMatrix. So the different levels of the factor variables become the features and my test doesn't have all of these features, only a subset of it.
At the moment, while running my model on test data in R, I'm running into an error saying that "Features names stored in object and newdata are different!".
I'm new in the field, so any help would be much appreciated. Thanks!
AI: All the variables used to train the model must be present in the test set.
This is because you used all the variables to create the splitting rules, so they are needed to score new data.
If you are doing the one-hot encoding in Python with sklearn (fit or fit_transform), reuse the same fitted object to transform the test set with its transform function.
This ensures the variables are consistent between train and test.
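In scikit-learn that pattern looks like the sketch below (factor_cols is a placeholder for your list of factor columns); fitting the encoder on the training data and reusing it guarantees the same columns in both sets, and handle_unknown='ignore' covers levels that only appear in the test data:

from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder(handle_unknown='ignore')

# Fit on the training factors only, then reuse the same fitted encoder for the test set
X_train_ohe = enc.fit_transform(X_train[factor_cols])
X_test_ohe = enc.transform(X_test[factor_cols])   # same columns, in the same order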